DNN (Deep Neural Network) & Human Language Technology 2015


Notes:

Unlike lexical semantics, which focuses on the meanings of individual words, the field of compositional semantics looks at how the meanings of sentences and longer utterances are built up from the meanings of their parts.
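
A minimal sketch of the distinction, in plain Python with NumPy; the toy vectors and the word-vector-averaging composition function are invented purely for illustration and are not drawn from any paper listed below:

    import numpy as np

    # Toy 3-dimensional "meaning" vectors for individual words (lexical semantics).
    # Values are made up for illustration only.
    lexicon = {
        "dogs":  np.array([0.9, 0.1, 0.0]),
        "chase": np.array([0.1, 0.8, 0.1]),
        "cats":  np.array([0.8, 0.2, 0.0]),
    }

    def compose(sentence):
        """Crude compositional baseline: average the word vectors.

        Real compositional models (recursive or recurrent networks) use
        order-sensitive operations instead of a plain average.
        """
        vectors = [lexicon[w] for w in sentence.lower().split()]
        return np.mean(vectors, axis=0)

    print(lexicon["dogs"])             # meaning of a single word
    print(compose("dogs chase cats"))  # meaning assembled from the whole sentence

Averaging ignores word order; the neural models in the entries below replace it with learned, order-sensitive composition.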

Resources:

Wikipedia:

References:

See also:

DNN (Deep Neural Network) & Human Language Technology 2014


Sentence-level control vectors for deep neural network speech synthesis O Watts, Z Wu, S King – Proc. Interspeech, 2015 – research.ed.ac.uk … [8] “Deep neural networks employing multi … Xue, O. Abdel-Hamid, H. Jiang, L. Dai, and Q. Liu, “Fast adaptation of deep neural network based on … 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology. … Cited by 13 Related articles All 4 versions

Advances in deep neural network approaches to speaker recognition M McLaren, Y Lei, L Ferrer – 2015 IEEE International …, 2015 – ieeexplore.ieee.org … [5] L. Ferrer, Y. Lei, and McLaren M., “Study of senone-based deep neural network approaches for … L. Ferrer, M. McLaren, and N. Scheffer, “Comparative study on the use of senone-based deep neural networks for speaker … Workshop on Human Language Technology, 1994, pp … Cited by 16 Related articles All 3 versions

Time delay deep neural network-based universal background models for speaker recognition D Snyder, D Garcia-Romero… – 2015 IEEE Workshop on …, 2015 – ieeexplore.ieee.org TIME DELAY DEEP NEURAL NETWORK-BASED UNIVERSAL BACKGROUND MODELS FOR SPEAKER … for Language and Speech Processing & Human Language Technology Center of … ABSTRACT Recently, deep neural networks (DNN) have been incorporated into i … Cited by 12 Related articles All 5 versions

Combining heterogeneous deep neural networks with conditional random fields for Chinese dialogue act recognition Y Zhou, Q Hu, J Liu, Y Jia – Neurocomputing, 2015 – Elsevier … Besides simple combination, here we introduce a variant of the classical deep neural networks (DNN). As for the DNN, it is unable to deal with heterogeneous information, so we adjust the DNN model to a new heterogeneous deep neural network (HDNN) to learn and combine … Cited by 6 Related articles All 3 versions

Zara the supergirl: An empathetic personality recognition system P Fung, A Dey, FB Siddique, R Lin, Y Yang, W Yan… – 2015 – aclweb.org … Pascale Fung, Anik Dey, Farhad Bin Siddique, Ruixi Lin, Yang Yang, Wan Yan, Ricky Chan Ho Yin Human Language Technology Center Department of Electronic and Computer Engineering Hong Kong … We train deep neural network (DNN) HMMs with 6 hidden layers. … Cited by 1

Advances in natural language processing J Hirschberg, CD Manning – Science, 2015 – science.sciencemag.org … companies. As a result, there is now great commercial interest in the deployment of human language technology, especially because natural language represents such a natural interface when interacting with mobile phones. … Cited by 38 Related articles All 9 versions

Semantics in Deep Neural-Network Computing X Sun, X Luo, J Liu, X Jiang… – 2015 11th International …, 2015 – ieeexplore.ieee.org … there when we want to involve deep semantics or knowledge semantics into deep neural network models to … Context-dependent pre-trained deep neural networks for large … Proceedings of the 2015 Human Language Technology Conference of the North American Chapter of the … Related articles All 2 versions

A Convolutional Deep Neural Network for Coreference Resolution via Modeling Hierarchical Features XF Xi, G Zhou, F Hu, B Fu – … Conference on Intelligent Science and Big …, 2015 – Springer … In: Proceedings of the Main Conference on Human Language Technology Conference of the North American Chapter of the Association of Computational … MATH. 7. Yang, N., Liu, S., Li, M., Zhou, M., Yu, N.: Word alignment modeling with context dependent deep neural network. … Related articles

Error Analysis and Improving Speech Recognition for Latvian language A Salimbajevs, J Strigins – RECENT ADVANCES IN – pdfs.semanticscholar.org … Miao, Y., Zhang, H., & Metze, F. (2014). Towards speaker adaptive training of deep neural network acoustic models. Proc. … On Using Written Language Training Data for Spoken Language Modeling. In Proceedings of the Workshop on Human Language Technology (pp. 94–98). … Cited by 1 Related articles All 9 versions

Deep Neural Network Based Continuous Speech Recognition for Serbian Using the Kaldi Toolkit V Delic – Speech and Computer: 17th International Conference, …, 2015 – books.google.com … This paper presents a deep neural network (DNN) based large vocabulary continuous speech recognition … Keywords: Kaldi speech recognition toolkit · Continuous speech recognition · Deep neural networks · Serbian 1 … In: ARPA Human Language Technology Workshop, pp. … Related articles

Deep Neural Network Based Continuous Speech Recognition for Serbian Using the Kaldi Toolkit B Popović, S Ostrogonac, E Pakoci… – … Conference on Speech …, 2015 – Springer … This paper presents a deep neural network (DNN) based large vocabulary continuous speech recognition (LVCSR) system for … Kaldi speech recognition toolkit Continuous speech recognition Deep neural networks Serbian. … In: ARPA Human Language Technology Workshop, pp … Cited by 2 Related articles All 2 versions

Using sub-word n-gram models for dealing with OOV in large vocabulary speech recognition for Latvian A Salimbajevs, J Strigins – Proceedings of the 20th Nordic Conference of …, 2015 – ep.liu.se … 1–27). El-Desoky Mousa, A., Kuo, HKJ, Mangu, L., & Soltau, H. (2013). Morpheme-based feature-rich language models using Deep Neural Networks for LVCSR of Egyptian Arabic. … In Proceedings of the Workshop on Human Language Technology (pp. 94–98). … Cited by 5 Related articles All 9 versions

Large vocabulary automatic speech recognition for children H Liao, G Pundak, O Siohan, M Carroll, N Coccaro… – 2015 – research.google.com … ARPA Workshop on Human Language Technology, 1994, pp. … E. Arisoy, and B. Ramabhadran, “Low-rank matrix factorization for deep neural network training with … M. Bacchiani, “Asynchronous stochastic optimization for sequence training of deep neural networks,” in Proc. … Cited by 9 Related articles All 3 versions

Autoencoder based multi-stream combination for noise robust speech recognition SH Mallidi, T Ogawa, K Vesely, PS Nidadavolu… – Proc. Interspeech …, 2015 – fit.vutbr.cz … Processing, Johns Hopkins University, Baltimore, USA 2 Human Language Technology Center of … We used hidden Markov model-deep neural network (HMM-DNN) system based … and D. Povey, “Sequence-discriminative training of deep neural networks”, Interspeech 2013 [17 … Cited by 4 Related articles All 2 versions

Parameterised sigmoid and ReLU hidden activation functions for DNN acoustic modelling C Zhang, PC Woodland – Proc. Interspeech, 2015 – cslt.org … 14, Florence, Italy, 2014 [11] H. Liao, E. McDermott, and A. Senior, “Large scale deep neural network acoustic modeling … Human Language Technology Workshop. … F. Seide, G. Li, X. Chen, and D. Yu, “Feature engineering in context-dependent deep neural networks for conver … Cited by 7 Related articles All 4 versions

Detecting actionable items in meetings by convolutional deep structured semantic models YN Chen, D Hakkani-Tür, X He – 2015 IEEE Workshop on …, 2015 – ieeexplore.ieee.org … CDSSM) Here we describe how to train CDSSM for actionable item detection. 2.1. Architecture The model is a deep neural network with convolutional structure, where the architecture is illustrated in Fig. 3 [21, 27, 28, 29]. The … Cited by 3 Related articles All 5 versions
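
A rough sketch of the kind of convolutional semantic model the entry above describes: a convolution over word vectors, max pooling over time, a dense semantic layer, and a cosine-similarity relevance score. The dimensions, random weights, and helper names are hypothetical placeholders, not the authors' implementation:

    import numpy as np

    rng = np.random.default_rng(0)
    d_word, d_conv, d_sem, win = 50, 300, 128, 3    # hypothetical sizes

    W_conv = rng.standard_normal((win * d_word, d_conv)) * 0.01
    W_sem  = rng.standard_normal((d_conv, d_sem)) * 0.01

    def semantic_vector(word_vectors):
        """word_vectors: (T, d_word) matrix for one utterance."""
        T = word_vectors.shape[0]
        # Convolution: tanh over each window of `win` consecutive word vectors.
        padded = np.vstack([np.zeros((win - 1, d_word)), word_vectors])
        conv = np.array([np.tanh(padded[t:t + win].reshape(-1) @ W_conv)
                         for t in range(T)])
        pooled = conv.max(axis=0)           # max pooling over time
        return np.tanh(pooled @ W_sem)      # dense semantic layer

    def relevance(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    utterance = rng.standard_normal((7, d_word))     # stand-in word vectors
    action_label = rng.standard_normal((2, d_word))
    print(relevance(semantic_vector(utterance), semantic_vector(action_label)))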

Integrating acoustic and state-transition models for free phone recognition in L2 English speech using multi-distribution deep neural networks K Li, X Qian, S Kang, P Liu, H Meng – Proc. SLaTE, 2015 – Citeseer … classification using the high-level features extracted from deep neural networks,” in Proc. … large corpora,” in Proceedings of the conference on Human Language Technology and Empirical … TN Sainath, B. Kingsbury, and B. Ramabhadran, “Deep neural network language models … Cited by 5 Related articles All 4 versions

Semi-supervised maximum mutual information training of deep neural network acoustic models V Manohar, D Povey… – Proceedings of …, 2015 – pdfs.semanticscholar.org … Center for Language and Speech Processing † Human Language Technology Center of … J. Trmal, D. Povey, and S. Khudanpur, “Improving Deep Neural Network Acoustic Models … y, and J. Černocký, “Improved Feature Processing for Deep Neural Networks,” in Proceedings … Cited by 7 Related articles All 6 versions

Dialog Management with Deep Neural Networks L Zilka – pdfs.semanticscholar.org Page 1. Dissertation Proposal Dialog Management with Deep Neural Networks Lukáš Zilka Institute of Formal and Applied Linguistics Faculty of Mathematics and Physics Charles University in Prague … Page 3. Dialog Management with Deep Neural Networks 3 2 Background … Related articles All 3 versions

A Comparison of RNN LM and FLM for Russian Speech Recognition I Kipyatkova, A Karpov – International Conference on Speech and …, 2015 – Springer … In: Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing, pp. … Tomashenko, N., Khokhlov, Y.: Speaker adaptation of context dependent deep neural networks based on MAP-adaptation and GMM-derived … Cited by 3 Related articles All 2 versions

Introduction to the special section on continuous space and related methods in natural language processing H Li, M Federico, X He, H Meng… – IEEE/ACM Transactions on …, 2015 – dl.acm.org … Building on the success of acoustic and statistical language modeling, research on artificial (deep) neural networks, and continuous space models … Dr. Li is currently the Principal Scientist, Department Head of Human Language Technology in the Institute for Infocomm Research … Cited by 2 Related articles All 13 versions

Integrating Gaussian mixtures into deep neural networks: softmax layer with hidden variables Z Tüske, MA Tahir, R Schlüter… – 2015 IEEE International …, 2015 – ieeexplore.ieee.org INTEGRATING GAUSSIAN MIXTURES INTO DEEP NEURAL NETWORKS: SOFTMAX LAYER WITH HIDDEN … a Human Language Technology and Pattern Recognition, Computer Science Department, RWTH … can be easily integrated into the deep neural network framework. … Cited by 4 Related articles All 3 versions

Empty Category Detection With Joint Context-Label Embeddings X Wang, K Sudoh, M Nagata – aclweb.org … Deep neural networks are capable of learning features from corpus, therefore saves the labor of feature engineering and have proven their ability in lots of NLP task (Collobert et al., 2011; Bengio et al., 2006). … (2011) propose a deep neural network scheme exploring the hid … Related articles All 5 versions

Supplementary Material for: Topic Identification and Discovery on Text and Speech C May, F Ferraro, A McCree, J Wintrode… – aclweb.org … Wintrode, Daniel Garcia-Romero, and Benjamin Van Durme Human Language Technology Center of … Parallel training of deep neural networks with natural gradient and parameter averaging. … Improving deep neural network acoustic models using generalized maxout networks. … All 6 versions

Encoding source language with convolutional neural network for machine translation F Meng, Z Lu, M Wang, H Li, W Jiang, Q Liu – arXiv preprint arXiv: …, 2015 – arxiv.org … This representation, together with target words, are fed to a deep neural network (DNN) to form a stronger NNJM … In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology-Volume 1 … Cited by 20 Related articles All 11 versions

Usaar-wlv: Hypernym generation with deep neural nets L Tan, R Gupta, J Van Genabith – SemEval-2015, 2015 – aclweb.org … In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics: HLT-NAACL 2004. Adam Pease, Ian Niles, and John Li. … 2013. Learning Hierarchical Category Structure in Deep Neural Networks. … Cited by 9 Related articles All 11 versions

Semantic relation classification via convolutional neural networks with simple negative sampling K Xu, Y Feng, S Huang, D Zhao – arXiv preprint arXiv:1506.07650, 2015 – arxiv.org … In HLT/EMNLP 2005, Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference, 6-8 October 2005, Vancouver, British … 2014. Relation classification via convolutional deep neural network. … Cited by 27 Related articles All 11 versions

Librispeech: an ASR corpus based on public domain audio books V Panayotov, G Chen, D Povey… – 2015 IEEE International …, 2015 – ieeexplore.ieee.org … Center for Language and Speech Processing & Human Language Technology Center of … 18, 19], and those referred to as DNN, are based on deep neural networks with p … [23] X. Zhang, J. Trmal, D. Povey, and S. Khudanpur, “Improving deep neural network acoustic models … Cited by 45 Related articles All 9 versions

An investigation of context clustering for statistical speech synthesis with deep neural network B Chen, Z Chen, J Xu, K Yu – Proc. Interspeech, 2015 – speechlab.sjtu.edu.cn … and M. Schuster, “Statistical parametric speech synthesis using deep neural networks,” in Acoustics … and FK Soong, “On the training aspects of deep neural network (dnn) for … high accuracy acoustic modelling,” in Proceedings of the workshop on Human Language Technology. … Cited by 2 Related articles All 2 versions

A waveform representation framework for high-quality statistical parametric speech synthesis B Fan, SW Lee, X Tian, L Xie… – 2015 Asia-Pacific Signal …, 2015 – ieeexplore.ieee.org … Science, Northwestern Polytechnical University, Xi’an, China † Human Language Technology Department, Institute … hidden Markov model (HMM)-based SPSS [2], deep neural network (DNN)-based … Statistical parametric speech synthesis using deep neural networks,” in Proc. … Related articles All 9 versions

A general artificial neural network extension for HTK C Zhang, PC Woodland – Proc. Interspeech, Dresden, 2015 – mi.eng.cam.ac.uk … Human Language Technology Workshop, Plainsboro, NJ, USA: Morgan Kaufman Publishers Inc, 1994. … 8] GE Hinton, L. Deng, D. Yu et al., “Deep neural networks for acoustic … Sainath, and H. Soltau, “Scalable minimum Bayes risk training of deep neural network acoustic models … Cited by 15 Related articles All 4 versions

Deep Reinforcement Learning with an Action Space Defined by Natural Language J He, J Chen, X He, J Gao, L Li… – arXiv preprint arXiv …, 2015 – pdfs.semanticscholar.org … Deep neural networks (DNN) are used to map text strings into embedding vectors in a common finite-dimensional space, where “relevance” is measured … A deep neural network is used as a function approximation in a variant of Q-learning (Watkins & Dayan, 1992), and a couple … Cited by 4 Related articles

Multi-level Evaluation for Machine Translation B Chen, H Guo, R Kuhn – EMNLP 2015, 2015 – anthology.aclweb.org … The second one is DREEM, a new metric based on distributed representations generated by deep neural networks. … In Proceedings of the Human Language Technology Conference, page 128132, San Diego, CA. Ding Liu and Daniel Gildea. 2005. … Related articles All 12 versions

Feature-space speaker adaptation for probabilistic linear discriminant analysis acoustic models L Lu, S Renals – Proc. of Interspeech, 2015 – pdfs.semanticscholar.org … in Proceedings of the second international conference on Human Language Technology Research. … and H. Soltau, “Scalable minimum Bayes risk training of deep neural network acoustic models … and D. Povey, “Sequence-discriminative training of deep neural networks,” in Proc … Cited by 1 Related articles All 9 versions

Classifying Relations via Long Short Term Memory Networks along Shortest Dependency Path X Yan, L Mou, G Li, Y Chen, H Peng, Z Jin – arXiv preprint arXiv: …, 2015 – arxiv.org … Deep neural networks, emerging recently, provide a way of highly automatic feature learning (Bengio et al., 2013), and have exhibited considerable potential (Zeng et al., 2014; Santos et al., 2015). However, human engineering—that is, incorporating human knowledge to the … Cited by 3 Related articles All 2 versions

Multi-frame factorisation for long-span acoustic modelling L Lu, S Renals – … Conference on Acoustics, Speech and Signal …, 2015 – ieeexplore.ieee.org … A.-r. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, TN Sainath, and B. Kingsbury, “Deep neural networks for acoustic … in (conversational) speech data collection,” in Proceedings of the second international conference on Human Language Technology Research. … Related articles All 9 versions

Recognizing entailment and contradiction by tree-based convolution L Mou, M Rui, G Li, Y Xu, L Zhang, R Yan… – arXiv preprint arXiv: …, 2015 – arxiv.org … 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. … 2006. Learning to recognize features of valid textual entailments. In Proceedings of the Human Language Technology Conference of the NAACL, pages 41–48. … Cited by 11 Related articles All 4 versions

The I2R ASR System for IWSLT 2015 TH Dat, JW Dennis, NWZ Terence – workshop2015.iwslt.org … Human Language Technology Department Institute for Infocomm Research, A*STAR, Singapore {hdtran,jonathan-dennis,wztng}@i2r.a-star.edu.sg Abstract … IEEE, 2011. [8] H. Liao, “Speaker adaptation of context dependent deep neural networks,” in Acoustics, Speech and … Related articles

Deep Learning of Mouth Shapes for Sign Language O Koller, H Ney, R Bowden – Proceedings of the IEEE …, 2015 – cv-foundation.org … Oscar Koller, Hermann Ney Human Language Technology & Pattern Recog. … and the Gaussian Mixture Models (GMMs) by learnt convolutional Deep Neural Networks (DNNs). … is an efficient C++ implementation using the NVIDIA CUDA Deep Neural Network GPU-accelerated … Cited by 4 All 7 versions

English to Japanese spoken lecture translation system by using DNN-HMM and phrase-based SMT N Goto, K Yamamoto… – … : Concepts, Theory and …, 2015 – ieeexplore.ieee.org … We improved ASR with a deep neural network (DNN) and an SMT by adjusting the parameters with MIT lectures. In Section 2, we describe related works to this paper. … Human Language Technology NAACL, Speech Indexing Workshop, pp. 9–12, 2004. … Related articles All 2 versions

The effect of neural networks in statistical parametric speech synthesis K Hashimoto, K Oura, Y Nankaku… – 2015 IEEE International …, 2015 – ieeexplore.ieee.org … and M. Schuster, “Statistical parametric speech synthesis using deep neural networks, ” Proceedings of … Soong, “On the training aspects of deep neural network (DNN) for parametric TTS … acoustic modelling, ” Proceedings of ARPA Workshop on Human Language Technology, … Cited by 19 Related articles

Distance-aware dnns for robust speech recognition Y Miao, F Metze – Sixteenth Annual Conference of the International …, 2015 – cs.cmu.edu … and MJ Gales, “Environmentally robust asr front-end for deep neural network acoustic models … Seltzer, D. Yu, and Y. Wang, “An investigation of deep neural networks for noise … in Proceedings of the second international conference on Human Language Technology Research. … Cited by 4 Related articles All 6 versions

Query sense disambiguation leveraging large scale user behavioral data M Korayem, C Ortiz, K AlJadda… – Big Data (Big Data), …, 2015 – ieeexplore.ieee.org … Recently, Deep Learning approaches have gained a lot of attention across various domains including NLP and text mining [37], [38]. In [39], a deep neural network is applied to learn entity representations, leveraging a combination of supervised and unsupervised approaches. … Cited by 3 Related articles All 4 versions

Segmental acoustic indexing for zero resource keyword search K Levin, A Jansen, B Van Durme – 2015 IEEE International …, 2015 – ieeexplore.ieee.org … Human Language Technology Center of Excellence, Center for Language and Speech Processing Johns Hopkins University, Baltimore, MD 21218 … [5] G. Chen, C. Parada, and G. Heigold, “Small-footprint keyword spotting using deep neural networks,” in Proc. ICASSP, 2014. … Cited by 13 Related articles All 6 versions

Modeling Phonetic Context with Non-random Forests for Speech Recognition H Xu, G Chen, D Povey, S Khudanpur – Sixteenth Annual Conference …, 2015 – clsp.jhu.edu … [2] SJ Young, JJ Odell, and PC Woodland, “Tree-based state tying for high accuracy acoustic modelling,” in Proceedings of the Workshop on Human Language Technology, ser … 13] X. Zhang, J. Trmal, D. Povey, and S. Khudanpur, “Improving deep neural network acoustic models … Cited by 4 Related articles All 7 versions

Multimodal embedding fusion for robust speaker role recognition in video broadcast M Rouvier, S Delecraz, B Favre… – … IEEE Workshop on …, 2015 – ieeexplore.ieee.org … authors proposed a generalized anchor shot detector based on deep neural networks with a … in broadcast news speech,” in Proceedings of the Human Language Technology Conference of … Huang, and Bo Xu, “Anchor shot detection with deep neural network,” in Proceedings … Cited by 2 Related articles All 4 versions

Building Knowledge Bases with Universal Schema: Cold Start and Slot-Filling Approaches B Roth, N Monath, D Belanger, E Strubell… – …, 2015 – pdfs.semanticscholar.org … matrix factorization and universal schemas. In Joint Human Language Technology Conference/Annual Meeting of the North American Chapter of the Association for Computational Linguistics (HLT-NAACL ’13). … Relation classification via convolutional deep neural network. … Cited by 1 Related articles All 2 versions

A Lexicalized Tree Kernel for Open Information Extraction Y Xu, C Ringlstetter, MY Kim, R Goebel… – Volume 2: Short …, 2015 – aclweb.org … A unified architecture for natural language processing: Deep neural networks with multitask learning. … In Joint Human Language Technology Conference/Annual Meeting of the North American Chapter of the Association for Computational Linguistics (HLT-NAACL ’13), June. … Cited by 2 Related articles All 9 versions

Exploiting foreign resources for DNN-based ASR P Motlicek, D Imseng, B Potard, PN Garner… – EURASIP Journal on …, 2015 – Springer … SJ Young, JJ Odell, PC Woodland, in Proceedings of the Workshop on Human Language Technology. … Cross-language knowledge transfer using multilingual deep neural network with shared hidden layers, (2013), pp. … Multilingual training of deep neural networks, (2013), pp. … Cited by 5 Related articles All 11 versions

Real-time translation of discrete Sinhala speech to Unicode text MKH Gunasekara… – Advances in ICT for …, 2015 – ieeexplore.ieee.org … [23] Geoffrey Hinton et al., “Deep neural networks for acoustic modeling in speech … [28] Thilini Nadungodage and Ruvan Weerasinghe, “Continuous Sinhala speech recognizer,” in Conference on Human Language Technology for Development, Alexandria, Egypt, 2011. … Related articles All 4 versions

Reports on the 2015 AAAI Workshop series SV Albrecht, JC Beck, DL Buckeridge, A Botea… – AI …, 2015 – go.galegroup.com … It has most prominently been applied to optimize solvers for hard combinatorial problems (for example, SAT, MIP, ASP, and AI planning) as well as to hyperparameter optimization of flexible machine-learning frameworks (such as deep neural networks, or the space of … All 5 versions

Pre-training of hidden-unit crfs YB Kim, K Stratos, R Sarikaya – ACL. Association for …, 2015 – research.microsoft.com … A unified architecture for natural language processing: Deep neural networks with multitask learning. … In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology-Volume 1, pages 134 … Cited by 8 Related articles All 11 versions

Relation extraction: Perspective from convolutional neural networks TH Nguyen, R Grishman – Proceedings of NAACL-HLT, 2015 – pdfs.semanticscholar.org … August. Ronan Collobert and Jason Weston. 2008. A Unified Architecture for Natural Language Processing: Deep Neural Networks with Multitask Learning. In International Conference on Machine Learning, ICML, 2008. Ronan … Cited by 20 Related articles All 9 versions

Incorporating Context Information into Deep Neural Network Acoustic Models Y Miao – 2015 – cs.cmu.edu … Copyright c 2015 Yajie Miao Page 2. Keywords: Acoustic Models, Deep Neural Networks, Context Information Page 3. Abstract The introduction of deep neural networks (DNNs) has advanced the performance of automatic speech recognition (ASR) tremendously. … Related articles All 3 versions

Sequence-discriminative training of recurrent neural networks P Voigtlaender, P Doetsch, S Wiesler… – … , Speech and Signal …, 2015 – ieeexplore.ieee.org … 1 Human Language Technology and Pattern Recognition, Computer Science Department, RWTH Aachen … Soltau, “Scalable minimum bayes risk training of deep neural network acoustic models … Daniel Povey, “Sequence-discriminative training of deep neural networks,” in Proc. … Cited by 5 Related articles All 3 versions

Drug Name Recognition: Approaches and Resources S Liu, B Tang, Q Chen, X Wang – Information, 2015 – mdpi.com Drug name recognition (DNR), which seeks to recognize drug mentions in unstructured medical texts and classify them into pre-defined categories, is a fundamental task of medical information extraction, and is a key component of many medical relation extraction systems and applications … Cited by 4 Related articles All 3 versions

Context dependent phone models for LSTM RNN acoustic modelling A Senior, H Sak, I Shafran – 2015 IEEE International …, 2015 – ieeexplore.ieee.org … 6. REFERENCES [1] G. Hinton, L. Deng, D. Yu, GE Dahl, Mohamed A., N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, TN Sainath, and B. Kingsbury, “Deep neural networks for acoustic modeling in speech recognition … ARPA Human Language Technology Workshop, 1994. … Cited by 12 Related articles All 3 versions

Deep Reinforcement Learning with an Unbounded Action Space J He, J Chen, X He, J Gao, L Li, L Deng… – arXiv preprint arXiv: …, 2015 – arxiv.org … Deep neural networks (DNN) are used to map text strings into embedding vectors in a common finite-dimensional space, where “relevance” could be … A deep neural network is used as a function approximation in a variant of Q learning (Watkins & Dayan, 1992), and a couple of … Cited by 6 Related articles All 3 versions
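
A rough sketch of the mechanism described in the snippet above: separate networks embed the state text and each candidate action text into a common space, and the Q-value is their inner product. The bag-of-words features, layer sizes, and random weights below are placeholders, not the authors' architecture:

    import numpy as np

    rng = np.random.default_rng(1)
    vocab = {"you": 0, "see": 1, "a": 2, "door": 3, "open": 4, "the": 5, "wait": 6}
    d_in, d_emb = len(vocab), 16                     # hypothetical sizes

    W_state  = rng.standard_normal((d_in, d_emb)) * 0.1
    W_action = rng.standard_normal((d_in, d_emb)) * 0.1

    def bow(text):
        """Bag-of-words featurisation of a text string."""
        v = np.zeros(d_in)
        for w in text.lower().split():
            if w in vocab:
                v[vocab[w]] += 1.0
        return v

    def q_value(state_text, action_text):
        """Relevance-style Q-value: inner product of the two text embeddings."""
        s = np.tanh(bow(state_text) @ W_state)
        a = np.tanh(bow(action_text) @ W_action)
        return float(s @ a)

    state = "you see a door"
    for action in ["open the door", "wait"]:
        print(action, q_value(state, action))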

Learning Bilingual Distributed Phrase Representations for Statistical Machine Translation C Wang, D Xiong, M Zhang, C KIT – Proceedings of MT Summit XV, 2015 – academia.edu … A unified architecture for natural language processing: Deep neural networks with multitask learning. … In Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, 58-54. … Related articles

Incremental recurrent neural network dependency parser with search-based discriminative training M Yazdani, J Henderson – 2015 – archive-ouverte.unige.ch … As in all deep neural network architectures, this chaining of nonlinear vector computations gives the model a very powerful mechanism to induce complex features from combinations of features in the history, which is difficult to replicate with hand-coded features. … Cited by 7 Related articles All 9 versions

Integrating meta-information into recurrent neural network language models Y Shi, M Larson, J Pelemans, CM Jonker… – Speech …, 2015 – Elsevier … In Mousa et al. (2013), the mixture of words and morphemes along with their features were used as input to Deep Neural Network language models. In Luong et al. … The loop structure in rnnlm is unfolded by bptt to a deep neural network. … Related articles All 7 versions

The LIMSI handwriting recognition system for the HTRtS 2014 contest T Bluche, H Ney, C Kermorvant – Document Analysis and …, 2015 – ieeexplore.ieee.org … Kermorvant, LIMSI CNRS, Spoken Language Processing Group, Orsay, France; RWTH Aachen University, Human Language Technology and Pattern … We trained Deep Neural Networks (DNNs) and Bidirectional Long Short-Term Memory Recurrent Neural Networks (BLSTM … Cited by 4 Related articles All 5 versions

Document summarization based on semantic representations H Zhang, X Zhang, G Gao – 2015 International Conference on …, 2015 – ieeexplore.ieee.org … [14] R. Collobert and J. Weston, “A unified architecture for natural language processing: Deep neural networks with multitask … of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology-Volume 1 … Related articles

Unsupervised data selection and word-morph mixed language model for Tamil low-resource keyword search C Ni, CC Leung, L Wang, NF Chen… – 2015 IEEE International …, 2015 – ieeexplore.ieee.org … I-276-I-279. [18] TM Kamm and GGL Meyer, “Selective Sampling of Training Data for Speech Recognition,” in Proc. Human Language Technology Conf., San Diego, CA, 2002. [19] Y. Wu, R. Zhang, and A. Rudnicky, “Data Selection for Speech Recognition,” in Proc. … Cited by 6 Related articles

Temporal Information Extraction Extracting Events and Temporal Expressions A Literature Survey N Gupta – 2015 – cfilt.iitb.ac.in … A Unified Architecture for Natural Language Processing: Deep Neural Network for Multitask Learning, proposed a language model that can utilize large unlabeled … a good set of features on itself. A deep neural network is used in this study. Features for the network are … Related articles

Named entity recognition for chinese social media with jointly trained embeddings N Peng, M Dredze – Proceedings of EMNLP, 2015 – aclweb.org … Nanyun Peng and Mark Dredze Human Language Technology Center of Excellence Center for Language and Speech Processing Johns Hopkins University, Baltimore, MD … A unified architecture for natural language processing: Deep neural networks with multitask learning. … Cited by 7 Related articles All 11 versions

Rebuilding Phrase Table Scores from Monolingual Resources Using Neural Networks Vector Representations AP Aghasadeghi, M Bastan, S Khadivi – pdfs.semanticscholar.org … A unified architecture for natural language processing: Deep neural networks with multitask learning. … at the Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology- Volume 1. Le, QV … Related articles All 2 versions

Soft context clustering for F0 modeling in HMM-based speech synthesis S Khorram, H Sameti, S King – EURASIP Journal on Advances in Signal …, 2015 – Springer … One of these is the use of deep neural networks (DNNs)[38, 39] which are able to approximate complex acoustic feature-to-linguistic context dependencies by employing many hidden layers – contrast this with decision trees that cannot efficiently represent something as simple … Cited by 1 Related articles All 9 versions

Deep feature for text-dependent speaker verification Y Liu, Y Qian, N Chen, T Fu, Y Zhang, K Yu – Speech Communication, 2015 – Elsevier … Le Roux and Bengio, 2008), it is believed that deep neural networks can be … et al., 1996), Artificial Neural Network (ANN), and most recently Deep Neural Network (DNN) (Variani … RSR2015 data corpus, released by the Human Language Technology (HLT) department at Institute … Cited by 23 Related articles All 3 versions

BlackOut: Speeding up Recurrent Neural Network Language Models With Very Large Vocabularies S Ji, SVN Vishwanathan, N Satish… – arXiv preprint arXiv: …, 2015 – arxiv.org … Teh (2012). 2.3 RELATED WORK Many approaches have been proposed to address the difficulty of training deep neural networks with large output spaces. In general, they can be categorized into four categories: • Hierarchical … Cited by 9 Related articles All 2 versions

Machine Learning in Automatic Speech Recognition: A Survey J Padmanabhan… – IETE Technical Review, 2015 – Taylor & Francis Cited by 4 Related articles

Inaugural editorial: embracing new opportunities for growth H Li – IEEE/ACM Transactions on Audio, Speech and …, 2015 – dl.acm.org … 14–22, Jan. 2012. [4] GE Dahl, D. Yu, L. Deng, and A. Acero, “Context-dependent pre- trained deep neural networks for large … Dr. Li is currently the Principal Scientist, Department Head of Human Language Technology in the Institute for Infocomm Research (IR), Singapore. … Related articles All 2 versions

Blackout: Speeding Up Recurrent Neural Network Language Models With Very Large Vocabularies S Ji, SVN Vishwanathan, N Satish, MJ Anderson… – pdfs.semanticscholar.org … Teh (2012). 2.3 RELATED WORK Many approaches have been proposed to address the difficulty of training deep neural networks with large output spaces. In general, they can be categorized into four categories: • Hierarchical … Related articles

Modeling under-resourced languages for speech recognition M Kurimo, S Enarvi, O Tilk, M Varjokallio… – Language Resources …, 2015 – Springer Cited by 3 Related articles All 2 versions

Neural coreference resolution K Clark – 2015 – pdfs.semanticscholar.org … The distributed word representations are used to train deep neural networks for coreference. … Ontonotes: the 90% solution. In Human Language Technology and North American Association for Computational Linguistics (HLT-NAACL), pages 57–60, 2006. [18] Andrew Kehler. … Cited by 1 Related articles All 4 versions

Neural Self Talk: Image Understanding via Continuous Questioning and Answering Y Yang, Y Li, C Fermuller, Y Aloimonos – arXiv preprint arXiv:1512.03460, 2015 – arxiv.org … questioning and answer process and we present a primitive “self talk” generation method based on two deep neural network modules. … In Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing, pages 819–826. … Cited by 5 Related articles All 4 versions

Unsupervised Phrasal Near-Synonym Generation from Text Corpora. D Gupta, JG Carbonell, A Gershman, S Klein, D Miller – AAAI, 2015 – www-cgi.cs.cmu.edu … In Proceedings of the main conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics (pp. … A unified architecture for natural language processing: Deep neural networks with multitask learning. … Cited by 3 Related articles All 6 versions

Improving evaluation and optimization of MT systems against MEANT C Lo, PC Dowling, D Wu – EMNLP 2015, 2015 – anthology.aclweb.org … Ronan Collobert and Jason Weston. A unified architecture for natural language processing: Deep neural networks with multitask learning. … In The second international conference on Human Language Technology Research (HLT’02), San Diego, California, 2002. … Cited by 1 Related articles All 13 versions

Generalized Hough transform for speech pattern classification J Dennis, HD Tran, H Li – IEEE/ACM Transactions on Audio, …, 2015 – ieeexplore.ieee.org … A major advantage of our approach is that each step of the GHT is highly interpretable, particularly compared to deep neural network (DNN) systems which … in particular those using deep learning [1]–[4]. Despite their strong performance, deep neural network … Cited by 1 Related articles All 2 versions

Synthetic triphones from trajectory-based feature distributions J Badenhorst, MH Davel – … of South Africa and Robotics and …, 2015 – ieeexplore.ieee.org … 1 Human Language Technology Research Group, CSIR Meraka, South Africa. … Elastic spectral distortion for low resource speech recognition with deep neural networks,” in Automatic … Cui, V. Goel, and B. Kingsbury, “Data augmentation for deep neural network acoustic modeling … Related articles All 2 versions

Scalable Out-of-Sample Extension of Graph Embeddings Using Deep Neural Networks A Jansen, G Sell, V Lyzinski – arXiv preprint arXiv:1508.04422, 2015 – arxiv.org … Index Terms—Deep neural networks, out-of-sample extension, graph embedding … The authors are with the Human Language Technology Center of Excellence and the Center for … mind, we explore the application of recent advances in deep neural network training methodology … Related articles All 2 versions

Unsupervised adaptation of a denoising autoencoder by Bayesian Feature Enhancement for reverberant asr under mismatch conditions J Heymann, R Haeb-Umbach, P Golik… – … on Acoustics, Speech …, 2015 – ieeexplore.ieee.org … RWTH Aachen University Human Language Technology and Pattern Recognition Computer … best performing systems employ a Deep Neural Network – Hidden Markov … Studies, “REVERBERANT SPEECH RECOGNITION COMBINING DEEP NEURAL NETWORKS AND DEEP … Cited by 1 Related articles All 7 versions

A learning-based approach to direction of arrival estimation in noisy and reverberant environments X Xiao, S Zhao, X Zhong, DL Jones… – … on Acoustics, Speech …, 2015 – ieeexplore.ieee.org … Sciences Center, Singapore 3School of Computer Engineering, Nanyang Technological University, Singapore 4Department of Human Language Technology, Institute for … training data to a test environment, or using multiple hidden layers (ie, deep neural networks) may further … Cited by 7 Related articles

Submodular data selection with acoustic and phonetic features for automatic speech recognition C Ni, L Wang, H Liu, CC Leung, L Lu… – 2015 IEEE International …, 2015 – ieeexplore.ieee.org Page 1. SUBMODULAR DATA SELECTION WITH ACOUSTIC AND PHONETIC FEATURES FOR AUTOMATIC SPEECH RECOGNITION Chongjia Ni1, Lei Wang1, Haibo Liu2, Cheung-Chi Leung1, Li Lu2, and Bin Ma1 1 Institute … Cited by 4 Related articles All 2 versions

Robust face recognition-based search and retrieval across image stills and video K Brady – Technologies for Homeland Security (HST), 2015 …, 2015 – ieeexplore.ieee.org … Kevin Brady Human Language Technology (HLT) Group MIT Lincoln Laboratory Lexington, Massachusetts USA kbrady@ll.mit.edu … 8], Histogram of Oriented Gradients (HOG) [9], Gabor features [10], and more recently features learned from training deep neural networks [6, 7 … Cited by 1 Related articles All 2 versions

The MITLL-AFRL IWSLT 2015 Systems M Kazi, B Thompson, E Salesky, T Anderson… – workshop2015.iwslt.org … in Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology – Volume 1 … [23] G. Dahl, D. Yu, L. Deng, and A. Acero, “Context-Dependent Pre-trained Deep Neural Networks for Large … Cited by 1 Related articles

An effective neural network model for graph-based dependency parsing W Pei, T Ge, B Chang – Proc. of ACL, 2015 – aclweb.org … Compared with cube function, tanh-cube has three advantages: • The cube function is unbounded, making the activation output either too small or too big if the norm of input l is not properly controlled, especially in deep neural network. … Cited by 13 Related articles All 6 versions
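
The bullet quoted above contrasts the unbounded cube activation with a bounded tanh-cube variant; a small numerical check is sketched below. The exact form tanh(l**3 + l) is assumed here and may differ from the paper's definition:

    import numpy as np

    def cube(l):
        return l ** 3                  # unbounded: blows up for large |l|

    def tanh_cube(l):
        return np.tanh(l ** 3 + l)     # assumed form; squashed into (-1, 1)

    l = np.array([-5.0, -1.0, 0.0, 1.0, 5.0])
    print(cube(l))        # [-125.  -1.   0.   1. 125.]
    print(tanh_cube(l))   # every value stays within (-1, 1)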

Robust speech recognition using beamforming with adaptive microphone gains and multichannel noise reduction S Zhao, X Xiao, Z Zhang, TNT Nguyen… – … IEEE Workshop on …, 2015 – ieeexplore.ieee.org … of Technology, Japan 4 School of Computer Engineering, Nanyang Technological University, Singapore 5 Department of Human Language Technology, Institute for … With the state-of-the-art deep neural network (DNN) based acoustic model, our system achieves a word error … Cited by 5 Related articles

Graph-Based Dependency Parsing with Recursive Neural Network P Huang, B Chang – … Linguistics and Natural Language Processing Based …, 2015 – Springer … One of the revolutionary changes coming with the rise of Deep Neural Network(DNN) is the idea of representation learning and … of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology, vol. 1, pp. … Related articles All 3 versions

Framewise and CTC training of Neural Networks for handwriting recognition T Bluche, H Ney, J Louradour… – Document Analysis and …, 2015 – ieeexplore.ieee.org … Paris, France; LIMSI CNRS, Spoken Language Processing Group, Orsay, France; RWTH Aachen University, Human Language Technology and Pattern … 9] K. Vesely, A. Ghoshal, L. Burget, and D. Povey, “Sequence discriminative training of deep neural networks,” in Interspeech … Cited by 4 Related articles All 8 versions

Stereo-based histogram equalization for robust speech recognition R Al-Wakeel, M Shoman, M Aboul-Ela… – EURASIP Journal on …, 2015 – Springer … Recently, deep neural network (DNN) [35, 36] has provided superior performance than GMM for speech recognition … of ARPA Workshop on Human Language Technology, 1993, pp. … A Senior, V Vanhoucke, P Nguyen, T Sainath, B Kingsbury, Deep neural networks for acoustic … Cited by 1 Related articles All 7 versions

Mapping frames with DNN-HMM recognizer for non-parallel voice conversion M Dong, C Yang, Y Lu, JW Ehnes… – 2015 Asia-Pacific …, 2015 – ieeexplore.ieee.org … Minghui Dong, Chenyu Yang, Yanfeng Lu, Jochen Walter Ehnes, Dongyan Huang, Huaiping Ming, Rong Tong, Siu Wa Lee, Haizhou Li Human Language Technology Department, Institute for Infocomm Research, A-Star, Singapore {mhdong, yangc, luyf, jwehnes, huang … Cited by 1 Related articles All 2 versions

An Investigation of Neural Embeddings for Coreference Resolution V Godbole, W Liu, R Togneri – International Conference on Intelligent Text …, 2015 – Springer … learning tasks from object recognition [1] to paraphrase detection [2]. A compelling result from the research in training deep neural networks is that … In: Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing, pp. … Cited by 1 Related articles

Features and classifiers for emotion recognition from speech: a survey from 2000 to 2011 CN Anagnostopoulos, T Iliou, I Giannoukos – Artificial Intelligence Review, 2015 – Springer … Firoz Shah et al. 2009; Fu et al. 2008a), Probabilistic Neural Networks (Cen et al. 2008), Vector Quantization networks (Wenjing et al. 2009) and Deep Neural Networks (Stuhlsatz et al. 2011). In addition, MLP various architectures … Cited by 49 Related articles All 10 versions

Semeval-2015 task 10: Sentiment analysis in twitter S Rosenthal, P Nakov, S Kiritchenko… – … of SemEval-2015, 2015 – aclweb.org … In several of the subtasks, the top system used deep neural networks and word embeddings, and some systems benefited from special weighting of the positive and negative examples. Once again, the most important features were those derived from sentiment lexicons. … Cited by 133 Related articles All 12 versions

Recurrent neural network language model adaptation with curriculum learning Y Shi, M Larson, CM Jonker – Computer Speech & Language, 2015 – Elsevier … layer, respectively. The weight matrix between the input layer and hidden layer is estimated by backpropagation-through-time (bptt) (Mikolov et al., 2011), which actually unfolds the loop as the deep neural network. In this paper … Cited by 8 Related articles All 3 versions
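
A minimal forward pass of the Elman-style recurrent language model the snippet above refers to: unrolling the hidden-state loop over a T-word history is what lets BPTT treat the model as a T-layer-deep network. The vocabulary size, hidden size, and weights below are illustrative placeholders:

    import numpy as np

    rng = np.random.default_rng(2)
    V, H = 10, 8                                 # toy vocabulary and hidden sizes
    W_in  = rng.standard_normal((V, H)) * 0.1    # input (one-hot word) -> hidden
    W_rec = rng.standard_normal((H, H)) * 0.1    # hidden -> hidden (the "loop")
    W_out = rng.standard_normal((H, V)) * 0.1    # hidden -> next-word scores

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    def next_word_distribution(word_ids):
        """Unroll the recurrence over the history; each step adds one 'layer'."""
        h = np.zeros(H)
        for w in word_ids:
            x = np.zeros(V)
            x[w] = 1.0
            h = np.tanh(x @ W_in + h @ W_rec)
        return softmax(h @ W_out)

    print(next_word_distribution([3, 1, 4]))     # P(next word | toy history)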

Investigation of Segmental Conditional Random Fields for large vocabulary handwriting recognition M Hamdani, MAB Shaik, P Doetsch… – Document Analysis and …, 2015 – ieeexplore.ieee.org … Mahdi Hamdani, M. Ali Basha Shaik, Patrick Doetsch and Hermann Ney, Human Language Technology and Pattern Recognition Group – RWTH Aachen … [9] T. Bluche, H. Ney, and C. Kermorvant, “A comparison of sequence trained deep neural networks and recurrent … Related articles All 3 versions

Handwritten Text Recognition Results on the Bentham Collection with Improved Classical N-Gram-HMM methods AH Toselli, E Vidal – Proceedings of the 3rd International Workshop on …, 2015 – dl.acm.org … Collection with Improved Classical N-Gram-HMM methods Alejandro H. Toselli and Enrique Vidal Pattern Recognition and Human Language Technology Research Center Universitat Politècnica de València (Spain) [ahector,evidal]@prhlt.upv.es … Related articles

Gaussian lda for topic models with word embeddings R Das, M Zaheer, C Dyer – Proceedings of the 53rd Annual Meeting of …, 2015 – aclweb.org Page 1. Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 795–804, Beijing, China, July 26-31, 2015. ©2015 Association for Computational Linguistics … Cited by 24 Related articles All 9 versions

Building Monolingual Word Alignment Corpus for the Greater China Region F Xu, X Xu, M Wang, M Li – Joint Workshop on Language Technology for …, 2015 – aclweb.org … In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 81-88. … 2013. Word Alignment Modeling with Context Dependent Deep Neural Network. … Related articles All 10 versions

From light to rich ERE: annotation of entities, relations, and events Z Song, A Bies, S Strassel, T Riese, J Mott… – Proceedings of the 3rd …, 2015 – aclweb.org … per 1,000 tokens. Triggers were automatically tagged using a deep neural network based tagger trained on the ACE 2005 annotations (Walker et al., 2006) with orthographic and word embedding features. The word embeddings … Cited by 10 Related articles All 7 versions

Predicting Implicit Discourse Relations with Purely Distributed Representations H Li, J Zhang, C Zong – … and Natural Language Processing Based on …, 2015 – Springer … Furthermore, we explore different algorithms for representation learning, such as deep neural networks (DNN) and principle component analysis (PCA). … In: Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers, pp. … Cited by 1 Related articles All 2 versions

From feedforward to recurrent LSTM neural networks for language modeling M Sundermeyer, H Ney, R Schlüter – IEEE/ACM Transactions on Audio, …, 2015 – dl.acm.org … Eg, in acoustic modeling it was observed that deep neural networks greatly improve over shallow architectures [11], where in the former case multiple neural network layers are stacked on top of each other, as opposed to the latter case where only a single hidden layer is used. … Cited by 35 Related articles All 4 versions

EXPERT Innovations in Terminology Extraction and Ontology Induction L Tan – Hernani Costa, Anna Zaretskaya, Gloria Corpas Pastor – researchgate.net … Automatically Labeling Semantic Classes. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics: HLT-NAACL 2004. … Learning Hierarchical Category Structure in Deep Neural Networks. … Related articles All 6 versions

Learning to rap battle with bilingual recursive neural networks D Wu, K Addanki – Proceedings of the 24th International …, 2015 – pdfs.semanticscholar.org … Dekai Wu and Karteek Addanki Human Language Technology Center Department of Computer Science and Engineering Hong Kong University of Science and Technology … A unified architecture for natural language processing: Deep neural networks with multitask learning. … Cited by 2 Related articles All 10 versions

Big data small data, in domain out-of domain, known word unknown word: The impact of word representation on sequence labelling tasks L Qu, G Ferraro, L Zhou, W Hou, N Schneider… – arXiv preprint arXiv: …, 2015 – arxiv.org Page 1. Big Data Small Data, In Domain Out-of Domain, Known Word Unknown Word: The Impact of Word Representation on Sequence Labelling Tasks Lizhen Qu1,2, Gabriela Ferraro1,2, Liyuan Zhou1, Weiwei Hou1, Nathan … Cited by 7 Related articles All 11 versions

Text clustering using VSM with feature clusters C Qimin, G Qiao, W Yongliang, W Xianghua – Neural Computing and …, 2015 – Springer … In: Proceedings of the conference on human language technology and empirical methods in natural language processing, pp 755–762. 9. Doucet A … Collobert R, Weston J (2008) A unified architecture for natural language processing: Deep neural networks with multitask learning … Cited by 5 Related articles All 4 versions

Hierarchical learning of grids of microtopics N Jojic, A Perina, D Kim – arXiv preprint arXiv:1503.03701, 2015 – arxiv.org Page 1. arXiv:1503.03701v3 [stat.ML] 13 Nov 2015 Under review as a conference paper at ICLR 2016 HIERARCHICAL LEARNING OF GRIDS OF MICROTOPICS Nebojsa Jojic, Alessandro Perina and Dongwoo Kim Microsoft … Related articles All 4 versions

Word Alignment for Statistical Machine Translation Using Hidden Markov Models AM Bigvand – 2015 – cs.sfu.ca Page 1. Word Alignment for Statistical Machine Translation Using Hidden Markov Models by Anahita Mansouri Bigvand A Depth Report Submitted in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy in the Department of Computing Science … Related articles

Speaker adaptive joint training of Gaussian mixture models and bottleneck features P Golik, R Schlüter, H Ney – 2015 IEEE Workshop on Automatic …, 2015 – ieeexplore.ieee.org … Zoltán Tüske, Pavel Golik, Ralf Schlüter, Hermann Ney Human Language Technology and Pattern Recognition, Computer Science Department, RWTH Aachen University, 52056 Aachen, Germany {tuske, golik, schlueter, ney}@cs.rwth-aachen.de ABSTRACT … Cited by 3 Related articles All 3 versions

Chinese microblog sentiment classification based on convolution neural network with content extension method X Sun, F Gao, C Li, F Ren – Affective Computing and Intelligent …, 2015 – ieeexplore.ieee.org … the abstract space. This transformation of feature expression optimizes the sample parameter feature and streamlines the whole training process. DBN is the deep neural network with cascading Boltzmann model. The task of … Cited by 1 Related articles All 3 versions

Modern Standard Arabic Speech Corpus N Halabi – 2015 – en.arabicspeechcorpus.com … Diacritisation is the process of adding those diacritics to Arabic script. DNN: Deep Neural Network. In simple terms, Neural Networks which have more complicated and … Page 17. 13 2007). However, recently, Deep Neural Networks (DNNs) have been used to synthesise speech … Related articles

A CRF-based system for recognizing chemical entity mentions (CEMs) in biomedical literature S Xu, X An, L Zhu, Y Zhang… – Journal of …, 2015 – jcheminf.springeropen.com … Cited by 3 Related articles All 10 versions

End-to-end learning of semantic role labeling using recurrent neural networks J Zhou, W Xu – Proceedings of the Annual Meeting of the Association …, 2015 – aclweb.org Page 1. Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 1127–1137, Beijing, China, July 26-31, 2015. ©2015 Association for Computational Linguistics … Cited by 36 Related articles All 5 versions

Unsupervised and Lightly Supervised Part-of-Speech Tagging Using Recurrent Neural Networks O Zennaki, N Semmar, L Besacier – 2015 – anthology.aclweb.org … 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning, In Proceedings of the International Conference on Machine Learning (ICML):160–167. R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa. … Cited by 5 Related articles All 9 versions

Distributed Vector Space Models for Semantic MT Evaluation in MEANT PC Dowling – philipp.dowling.io … running experiments. Additionally, I would like to thank everyone at the Human Language Technology Center MT group at HKUST for offering me advice and expertise that was crucial in completing my work. Special thanks … Related articles

Comparing word representations for implicit discourse relation classification C Braud, P Denis – Empirical Methods in Natural Language Processing ( …, 2015 – hal.inria.fr Page 1. Comparing Word Representations for Implicit Discourse Relation Classification Chloé Braud, Pascal Denis To cite this version: Chloé Braud, Pascal Denis. Comparing Word Representations for Implicit Discourse Relation Classification. … Cited by 19 Related articles All 16 versions

Task-oriented learning of word embeddings for semantic relation classification K Hashimoto, P Stenetorp, M Miwa… – arXiv preprint arXiv: …, 2015 – arxiv.org … Concretely, elements in e are randomly omitted with a probability of 0.5 at each training step. Recently dropout has been applied to deep neural network models for natural language processing tasks and proven effective (Irsoy and Cardie, 2014; Paulus et al., 2014). … Cited by 10 Related articles All 14 versions
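
The regularisation described in the snippet above, omitting each element of the representation with probability 0.5 during training, is standard dropout; a minimal sketch follows (the inverted-dropout rescaling is my own choice, not necessarily the paper's):

    import numpy as np

    rng = np.random.default_rng(3)

    def dropout(e, p=0.5, train=True):
        """Randomly zero each element of e with probability p during training."""
        if not train:
            return e
        mask = rng.random(e.shape) >= p
        return e * mask / (1.0 - p)    # rescale so the expected value is unchanged

    e = np.ones(8)
    print(dropout(e))               # roughly half the entries zeroed, rest scaled to 2.0
    print(dropout(e, train=False))  # unchanged at test time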

Structural information aware deep semi-supervised recurrent neural network for sentiment analysis W Rong, B Peng, Y Ouyang, C Li, Z Xiong – Frontiers of Computer Science, 2015 – Springer … unlabelled data. As to the neural network, previously proposed deep neural networks with traditional back propagation algorithm did not get satisfied performance, partially due to not being initialized properly [19,54]. Traditionally … Cited by 3 Related articles All 4 versions

SensEmbed: learning sense embeddings for word and relational similarity I Iacobacci, MT Pilehvar, R Navigli – Proceedings of ACL, 2015 – anthology.aclweb.org Page 1. Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 95–105, Beijing, China, July 26-31, 2015. ©2015 Association for Computational Linguistics … Cited by 31 Related articles All 11 versions

Exploiting resources from closely-related languages for automatic speech recognition in low-resource languages from Malaysia S Juan, S Flora – 2015 – theses.fr … 40 3.4.2 Deep Neural Networks . . . . . … SGMM Subspace Gaussian Mixture Model UBM Universal Background Model DNN Deep Neural Network RBM Restricted Boltzmann Machines DBN Deep Belief Network EM Expected Maximization IPA International Phonetic Alphabets … Related articles All 2 versions

Investigations on phrase-based decoding with recurrent neural network language and translation models T Alkhouli, F Rietig, H Ney – Proc. WMT, 2015 – aclweb.org … Investigations on Phrase-based Decoding with Recurrent Neural Network Language and Translation Models Tamer Alkhouli, Felix Rietig, and Hermann Ney Human Language Technology and Pattern Recognition Group RWTH Aachen … Deep neural network language models. … Cited by 6 Related articles All 15 versions

Regularized minimum variance distortionless response-based cepstral features for robust continuous speech recognition MJ Alam, P Kenny, D O’Shaughnessy – Speech Communication, 2015 – Elsevier In this paper, we present robust feature extractors that incorporate a regularized minimum variance distortionless response (RMVDR) spectrum estimator instead o. Cited by 3 Related articles All 6 versions

Character-based parsing with convolutional neural network X Zheng, H Peng, Y Chen… – Proceedings of the …, 2015 – pdfs.semanticscholar.org Page 1. Abstract We describe a novel convolutional neural network architecture with k-max pooling layer that is able to successfully recover the structure of Chinese sentences. This network can capture active features for unseen … Cited by 1 Related articles All 7 versions

Integrating word embeddings and traditional NLP features to measure textual entailment and semantic relatedness of sentence pairs J Zhao, M Lan, ZY Niu, Y Lu – 2015 International Joint …, 2015 – ieeexplore.ieee.org Page 1. Integrating Word Embeddings and Traditional NLP Features to Measure Textual Entailment and Semantic Relatedness of Sentence Pairs Jiang Zhao2, Man Lan*1,2, Zheng-Yu Niu3, Yue Lu1,2 1 Shanghai Key Laboratory … Related articles

Analysis of negation cues for semantic orientation classification of reviews in spanish SN Galicia-Haro, A Palomino-Garibay… – … Conference on Artificial …, 2015 – Springer … Recent trends in opinion mining and sentiment analysis focus on the use of deep neural networks, such as Convolutional Neural Network [8]. Bag-of-concepts-based approach is also gaining attention in sentiment analysis context [9–11]. … Related articles

Efficient Second-Order Gradient Boosting for Conditional Random Fields. T Chen, S Singh, B Taskar, C Guestrin – AISTATS, 2015 – jmlr.org Page 1. Efficient Second-Order Gradient Boosting for Conditional Random Fields Tianqi Chen Sameer Singh Ben Taskar Carlos Guestrin Computer Science and Engineering, University of Washington, Seattle WA {tqchen,sameer,taskar,guestrin}@cs.washington.edu Abstract … Cited by 5 Related articles All 10 versions

A unified multilingual semantic representation of concepts J Camacho-Collados, MT Pilehvar… – Proceedings of ACL, …, 2015 – aclweb.org Page 1. Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 741–751, Beijing, China, July 26-31, 2015. © 2015 Association for Computational Linguistics … Cited by 14 Related articles All 11 versions

Toward deep learning software repositories M White, C Vendome… – 2015 IEEE/ACM 12th …, 2015 – ieeexplore.ieee.org … From a temporal perspective, this recurrent neural network (RNN) can be viewed as a very deep neural network [23], [39]–[42], where depth is the length of the longest path from an input node to an output node, and the purpose of the depth in this case is to reliably model … Cited by 17 Related articles All 7 versions
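
The remark that an RNN "can be viewed as a very deep neural network" whose depth is the longest input-to-output path is easy to make concrete: unrolling the recurrence over T time steps stacks T applications of the same weights. A minimal numpy sketch with illustrative dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, T = 8, 16, 20          # input size, state size, sequence length

W_xh = rng.normal(0, 0.1, (n_in, n_hidden))
W_hh = rng.normal(0, 0.1, (n_hidden, n_hidden))

def unrolled_rnn(inputs):
    """Unrolled simple RNN: each time step adds one layer of depth, so the
    path from the first input to the last state has length T."""
    h = np.zeros(n_hidden)
    for x_t in inputs:                  # T reuses of the same weights
        h = np.tanh(x_t @ W_xh + h @ W_hh)
    return h

x = rng.normal(size=(T, n_in))
print(unrolled_rnn(x).shape)            # (16,)
```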

Towards a Framework for Winograd Schemas Resolution N Bova, M Rovatsos – essence-network.com Page 1. Towards a Framework for Winograd Schemas Resolution Nicola Bova School of Informatics The University of Edinburgh Edinburgh, UK EH8-9AB Email: nbova@inf.ed.ac.uk Michael Rovatsos School of Informatics The … Related articles

Topic Identification and Discovery on Text and Speech C May, F Ferraro, A McCree, J Wintrode… – cs.jhu.edu … Chandler May, Francis Ferraro, Alan McCree, Jonathan Wintrode, Daniel Garcia-Romero, and Benjamin Van Durme Human Language Technology Center of … The deep neural network (DNN) used to infer the triphone state cluster posteriors forming the basis of our speech data … Cited by 3 Related articles All 11 versions

What Makes it Difficult to Understand a Scientific Literature? M Cao, J Tian, D Cheng, J Liu… – 2015 11th International …, 2015 – ieeexplore.ieee.org … and efficient way. Recently, statistical machine learning methods such as topic models [20] and deep neural network (NN) models [16][17][18] have achieved significant progress on many NLP sub-tasks [20]. For example, the … Related articles All 4 versions

Culture Clubs: Processing Speech by Deriving and Exploiting Linguistic Subcultures DG Brizan – 2015 – gc.cuny.edu Page 1. Culture Clubs: Processing Speech by Deriving and Exploiting Linguistic Subcultures by David Guy Brizan Thesis Proposal for the degree of Doctor of Philosophy at the Graduate Center of the City University of New York … Related articles All 2 versions

Selected Topics in Audio-based Recommendation of TV Content SE Shepstone – 2015 – vbn.aau.dk … ASR Automatic Speech Recognition CC Common Condition DCF Detection Cost Function DET Detection Error Tradeoff DNN Deep Neural Network EER Equal Error Rate EM Expectation Maximization EPG Electronic Program Guide GMM Gaussian Mixture Model GVC Group … Related articles All 3 versions

Towards a Computational Framework for Winograd Schemas Resolution N Bova, M Rovatsos – essence-network.com Page 1. Towards a Computational Framework for Winograd Schemas Resolution Nicola Bova School of Informatics The University of Edinburgh Edinburgh, UK EH8-9AB Email: nbova@inf.ed.ac.uk Michael Rovatsos School … Related articles

Terminology and ontology L Tan, J van Genabith, M Zampieri, A Schumann… – 2015 – expert-itn.eu Page 1. Project reference: 317471 Project full title: EXPloiting Empirical appRoaches to Translation D4.2: Terminology and Ontology Authors: Liling Tan (USAAR) Contributors: Josef van Genabith (USAAR), Marcos Zampieri … Cited by 1 Related articles

Approximation-aware dependency parsing by belief propagation MR Gormley, M Dredze, J Eisner – arXiv preprint arXiv:1508.02375, 2015 – arxiv.org … Gormley Mark Dredze Jason Eisner Department of Computer Science Center for Language and Speech Processing Human Language Technology Center of … L2 Distance We can view our inference, decoder, and loss as defining a form of deep neural network, whose topology … Cited by 4 Related articles All 16 versions

Evaluating word embeddings and a revised corpus for part-of-speech tagging in Portuguese ER Fonseca, JLG Rosa… – Journal of the …, 2015 – journal-bcs.springeropen.com … Cited by 8 Related articles All 6 versions
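
A common way to use word embeddings for POS tagging (a hedged sketch of the general window approach, not necessarily the paper's exact architecture) is to feed a classifier the concatenated embeddings of a fixed window around the target token; the padding, dimensions, and empty embedding table below are placeholders:

```python
import numpy as np

DIM, WINDOW = 50, 2                     # embedding size and half-window (illustrative)
PAD = np.zeros(DIM)                     # padding vector for sentence boundaries
emb = {}                                # would hold pretrained vectors: word -> np.ndarray

def lookup(word):
    return emb.get(word.lower(), np.zeros(DIM))  # zeros for unknown words

def window_features(tokens, i):
    """Concatenate embeddings of tokens[i-WINDOW .. i+WINDOW] into one feature vector."""
    parts = []
    for j in range(i - WINDOW, i + WINDOW + 1):
        parts.append(lookup(tokens[j]) if 0 <= j < len(tokens) else PAD)
    return np.concatenate(parts)        # shape: ((2*WINDOW+1) * DIM,)

feats = window_features("o gato sentou no tapete".split(), 2)
print(feats.shape)                      # (250,)
```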

Graph-based Semi-supervised Acoustic Modeling for Automatic Speech Recognition Y Liu – 2015 – ssli.ee.washington.edu Page 1. Graph-based Semi-supervised Acoustic Modeling for Automatic Speech Recognition Yuzong Liu Mar 30, 2015 Abstract In this thesis proposal, we investigate how to apply graph-based semi-supervised learning to acoustic modeling in speech recognition. … Related articles All 2 versions

Deep learning for sentiment analysis: successful approaches and future challenges D Tang, B Qin, T Liu – Wiley Interdisciplinary Reviews: Data …, 2015 – Wiley Online Library … Cited by 5 Related articles All 4 versions

Topic segmentation on spoken documents using self-validated acoustic cuts H Chen, L Xie, W Feng, L Zheng, Y Zhang – Soft Computing, 2015 – Springer … Some new and effective acoustic representations have been proposed recently, e.g., intrinsic spectral analysis (ISA) (Jansen et al. 2012), point process model (Kintzley et al. 2012) and deep neural network (DNN) posteriorgrams (Zhang et al. 2012). … Cited by 1 Related articles All 3 versions
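
A posteriorgram, as referenced in the snippet, is just the per-frame vector of class posteriors stacked over time. As an illustration independent of the paper's segmentation method, the sketch below turns arbitrary per-frame network outputs into a posteriorgram with a row-wise softmax:

```python
import numpy as np

def softmax(logits, axis=-1):
    z = logits - logits.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def posteriorgram(frame_logits):
    """frame_logits: (n_frames, n_classes) raw DNN outputs.
    Returns the (n_frames, n_classes) matrix of class posteriors per frame."""
    return softmax(frame_logits, axis=1)

logits = np.random.randn(100, 40)                # 100 frames, 40 classes (illustrative)
P = posteriorgram(logits)
print(P.shape, np.allclose(P.sum(axis=1), 1.0))  # (100, 40) True
```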

Learning multi-faceted representations of individuals from heterogeneous evidence using neural networks J Li, A Ritter, D Jurafsky – arXiv preprint arXiv:1510.05198, 2015 – arxiv.org Page 1. Learning multi-faceted representations of individuals from heterogeneous evidence using neural networks Jiwei Li Computer Science Department Stanford University jiweil@stanford.edu Alan Ritter Dept. of Computer … Cited by 5 Related articles All 3 versions

Topics, Trends, and Resources in Natural Language Processing (NLP) M Bansal – Citeseer Page 1. Topics, Trends, and Resources in Natural Language Processing (NLP) Mohit Bansal TTI-Chicago (CSC2523, ‘Visual Recognition with Text’, UToronto, Winter 2015 – 01/21/2015) (various slides adapted/borrowed from Dan Klein’s and Chris Manning’s course slides) … Related articles All 2 versions

Sentiment Analysis Using Social Multimedia J Yuan, Q You, J Luo – Multimedia Data Mining and Analytics, 2015 – Springer Cited by 3 Related articles All 3 versions

Bringing machine learning and compositional semantics together P Liang, C Potts – Annu. Rev. Linguist., 2015 – annualreviews.org Cited by 20 Related articles All 4 versions

F0 Modeling For Singing Voice Synthesizers with LSTM Recurrent Neural Networks S Özer – mtg.upf.edu … achieve text-to-speech (TTS) synthesis. DBLSTMs are deep, that is, they have multiple hidden layers and build deep representations of the input features, as deep neural networks (DNNs) do. Deep-layer architectures allow complex function estimations by creating … Related articles

Opinion mining and sentiment analysis E Breck, C Cardie – cs.cornell.edu Page 1. 39. Opinion mining and sentiment analysis Eric Breck and Claire Cardie Abstract Opinions are ubiquitous in text, and readers of on-line text — from consumers to sports fans to news addicts to governments — can benefit from au- … Related articles All 4 versions

Extractive broadcast news summarization leveraging recurrent neural network language modeling techniques KY Chen, SH Liu, B Chen, HM Wang… – … on Audio, Speech, …, 2015 – ieeexplore.ieee.org … 5 virtually unfolds the feedback loop of the RNNLM, making its model structure bear a close resemblance to the family of so-called deep neural networks [40] and thereby learn to remember word usage information for several time steps, which is packed into the hidden layer of the RNNLM … Cited by 7 Related articles All 9 versions
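
The "unfolded feedback loop" of an RNN language model shows up in how it scores a sequence: the hidden state carries the word-usage history, and the model's score is the sum of per-step next-word log-probabilities. A minimal numpy sketch with a toy vocabulary and random weights (illustrative only, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(1)
V, H = 10, 16                            # toy vocabulary size and hidden size
E = rng.normal(0, 0.1, (V, H))           # input word embeddings
W_hh = rng.normal(0, 0.1, (H, H))        # recurrent weights (the unfolded feedback loop)
W_out = rng.normal(0, 0.1, (H, V))       # output projection

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def sequence_log_prob(word_ids):
    """Sum of log P(w_t | w_<t); the hidden state packs the history at each step."""
    h, logp = np.zeros(H), 0.0
    for prev, nxt in zip(word_ids[:-1], word_ids[1:]):
        h = np.tanh(E[prev] + h @ W_hh)  # recurrence carries word-usage information
        logp += np.log(softmax(h @ W_out)[nxt])
    return logp

print(sequence_log_prob([0, 3, 7, 2, 9]))
```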

Domain adaptation in semantic role labeling using a neural language model and linguistic resources QTN Do, S Bethard, MF Moens – IEEE/ACM Transactions on …, 2015 – ieeexplore.ieee.org … Deep learning techniques based on semi-supervised embeddings have been used to improve an SRL system [5]. This track has been pursued further, using a deep neural network architecture to obtain good word representations in the form of word embeddings [6]. The neural … Cited by 2 Related articles All 4 versions

Semantic and stylistic text analysis and text summary evaluation H Heuer – aaltodoc.aalto.fi Page 1. Aalto University School of Science Master’s Programme in ICT Innovation Hendrik Heuer Semantic and stylistic text analysis and text summary evaluation Master’s Thesis Stockholm, July 20, 2015 Supervisors: Prof. Jussi Karlgren, KTH Royal Institute of Technology Prof. … Related articles All 2 versions

Continuous sign language recognition: Towards large vocabulary statistical recognition systems handling multiple signers O Koller, J Forster, H Ney – Computer Vision and Image Understanding, 2015 – Elsevier … Human Language Technology and Pattern Recognition, RWTH Aachen University, Germany. Received 30 October 2014, Revised 22 September 2015, Accepted 23 September 2015, Available online 1 November 2015. Highlights. • … Cited by 10 Related articles All 4 versions

Arabic Text Recognition and Machine Translation I Alkhoury – 2015 – riunet.upv.es Page 1. UNIVERSITAT POLITÈCNICA DE VALÈNCIA DEPARTAMENT DE SISTEMES INFORMÀTICS I COMPUTACIÓ Arabic Text Recognition and Machine Translation Thesis presented by Ihab Al-Khoury supervised by Dr. Alfons Juan Císcar and Dr. Jesús Andrés Ferrer … Related articles

Improving Short Utterance Speaker Recognition by Modeling Speech Unit Classes C Zhang, D Wang, L Li, TF Zheng – cslt.riit.tsinghua.edu.cn Page 1. Zhang et al. CSLT TECHNICAL REPORT-20150015 [Monday 24th August, 2015] Improving Short Utterance Speaker Recognition by Modeling Speech Unit Classes Chenhao Zhang1, Dong Wang1, Lantian Li1 and Thomas Fang Zheng1* … Related articles

Natural Language Understanding and Prediction Technologies N Duta – ijcai15.org Page 1. 1 IJCAI 2015 Tutorial Nicolae Duta Cloud ML @ Microsoft Natural Language Understanding and Prediction Technologies Page 2. 2 IJCAI 2015 Tutorial Outline •Voice and language technologies: history, examples and technological challenges … Related articles All 2 versions

CODRA: A novel discriminative framework for rhetorical analysis S Joty, G Carenini, RT Ng – Computational Linguistics, 2015 – MIT Press … 2013a), sentiment analysis (Socher et al. 2013b), and various tagging tasks (Collobert et al. 2011), a couple of recent studies in discourse parsing also use deep neural networks (DNNs) and related feature representation methods. Inspired by the work of Socher et al. … Cited by 18 Related articles All 13 versions

Automated Cross-Platform Code Synthesis from Web-Based Programming Resources A Byalik – 2015 – vtechworks.lib.vt.edu Page 1. Automated Cross-Platform Code Synthesis from Web-Based Programming Resources Antuan Byalik Thesis submitted to the Faculty of the Virginia Polytechnic Institute and State University in partial fulfillment of the requirements for the degree of … Related articles

Modelling Syntactic and Semantic Tasks with Linguistically Enriched Recursive Neural Networks J Mallinson – 2015 – pdfs.semanticscholar.org Page 1. Modelling Syntactic and Semantic Tasks with Linguistically Enriched Recursive Neural Networks MSc Thesis (Afstudeerscriptie) written by Jonathan Mallinson (born December 29, 1989 in Ipswich, United Kingdom) under … Related articles All 3 versions

[BOOK] Non-Linguistic Analysis of Call Center Conversations SK Kopparapu – 2015 – Springer Page 1. SPRINGER BRIEFS IN ELECTRICAL AND COMPUTER ENGINEERING Sunil Kumar Kopparapu Non-Linguistic Analysis of Call Center Conversations … Cited by 5 Related articles All 6 versions

A probabilistic framework for representing dialog systems and entropy-based dialog management through dynamic stochastic state evolution J Wu, M Li, CH Lee – IEEE/ACM Transactions on Audio, Speech …, 2015 – ieeexplore.ieee.org … Cited by 4 Related articles All 6 versions

Exploring machine learning design options in discourse parsing W Liao – 2015 – open.library.ubc.ca … Related articles All 2 versions

Distributed conditional computation N Léonard – 2015 – papyrus.bib.umontreal.ca Page 1. Université de Montréal. Distributed Conditional Computation, by Nicholas Léonard, Département d’informatique et de recherche opérationnelle, Faculté des arts et des sciences. Thesis presented to the Faculté des arts … Related articles

Learning for Spoken Dialog Systems with Discriminative Graphical Models Y Ma – 2015 – etd.ohiolink.edu … Most recently, due to the significant performance improvement of the ASR systems that use deep neural networks to model the acoustics of speech, developing mobile applications that use a spoken natural-language user interface to answer user-initiated queries has been one … Related articles All 3 versions