LSTM (Long Short Term Memory) & Dialog Systems 2017


Notes:

Long short-term memory (LSTM) is a recurrent neural network (RNN) architecture, a class of artificial neural network (ANN). 2017 saw a veritable explosion in academic publications on LSTM-based dialog systems.

  • Text generation
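As background for the entries below, the gating that distinguishes an LSTM cell from a plain RNN can be sketched in a few lines of NumPy. This is a minimal, illustrative single-cell forward step; the stacked weight layout, gate order, and all names are assumptions of this sketch, not any particular paper's implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM forward step.

    W stacks the four gate weight matrices, shape (4*H, H+D);
    b stacks the biases, shape (4*H,). Gate order here: input,
    forget, output, candidate (an illustrative layout)."""
    H = h_prev.shape[0]
    z = W @ np.concatenate([h_prev, x]) + b
    i = sigmoid(z[0:H])            # input gate: admit new information
    f = sigmoid(z[H:2 * H])        # forget gate: decay old cell memory
    o = sigmoid(z[2 * H:3 * H])    # output gate: expose cell to output
    g = np.tanh(z[3 * H:4 * H])    # candidate cell update
    c = f * c_prev + i * g         # cell state carries long-range memory
    h = o * np.tanh(c)             # hidden state / output
    return h, c

# Fold a toy 5-step input sequence through the cell.
rng = np.random.default_rng(0)
D, H = 3, 4                        # input and hidden sizes
W = rng.standard_normal((4 * H, H + D)) * 0.1
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for x in rng.standard_normal((5, D)):
    h, c = lstm_step(x, h, c, W, b)
```

The cell state c is what lets information survive long contexts: the forget gate scales it multiplicatively each step instead of pushing it through a squashing nonlinearity, which is the vanishing-gradient fix several of the papers below rely on.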

Resources:

See also:

100 Best Convolutional Neural Network Videos | 100 Best Java Neural Network Videos | 100 Best MATLAB Neural Network Videos | 100 Best Neural Network Training Videos | 100 Best Neural Network Tutorial Videos | 100 Best Recurrent Neural Network Videos | CNN (Convolutional Neural Network) & Dialog Systems 2016 | DNN (Deep Neural Network) & Human Language Technology 2017 | Natural Language Generation, Deep Neural Networks & Dialog Systems 2017 | Neural Conversation Models 2016 | Neural Dialog Models | Neural Dialog Systems | Neural Language Models 2016 | Neural Network & Dialog Systems 2016 | Neural Question Generation 2017 | Neural Summarization | Neural Turing Machines 2016 | NMT (Neural Machine Translation) & Dialog Systems 2016 | NSCA (Neural-Symbolic Cognitive Agent) | PNN (Probabilistic Neural Network) & Dialog Systems | RNN (Recurrent Neural Network) & Dialog Systems 2016 | RNN (Recurrent Neural Network) & Question Answering Systems 2016


A Hierarchical Latent Variable Encoder-Decoder Model for Generating Dialogues.
IV Serban, A Sordoni, R Lowe, L Charlin, J Pineau… – AAAI, 2017 – aaai.org
… Human Evaluation: Evaluation of dialogue system responses is a difficult and open problem (Galley and others 2015; Pietquin and Hastie 2013) … VHRED is also strongly preferred over the LSTM baseline model for long contexts, although the LSTM model is preferred over …

Hybrid code networks: practical and efficient end-to-end dialog control with supervised and reinforcement learning
JD Williams, K Asadi, G Zweig – arXiv preprint arXiv:1702.03274, 2017 – arxiv.org
… Moreover, in some practical settings, programmed constraints are essential – for example, a banking dialog system would require that a user is logged … to form a feature vector (step 6). This vector is passed to an RNN, such as a long short-term memory (LSTM) (Hochreiter and …

End-to-end joint learning of natural language understanding and dialogue manager
X Yang, YN Chen, D Hakkani-Tür… – … , Speech and Signal …, 2017 – ieeexplore.ieee.org
… Index Terms— language understanding, spoken dialogue systems, end-to-end, dialogue manager, deep … as a multi-tasking framework by sharing bi-directional long short-term memory (biLSTM) layers … LSTM cells are chosen as recurrent units since LSTM can mitigate problems …

Multiresolution Recurrent Neural Networks: An Application to Dialogue Response Generation.
IV Serban, T Klinger, G Tesauro, K Talamadupula… – AAAI, 2017 – aaai.org
… 2016). The RNNLM model has 2000 hidden units with the LSTM gating function … Evaluation Methods It has long been known that accurate evaluation of dialogue system responses is difficult (Schatzmann, Georgila, and Young 2005). Liu et al …

Emotional chatting machine: emotional conversation generation with internal and external memory
H Zhou, M Huang, T Zhang, X Zhu, B Liu – arXiv preprint arXiv:1704.01074, 2017 – arxiv.org
… Emotional intelligence is one of the key factors to the success of dialogue systems or conversational agents … we compare several models for emotion classification, including a dictionary based classifier (denoted by Dict in Table 2), RNN (Mikolov et al., 2010), LSTM (Hochreiter …

GuessWhat?! Visual object discovery through multi-modal dialogue
H De Vries, F Strub, S Chandar, O Pietquin… – Proc. of …, 2017 – openaccess.thecvf.com
… Although goal-directed dialogue systems are appealing, they remain hard to design … Finally, the embedding of the current natural language question q is computed using a Long Short-Term Memory (LSTM) network [15] where questions are first tokenized by using the word …

A deep reinforced model for abstractive summarization
R Paulus, C Xiong, R Socher – arXiv preprint arXiv:1705.04304, 2017 – arxiv.org
… These models use recurrent neural networks (RNN), such as the long short-term memory network (LSTM) (Hochreiter and Schmidhuber, 1997), to encode an input sentence into a fixed vector, and create a new output sequence from that vector using another RNN …

SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient.
L Yu, W Zhang, J Wang, Y Yu – AAAI, 2017 – aaai.org
… Recently, recurrent neural networks (RNNs) with long short-term memory (LSTM) cells (Hochreiter and Schmidhuber 1997) have shown excellent performance ranging from natural language generation to handwriting generation (Wen et al. 2015; Graves 2013) …

Training end-to-end dialogue systems with the ubuntu dialogue corpus
RT Lowe, N Pow, IV Serban, L Charlin… – Dialogue & …, 2017 – dad.uni-bielefeld.de
… This mimics the training of dialogue systems in practice, where we only have access to data in the past, and want to answer user queries in the … The approaches considered are: TF-IDF, and models using Recurrent Neural Networks (RNN) and Long Short-Term Memory (LSTM) …

Encoder-decoder with focus-mechanism for sequence labelling based spoken language understanding
S Zhu, K Yu – … , Speech and Signal Processing (ICASSP), 2017 …, 2017 – ieeexplore.ieee.org
… In a spoken dialogue system, the Spoken Language Understanding (SLU) is a key component that parses user utterances into … task, such as simple recurrent neural networks (RNNs) [7, 8, 9], convolutional neural networks (CNNs) [10], long short-term memory (LSTM) [11] and …

End-to-end optimization of goal-driven and visually grounded dialogue systems
F Strub, H De Vries, J Mary, B Piot, A Courville… – arXiv preprint arXiv …, 2017 – arxiv.org
… game that will serve as a task for our dialogue system, but refer to [de Vries et al., 2016] for more details regarding the task and the exact content … w^j_{1:i} by applying the transition function f: s^j_{i+1} = f(s^j_i, w^j_i). We use the popular long short-term memory (LSTM) cell [Hochreiter …

Adversarial learning for neural dialogue generation
J Li, W Monroe, T Shi, A Ritter, D Jurafsky – arXiv preprint arXiv …, 2017 – arxiv.org
… To be specific, each utterance p or q is mapped to a vector representation hp or hq using an LSTM (Hochreiter and Schmidhuber, 1997). Another LSTM is put on the sentence level, mapping the entire dialogue sequence to a single representation …

A copy-augmented sequence-to-sequence architecture gives good performance on task-oriented dialogue
M Eric, CD Manning – arXiv preprint arXiv:1701.04024, 2017 – arxiv.org
… dialogue systems (Ritter et al., 2011; Li et al., 2015) … In the table, Seq2Seq refers to our vanilla encoder-decoder architecture with (1), (2), and (3) LSTM layers respectively. +Attn refers to a 1-layer Seq2Seq with attention-based decoding … 1997. Long short-term memory …

Affect-lm: A neural language model for customizable affective text generation
S Ghosh, M Chollet, E Laksana, LP Morency… – arXiv preprint arXiv …, 2017 – arxiv.org
… is of great importance to understanding spoken language systems, particularly for emerging applications such as dialogue systems and conversational … wt is predicted from a context of all preceding words w1, w2, …, wt−1 with an LSTM (Long Short-Term Memory) neural net …

Variational Autoencoder for Semi-Supervised Text Classification.
W Xu, H Sun, C Deng, Y Tan – AAAI, 2017 – aaai.org
… Ghosh, S.; Vinyals, O.; Strope, B.; Roy, S.; Dean, T.; and Heck, L. 2016. Contextual lstm (clstm) models for large scale nlp tasks. arXiv preprint arXiv:1602.06291. Hochreiter, S., and Schmidhuber, J. 1997. Long short-term memory. Neural Computation 9(8):1735–1780 …

Attention-based multimodal fusion for video description
C Hori, T Hori, TY Lee, Z Zhang… – … (ICCV), 2017 IEEE …, 2017 – openaccess.thecvf.com
… Then the output sequence (word sequence) is generated from the semantic vector. In this case, both the encoder and the decoder (sentence generator) are usually modeled as Long Short-Term Memory (LSTM) networks. Given …

Generative encoder-decoder models for task-oriented spoken dialog systems with chatting capability
T Zhao, A Lu, K Lee, M Eskenazi – arXiv preprint arXiv:1706.08476, 2017 – arxiv.org
… Figure 2: The proposed pipeline for task-oriented dialog systems … Then a Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) network reads the sequence of turn embeddings in the dialog history via the recursive state update si+1 = LSTM(ti+1, hi), in which hi is …

Toward abstraction from multi-modal data: empirical studies on multiple time-scale recurrent models
J Zhong, A Cangelosi, T Ogata – Neural Networks (IJCNN) …, 2017 – ieeexplore.ieee.org
… attempt was the long short-term memory (LSTM) [6], which consists of various gating functions controlled by simple element-wise operations. Since it was designed, it has achieved satisfying results in competitions [7] as well as in tasks such as dialogue systems [8], sentiment …

A Unified Model for Cross-Domain and Semi-Supervised Named Entity Recognition in Chinese Social Media.
H He, X Sun – AAAI, 2017 – aaai.org
… Hochreiter, S., and Schmidhuber, J. 1997. Long short-term memory. Neural computation 9(8):1735–1780. Huang, Z.; Xu, W.; and Yu, K. 2015. Bidirectional lstm-crf models for sequence tagging. arXiv preprint arXiv:1508.01991. Kawahara, D., and Uchimoto, K. 2008 …

What Happens Next? Future Subevent Prediction Using Contextual Hierarchical LSTM.
L Hu, J Li, L Nie, XL Li, C Shao – AAAI, 2017 – aaai.org
… Manshadi et al. (2008) learned a probabilistic language model of the event sequences. Pichotta and Mooney (2016) described a model for statistical script learning using Long Short-Term Memory (LSTM) … Long Short-Term Memory (LSTM) …

Towards an automatic Turing test: Learning to evaluate dialogue responses
R Lowe, M Noseworthy, IV Serban… – arXiv preprint arXiv …, 2017 – arxiv.org
… We believe this is sufficient for making progress as current dialogue systems often generate inappropriate responses … In this paper, we consider RNNs augmented with long short-term memory (LSTM) units (Hochreiter and Schmidhuber, 1997) …

Improving neural machine translation with conditional sequence generative adversarial nets
Z Yang, W Chen, F Wang, B Xu – arXiv preprint arXiv:1703.04887, 2017 – arxiv.org
… RNN-based: Recurrent neural networks have several different formulations, such as the long short-term memory network (LSTM … For the RNN-based architecture, we test LSTM and bidirectional LSTM … we plan to test our method in other NLP tasks, like dialogue systems and question …

Adversarial ranking for language generation
K Lin, D Li, X He, Z Zhang, MT Sun – Advances in Neural Information …, 2017 – papers.nips.cc
… to many applications such as machine translation [1], image captioning [6], and dialogue systems [26 … In this paper, we design the generative model with long short-term memory networks (LSTMs) [11 … An LSTM iteratively takes the embedded features of the current token wt plus …

Enhancing lstm rnn-based speech overlap detection by artificially mixed data
G Hagerer, V Pandit, F Eyben, B Schuller – Audio Engineering Society …, 2017 – aes.org
… This paper presents a new method for Long Short-Term Memory Recurrent Neural Network (LSTM) based speech … improving speaker diarization and speech recognition … a convincingly realistic virtual agent or a dialogue system …

DeepTingle
A Khalifa, GAB Barros, J Togelius – arXiv preprint arXiv:1705.03557, 2017 – arxiv.org
… We believe this may be due to the LSTM reaching its maximum capacity at size of 9. Another experiment aimed at testing the robustness of the network, by testing the effect of unknown … Long short-term memory … Stochastic language generation for spoken dialogue systems …

Deconvolutional paragraph representation learning
Y Zhang, D Shen, G Wang, Z Gan… – Advances in Neural …, 2017 – papers.nips.cc
… toward more applied tasks, such as sentiment analysis [1, 2, 3, 4], machine translation [5, 6, 7], dialogue systems [8, 9 … Recent advances in Recurrent Neural Networks (RNNs) [15], especially Long Short-Term Memory (LSTM) [16] and variants [17], have achieved great success in …

Augmenting end-to-end dialog systems with commonsense knowledge
T Young, E Cambria, I Chaturvedi, M Huang… – arXiv preprint arXiv …, 2017 – arxiv.org
… Hence, in this paper we investigate augmenting end-to-end dialog systems with commonsense knowledge as external memory … Dual-LSTM encoder: As a version of recurrent neural network, a long short-term memory (LSTM) network (Hochreiter and Schmidhuber 1997) is …

Spoken language understanding for a nutrition dialogue system
M Korpusik, J Glass – IEEE/ACM Transactions on Audio …, 2017 – ieeexplore.ieee.org
… In particular, recurrent neural networks and their long short-term memory (LSTM) variant, which addresses the vanishing/exploding gradients problem [57], [58], have …

Learning to decode for future success
J Li, W Monroe, D Jurafsky – arXiv preprint arXiv:1701.06549, 2017 – arxiv.org
… We train two models, a vanilla LSTM (Sutskever et al., 2014) and an attention-based model (Bahdanau et al., 2015) … For the vanilla LSTM, however, due to its relative inferiority, we observe a more significant improvement from the future outcome prediction approach …

Modulating early visual processing by language
H De Vries, F Strub, J Mary, H Larochelle… – Advances in Neural …, 2017 – papers.nips.cc
… In particular, image captioning [16], visual question answering (VQA) [1, 23] and visually grounded dialogue systems [5, 6] constitute … Popular transition functions, like a long short-term memory (LSTM) cell [10] and a Gated Recurrent Unit (GRU) [4], incorporate gating mechanisms …

Incorporating loose-structured knowledge into conversation modeling via recall-gate LSTM
Z Xu, B Liu, B Wang, C Sun… – Neural Networks (IJCNN) …, 2017 – ieeexplore.ieee.org
… Through a recall mechanism with a specially designed recall-gate, background knowledge as global memory can be motivated to cooperate with local cell memory of Long Short-Term Memory (LSTM), so as to enrich the ability of LSTM to capture the implicit semantic clues in …

Latent intention dialogue models
TH Wen, Y Miao, P Blunsom, S Young – arXiv preprint arXiv:1705.10229, 2017 – arxiv.org
… For example, both goal-oriented dialogue systems (Wen et al., 2017; Bordes & Weston, 2017) and sequence-to-sequence learning chatbots … is the distributed utterance representation, which is formed by encoding the user utterance ut with a bidirectional LSTM (Hochreiter & …

Patient subtyping via time-aware LSTM networks
IM Baytas, C Xiao, X Zhang, F Wang, AK Jain… – Proceedings of the 23rd …, 2017 – dl.acm.org
… Long-Short Term Memory (LSTM) has been successfully used in many domains for processing sequential data, and recently applied for analyzing longitudinal patient records … 3 METHODOLOGY 3.1 Time-Aware Long Short Term Memory 3.1.1 Long Short-Term Memory (LSTM) …

How to make context more useful? an empirical study on context-aware neural conversational models
Z Tian, R Yan, L Mou, Y Song, Y Feng… – Proceedings of the 55th …, 2017 – aclweb.org
… 2015. Building end-to-end dialogue systems using generative hierarchical neural network models … 2017. Machine comprehension using match-LSTM and answer pointer … 2016. Cached long short-term memory neural networks for document-level sentiment classification …

Topic Aware Neural Response Generation.
C Xing, W Wu, Y Wu, J Liu, Y Huang, M Zhou, WY Ma – AAAI, 2017 – aaai.org
… Although previous research focused on dialog systems, recently, with the large amount of conversation data available on the Internet, chatbots are becoming … time t and f is a non-linear transformation which can be either a long short-term memory unit (LSTM) (Hochreiter and …

An end-to-end trainable neural network model with belief tracking for task-oriented dialog
B Liu, I Lane – arXiv preprint arXiv:1708.05956, 2017 – arxiv.org
… Young, “Reinforcement learning for parameter estimation in statistical spoken dialogue systems,” Computer Speech … [19] S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural computation … joint semantic frame parsing using bi-directional rnn-lstm,” in Proceedings …

Hierarchical RNN with Static Sentence-Level Attention for Text-Based Speaker Change Detection
Z Meng, L Mou, Z Jin – Proceedings of the 2017 ACM on Conference on …, 2017 – dl.acm.org
… [17] and Li et al. [5], for example, train sequence-to-sequence neural networks to automatically generate replies in an open-domain dialog system … First, we use a long short-term memory (LSTM)-based recurrent neural network (RNN) to capture the meaning of each sentence …

Key-Value Retrieval Networks for Task-Oriented Dialogue
M Eric, CD Manning – arXiv preprint arXiv:1705.05414, 2017 – arxiv.org
… without the need for explicit training of belief or intent trackers as is done in traditional task-oriented dialogue systems … hi = LSTM(φemb(xi), hi−1), where the recurrence uses a long short-term memory unit, as described by (Hochreiter and Schmidhuber, 1997) …

Online adaptation of an attention-based neural network for natural language generation
M Riou, B Jabaian, S Huet… – Proc. Interspeech …, 2017 – pdfs.semanticscholar.org
… in SIGDIAL, 2015. [12] T.-H. Wen, M. Gašic, N. Mrkšic, P.-H. Su, D. Vandyke, and S. Young, “Semantically conditioned LSTM-based natural language generation for spoken dialogue systems,” in EMNLP, 2015. [13] T.-H. Wen …

Recent trends in deep learning based natural language processing
T Young, D Hazarika, S Poria, E Cambria – arXiv preprint arXiv …, 2017 – arxiv.org
… tasks at all levels, ranging from parsing and part-of-speech (POS) tagging, to machine translation and dialog systems … This limitation was overcome by various networks such as long short-term memory (LSTM), gated recurrent units (GRUs), and residual networks (ResNets …

Natural language generation for spoken dialogue system using rnn encoder-decoder networks
VK Tran, LM Nguyen – arXiv preprint arXiv:1706.00139, 2017 – arxiv.org
… Natural Language Generation (NLG) plays a critical role in Spoken Dialogue Systems (SDS), whose task is to convert a meaning representation … (2015b) subsequently proposed a Semantically Conditioned Long Short-term Memory generator (SC-LSTM) which jointly …

Memory augmented neural networks with wormhole connections
C Gulcehre, S Chandar, Y Bengio – arXiv preprint arXiv:1701.08718, 2017 – arxiv.org
… Long Short-Term Memory (LSTM) units (Hochreiter and Schmidhuber, 1997) were proposed as an alternative architecture which can handle long-range dependencies better than a vanilla RNN … The hidden state of the LSTM controller is computed as follows …

Semantic refinement gru-based neural language generation for spoken dialogue systems
VK Tran, LM Nguyen – arXiv preprint arXiv:1706.00134, 2017 – arxiv.org
… For task-oriented dialogue systems, [6] combined a forward RNN generator, a CNN reranker, and a backward RNN reranker to generate utterances. A semantically conditioned Long Short-Term Memory (LSTM) generator was proposed in [7], which introduced a control …

Learning discourse-level diversity for neural dialog models using conditional variational autoencoders
T Zhao, R Zhao, M Eskenazi – arXiv preprint arXiv:1703.10960, 2017 – arxiv.org
… 1 Introduction: The dialog manager is one of the key components of dialog systems, which is responsible for modeling the … Bowman et al. (2015) have used a VAE with Long Short-Term Memory (LSTM)-based recognition and decoder networks to generate sentences from a …

Attentive memory networks: Efficient machine reading for conversational search
T Kenter, M de Rijke – arXiv preprint arXiv:1712.07229, 2017 – arxiv.org
… ht = f(x, ht−1; θ), based on internal parameters θ. The function f itself can be implemented in many ways, for example as a Long Short-Term Memory (LSTM) [12] or Gated Recurrent Unit (GRU) cell [5]. The initial hidden state h0 is usually a 0-vector …
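The recurrence ht = f(x, ht−1; θ) in the excerpt above is abstract; as one concrete, commonly used choice of f, here is a minimal GRU cell in NumPy. This is a sketch only: the weight names and the [h, x] concatenation layout are illustrative assumptions, not taken from the cited paper:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(x, h_prev, Wz, Wr, Wh, bz, br, bh):
    """One GRU update h_t = f(x_t, h_{t-1}); the weights play the role of theta."""
    hx = np.concatenate([h_prev, x])
    z = sigmoid(Wz @ hx + bz)   # update gate: how much of the state to rewrite
    r = sigmoid(Wr @ hx + br)   # reset gate: how much history the candidate sees
    h_cand = np.tanh(Wh @ np.concatenate([r * h_prev, x]) + bh)
    return (1.0 - z) * h_prev + z * h_cand

# Fold a toy 6-step sequence through the cell.
rng = np.random.default_rng(1)
D, H = 3, 4
Wz, Wr, Wh = (rng.standard_normal((H, H + D)) * 0.1 for _ in range(3))
bz = br = bh = np.zeros(H)
h = np.zeros(H)               # h0 as a 0-vector, as the excerpt notes
for x in rng.standard_normal((6, D)):
    h = gru_step(x, h, Wz, Wr, Wh, bz, br, bh)
```

Because the new state is a convex combination of the old state and a bounded candidate, the GRU gets LSTM-like gradient behavior with one fewer state vector and fewer parameters.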

Recurrent neural networks with missing information imputation for medical examination data prediction
HG Kim, GJ Jang, HJ Choi, M Kim… – Big Data and Smart …, 2017 – ieeexplore.ieee.org
… D. Yu, G. Zweig, and Y. Shi, “Spoken language understanding using long short-term memory neural networks … Mrksic, P.-H. Su, D. Vandyke, and S. Young, “Semantically conditioned LSTM-based natural language generation for spoken dialogue systems,” arXiv preprint …

Relevance of unsupervised metrics in task-oriented dialogue for evaluating natural language generation
S Sharma, LE Asri, H Schulz, J Zumer – arXiv preprint arXiv:1706.09799, 2017 – arxiv.org
… The NLG component of the dialogue system used for data collection is templated … that there is significant word overlap between the generated and the reference sentences and that the NLG task on these datasets can be solved with a simple model such as the LSTM model …

Neural Models for Sequence Chunking.
F Zhai, S Potdar, B Xiang, B Zhou – AAAI, 2017 – aaai.org
… Such sequence labeling forms the basis for many recent deep network based approaches, e.g., convolutional neural networks (CNN), recurrent neural networks (RNN) or their variants, long short-term memory networks (LSTM) …

Efficient natural language response suggestion for smart reply
M Henderson, R Al-Rfou, B Strope, Y Sung… – arXiv preprint arXiv …, 2017 – arxiv.org
… Dialog systems must also learn to be consistent throughout the course of a dialog, maintaining some kind of … The sequence-to-sequence (Seq2Seq) framework uses recurrent neural networks (RNNs), typically long short-term memory (LSTM) networks, to encode sequences of …

Generating natural answers by incorporating copying and retrieving mechanisms in sequence-to-sequence learning
S He, C Liu, K Liu, J Zhao – Proceedings of the 55th Annual Meeting of …, 2017 – aclweb.org
… representation c. For example, we can utilize the basic model: ht = f(xt, ht−1); c = φ(h1, …, hLX), where {ht} are the RNN hidden states and c is the context vector, which could be assumed to be an abstract representation of X. In practice, gated RNN variants such as LSTM (Hochreiter …
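The basic encoder pattern quoted above (fold the input through a recurrence, then summarize the hidden states into a context vector c) can be sketched with a plain tanh RNN, taking φ to be "use the last hidden state", one common choice. All names and shapes here are illustrative assumptions:

```python
import numpy as np

def rnn_encode(xs, Wx, Wh, b):
    """Run h_t = tanh(Wx x_t + Wh h_{t-1} + b) over the sequence and
    return all hidden states plus a context vector c."""
    H = Wh.shape[0]
    h = np.zeros(H)
    states = []
    for x in xs:
        h = np.tanh(Wx @ x + Wh @ h + b)
        states.append(h)
    c = states[-1]        # phi(h1, ..., hL): simplest summary of the input
    return states, c

# Encode a toy sequence of L=7 vectors of dimension D=3 into H=5 dims.
rng = np.random.default_rng(2)
D, H, L = 3, 5, 7
Wx = rng.standard_normal((H, D)) * 0.1
Wh = rng.standard_normal((H, H)) * 0.1
b = np.zeros(H)
states, c = rnn_encode(rng.standard_normal((L, D)), Wx, Wh, b)
```

Swapping the tanh update for an LSTM or GRU step, or replacing the last-state φ with attention over all of `states`, recovers the gated and attention-based encoders most of the papers in this list use.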

Order-planning neural text generation from structured data
L Sha, L Mou, T Liu, P Poupart, S Li, B Chang… – arXiv preprint arXiv …, 2017 – arxiv.org
… structured data (e.g., a table) is important for various natural language processing tasks such as question answering and dialog systems … Then we use a recurrent neural network (RNN) with long short-term memory (LSTM) units (Hochreiter and Schmidhuber 1997) to read the …

Visual reference resolution using attention memory for visual dialog
PH Seo, A Lehrmann, B Han, L Sigal – Advances in neural …, 2017 – papers.nips.cc
… Unlike VQA, where every question is asked independently, a visual dialog system needs to … et by applying three different encoders, based on recurrent (RNN with long short-term memory units), hierarchical … QA embedding at each time step is finally fed to another LSTM and the …

Generative Neural Machine for Tree Structures
G Zhou, P Luo, R Cao, Y Xiao, F Lin, B Chen… – arXiv preprint arXiv …, 2017 – arxiv.org
… For example, Zhang et al. [25] proposed TreeLSTM and LdTreeLSTM via Tree LSTM activation functions in a top-down fashion … [26] extended the chain-structured LSTM to tree-structured LSTM, which is shown to be more effective in representing a tree structure as a latent vector …

The use of autoencoders for discovering patient phenotypes
H Suresh, P Szolovits, M Ghassemi – arXiv preprint arXiv:1703.07004, 2017 – arxiv.org
… signals one timestep at a time into a layer of LSTM (Long Short-Term Memory) cells and … LSTM cells are used because of their ability to effectively model varying-length … different natural language processing applications, from machine translation [11] to dialogue systems [12] to …

Towards End-to-End Spoken Dialogue Systems with Turn Embeddings
AO Bayer, EA Stepanov, G Riccardi – Annual Conference of the …, 2017 – sisl.disi.unitn.it
… In this paper, we propose a task-oriented spoken dialogue system architecture that is based on turn embeddings – a robust representation of user turns … The authors use long short-term memory (LSTM) [8] cells to handle long-range dependencies better …

Revisiting Activation Regularization for Language RNNs
S Merity, B McCann, R Socher – arXiv preprint arXiv:1708.01009, 2017 – arxiv.org
… Long short-term memory. Neural Computation, 1997 … Wen, Tsung-Hsien, Gasic, Milica, Mrksic, Nikola, Su, Pei-Hao, Vandyke, David, and Young, Steve. Semantically Conditioned LSTM-based Natural Language Generation for Spoken Dialogue Systems …

Legalbot: A Deep Learning-Based Conversational Agent in the Legal Domain
AK John, L Di Caro, L Robaldo, G Boella – International Conference on …, 2017 – Springer
… 1 Introduction. Dialogue Systems (DS), aka Conversational Systems (CS), have been a subject of research since the mid-60s (cf. [17]) … Li et al. [8] used Long Short-Term Memory (LSTM) to automatically mine user information about entities in the dialogue …

Navigation-orientated natural spoken language understanding for intelligent vehicle dialogue
Y Zheng, Y Liu, JHL Hansen – Intelligent Vehicles Symposium …, 2017 – ieeexplore.ieee.org
… A. Word Dictionary: Since this speech interface is designed for a navigation-orientated dialogue system, the word dictionary is a vocabulary … The Long Short-Term Memory (LSTM) [27] cell is an extended form of the basic RNN cell but excels at remembering values for either …

Improved end-of-query detection for streaming speech recognition
M Shannon, G Simko, S Chang… – Proc. Interspeech 2017, 2017 – isca-speech.org
… Optimizing endpointing thresholds using dialogue features in a spoken dialogue system,” in Proc … TN Sainath, O. Vinyals, A. Senior, and H. Sak, “Convolutional, long short-term memory, fully connected … and B. Schuller, “Real-life voice activity detection with LSTM recurrent neural …

Predicting head pose in dyadic conversation
D Greenwood, S Laycock, I Matthews – International Conference on …, 2017 – Springer
… Haag, K., Shimodaira, H.: Bidirectional LSTM networks employing stacked bottleneck features for expressive … Hochreiter, S., Schmidhuber, J.: Long short-term memory … Nishimura, R., Kitaoka, N., Nakagawa, S.: A spoken dialog system for chat-like conversations considering …

Label-dependencies aware recurrent neural networks
Y Dupont, M Dinarelli, I Tellier – arXiv preprint arXiv:1706.01740, 2017 – arxiv.org
… 2.2 Long Short-Term Memory (LSTM) RNNs … while the variant of RNN we propose in this paper is more complex than simple RNNs, LSTM and GRU … The ATIS corpus (Air Travel Information System) [26] was collected for building a spoken dialog system able to provide flight …

Neural Matching Models for Question Retrieval and Next Question Prediction in Conversation
L Yang, H Zamani, Y Zhang, J Guo, WB Croft – arXiv preprint arXiv …, 2017 – arxiv.org
… Comparing with CDNN, this model adopts a long short term memory (LSTM) layer for long term dependency … 0.337 0.127 0.633 CNN-Match 0.579 0.428 0.155 0.775 LSTM-CNN-Match … In many question answering and chatbot/dialogue systems, new questions issued by users …

Approximated and domain-adapted LSTM language models for first-pass decoding in speech recognition
M Singh, Y Oualil, D Klakow – Proceedings of the 18th …, 2017 – pdfs.semanticscholar.org
… For this purpose, we introduce two such ways of applying adapted long short-term memory (LSTM) based RNNLMs [7 … converted to n-grams, which can then be scored using an LSTM … corpus collected during the Metalogue project aims to develop a dialogue system to monitor …

Adversarial generation of natural language
S Rajeswar, S Subramanian, F Dutil, C Pal… – arXiv preprint arXiv …, 2017 – arxiv.org
… Recurrent Neural Networks (RNNs), particularly Long Short-Term Memory networks (LSTMs) (Hochreiter and Schmidhuber, 1997) and Gated Recurrent Networks (Cho et al., 2014), are powerful … yt−1 at time t−1 and the hidden state ht at time t, as shown in the LSTM equations below …

Interactive narrative personalization with deep reinforcement learning
P Wang, J Rowe, W Min, B Mott… – Proceedings of the …, 2017 – pdfs.semanticscholar.org
… A similar problem arises in the domain of spoken dialogue systems, where user simulations serve an important role in training … We select one type of recurrent neural network, long short-term memory (LSTM) [Hochreiter and Schmidhuber, 1997], to construct the player simulation …

A Knowledge Enhanced Generative Conversational Service Agent
Y Long, J Wang, Z Xu, Z Wang… – … the 6th Dialog System …, 2017 – workshop.colips.org
Abstract: In this paper, we describe our attempt at generating natural and informative responses for customer service-oriented dialog incorporating external knowledge. Our system captures external knowledge for a given dialog using a search engine. Then a …

Yeah, Right, Uh-Huh: A Deep Learning Backchannel Predictor
R Ruede, M Müller, S Stüker, A Waibel – arXiv preprint arXiv:1706.01340, 2017 – arxiv.org
… We also extended this setup by the use of Long Short-Term Memory (LSTM) networks, which have been shown to outperform feed-forward based setups on various tasks … there is a growing interest in dialog systems that are not only utilitarian (to answer questions or carry out tasks …

Adversarial evaluation for open-domain dialogue generation
E Bruni, R Fernández – Proceedings of the 18th Annual SIGdial Meeting …, 2017 – aclweb.org
… We use a stacked LSTM with 2 bidirectional layers, each with 2048 cells, and 500-dimensional embeddings … Long short-term memory … How NOT To Evaluate Your Dialogue System: An Empirical Study of Unsupervised Evaluation Metrics for Dialogue Response Generation …

Iterative policy learning in end-to-end trainable task-oriented neural dialog models
B Liu, I Lane – arXiv preprint arXiv:1709.06136, 2017 – arxiv.org
… In section 2, we discuss related work on end-to-end trainable task-oriented dialog systems and RL policy learning methods … State of the dialog agent is maintained in the LSTM [36] state and updated after the processing of each turn …

Affective Neural Response Generation
N Asghar, P Poupart, J Hoey, X Jiang, L Mou – arXiv preprint arXiv …, 2017 – arxiv.org
… with long short-term memory (LSTM)-based recurrent neural networks (RNNs)—generates a response conditioned on one or several previous utterances. Latest advances in this direction have demonstrated its efficacy for both task-oriented dialogue systems (Wen et al …

Dialogue Act Sequence Labeling using Hierarchical encoder with CRF
H Kumar, A Agarwal, R Dasgupta, S Joshi… – arXiv preprint arXiv …, 2017 – arxiv.org
… 2014) is in building a natural language dialogue system, where knowing the DAs of the past utterances helps in the prediction of … The main contributions of this paper are as follows: • We propose a Hierarchical Bi-LSTM-CRF (Bi-directional Long Short Term Memory with CRF …

Learning to update Auto-associative Memory in Recurrent Neural Networks for Improving Sequence Memorization
W Zhang, B Zhou – arXiv preprint arXiv:1709.06493, 2017 – arxiv.org
… In reinforcement learning, an RNN is also frequently used as a policy network for action sequence generation in dialogue systems (Li et al … LSTMs (Hochreiter and Schmidhuber 1997) are Turing complete and could theoretically simulate any function (Siegelmann and Sontag 1995 …

Sequential Dialogue Context Modeling for Spoken Language Understanding
A Bapna, G Tur, D Hakkani-Tur, L Heck – Proceedings of the 18th …, 2017 – aclweb.org
… et al., 2016) show improved performance on an informational dialogue agent by incorporating knowledge base context into their dialogue system … The second layer uses Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) cells with 256 dimensions (128 in …

Sequence to Sequence Modeling for User Simulation in Dialog Systems
P Crook, A Marin – Proceedings of the 18th Annual Conference of …, 2017 – isca-speech.org
… of length 100, 512 unit GRU layers, dense ReLU layer and LSTM layer, soft … Bengio, AC Courville, and J. Pineau, “Building end-to-end dialogue systems using generative … org/abs/1412.3555 [17] S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural Computation …

Joint, incremental disfluency detection and utterance segmentation from speech
J Hough, D Schlangen – Proceedings of the 15th Conference of the …, 2017 – aclweb.org
… Julian Hough and David Schlangen, Dialogue Systems Group // CITEC // Faculty of Linguistics and Literature, Bielefeld University, firstname.lastname … for our sequence labelling task – the Elman Recurrent Neural Network (RNN) and the Long Short-Term Memory (LSTM) RNN …

Clinical Intervention Prediction and Understanding using Deep Networks
H Suresh, N Hunt, A Johnson, LA Celi… – arXiv preprint arXiv …, 2017 – arxiv.org
… We use long short-term memory networks (LSTM) (Hochreiter and … Previously, LSTMs have achieved state-of-the-art results in many different applications, such as machine translation (Hermann et al., 2015), dialogue systems (Chorowski et al., 2015) and image …

Do neural nets learn statistical laws behind natural language?
S Takahashi, K Tanaka-Ishii – PloS one, 2017 – journals.plos.org
… Precisely, we demonstrate that a neural language model based on long short-term memory (LSTM) effectively reproduces Zipf’s law and Heaps’ law … in various natural language processing tasks such as machine translation [1], text summarization [2], dialogue systems [3], and …

Identifying latent beliefs in customer complaints to trigger epistemic rules for relevant human-bot dialog
C Anantaram, A Sangroya – Control, Automation and Robotics …, 2017 – ieeexplore.ieee.org
… The test accuracy of complaint categorization with Vanilla LSTM (Long Short Term Memory) was 68.47 … Although a number of attempts have been made to build dialog systems [4, 5, 6], the use of epistemic rules in driving the dialog in a consistent way with the beliefs has not yet …

Deep reinforcement learning: An overview
Y Li – arXiv preprint arXiv:1701.07274, 2017 – arxiv.org
… For example, we can combine spoken dialogue systems, machine translation and text sequence prediction as a single section about language models … Long short term memory networks (LSTM) and gated recurrent unit (GRU) were proposed to address such issues …

Hybrid dialog state tracker with asr features
M Vodolán, R Kadlec, J Kleindienst – arXiv preprint arXiv:1702.06336, 2017 – arxiv.org
… transcriptions, together with annotation on the level of dialog acts and user goals on slot-filling tasks where the dialog system tries to … In the last part of the evaluation we studied the importance of the bidirectional LSTM layer B by ensembling models with linear … Long short-term memory …

Towards a top-down policy engineering framework for attribute-based access control
M Narouei, H Khanpour, H Takabi, N Parde… – Proceedings of the …, 2017 – dl.acm.org
… in domain-independent conversations [26], we propose a model based on a recurrent neural network, long short term memory (LSTM), that bene … a variety of text processing applications, from sentiment analysis [41] to conversational text processing for dialogue systems [22, 48] …

Investigating Scalability in Hierarchical Language Identification System
S Irtza, V Sethu, E Ambikairajah, H Li – Proc. Interspeech 2017, 2017 – isca-speech.org
… to be used as an auxiliary technology for many applications, e.g. speech recognition and dialogue systems [1, 2]. To … Deep learning approaches, e.g. Convolutional Neural Network (CNN), Recurrent Neural Network (RNN) and Long Short Term Memory (LSTM), have also shown …

Reinforced mnemonic reader for machine comprehension
M Hu, Y Peng, X Qiu – CoRR, abs/1705.02798, 2017 – pdfs.semanticscholar.org
… distance contextual interaction between parts of the context, by only using long short-term memory network (LSTM) (Hochreiter and … granularity, the encoder also embeds each word w by encoding its character sequence with a bidirectional long short-term memory network (BiL …

Iterative multi-document neural attention for multiple answer prediction
C Greco, A Suglia, P Basile, G Rossiello… – arXiv preprint arXiv …, 2017 – arxiv.org
… LSTM 6.5 27.1 … A deep Recurrent Neural Network with Long Short-Term Memory units is presented in [7], which solves CNN/Daily Mail datasets by designing two different attention mechanisms called … Evaluating prerequisite qualities for learning end-to-end dialog systems …

Controlling linguistic style aspects in neural language generation
J Ficler, Y Goldberg – arXiv preprint arXiv:1707.02633, 2017 – arxiv.org
… professional style). Our model is based on a well-established technology – conditioned language models based on Long Short-Term Memory (LSTM), which has been proven a strong and effective sequence model. We perform …

Speaker role contextual modeling for language understanding and dialogue policy learning
TC Chi, PC Chen, SY Su, YN Chen – arXiv preprint arXiv:1710.00164, 2017 – arxiv.org
… Under the scenario of dialogue systems and the communication patterns, we take the tourist as a … We apply a bidirectional long short-term memory (BLSTM) model (Schuster and Paliwal, 1997) to integrate … Multi-domain joint semantic frame parsing using bi-directional rnn-lstm …

Long Text Generation via Adversarial Training with Leaked Information
J Guo, S Lu, H Cai, W Zhang, Y Yu, J Wang – arXiv preprint arXiv …, 2017 – arxiv.org
… The MANAGER is a long short-term memory network (LSTM) (Hochreiter and Schmidhuber 1997) and serves as a mediator … the WORKER first encodes current generated words with another LSTM, then combines the output of the LSTM and the … 2015), dialogue system (Li et al …

Endpoint Detection using Grid Long Short-Term Memory Networks for Streaming Speech Recognition
SY Chang, B Li, TN Sainath, G Simko… – Proc. Interspeech …, 2017 – pdfs.semanticscholar.org
… B. Langner, AW Black, and M. Eskenazi, “Doing research on a deployed spoken dialogue system: one year … [13] ASHS Tara N. Sainath, O. Vinyals, “Convolutional, long short-term memory, fully connected … [14] TN Sainath and B. Li, “Modeling time-frequency patterns with LSTM vs …

Long short-term memory description and its application in text processing
L Skovajsová – Communication and Information Technologies …, 2017 – ieeexplore.ieee.org
… Semantically conditioned lstm-based natural language generation for spoken dialogue systems. arXiv preprint arXiv:1508.01745, 2015 … End-to-end sequence labeling via bi- directional lstm-cnns-crf … Long short-term memory-networks for machine reading …

Spoken language understanding and interaction: machine learning for human-like conversational systems
M Gašić, D Hakkani-Tür, A Celikyilmaz – 2017 – Elsevier
… of prior knowledge, Bayesian committee machines and multi-agent learning, facilitate extensible and adaptable dialogue systems … to other learning techniques including deep learning and they applied the proposed technique to long short term memory (LSTM) networks and …

Long short-term memory networks for automatic generation of conversations
T Fujita, W Bai, C Quan – Software Engineering, Artificial …, 2017 – ieeexplore.ieee.org
… In this research, we developed a ‘chatting bot’ by applying LSTM … Joelle Pineau, “The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi- Turn Dialogue Systems”, SIGDIAL 2015 … [10] Sepp Hochreiter; Jürgen Schmidhuber, “Long short-term memory” …

Dialogue Intent Classification with Long Short-Term Memory Networks
L Meng, M Huang – National CCF Conference on Natural Language …, 2017 – Springer
… Dialogue intent analysis is an important task that dialogue systems need to perform in order to understand the user’s utterance in the … We present a hierarchical long short-term memory (HLSTM) network for dialogue intent classification, where a word-level LSTM is used to …

Turn-taking Estimation Model Based on Joint Embedding of Lexical and Prosodic Contents
C Liu, C Ishi, H Ishiguro – Proc. Interspeech 2017, 2017 – pdfs.semanticscholar.org
… Furthermore, recognizing whether a phrase is backchannel is critical for a dialog system since it means a user has no intention to interrupt and requires … Thus, we did not test the more complex Long Short-Term Memory (LSTM [21]) units that can exploit long-term dependency …

Dialogue Response Generation using Neural Networks with Attention and Background Knowledge
S Kosovan, J Lehmann, A Fischer – jens-lehmann.org
… The task considered was selecting the best next response using TF-IDF, Recurrent Neural Networks (RNN) and Long Short-Term Memory (LSTM) … Generative models were used for building open domain, conversational dialogue systems based on large dialogue corpora [16 …

Dialogue Breakdown Detection Considering Annotation Biases
J Takayama, E Nomoto, Y Arase – workshop.colips.org
… As chat-oriented dialogue systems, which are known as chatbots, implemented by generation-based and example-based ap … We propose three models as detectors employing DNNs: one uses two series Long Short-Term Memory (LSTM) encoders, another uses two parallel …

Social Signal Detection in Spontaneous Dialogue Using Bidirectional LSTM-CTC
H Inaguma, K Inoue, M Mimura… – Proc. Interspeech …, 2017 – isca-speech.org
… Therefore, detection of them would be useful for dialogue systems to infer … with various settings demonstrate that CTC based on bidirectional LSTM outperforms the … Index Terms: Social signals, connectionist temporal classification, long short-term memory, human-computer …

Deep Learning for Acoustic Addressee Detection in Spoken Dialogue Systems
A Pugachev, O Akhtiamov, A Karpov… – Conference on Artificial …, 2017 – Springer
… Text classification: Best choice for a dialogue system in a specific domain … DNN3. 0.78. 0.78. 0.69. For the combination of pitch and RMSE, we applied a Bidirectional Long Short-Term Memory (BLSTM) [14] model with the input layer consisting of two neurons, two LSTM (Fig …

Emotional Human-Machine Conversation Generation Based on Long Short-Term Memory
X Sun, X Peng, S Ding – Cognitive Computation, 2017 – Springer
… generation. For both encoding and decoding, we adopt a long short-term memory (LSTM) [10, 13] neural network … 20]. Several attempts have been made to endow dialog systems or conversational agents with emotion [2, 33]. Kadish et al …

A Context Based Dialog System with a Personality
A Choudhary, V Kalingeri – pdfs.semanticscholar.org
… As discussed, a dialog system in its simplistic sense can be treated as a question answering system where a response has to be generated for the question … Both the encoder and the decoder use a variant of recurrent neural networks called the Long Short Term Memory (LSTM) …

Dialogue System for Restaurant Reservations using Hybrid Code Network
C Akin-David, D Xue, E Mei – stanford.edu
… Rather than a generic dialogue system, the system that we are building is task-oriented … The HCN model is long short-term memory (LSTM) based (Figure 2). The feature vector is a concatenation of five components extracted from user input: a bag of words feature vector, an utter …

Evaluating LSTM Networks, HMM and WFST in Malay Part-of-Speech Tagging
TP Tan, B Ranaivo-Malançon… – Journal of …, 2017 – journal.utem.edu.my
… tienping@usm.my Abstract—Long short term memory (LSTM) networks have been gaining popularity in modeling sequential data such as phoneme recognition, speech translation, language modeling, speech synthesis, chatbot-like dialog systems and others …

CCG Supertagging via Bidirectional LSTM-CRF Neural Architecture
R Kadari, Y Zhang, W Zhang, T Liu – Neurocomputing, 2017 – Elsevier
… Long Short-Term Memory (LSTM) units were first proposed by Hochreiter and Schmidhuber [24] to address the difficulty of training simple RNNs and to overcome the vanishing/exploding gradient problems. The main idea is to …
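To make the gating idea mentioned in the snippet above concrete, here is a generic, minimal NumPy sketch of a single LSTM step. It is an illustration of the standard formulation, not code from any paper listed here; the function and variable names are invented for the sketch.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM step. W: (4H, D), U: (4H, H), b: (4H,).
    The gates regulate what enters and leaves the cell state c."""
    z = W @ x + U @ h_prev + b           # all four gate pre-activations, stacked
    H = h_prev.shape[0]
    i = sigmoid(z[0:H])                  # input gate
    f = sigmoid(z[H:2 * H])              # forget gate
    o = sigmoid(z[2 * H:3 * H])          # output gate
    g = np.tanh(z[3 * H:4 * H])          # candidate cell update
    c = f * c_prev + i * g               # additive cell update
    h = o * np.tanh(c)                   # exposed hidden state
    return h, c
```

The additive update of `c` (rather than repeated multiplication through a squashing nonlinearity) is what lets error signals survive across many time steps, which is the vanishing-gradient remedy the cited works rely on.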

Towards Deep End-of-Turn Prediction for Situated Spoken Dialogue Systems
A Maier, J Hough, D Schlangen – … of INTERSPEECH 2017, 2017 – pub.uni-bielefeld.de
… We argue the reactive approach limits a dialogue system’s potential fluidity in interaction … We use a common deep learning architecture, the Long Short-Term Memory (LSTM) recurrent neural network, to investigate this, posing the following research questions …

Towards a General, Continuous Model of Turn-taking in Spoken Dialogue using LSTM Recurrent Neural Networks
G Skantze – Proceedings of the 18th Annual SIGdial Meeting on …, 2017 – aclweb.org
… is potentially useful for a number of different types of predictions and decisions that are relevant for spoken dialogue systems … To address this problem, and make as few assumptions as possible, we train the model using Long Short-Term Memory (LSTM) Recurrent Neural …

Neural sentence embedding using only in-domain sentences for out-of-domain sentence detection in dialog systems
S Ryu, S Kim, J Choi, H Yu, GG Lee – Pattern Recognition Letters, 2017 – Elsevier
… information to prevent erroneous domain switching, but whenever developers expand the domain of a dialog system they must … particularly for natural language understanding using recurrent neural networks (RNNs) [16,33,36] and long short-term memory (LSTM) networks [5,22 …

Cascaded LSTMs Based Deep Reinforcement Learning for Goal-Driven Dialogue
Y Ma, X Wang, Z Dong, H Chen – National CCF Conference on Natural …, 2017 – Springer
… This paper proposes a deep neural network model for jointly modeling Natural Language Understanding and Dialogue Management in goal-driven dialogue systems. There are three parts in this model. A Long Short-Term Memory (LSTM) at the bottom of the network encodes …

Dialogue Breakdown Detection using Hierarchical Bi-Directional LSTMs
Z Xie, G Ling – workshop.colips.org
… The dialogue system interacts with the user in sequential order, and every system utterance is generated by considering the history of the … Figure 2: Long Short-Term Memory Unit Structure … Thus we use another Bi-LSTM over the utterance encoder for dialogue context encoding …

Improvisational Storytelling Agents
LJ Martin, P Ammanabrolu, X Wang, S Singh… – researchgate.net
… Recurrent neural networks such as sequence-to-sequence networks [6], using long short-term memory (LSTM) [7] cells, treat story generation as … Semantic slot filling is a common practice in dialog systems research that maintains coherence and state by extracting specifics and …

Incremental Joint Modelling for Dialogue State Tracking
AD Trinh, RJ Ross, JD Kelleher – Proc. SEMDIAL 2017 (SaarDial) …, 2017 – isca-speech.org
… AK Hypotheses + Other Belief Updating Model. In Procs. of AAAI Workshop on Statistical and Empirical Methods in Spoken Dialogue Systems 2006 … 1997. Long short-term memory. Neural computation, 9(8):1735–1780 … 2015. Incremental LSTM-based dialog state tracker …

Minimum Semantic Error Cost Training of Deep Long Short-Term Memory Networks for Topic Spotting on Conversational Speech
Z Meng, BHF Juang – Proc. Interspeech 2017, 2017 – isca-speech.org
… The response of a spoken-dialog system is often guided by the topic category of the … Therefore, we introduce the deep bi-directional long short-term memory (BLSTM)-HMM for acoustic modeling … The LSTM network, a special kind of RNN with purpose-built memory cells to store …

A Survey of Task-oriented Dialogue Systems
K Mo – 2017 – cse.ust.hk
… to-End Dialogue System 1:SLU 2:DST 3: Policy Learning (DPL) 4:NLG General Personalized General Personalized None RL None TL CRF (Wang and Acero 2006; Raymond and Riccardi 2007) RNN (Yao et al. 2013; Mesnil et al. 2013, 2015; Liu and Lane 2015) LSTM (Yao et …

Modeling Conversations to Learn Responding Policies of E2E Task-oriented Dialog System
Z Bai, B Yu, G Chen, B Wang, Z Wang – workshop.colips.org
… learning system in the evaluation campaign of the end-to-end goal oriented dialog learning track of Dialog System Technology Challenges (DSTC 6). This paper presents the key modules of our system, including a hierarchical Long Short-Term Memory (LSTM) based ranking …

A practical approach to dialogue response generation in closed domains
Y Lu, P Keung, S Zhang, J Sun, V Bhardwaj – arXiv preprint arXiv …, 2017 – arxiv.org
… For example, LSTM embeddings find various kinds of responses to customer greetings even when the … We demonstrate that even in the absence of a fully automated dialogue system, it is … [9] S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural computation …

Interaction Quality Estimation Using Long Short-Term Memories
N Rach, W Minker, S Ultes – Proceedings of the 18th Annual SIGdial …, 2017 – aclweb.org
… The increasing complexity of Spoken Dialogue Systems (SDS) and the requirements that come with this progress made automatized recognition and modeling of user states crucial to ensure … (1997) introduced an architecture, called Long Short-Term Memory (LSTM) that allows …

Are You Addressing Me? Multimodal Addressee Detection in Human-Human-Computer Conversations
O Akhtiamov, D Ubskii, E Feldina, A Pugachev… – … Conference on Speech …, 2017 – Springer
… Spoken dialogue systems (SDSs) have become significantly more complex and flexible over recent years and are now capable of solving a … The study [7] describes several applications of a recurrent neural network (RNN), a Long Short-Term Memory (LSTM), and a feedforward net for a text …

Information Navigation System with Discovering User Interests
K Yoshino, Y Suzuki, S Nakamura – … of the 18th Annual SIGdial Meeting …, 2017 – aclweb.org
… a vector of a document with word2vec (Mikolov et al., 2013) and the long short-term memory neural network … Kyoto prefecture sightseeing Web site3 was used as the training data of word2vec, LSTM-based encoder-decoder model, and content of the dialogue system …

A Part-of-Speech Enhanced Neural Conversation Model
C Luo, W Li, Q Chen, Y He – European Conference on Information …, 2017 – Springer
… D., Mrksic, N., Gasic, M., Rojas-Barahona, LM, Pei-Hao, S., Ultes, S., Young, S.: A network-based end-to-end trainable task-oriented dialogue system … Luan, Y., Ji, Y., Ostendorf, M.: LSTM based conversation models … Hochreiter, S., Schmidhuber, J.: Long short-term memory …

Collaborative Response Content Recommendation for Customer Service Agents
C Ma, P Guo, X Xin, X Ma, Y Liang, S Xing, L Li… – … Symposium on Neural …, 2017 – Springer
… Using a dialog system to automate customer service is a common practice in many business fields … To deal with such problems, we propose an LSTM (Long Short-Term Memory) Neuron Tensor Network architecture to encode the common features of all shops’ data and model the …

Neural-based Context Representation Learning for Dialog Act Classification
D Ortega, NT Vu – arXiv preprint arXiv:1708.02561, 2017 – arxiv.org
… Automatic DA classification is an important pre-processing step in natural language understanding tasks and spoken dialog systems … 2016), recurrent neural networks (RNNs) (Lee and Dernoncourt, 2016; Ji et al., 2016) and long short-term memory (LSTM) models (Shen …

A Generative Attentional Neural Network Model for Dialogue Act Classification
QH Tran, G Haffari, I Zukerman – … of the 55th Annual Meeting of the …, 2017 – aclweb.org
… f can be any non-linear function, i.e., the simple sigmoid applied to elements of a vector, or the more complex Long Short-Term Memory unit (LSTM) (Graves, 2013 … Furthermore, DA classification itself can be seen as a preprocessing step in a dialogue system’s pipeline …

Legalbot: A Deep Learning-Based Conversational Agent in the Legal Domain
G Boella – … Language Processing and Information Systems: 22nd …, 2017 – books.google.com
… Keywords: Chatbot · Conversational agent · Recurrent neural networks · Long short-term memory. 1 Introduction. Dialogue Systems (DS), aka Conversational … al. [8] used the Long Short-Term Memory (LSTM) to automatically mine user information about entities in the dialogue …

” Having 2 hours to write a paper is fun!”: Detecting Sarcasm in Numerical Portions of Text
L Kumar, A Somani, P Bhattacharyya – arXiv preprint arXiv:1709.01950, 2017 – arxiv.org
… We also develop a Long Short-Term Memory (LSTM) network which is able to handle sequences of any length and capture long-term dependencies. We compare our approaches with four past works, and show an improvement. 1This sentence is only an example …

Question answering system based on sentence similarity
M Kashif, C Arora – 2017 – repository.iiitd.edu.in
… 19 6.3 Long Short-Term Memory (LSTM) … 22 6.4 Manhattan LSTM – a siamese adaptation of LSTM. Source: [] … 3.1 Introduction Sentence Similarity measures are widely used in text-related research, text-mining, Web page retrieval and dialogue systems …

Jointly Modeling Intent Identification and Slot Filling with Contextual and Hierarchical Information
L Wen, X Wang, Z Dong, H Chen – National CCF Conference on Natural …, 2017 – Springer
… 1 Introduction. Natural Language Understanding (NLU), which refers to the targeted understanding of human language directed at machines [1], is a critical component in dialogue systems … Yao et al. [4] investigated Long Short-Term Memory (LSTM) methods for slot filling …

Joint Learning of Response Ranking and Next Utterance Suggestion in Human-Computer Conversation System
R Yan, D Zhao – Proceedings of the 40th International ACM SIGIR …, 2017 – dl.acm.org
… become meaningless. To this end, we propose a dual recurrent neural network chains with Long-Short Term Memory (LSTM) units for the new conversation task, namely Dual-LSTM Chain Model (Dual-LSTM). The model formulates …

Towards an Automated Production of Legal Texts Using Recurrent Neural Networks
W Alschner, D Skougarevskiy – 2017 – papers.ssrn.com
… 1997. Long short-term memory. Neural computation 9, 8 (1997), 1735–1780 … 2015. Semantically conditioned LSTM-based Natural language generation for spoken dialogue systems. In Conference on Empirical Methods in Natural Language Processing. 1711–1721 …

KIT Dialogue System for NTCIR-13 STC Japanese Subtask
H Nakatani, S Nishiumi, T Maeda, M Araki – research.nii.ac.jp
… The NTCIR-13 STC Japanese Subtask is a challenge of response generation in non-task oriented dialogue systems [7]. In this … 8] is a neural network composed of the input layer (encoder) and output layer (decoder) of Long short-term memory networks (LSTM) [3]. This …

Achieving Fluency and Coherency in Task-oriented Dialog
R Gangadharaiah, BM Narayanaswamy, C Elkan – alborz-geramifard.com
… There is still no good alternative to evaluate dialog systems, and so we continue to … of two components, an encoder and a decoder, typically modeled using Long Short Term Memory Units (LSTMs … Figure 1, orange-solid-square boxes represent the embedding and LSTM cells of …

Diversifying Neural Conversation Model with Maximal Marginal Relevance
Y Song, Z Tian, D Zhao, M Zhang, R Yan – Proceedings of the Eighth …, 2017 – aclweb.org
… Long short term memory (LSTM) (Hochreiter and Schmidhuber, 1997) and gated recurrent units (GRUs) (Cho et al., 2014) could further enhance the RNNs to … Long short-term memory … Building end-to-end dialogue systems using generative hierarchical neural network models …
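Since several entries above contrast LSTM cells with gated recurrent units, a minimal NumPy sketch of one GRU step may help: a GRU merges the LSTM's gating into two gates and carries no separate cell state. This is a generic illustration of the GRU formulation (Cho et al., 2014), not code from any cited paper; the weight names are invented.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h_prev, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU step. W*: (H, D) input weights, U*: (H, H) recurrent weights."""
    z = sigmoid(Wz @ x + Uz @ h_prev)              # update gate
    r = sigmoid(Wr @ x + Ur @ h_prev)              # reset gate
    h_cand = np.tanh(Wh @ x + Uh @ (r * h_prev))   # candidate state
    return (1.0 - z) * h_prev + z * h_cand         # interpolate old and candidate
```

The final interpolation plays the same role as the LSTM's forget/input gates: when `z` is near 0 the old state passes through unchanged, preserving long-range information.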

Symbol sequence search from telephone conversation
M Suzuki, G Kurata, A Sethy… – Proc. Interspeech …, 2017 – isca-speech.org
… For the confidence scoring, we propose a long short-term memory (LSTM) based approach that inputs words before and after fragments … are already many successful applications of spoken interaction, such as IoT applications, dictation, voice search, and spoken dialog systems …

Sequence-to-sequence prediction of personal computer software by recurrent neural network
Q Yang, Z He, F Ge, Y Zhang – Neural Networks (IJCNN), 2017 …, 2017 – ieeexplore.ieee.org
… language processing (NLP) such as language modeling, machine translation and dialogue systems. This paper examines the most popular DNN approaches: LSTM, Encoder-Decoder … adding information to enrich the embedding input of Long Short-Term Memory, adding a classifier to …

Improving Frame Semantic Parsing with Hierarchical Dialogue Encoders
A Bapna, G Túr, D Hakkani-Túr, L Heck – arXiv preprint arXiv:1705.03455, 2017 – arxiv.org
… The goal of the conversational language understanding module of our dialogue system is to map each user utterance into frame based semantics that can be processed by … Multi-domain joint semantic frame parsing using bi-directional RNN-LSTM … Long short-term memory …

Dialogue Act Recognition via CRF-Attentive Structured Network
Z Chen, R Yang, Z Zhao, D Cai, X He – arXiv preprint arXiv:1711.05568, 2017 – arxiv.org
… Many applications have benefited from the use of automatic dialogue act recognition such as dialogue systems, machine translation, automatic speech … turn to structured prediction algorithms along with deep learning tactics such as DRLM-Conditional [17], LSTM-Softmax [21 …

An Empirical Study on Incorporating Prior Knowledge into BLSTM Framework in Answer Selection
Y Li, M Yang, T Zhao, D Zheng, S Li – National CCF Conference on …, 2017 – Springer
… Furthermore, LSTM (Long Short-Term Memory Neural Network (Hochreiter and Schmidhuber 1997)), which is a variant of RNN, has … of the input vector is fed into every gate in tweet-level LSTM network … examination of this issue in other NLP tasks such as MT, dialogue system etc …

A Survey of Design Techniques for Conversational Agents
K Chandrasekaran – … Conference, ICICCT 2017, New Delhi, India …, 2017 – books.google.com
… As a result of possessing all these capabilities LSTM performs way better than … Wiley, New Jersey (2001) Hochreiter, S., Schmidhuber, J.: Long short-term memory … In: Proceedings of the 2010 Workshop on Companionable Dialogue Systems, Association for Computational …

Speaker Dependency Analysis, Audiovisual Fusion Cues and A Multimodal BLSTM for Conversational Engagement Recognition
Y Huang, E Gilmartin, N Campbell – Proc. Interspeech 2017, 2017 – isca-speech.org
… We also propose a novel multimodal bi-directional Long Short-Term Memory (LSTM) for engagement recognition by … BL FSVM LSTM BLSTM MBLSTM SIF 0.420 0.538 0.583 0.603 0.617 SD 0.683 … conversational strategies in the service of a socially-aware dialog system,” in 17th …

Towards Implicit Content-Introducing for Generative Short-Text Conversation Systems
L Yao, Y Zhang, Y Feng, D Zhao, R Yan – Proceedings of the 2017 …, 2017 – aclweb.org
… ht = f(xt, ht−1); C = hT, where ht is the hidden state of the encoder RNN at time t and f is a non-linear transformation which can be a long short-term memory unit (LSTM) (Hochreiter and Schmidhuber, 1997) or a gated recurrent unit (GRU) (Cho et al., 2014) …
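The encoder recurrence quoted above, ht = f(xt, ht−1) with context C = hT, is simply a left fold over the input sequence. A minimal sketch, where `f` stands in for an LSTM or GRU cell and the names are illustrative:

```python
def encode(xs, f, h0):
    """Run h_t = f(x_t, h_{t-1}) over the inputs.
    The fixed-length context C handed to the decoder is the final state h_T."""
    h = h0
    for x in xs:
        h = f(x, h)
    return h  # C = h_T

# With a toy f (addition), the fold just accumulates the inputs:
assert encode([1, 2, 3], lambda x, h: h + x, 0) == 6
```

Compressing the whole input into this single final state is exactly the bottleneck that attention mechanisms, discussed in several entries below, were introduced to relieve.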

A computational model for automatic generation of domain-specific dialogues using machine learning
A Vázquez, D Pinto, D Vilariño – … of the XVIII International Conference on …, 2017 – dl.acm.org
… based on a semantically controlled Long Short-term Memory (LSTM) structure which is a type of a neural networks which reports interesting results. In Su et al.’s research [6], they describe a two-step approach for managing dialogues in task-oriented oral dialogue systems …

Building Effective Goal-Oriented Dialogue Agents
D Chen – pdfs.semanticscholar.org
… One of the four tasks use GRU units [2] whereas the remaining three use LSTM units [4] where each loop through the LSTM acts as a layer of a neu … Separately, the work of [10] introduces the idea of using such context when designing dialogue systems … Long short-term memory …

Learning to Rank Question-Answer Pairs using Hierarchical Recurrent Encoder with Latent Topic Clustering
S Yoon, J Shin, K Jung – arXiv preprint arXiv:1710.03430, 2017 – arxiv.org
… They also tried using long short-term memory (LSTM) (Hochreiter and Schmidhuber 1997), bi-directional LSTM and ensemble method with all of those neural network architectures and achieved the best results on the Ubuntu Dialogues Corpus dataset …

A Survey of Design Techniques for Conversational Agents
K Ramesh, S Ravishankaran, A Joshi… – International Conference …, 2017 – Springer
… As a result of possessing all these capabilities LSTM performs way better than other available … Hochreiter, S., Schmidhuber, J.: Long short-term memory … In: Proceedings of the 2010 Workshop on Companionable Dialogue Systems, Association for Computational Linguistics, pp …

Improvised theatre alongside artificial intelligences
K Mathewson, P Mirowski – AAAI Conference on Artificial …, 2017 – pdfs.semanticscholar.org
… called ALEx (Artificial Language Experiment) was built using recurrent neural networks (RNN) with long-short term memory (LSTM) (Mikolov and … This facilitates curating the vocabulary produced by the dialogue system and thus immediately replace or remove offensive words …

A study on integrating distinct classifiers with bidirectional LSTM for Slot Filling task
KP Do – 2017 – dspace.jaist.ac.jp
… To deal with those daunting problems, in our experiment, a more sophisticated model, Long Short-Term Memory Networks (LSTMs) [11] is utilized for sequence representation. Apart from CRFs merged with LSTM, we also pay our attention to the incorporation of other classifiers …

Dialogue Act Segmentation for Vietnamese Human-Human Conversational Texts
TL Ngo, KL Pham, MS Cao, SB Pham… – arXiv preprint arXiv …, 2017 – arxiv.org
… It has been widely applied in many fields such as dialogue systems, automatic machine translation, automatic speech recognition, and … entropy (ME) and conditional random fields (CRFs); (2) deep learning approach using bidirectional Long Short-Term Memory (LSTM) with a …

Evaluating Attention Networks for Anaphora Resolution
J Pilault, N Pappas, L Miculicich Werlen… – 2017 – infoscience.epfl.ch
… processing tasks, including — but not limited to — information retrieval, neural machine translation, and text understanding in dialog systems … The core of both models is a Long Short-Term Memory (LSTM) recurrent neural network (Hochreiter and Schmidhuber, 1997) which …

Recurrent Neural Network to Deep Learn Conversation in Indonesian
A Chowanda, AD Chowanda – Procedia Computer Science, 2017 – Elsevier
… for NLP is the Recurrent Neural Network (RNN)16,17,15, particularly the Long Short-Term Memory (LSTM) model … in Indonesian Language models were trained using dual encoder LSTM with pre … L., Pineau, J.. A survey of available corpora for building data-driven dialogue systems …

Multiple-Weight Recurrent Neural Networks
Z Cao, L Wang, G De Melo – Proceedings of the 26th International Joint …, 2017 – ijcai.org
… For dialogue systems, contextual information and dialogue interactions between speakers are important signals … information, several enhanced RNN cells have been proposed, among which the most well-known ones are Long Short Term Memory (LSTM) [Hochreiter and …

Let’s Chat about Brexit! A Politically-Sensitive Dialog System Based on Twitter Data
A Khatua, E Cambria, A Khatua… – Data Mining Workshops …, 2017 – ieeexplore.ieee.org
… have attempted to design a dialog system by using the Ubuntu Dialogue Corpus (which comprises 1 million multi-turn dialogues) to train their neural conversation language model [16]. This project has considered both RNN and long short-term memory (LSTM) to generate the next …

Recognizing emotions in spoken dialogue with acoustic and lexical cues
L Tian, JD Moore, C Lai – Proceedings of the 1st ACM SIGCHI …, 2017 – dl.acm.org
… Therefore, it is desirable for virtual agent dialogue systems to recognize and react to user’s emotions … In particular, the Long Short-Term Memory Recurrent Neural Network (LSTM) has attracted growing interest in emotion recognition research …

Music Recommendation using Recurrent Neural Networks
A Choudhary, M Agarwal – pdfs.semanticscholar.org
… A Long Short Term Memory network is a Recurrent Neural Network (RNN) architecture that, unlike traditional … Given a sequence of inputs X = {x1, x2, …, xn}, an LSTM associates each … Seq-to-seq models have been used extensively in machine translation and dialog systems …

Exploring ASR-free end-to-end modeling to improve spoken language understanding in a cloud-based dialog system
Y Qian, R Ubale, V Ramanaryanan… – … (ASRU), 2017 IEEE, 2017 – ieeexplore.ieee.org
… 3. SPOKEN DIALOG SYSTEM AND TASK … 5.3. ASR-free E2E To overcome the vanishing gradient problem that occurs in RNN-based machine learning, a long short-term memory (LSTM) [39] RNN is used for the RNN encoder-decoder …

Unsupervised Automatic Text Style Transfer Using LSTM
M Han, O Wu, Z Niu – National CCF Conference on Natural Language …, 2017 – Springer
… recognition [18], and dialogue systems [27], producing promising results. A Seq2Seq model is a recurrent neural network (RNN) [21], which can take a sequence as input and generate a desired sequence. The two most widely used RNNs are long short-term memory (LSTM) [12] and …

Input-to-Output Gate to Improve RNN Language Models
S Takase, J Suzuki, M Nagata – arXiv preprint arXiv:1709.08907, 2017 – arxiv.org
… 1997. Long Short-Term Memory. Neural Computation 9(8):1735–1780. Hakan Inan, Khashayar Khosravi, and Richard Socher. 2016 … 2015. Semantically Conditioned LSTM-based Natural Language Generation for Spoken Dialogue Systems …

Deep neural networks for anger detection from real life speech data
J Deng, F Eyben, B Schuller… – Affective Computing and …, 2017 – ieeexplore.ieee.org
… the-art deep learning algorithms, we propose a variant of Deep Long Short-Term Memory (LSTM) Recurrent Neural … Neural Networks (CNNs) with 3 × 3 kernels, and LSTM RNNs combined … Despite this, contemporary human-machine dialog systems always speak with the same …

End-to-End Large Vocabulary Speech Recognition for the Serbian Language
B Popovi?, E Pakoci, D Pekar – International Conference on Speech and …, 2017 – Springer
… 4) bidirectional long short-term memory layers (LSTM) … In the future, other LSTM configurations will be explored … part by the Ministry of Education, Science and Technological Development of the Republic of Serbia, within the project “Development of Dialogue Systems for Serbian …

Are you asking the right questions? Teaching Machines to Ask Clarification Questions
S Rao – Proceedings of ACL 2017, Student Research …, 2017 – aclweb.org
… neural: Concatenate the post LSTM representation, the question LSTM representation and the answer LSTM representation and … Long short-term memory … How not to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response …

Joint Learning of Dialog Act Segmentation and Recognition in Spoken Dialog Using Neural Networks
T Zhao, T Kawahara – Proceedings of the Eighth International Joint …, 2017 – aclweb.org
… Therefore DA segmentation becomes essential for spoken dialog systems … bidirectional Long Short-Term Memory (BiLSTM) – a variant of RNN. LSTM (Hochreiter and Schmidhuber, 1997) can better avoid the vanishing gradient problem compared with normal RNNs, thus it is …

KSU Team’s Dialogue System at the NTCIR-13 Short Text Conversation Task 2
Y Ishibashi, S Sugimoto, H Miyamori – pdfs.semanticscholar.org
… text. How can the visual information be used without accepting visual information as the input to the dialogue system? Based … text. For extracting the context vector Ct, the attention mechanism was used. st = LSTM(Ct, st−1, yt−1) (4) P …
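The attention step in this entry reduces to a weighted sum of encoder states that feeds the decoder LSTM. Below is a generic dot-product attention sketch in NumPy, with made-up toy dimensions; it is an illustration of the mechanism, not the KSU team's actual implementation:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_context(encoder_states, decoder_state):
    # score every encoder state against the current decoder state
    scores = encoder_states @ decoder_state    # shape (T,)
    weights = softmax(scores)                  # attention distribution over time
    return weights @ encoder_states            # weighted sum = context vector C_t

# toy example: 4 encoder states of dimension 3, zero initial decoder state
H = np.arange(12.0).reshape(4, 3)
C = attention_context(H, np.zeros(3))
```

With a zero decoder state the scores are all equal, so the attention weights are uniform and the context vector is simply the mean of the encoder states.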

A Survey on Dialogue Systems: Recent Advances and New Frontiers
H Chen, X Liu, D Yin, J Tang – arXiv preprint arXiv:1711.01731, 2017 – arxiv.org
… applying deep learning in machine translation, namely Neural Machine Translation, spurs the enthusiasm of researchers in neural generative dialogue systems … is the hidden state at time step t, f is a non-linear function such as a long short-term memory unit (LSTM) [18] and …
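The recurrent update the survey describes, h_t = f(h_{t−1}, x_t), becomes a one-line function when f is a plain tanh cell (the simplest choice; LSTM and GRU cells substitute a gated f). The weights and dimensions below are illustrative only:

```python
import numpy as np

def rnn_step(h_prev, x, W_h, W_x, b):
    # vanilla recurrent update: h_t = tanh(W_h h_{t-1} + W_x x_t + b)
    return np.tanh(W_h @ h_prev + W_x @ x + b)

def run_rnn(xs, W_h, W_x, b):
    h = np.zeros(W_h.shape[0])   # h_0 = 0
    for x in xs:
        h = rnn_step(h, x, W_h, W_x, b)
    return h

# toy weights: hidden size 2, input size 3
W_h = np.full((2, 2), 0.1)
W_x = np.full((2, 3), 0.1)
b = np.zeros(2)
h = run_rnn([np.ones(3)] * 5, W_h, W_x, b)
```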

Parallel Hierarchical Attention Networks with Shared Memory Reader for Multi-Stream Conversational Document Classification
N Sawada, R Masumura… – Proc. Interspeech …, 2017 – pdfs.semanticscholar.org
… 17]. A representative method is parallel LSTM, which integrates the outputs of multiple LSTMs [16] … documents. First, a simple LSTM was introduced for modeling individual streams, although conversational documents include multiple utterances …

Multi-task Learning in Prediction and Correction for Low Resource Speech Recognition
D Bukhari, J Yi, Z Wen, B Liu, J Tao – National Conference on Man …, 2017 – Springer
… Recently, Long short-term memory recurrent neural networks (LSTM-RNNs) [29] and Convolutional neural networks … Prediction and Correction (PAC) previously used the LSTM RNN and DNN … intent classification in goal oriented human-machine spoken dialog systems which is …

Knowledge Guided Short-Text Classification for Healthcare Applications
S Cao, B Qian, C Yin, X Li, J Wei… – Data Mining (ICDM) …, 2017 – ieeexplore.ieee.org
… the intent, as the medical type of “mitral valve prolapse” is a key indicator to the dialog system … Specifically, we employ a bidirectional Long-Short Term Memory (BI-LSTM) [8] network as the underlying architecture to capture the semantic relation amongst words in the texts …

A real-time ensemble classification algorithm for time series data
X Zhu, S Zhao, Y Yang, H Tang… – Agents (ICA), 2017 …, 2017 – ieeexplore.ieee.org
… The Long Short-Term Memory (LSTM) is another popular algorithm for such problems in … H. Wen, M. Gasic, N. Mrksic, P.-H. Su, D. Vandyke, and S. Young, Semantically conditioned LSTM-based natural language generation for spoken dialogue systems, arXiv:1508.01745 …

Multi-Task Deep Learning for User Intention Understanding in Speech Interaction Systems
Y An, Y Wang, H Meng – 2017 – aaai.org
… from speech, as illustrated in Figure 2. In particular, we use long short-term memory (LSTM) to model … representations of input data before we feed the input to the LSTM hidden layers … cues such as F0 shape and duration to classify communicative intentions in dialog systems …

Dialogue Act Semantic Representation and Classification Using Recurrent Neural Networks
P Papalampidi, E Iosif, A Potamianos – SEMDIAL 2017 SaarDial, 2017 – academia.edu
… 1 Introduction Dialogue Act (DA) classification constitutes a major processing step in Spoken Dialogue Systems (SDS) assisting the … Khanpour et al. (2016) employed a deep Long Short Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) structure with pre-trained …

Automated Assistance in E-commerce: An Approach based on Category-Sensitive Retrieval
A Majumder, A Pande, K Vonteru, A Gangwar, S Maji… – cse.iitkgp.ac.in
… as obtained by a multi-instance classifier, to enhance the existing LSTM-based retrieval … J.: The ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dialogue systems … (2014) 3104–3112 6. Hochreiter, S., Schmidhuber, J.: Long short-term memory …

Conversational/Multiturn Question Understanding
G Ren, M Malik, X Ni, Q Ke, N Bhide – scai.info
… A high-level diagram of the model is shown in Figure 2. Long short term memory (LSTM) is the RNN used, and the input text is first converted into word embeddings … 2000. Reinforcement Learning for Spoken Dialogue Systems …

Learning Generative End-to-end Dialog Systems with Knowledge
T Zhao – 2017 – cs.cmu.edu
… An extension to DQN is a Deep Recurrent Q-Network (DRQN) which introduces a Long Short-Term Memory (LSTM) layer [38] on top of the convolutional … Our work [114] is the first one that applied DRQN to solve E2E dialog systems and learn dialog policy that can interact with …

Named Entity Recognition with Gated Convolutional Neural Networks
C Wang, W Chen, B Xu – … and Natural Language Processing Based on …, 2017 – Springer
… a popular NLP task that plays a vital role for downstream systems, such as machine translation systems and dialogue systems … Recently, recurrent neural networks (RNNs), together with their variants long short-term memory (LSTM) [10] and gated recurrent unit (GRU) [5], have …

Dialog System & Technology Challenge 6 Overview of Track 1-End-to-End Goal-Oriented Dialog learning
J Perez, YL Boureau, A Bordes – workshop.colips.org
… Modeling Conversations to Learn Responding Policies of E2E Task-oriented Dialog System Team 2 a hierarchical Long Short-Term Memory (LSTM) based ranking module, a Conditional Random Field (CRF) based slot confirming module, and a heuristic scoring module …

Integrating both Visual and Audio Cues for Enhanced Video Caption
W Hao, Z Zhang, H Guan, G Zhu – arXiv preprint arXiv:1711.08097, 2017 – arxiv.org
… 2016) proposed a multimodal Long Short-Term Memory (LSTM) for speaker identification, which referred to locating a person who has the same … long temporal dependency, such as visual question answering (Xiong, Merity, and Socher 2016) and dialog systems (Dodge et al …

Graph Enhanced Memory Networks for Sentiment Analysis
Z Xu, R Vial, K Kersting – Joint European Conference on Machine Learning …, 2017 – Springer
… Typical examples include the tree structure of a sentence and the knowledge graph in a dialogue system … For the input module, we use word embedding [24, 28, 32] and long short-term memory (LSTM) [9, 11, 16, 40], i.e., the LSTM with pre-trained word vectors as the frozen …

Multiple relations extraction among multiple entities in unstructured text
J Liu, H Ren, M Wu, J Wang, H Kim – Soft Computing, 2017 – Springer
… the semantic relations among entities, mine latent relations among entities, and perform other complex NLP work such as spoken dialog systems and conversational agents and so on … Dynamic long short-term memory (LSTM) is also adopted in the model …

Concept Transfer Learning for Adaptive Language Understanding
S Zhu, K Yu – arXiv preprint arXiv:1706.00927, 2017 – arxiv.org
… The ability of LU approaches to cope with changing domains and limited training data is of particular interest for the deployment of commercial dialogue systems … Yao et al. (2014) introduced LSTM (long short-term memory networks) and a deep LSTM architecture for this task and …

Multi-Task Learning for Speaker-Role Adaptation in Neural Conversation Models
Y Luan, C Brockett, B Dolan, J Gao… – arXiv preprint arXiv …, 2017 – arxiv.org
… The emergence of these agents has been paralleled by burgeoning interest in training natural-sounding dialog systems from conversational exchanges between humans (Ritter et al., 2011; Sordoni et … Wen et al., 2015) introduced a Dialog-Act component into the LSTM cell to …

On comparison of deep learning architectures for distant speech recognition
R Sustika, AR Yuliani, E Zaenudin… – … Systems and Electrical …, 2017 – ieeexplore.ieee.org
… investigate the robustness of various deep learning architectures: DBN-DNN (Deep Belief Network Deep Neural Network), LSTM (Long Short Term Memory), TDNN (Time … be used as a standalone output as in meeting diarisation, or as inputs to spoken dialogue systems such as …

Text Generation Using Different Recurrent Neural Networks
P Taneja, KG Verma – 2017 – dspace.thapar.edu
… And Long Short-Term Memory came out as a potential successor. LSTM has the ability to forget, which means it can decide whether to forget the previous hidden state or to keep it. It also has … Long Short-Term Memory (LSTM) is a special type of RNN that was designed to model …
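The "ability to forget" described in this entry comes from the LSTM's gating equations: a forget gate scales the previous cell state before the input gate mixes in a new candidate. A minimal NumPy sketch follows; the stacked-weight layout and toy dimensions are assumptions for illustration, not taken from the cited thesis:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, b):
    # W maps the concatenated [x; h_prev] to four stacked gate pre-activations
    z = W @ np.concatenate([x, h_prev]) + b
    f, i, o, g = np.split(z, 4)
    f, i, o = sigmoid(f), sigmoid(i), sigmoid(o)  # forget / input / output gates
    g = np.tanh(g)                                # candidate cell update
    c = f * c_prev + i * g                        # forget gate decides what to keep
    h = o * np.tanh(c)
    return h, c

# toy step: input size 3, hidden size 2; all-zero weights put every gate at 0.5,
# so the forget gate simply halves the previous cell state
x, h0, c0 = np.zeros(3), np.zeros(2), np.ones(2)
W, b = np.zeros((8, 5)), np.zeros(8)
h1, c1 = lstm_step(x, h0, c0, W, b)
```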

YJTI at the NTCIR-13 STC Japanese Subtask
T Shimizu – research.nii.ac.jp
… In this work, we demonstrate that a retrieval-based dialog system can be effective and that the combination of two elements, a large-scale neural model … 2. OUR SYSTEM Considering our usage of the long short-term memory recurrent neural networks (LSTM-RNNs) [4 …

Disfluency Detection using a Noisy Channel Model and a Deep Neural Language Model
PJ Lou, M Johnson – Proceedings of the 55th Annual Meeting of the …, 2017 – aclweb.org
… Moreover, disfluencies pose a major challenge to natural language processing tasks, such as dialogue systems, that rely on speech … a Noisy Channel Model (NCM) to generate n-best candidate disfluency analyses, and a Long Short-Term Memory (LSTM) language model to …

Towards a Response Selection System for Spoken Requests in a Physical Domain
A Partovi, I Zukerman, Q Tran – pdfs.semanticscholar.org
… A combination of deep learning and reinforcement learning has been used in end-to-end dialogue systems that query a … Our RNN model is based on the Long-Short-Term-Memory (LSTM) architecture [Hochreiter and Schmidhuber, 1997], which can capture long-range …

Interaction and Transition Model for Speech Emotion Recognition in Dialogue
R Zhang, A Atsushi, S Kobashikawa… – Proc. Interspeech …, 2017 – pdfs.semanticscholar.org
… Convolutional Neural Network (CNN) [10] and Long Short-Term Memory Recurrent Neural Network (LSTM-RNN … Both the EIT and ET models used a 1-layer LSTM with 64 hidden units … D. Hakkani-Tür, “Using context to improve emotion detection in spoken dialog systems,” in Proceed …

Variational Neural Conversational Model
X Tong, Y Li, CM Yen – cs.cmu.edu
… To Sequence model was first introduced in (Cho et al., 2014) and has since become the standard model for dialogue systems (Vinyals & … Nested-LSTM structure. A long short term memory network is one of the most common extensions of the canonical recurrent neural network …

Steering output style and topic in neural response generation
D Wang, N Jojic, C Brockett, E Nyberg – arXiv preprint arXiv:1709.03010, 2017 – arxiv.org
… Page 3. a recurrent activation unit that we employ in the Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997). The decoder, which is also implemented as an RNN, generates one word at a time, based on the context vector set returned by the encoder …

Regularized Neural User Model for Goal Oriented Spoken Dialogue Systems
M Serras, MI Torres, A del Pozo – pdfs.semanticscholar.org
… Page 5. Regularized Neural User Model for Goal Oriented Spoken Dialogue Systems 5 … The encoding layer is composed of a bidirectional Long Short Term Memory (LSTM) [9], whose output is the dialogue history encoded as hf forward and as hb backward …

End-to-End Dialogue with Sentiment Analysis Features
A Rinaldi, O Oseguera, J Tuazon, AC Cruz – International Conference on …, 2017 – Springer
… Sequence-to-sequence learning Dialogue system Conversational agent Chatbot Recurrent neural network Sentiment analysis … to improve the performance of the model on long sequences of words by using Long Short Term Memory (LSTM) cells while …

Non-Markovian Control with Gated End-to-End Memory Policy Networks
J Perez, T Silander – arXiv preprint arXiv:1705.10993, 2017 – arxiv.org
… This method uses a recurrent network, namely a Long Short Term Memory (LSTM) [HS97], to add a … Recently, recurrent models like LSTM have been investigated to incorporate such … reward maximization like natural language translation [BCB14] or end-to-end dialog systems …

EncodingWord Confusion Networks with Recurrent Neural Networks for Dialog State Tracking
G Jagfeld, NT Vu – arXiv preprint arXiv:1707.05853, 2017 – arxiv.org
… Task-oriented dialog systems are often implemented in a modular architecture to break up the complex task of conducting dialogs … 2Apart from GRUs, long short-term memory (LSTM) cells (Hochreiter and Schmidhuber, 1997) are a more traditional way to extend the recurrent …

Convolutional recurrent neural network for question answering
MMA Zaman, SZ Mishu – Electrical Information and …, 2017 – ieeexplore.ieee.org
… NLP) can be presented as a question answering problem [1]. Moreover, QA can be used to develop dialogue systems and chatbots … has been solved by Hochreiter and Schmidhuber [28], who invented a new way to calculate the recurrent unit, called long short term memory (LSTM) …

Speech Emotion Recognition based on Gaussian Mixture Models and Deep Neural Networks
IJ Tashev, ZQ Wang, K Godin – Information Theory and …, 2017 – ieeexplore.ieee.org
… With better understanding of the human and the emotion in spoken query, the spoken dialog system can thus achieve a better … according to the one giving largest likelihood at the test stage [4]. Recurrent Neural Networks (RNN) with long-short term memory (LSTM) may also be …

Exploiting end of sentences and speaker alternations in language modeling for multiparty conversations
H Ashikawa, N Tawara, A Ogawa… – Asia-Pacific Signal …, 2017 – ieeexplore.ieee.org
… Long short-term memory language models (LSTMLMs) [18] were also exploited in the present study to capture contexts that … [5] W. Xu and A. Rudnicky, “Language modeling for dialog system,” in Proc …

Image Description Using Deep Neural Network
AP Deshmukh, AS Ghotkar – 2017 – ijsrst.com
… and is core to a wide range of NLP applications such as machine translation, summarizing, dialogue systems and machine … For the encoder, we learn a joint image-sentence embedding where sentences are encoded using long short-term memory (LSTM) recurrent neural …

A “small-data”-driven approach to dialogue systems for natural language human computer interaction
T Boros, SD Dumitrescu – Speech Technology and Human …, 2017 – ieeexplore.ieee.org
… A scenario is the equivalent of a frame in a frame-based dialogue system, but by default we don’t log any … Several research efforts have been carried out on text categorization with Long Short-Term Memory (LSTM) Networks [11], Gated Recurrent Units (GRUs) [12] and Deep …

Toward Continual Learning for Conversational Agents
S Lee – arXiv preprint arXiv:1712.09943, 2017 – arxiv.org
… This extension allows us to simulate some common situations where a developer builds a dialog system which covers task-specific utterances well but fails on unanticipated generic utterances. Training Details To implement the state tracker, we use three LSTM-RNNs with – 25 …

Feature-based Compositing Memory Networks for Aspect-based Sentiment Classification in Social Internet of Things
R Ma, K Wang, T Qiu, AK Sangaiah, D Lin… – Future Generation …, 2017 – Elsevier
… Instead of feature engineering, neural network models based on Recurrent Neural Networks (RNNs) and Long Short-Term Memory (LSTM) generate continuous text representation to capture the relation between aspect terms and context words [14] …

Predicting Users’ Negative Feedbacks in Multi-Turn Human-Computer Dialogues
X Wang, J Wang, Y Liu, X Wang, Z Wang… – Proceedings of the Eighth …, 2017 – aclweb.org
… Abstract User experience is essential for human-computer dialogue systems … A GRU stores context information in the internal memory structure. It performs comparably with long short-term memory (LSTM) and has lower complexity (Chung et al., 2014) …
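The GRU that this entry compares with the LSTM achieves its lower complexity by using two gates instead of three and no separate cell state. A minimal sketch with arbitrary toy weights (an illustration of the standard GRU equations, not the paper's code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(x, h_prev, Wz, Wr, Wh, bz, br, bh):
    v = np.concatenate([x, h_prev])
    z = sigmoid(Wz @ v + bz)   # update gate: how much new state to let in
    r = sigmoid(Wr @ v + br)   # reset gate: how much old state feeds the candidate
    h_tilde = np.tanh(Wh @ np.concatenate([x, r * h_prev]) + bh)
    return (1 - z) * h_prev + z * h_tilde

# toy step: input size 3, hidden size 2; zero weights give z = 0.5 and
# h_tilde = 0, so the new state is just half of the old one
x, h_prev = np.zeros(3), np.array([1.0, -2.0])
Z = np.zeros((2, 5))
h = gru_step(x, h_prev, Z, Z, Z, np.zeros(2), np.zeros(2), np.zeros(2))
```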

Predicting Success in Goal-Driven Human-Human Dialogues
M Noseworthy, JCK Cheung, J Pineau – Proceedings of the 18th Annual …, 2017 – aclweb.org
… Complementary work has been done that shares a common goal of extending dialogue systems to open-ended domains … In our implementation, we use Long Short Term Memory (LSTM) units (Hochreiter and Schmidhuber, 1997) to account for long-term dependencies …

Unbounded cache model for online language modeling with open vocabulary
E Grave, MM Cisse, A Joulin – Advances in Neural Information …, 2017 – papers.nips.cc
… We train recurrent neural networks with 256 LSTM hidden units, using the Adagrad algorithm with a learning rate of 0.2 and 10 epochs … Long short-term memory … Building end-to-end dialogue systems using generative hierarchical neural network models. In AAAI, 2016 …

Incomplete Follow-up Question Resolution using Retrieval based Sequence to Sequence Learning
V Kumar, S Joshi – Proceedings of the 40th International ACM SIGIR …, 2017 – dl.acm.org
… Sequence to sequence learning has also been applied in dialogue systems for user modeling [2, 41, 52] … This problem is addressed by using either a long short-term memory (LSTM) [18] or a gated recurrent unit (GRU) [9] cell …

Clinical Intervention Prediction and Understanding with Deep Neural Networks
H Suresh, N Hunt, A Johnson, LA Celi… – Machine Learning …, 2017 – proceedings.mlr.press
… We use long short-term memory networks (LSTM) (Hochreiter and … in time-series data (Bengio et al., 1994) and have achieved state-of-the-art results in many different applications: e.g. machine translation (Hermann et al., 2015), dialogue systems (Chorowski et al., 2015 …

Table-to-text Generation by Structure-aware Seq2seq Learning
T Liu, K Wang, L Sha, B Chang, Z Sui – arXiv preprint arXiv:1711.09724, 2017 – arxiv.org
… The structure-aware seq2seq architecture we proposed exploits encoder-decoder framework using long short-term memory (LSTM) (Hochreiter and Schmidhuber 1997) units with local and global addressing on the structured table …

Enhanced semantic refinement gate for RNN-based neural language generator
VK Tran, VT Nguyen, LM Nguyen – Knowledge and Systems …, 2017 – ieeexplore.ieee.org
… P.-H. Su, D. Vandyke, and S. Young, “Semantically conditioned LSTM-based natural … [22] S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural Computation … S. Young, “Multi-domain neural network language generation for spoken dialogue systems,” arXiv preprint …

Efficiently Trainable Text-to-Speech System Based on Deep Convolutional Networks with Guided Attention
H Tachibana, K Uenoyama, S Aihara – arXiv preprint arXiv:1710.08969, 2017 – arxiv.org
… 7962–7966. [9] Y. Fan et al., “TTS synthesis with bidirectional LSTM based recurrent neural networks,” in Proc … 1964–1968. [10] H. Zen and H. Sak, “Unidirectional long short-term memory recurrent neural … [20] IV Serban et al., “Building end-to-end dialogue systems us- ing …

Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
M Lapata, P Blunsom, A Koller – Proceedings of the 15th Conference of …, 2017 – aclweb.org
… Character-Word LSTM Language Models Lyan Verwimp, Joris Pelemans, Hugo Van hamme and Patrick Wambacq … A Network-based End-to-End Trainable Task-oriented Dialogue System Tsung-Hsien Wen, David Vandyke, Nikola Mrkšić, Milica Gašić, Lina M. Rojas Barahona …

Detection of social signals for recognizing engagement in human-robot interaction
D Lala, K Inoue, P Milhorat, T Kawahara – arXiv preprint arXiv:1709.10257, 2017 – arxiv.org
… In previous studies we have developed ERICA’s dialogue system (Lala et al … We used a long short-term memory (LSTM) network (Hochreiter and Schmidhuber 1997) for the head nodding model, which can be readily applied to gesture recognition (Ordóñez and Roggen 2016 …

Large-Scale Simple Question Generation by Template-Based Seq2seq Learning
T Liu, B Wei, B Chang, Z Sui – National CCF Conference on Natural …, 2017 – Springer
… Res. 249–256 (2010)Google Scholar. 9. Hochreiter, S., Schmidhuber, J.: Long short-term memory … Wen, T.-H., Gasic, M., Mrksic, N., Su, P.-H., Vandyke, D., Young, S.: Semantically conditioned LSTM-based natural language generation for spoken dialogue systems …

Nonrecurrent Neural Structure for Long-Term Dependence
S Zhang, C Liu, H Jiang, S Wei, L Dai… – IEEE/ACM Transactions …, 2017 – ieeexplore.ieee.org
… For example, the long short term memory (LSTM) model [22], [23] is an enhanced RNN architecture to implement the recurrent feedbacks using various learnable gates, which ensure that the gradients can flow back to the past more effectively …

Clinical event prediction and understanding with deep neural networks
H Suresh – 2017 – dspace.mit.edu
… term memory networks (LSTM) [31], which have been shown to effectively model complicated dependencies in time-series data [3]. Previously, LSTMs have achieved state-of-the-art results in many different applications, such as machine translation [28], dialogue systems [12] …

May I take your order? A Neural Model for Extracting Structured Information from Conversations
B Peng, M Seltzer, YC Ju, G Zweig… – Proceedings of the 15th …, 2017 – aclweb.org
… Nallapati et al. (2016) proposed using a sequence to sequence model to summarize source code into natural language; they used an LSTM as the encoder and another attentional LSTM as the decoder to jointly learn content selection and realization. Dong and Lapata (2016) pre-

Towards Debate Automation: a Recurrent Model for Predicting Debate Winners
P Potash, A Rumshisky – Proceedings of the 2017 Conference on …, 2017 – aclweb.org
… 3.2 Recurrent Architecture Our RNN model uses a long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997) component … h0 = tanh(Wpre apre + bpre) (8) We choose tanh for the activation function because it is the same activation function used by the LSTM cell …

Extended Hybrid Code Networks for DSTC6 FAIR Dialog Dataset
J Ham, S Lim, KE Kim – workshop.colips.org
… LSTM dense + softmax … 7] IV Serban, A. Sordoni, Y. Bengio, A. Courville, and J. Pineau, “Building end-to-end dialogue systems using generative … Available: http://arxiv.org/abs/1702.03274 [13] S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural Computation, vol …

Response selection from unstructured documents for human-computer conversation systems
Z Yan, N Duan, J Bao, P Chen, M Zhou, Z Li – Knowledge-Based Systems, 2017 – Elsevier
… NASM [29] uses an enriching long short term memory (LSTM) with a latent stochastic attention mechanism to model similarity between QR pairs. AB-CNN [30] is an attention-based CNN which calculates a similarity matrix and takes it as a new channel of the CNN model …

Dialog for natural language to code
S Chaurasia – 2017 – repositories.lib.utexas.edu
… the generation of fully executable code from their initial description. 1.3 Dialog Systems Another line of research that has recently garnered increasing attention is … Dialog System 3.1 Chapter Overview We propose a text-based dialog system with which users can converse using …

Ask Me Otherwise: Synonym-Based Memory Networks for Reading Comprehension
B Srivatsan – bharathsrivatsan.com
… input document simultaneously (and dynamically) using Long Short Term Memory networks … 1 For the purposes of this paper, I do not explore RNN and LSTM constructions in much depth, though it is worth mentioning that these neural network architectures are very commonly …

CI-Bot: A Hybrid Chatbot Enhanced by Crowdsourcing
X Liang, R Ding, M Lin, L Li, X Li, S Lu – … ) Joint Conference on Web and Big …, 2017 – Springer
… map the input sequence into a fixed-dimension vector by using a Long Short Term Memory (LSTM) layer, then … The actual model is implemented by a multi-layer LSTM neural network … Huang, THK, Lasecki, WS, Bigham, JP: Guardian: a crowd-powered spoken dialog system for web …

End-to-End Information Extraction without Token-Level Supervision
RB Palm, D Hovy, F Laws, O Winther – arXiv preprint arXiv:1707.04913, 2017 – arxiv.org
… The first layer is a Bi-directional Long Short Term Memory network (Hochreiter and Schmidhuber, 1997) (Bi-LSTM) and the … The last hidden state of the summarizer LSTM is then concatenated to each input to the … Asgard: A portable architecture for multilingual dialogue systems …

Find the Conversation Killers: a Predictive Study of Thread-ending Posts
Y Jiao, C Li, F Wu, Q Mei – arXiv preprint arXiv:1712.08636, 2017 – arxiv.org
… In ConverNet, we use BiLSTM as the basic building block of its architecture. LSTM (Long Short-term Memory) [13] units are widely used to build an RNN model, and BiLSTM [12] is one of its extensions. Below we briefly introduce the basic formulation of an LSTM layer …
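A bidirectional layer such as the BiLSTM this entry builds on reads the sequence left-to-right and right-to-left and concatenates the two state sequences position by position. The sketch below substitutes simple tanh cells for full LSTM cells to keep it short; weights and dimensions are illustrative:

```python
import numpy as np

def rnn_pass(xs, W_h, W_x, b):
    # unidirectional pass with simple tanh cells, returning every hidden state
    h, out = np.zeros(W_h.shape[0]), []
    for x in xs:
        h = np.tanh(W_h @ h + W_x @ x + b)
        out.append(h)
    return out

def bidirectional(xs, fwd_params, bwd_params):
    fwd = rnn_pass(xs, *fwd_params)              # left-to-right states
    bwd = rnn_pass(xs[::-1], *bwd_params)[::-1]  # right-to-left, realigned
    return [np.concatenate([f, b]) for f, b in zip(fwd, bwd)]

# toy run: sequence of 4 inputs of size 3, hidden size 2 per direction
params = (np.full((2, 2), 0.1), np.full((2, 3), 0.1), np.zeros(2))
states = bidirectional([np.ones(3)] * 4, params, params)
```

Each output state has twice the per-direction hidden size, which is why downstream layers in BiLSTM models consume vectors of dimension 2h.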

Text Generation Based on Generative Adversarial Nets with Latent Variable
H Wang, Z Qin, T Wan – arXiv preprint arXiv:1712.00170, 2017 – arxiv.org
… It is also essential to machine translation, text summarization, question answering and dialogue systems [1]. One popular approach for … Long short-term memory (LSTM) is an improved version of the recurrent neural network considering long-term dependency in order to overcome …

Variational Attention for Sequence-to-Sequence Models
H Bahuleyan, L Mou, O Vechtomova… – arXiv preprint arXiv …, 2017 – arxiv.org
… like to transform source information to target information, e.g., machine translation, dialogue systems, and text … h(tar)_{j−1}, y_{j−1}). In our experiments, we use long short-term memory units (Hochreiter … We used LSTM-RNNs with 100 hidden units for both the encoder and decoder …

Natural Language Inference with External Knowledge
Q Chen, X Zhu, ZH Ling, D Inkpen – arXiv preprint arXiv:1711.04289, 2017 – arxiv.org
… 2 Page 3. 2016), and dialogue system (Chen et al., 2016) … Long short-term memory. Neural Computation, 9(8): 1735–1780, 1997 … Yang Liu, Chengjie Sun, Lei Lin, and Xiaolong Wang. Learning natural language inference using bidirectional LSTM model and inner-attention …

End-to-end Adversarial Learning for Generative Conversational Agents
O Ludwig – arXiv preprint arXiv:1711.10122, 2017 – arxiv.org
… The proposed model uses two Long Short-term Memory networks (LSTM) [21], both with the same … where fθ(·) (with θ ∈ Θ) represents the LSTM that encodes the incomplete answer (y1 … Y. Bengio, AC Courville, J. Pineau, Building end-to-end dialogue systems using generative …

Natural language inference over interaction space
Y Gong, H Luo, J Zhang – arXiv preprint arXiv:1709.04348, 2017 – arxiv.org
… its introduction: machine translation (Bahdanau et al., 2014), abstractive summarization (Rush et al., 2015), Reading Comprehension (Hermann et al., 2015), dialog systems (Mei et al … (2016) proposes long short-term memory-networks (LSTMN … Re-read LSTM proposed by Sha et al …

Deep-Learning Based Automatic Spontaneous Speech Assessment in a Data-Driven Approach for the 2017 SLaTE CALL Shared Challenge
YR Oh, HB Jeon, HJ Song, BO Kang, YK Lee… – Proc. 7th ISCA … – slate2017.org
… related texts and a common-domain LM using general-domain texts are generated as follows: • Mdomain,3gram: CALL-domain, 3-gram LM • Mdomain,5gram: CALL-domain, 5-gram LM • Mdomain,lstm: CALL-domain, a two-layer long short-term memory (LSTM) recurrent neural …

The importance of multimodality in sarcasm detection for sentiment analysis
MS Razali, AA Halin, NM Norowi… – … (SCOReD), 2017 IEEE …, 2017 – ieeexplore.ieee.org
… 44], the authors use a combination of a few algorithms, namely CNN (Convolutional Neural Network), LSTM (Long Short-Term Memory) and DNN … [4] J. Tepperman, D. Traum, and S. Narayanan, “’Yeah right’: sarcasm recognition for spoken dialogue systems.”, INTERSPEECH, p …

Multilingual spoken dialog systems for handheld devices
BML Srivastava – 2017 – researchgate.net
… Language Identification LLE Locally Linear Embedding LPCC Linear Prediction Cepstral Coefficients LSTM-RNN Long Short Term Memory – Recurrent Neural … Neural Network Language Model SDC Shifted Delta Coefficients SDS Spoken Dialog System SGD Stochastic …

Native Language Identification of Spoken Language Using Recurrent Neural Networks
KC Huang, J Lu, W Lu – stanford.edu
… The main application for the system was a spoken dialogue system giving information about venues in San Francisco across two domains about … Three main varieties of RNN cells exist: regular RNN cells, long short-term memory (LSTM), and gated recurrent units (GRU) …

Decoding with value networks for neural machine translation
D He, H Lu, Y Xia, T Qin, L Wang, T Liu – Advances in Neural …, 2017 – papers.nips.cc
… in which function f is the recurrent unit such as Long Short-Term Memory (LSTM) unit [12] or … the decoder RNN hidden representation at step t, similarly computed by an LSTM or GRU … it to other sequence-to-sequence learning tasks, such as image captioning and dialog systems …

Conversational Agents Embodying a Character Using Neural Networks
M Ilie, T Rebedea – rochi.utcluj.ro
… In an open dialogue system, there isn’t a well-defined purpose or intention for the conversation, therefore the user can drive the … This model consists of two layers of recurrent neural networks using Long Short-Term Memory (LSTM) cells, an encoder and a decoder [3]. The input …

Knowledge acquisition for visual question answering via iterative querying
Y Zhu, JJ Lim, L Fei-Fei – The IEEE Conference on …, 2017 – openaccess.thecvf.com
… the most popular choices is to use CNN to encode images and LSTM to encode … dating back to the 1970s, is SHRDLU [36], which provided a dialog system for users … An early prominent innovation is long short-term memory [15], which introduces memory cells to vanilla recurrent …

End-to-End Architectures for Speech Recognition
Y Miao, F Metze – New Era for Robust Speech Recognition, 2017 – Springer
… Most CTC-trained ASR systems have been built using stacked layers of long short-term memory networks (LSTMs) [30], eg, in [25, 26, 61] … In many practical implementations, LSTM networks (LSTMs, [30]) have acted as the building block in the encoder/decoder network …

Domain Transfer for Deep Natural Language Generation from Abstract Meaning Representations
N Dethlefs – IEEE Computational Intelligence Magazine, 2017 – ieeexplore.ieee.org
… To do this, we model our natural language generator as a Long Short-Term Memory (LSTM) encoder-decoder, in which two LSTMs are jointly trained to learn a probability distribution that conditions a sequence of words on a sequence of semantic symbols …

Customized Nonlinear Bandits for Online Response Selection in Neural Conversation Models
B Liu, T Yu, I Lane, OJ Mengshoel – arXiv preprint arXiv:1711.08493, 2017 – arxiv.org
… we focus on online learning of response selection in retrieval-based dialog systems. We propose a contextual multi-armed bandit model with a nonlinear reward function that uses distributed representation of text for online response selection. A bidirectional LSTM is used to …

Modeling Situations in Neural Chat Bots
S Sato, N Yoshinaga, M Toyoda… – Proceedings of ACL 2017 …, 2017 – aclweb.org
… open problem in chat dialogue modeling, and this difficulty has partly forced us to focus on task-oriented dialogue systems (Williams and … We used a long-short term memory (LSTM) (Zaremba et al., 2014) as the RNN encoder and decoder, sampled softmax (Jean et al., 2015) to …

A knowledge-grounded neural conversation model
M Ghazvininejad, C Brockett, MW Chang… – arXiv preprint arXiv …, 2017 – arxiv.org
… A traditional dialog system would use predefined slots to fill conversational backbone (bold text) with content; here, we present a more … conversational SEQ2SEQ models, except that we use gated recurrent units (GRU) (Chung et al., 2014) instead of LSTM (Hochreiter and …

A Continuous Relaxation of Beam Search for End-to-end Training of Neural Sequence Models
K Goyal, G Neubig, C Dyer… – arXiv preprint arXiv …, 2017 – arxiv.org
… tanh activation function) for the input sequence x, and an LSTM decoder (1 … Matthews, and NA Smith, “Transition-based dependency parsing with stack long short-term memory,” arXiv preprint … A. Courville, and J. Pineau, “Building end-to-end dialogue systems using generative …

Emergence of language with multi-agent games: learning to communicate with sequences of symbols
S Havrylov, I Titov – Advances in Neural Information Processing …, 2017 – papers.nips.cc
… We trained language model pθ(m) using an LSTM recurrent neural network … Learning dialogue systems for collaborative activities between machine and human were previously considered by Lemon et al. (2002) … Long short-term memory …

Sequence Adversarial Training and Minimum Bayes Risk Decoding for End-to-end Neural Conversation Models
W Wang, Y Koji, BA Harsham, T Hori… – … of the 6th Dialog System …, 2017 – merl.com
… From the 6th challenge, the focus of DSTC has been expanded to broader areas of dialog system technology … multiple sequence-to-sequence models, and minimum Bayes risk (MBR) decoding, where the multiple models are a long short-term memory (LSTM) encoder decoder …

A context-aware speech recognition and understanding system for air traffic control domain
Y Oualil, D Klakow, G Szaszák… – … (ASRU), 2017 IEEE, 2017 – ieeexplore.ieee.org
… Everitt et al. [5] proposed a dialogue system for gyms, which, based on … In practice, this model is a Long-Short Term Memory (LSTM) neural network [12, 13] trained on landing sequences of commands, which are reconstructed from data collected in Prague or Vienna airports …

Chinese Zero Pronoun Resolution with Deep Memory Network
Q Yin, Y Zhang, W Zhang, T Liu – … of the 2017 Conference on Empirical …, 2017 – aclweb.org
… First, we represent the AZP zp by utilizing its contextual information, that is, proposing the ZP-centered LSTM that encodes zp into its distributed vector representation (ie vzp in Figure 1). We then regard vzp as the initial representation of zp, and feed it as the input to the first …

Robust children and adults speech identification and confidence measure based on DNN posteriorgram
H Kamiyama, A Ando, S Kobashikawa… – Asia-Pacific Signal …, 2017 – ieeexplore.ieee.org
… The model uses the LSTM-RNN (Long Short Term Memory-Recurrent Neural Network) to learn … Under TV noise, our proposed LSTM-based confidence measure, which is modeling DNN … Saruwatari and K. Shikano, “Noise robust real world spoken dialogue system using GMM …

Adversarial generation of natural language
S Subramanian, S Rajeswar, F Dutil, C Pal… – Proceedings of the 2nd …, 2017 – aclweb.org
… Recurrent Neural Networks (RNNs), particularly Long short-term memory networks (LSTMs) (Hochreiter and Schmidhuber, 1997) and … generated at the word and character-level by our LSTM and CNN … in other domains of NLP such as non goal-oriented dialog systems where a …

Constructing a Natural Language Inference dataset using generative neural networks
J Starc, D Mladenić – Computer Speech & Language, 2017 – Elsevier
… Luong et al., 2015), summarization (Clarke and Lapata, 2008; Rush et al., 2015) and conversational dialog systems (Serban et al … We use two variants of RNNs—Long short term memory (LSTM) network (Hochreiter and Schmidhuber, 1997) and an attention-based extension …

A hierarchical neural model for learning sequences of dialogue acts
QH Tran, I Zukerman, G Haffari – Proceedings of the 15th Conference of …, 2017 – aclweb.org
… (DAs). This task is particularly useful for dialogue systems, as knowing the DA of an utterance supports its interpretation, and the generation of an … The LSTM basis function is calculated with a fixed vector g0 instead of the previous time step’s vector … Long short-term memory …

A joint deep model of entities and documents for cumulative citation recommendation
L Ma, D Song, L Liao, Y Ni – Cluster Computing, 2017 – Springer
… DSCNN consists of a convolutional layer built on top of long short-term memory (LSTM) networks. For a single sentence, the LSTM network processes the sequence of word embeddings to capture long-distance dependencies within the sentence …

Integrating Extractive and Abstractive Models for Long Text Summarization
S Wang, X Zhao, B Li, B Ge… – Big Data (BigData …, 2017 – ieeexplore.ieee.org
… success in various natural language processing tasks, including but not limited to machine translation [2], voice recognition [3] and dialogue systems [22], etc … In practice, it is found that gated RNN alternatives such as LSTM [9] or GRU [4] often perform better than basic RNN …

Dual Learning for Cross-domain Image Captioning
W Zhao, W Xu, M Yang, J Ye, Z Zhao, Y Feng… – Proceedings of the 2017 …, 2017 – dl.acm.org
… adaptation. Mo et al. [17] proposed a transfer learning framework based on POMDP to learn a personalized dialogue system. The … words. We use long short-term memory (LSTM) cell as the basic RNN unit. CNN encoder. Similar …

Improving Deep Neural Network Based Speech Synthesis through Contextual Feature Parametrization and Multi-Task Learning
Z Wen, K Li, Z Huang, CH Lee, J Tao – Journal of Signal Processing …, 2017 – Springer
… i o c h Figure 2 DNN based speech synthesis. Left: restricted Boltzmann Machine (RBM); right: long short term memory (LSTM) and bidirectional recurrent neural network (BRNN). Classification Label Soft-Max Regression Layer Classification Layer Speech Parameters …

Deep Reinforcement Learning for Conversational AI
M Jadeja, N Varia, A Shah – arXiv preprint arXiv:1709.05067, 2017 – arxiv.org
… The LSTM (Long-Short Term Memory) based approach vectors a dialogue system. It then includes various layers of vectored architectures to get the output. DNNs are able to perform parallel computing in a very optimised way which thus makes dialogue generation easier …

Classification-based spoken text selection for LVCSR language modeling
V Chunwijitra, C Wutiwiwatchai – EURASIP …, 2017 – asmp-eurasipjournals.springeropen …
… the classical entropy-based sentence selection methods previously proposed, in this paper, modern machine learning algorithms including support vector machines (SVM), conditional random fields (CRF), and long short-term memory neural network (LSTM) are comparatively …

Exploring neural text simplification models
S Nisioi, S Štajner, SP Ponzetto, LP Dinu – … of the 55th Annual Meeting of …, 2017 – aclweb.org
… used in many applications (Graves, 2012), from speech and signal processing to text processing or dialogue systems (Serban et al … We use the OpenNMT framework (Klein et al., 2017) to train and build our architecture with two LSTM layers (Hochreiter … Long short-term memory …

UE-HRI: a new dataset for the study of user engagement in spontaneous human-robot interactions
A Ben-Youssef, C Clavel, S Essid, M Bilac… – Proceedings of the 19th …, 2017 – dl.acm.org
… language modelling and speaking style to detect user frustration with a telephone-based dialog system interface … inappropriate utterances that lead to dialogue breakdowns, the best performance was found using LSTM-RNN (Long Short-Term Memory – Recurrent Neural …

TrumpBot: Seq2Seq with Pointer Sentinel Model
F Zivkovic, D Chen – pdfs.semanticscholar.org
… “Framewise phoneme classification with bidirectional LSTM and other neural network architectures” … “Long short-term memory” … “How NOT to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation” …

General Pipeline Architecture for Domain-Specific Dialogue Extraction from different IRC Channels
A Abouzeid – 2017 – content.grin.com
… about Ubuntu-related technical problems, that makes it suitable for a goal-oriented Dialogue System dedicated for technical issues in Ubuntu … learning architectures were applied, the Recurrent Neural Network (RNN) and the Long Short Term Memory Network (LSTM) …

Composite task-completion dialogue policy learning via hierarchical deep reinforcement learning
B Peng, X Li, L Li, J Gao, A Celikyilmaz, S Lee… – Proceedings of the …, 2017 – aclweb.org
… Recent advances in deep learning have inspired many deep reinforcement learning based dialogue systems that eliminate the need for … Our composite task-completion dialogue agent consists of four components: (1) an LSTM-based language understanding module (Hakkani …

Learning utterance-level representations for speech emotion and age/gender recognition using deep neural networks
ZQ Wang, I Tashev – Acoustics, Speech and Signal Processing …, 2017 – ieeexplore.ieee.org
… important for many applications related to human computer interactions, especially for spoken dialogue systems … Recently, recurrent neural networks with long-short term memory (LSTMs) are utilized for this task … utterance-level label is assigned to every frame for LSTM training …

Summarizing Dialogic Arguments from Social Media
A Misra, S Oraby, S Tandon, P Anand… – arXiv preprint arXiv …, 2017 – arxiv.org
… Shubhangi Tandon, Sharath TS, Pranav Anand and Marilyn Walker UC Santa Cruz Natural Language and Dialogue Systems Lab 1156 N … We use GloVe embeddings to initialize our Long Short-Term Memory (LSTM) models as glove embeddings have been trained on web data …

Abstractive document summarization with a graph-based attentional neural model
J Tan, X Wan, J Xiao – Proceedings of the 55th Annual Meeting of the …, 2017 – aclweb.org
… widely used in machine translation (Bahdanau et al., 2014) and dialog systems (Mou et al., 2016), etc … state after the sentence encoder receives “” is treated as the representation of the input document c = h?1 . We use the Long Short-Term Memory (LSTM) (Hochreiter and …

Neural-based natural language generation in dialogue using rnn encoder-decoder with semantic aggregation
VK Tran, LM Nguyen – arXiv preprint arXiv:1706.06714, 2017 – arxiv.org
… Natural Language Generation (NLG) plays a critical role in a Spoken Dialogue System (SDS), and its task is to convert a meaning representation produced by the … (2015a) proposed a Long Short-Term Memory-based (HLSTM … (2015b) proposed a LSTM-based generator (SC …

How Generic Can Dialogue Breakdown Detection Be? The KTH entry to DBDC3
J Lopes – workshop.colips.org
… context of task oriented systems [2, 3, 4, 5, 6, 7]. The problem proposed in the 3rd Dialogue Breakdown Detection Challenge [8] is to detect these breakdown points in chat-based dialogue systems which are … In this case we have used Long-Short Term Memory (LSTM) …

Deep keyphrase generation
R Meng, S Zhao, S Han, D He, P Brusilovsky… – arXiv preprint arXiv …, 2017 – arxiv.org
… Previous studies (Bahdanau et al., 2014; Cho et al., 2014) indicate that it can generally provide better performance of language modeling than a simple RNN and a simpler structure than other Long Short-Term Memory networks (Hochreiter and Schmidhuber, 1997) …

Sequential short-text classification with neural networks
F Dernoncourt – 2017 – dspace.mit.edu
… representation s. RNN-based short-text representation We use a variant of RNN called Long Short Term Memory (LSTM) [51]. For the tth word in the short-text, an LSTM takes as input xt, ht−1, ct−1 and produces ht, ct based on the following formulas: it = σ(Wi xt + Ui ht−1 + bi) …
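The input-gate formula quoted in this snippet is one of four gate equations making up the standard LSTM update. A minimal NumPy sketch of the full step, with the four parameter blocks stacked so one matrix multiply produces all gate pre-activations (weights here are random and illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One LSTM step. W, U, b stack parameters for the input (i),
    forget (f), output (o) gates and the candidate update (g)."""
    d = h_prev.shape[0]
    z = W @ x_t + U @ h_prev + b             # all pre-activations, shape (4d,)
    i = sigmoid(z[0:d])                      # input gate
    f = sigmoid(z[d:2*d])                    # forget gate
    o = sigmoid(z[2*d:3*d])                  # output gate
    g = np.tanh(z[3*d:4*d])                  # candidate cell update
    c_t = f * c_prev + i * g                 # new cell state
    h_t = o * np.tanh(c_t)                   # new hidden state
    return h_t, c_t

rng = np.random.default_rng(0)
dx, dh = 3, 4
W = rng.normal(size=(4*dh, dx))
U = rng.normal(size=(4*dh, dh))
b = np.zeros(4*dh)
h, c = np.zeros(dh), np.zeros(dh)
for x in rng.normal(size=(5, dx)):           # run over a 5-step sequence
    h, c = lstm_step(x, h, c, W, U, b)
print(h.shape)  # (4,)
```

The additive cell-state update c_t = f ⊙ c_{t−1} + i ⊙ g is what lets gradients flow over long sequences.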

Bringing Semantic Structures to User Intent Detection in Online Medical Queries
C Zhang, N Du, W Fan, Y Li, CT Lu, PS Yu – arXiv preprint arXiv …, 2017 – arxiv.org
… suitable to learn dependencies from a long input sequence in practice. To address the gradient decay or exploding problem over long sequences, the Gated Recurrent Unit (GRU) [8] is proposed as a variation of the Long Short-term Memory (LSTM) unit [20] …
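The GRU variation this excerpt refers to keeps the gating idea but drops the separate cell state, using only an update gate and a reset gate. A minimal sketch with illustrative random weights (parameter names are assumptions following the common Cho et al. formulation):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(x_t, h_prev, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU step: update gate z, reset gate r, no separate cell state."""
    z = sigmoid(Wz @ x_t + Uz @ h_prev)              # update gate
    r = sigmoid(Wr @ x_t + Ur @ h_prev)              # reset gate
    h_tilde = np.tanh(Wh @ x_t + Uh @ (r * h_prev))  # candidate state
    return (1 - z) * h_prev + z * h_tilde            # interpolate old and new

rng = np.random.default_rng(1)
dx, dh = 3, 4
# Wz, Uz, Wr, Ur, Wh, Uh: alternating input-to-hidden / hidden-to-hidden shapes
params = [rng.normal(size=(dh, dx)) if i % 2 == 0 else rng.normal(size=(dh, dh))
          for i in range(6)]
h = np.zeros(dh)
for x in rng.normal(size=(5, dx)):                   # run over a 5-step sequence
    h = gru_step(x, h, *params)
print(h.shape)  # (4,)
```

Because the new state is a convex combination of the old state and the candidate, gradients can pass through the (1 − z) path largely unattenuated, addressing the same decay problem the LSTM targets.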

Addressee and Response Selection in Multi-Party Conversations with Speaker Interaction RNNs
R Zhang, H Lee, L Polymenakos, D Radev – arXiv preprint arXiv …, 2017 – arxiv.org
… The task requires modeling multi-party conversations and can be directly used to build retrieval-based dialog systems (Lu and Li 2013; Hu et al. 2014; Ji, Lu, and Li 2014; Wang et al. 2015) … 2 Related Work We follow a data-driven approach to dialog systems. Singh et al …

Handling long-term dependencies and rare words in low-resource language modelling
M Singh – 2017 – publikationen.sulb.uni-saarland.de
… long-term information (Sundermeyer et al. (2012)). To fix this instability issue, long-short-term-memory (LSTM) based units are applied to RNNs and LSTM-based models improve the performance significantly over RNNs. Further improvements (Oualil et al …

Speaker and Language Recognition and Characterization: Introduction to the CSL Special Issue
E Lleida, LJ Rodriguez-Fuentes – 2017 – Elsevier
… On the other hand, spoken language recognition (SLR) has also witnessed a remarkable interest from the community as an auxiliary technology for speech recognition (Gonzalez-Dominguez et al., 2015b), dialogue systems (Lopez-Cozar and Araki, 2005) and multimedia …

Convolutional Neural Network using a threshold predictor for multi-domain dialogue.
G Xu, H Lee – uni-leipzig.de
… I. INTRODUCTION The spoken language understanding (SLU) is one of the core components of an end-to-end dialogue system [1]. The SLU is aimed at extracting semantic meaning of … A well designed recurrent neural network models like long short term memory (LSTM) is …

Towards Micro-video Understanding by Joint Sequential-Sparse Modeling
M Liu, L Nie, M Wang, B Chen – Proceedings of the 2017 ACM on …, 2017 – dl.acm.org
… Regarding sequence modeling, Recurrent Neural Networks (RNNs) using Long Short Term Memory (LSTM) have been … 2 RELATED WORK 2.1 LSTM Recurrent Neural Network LSTM introduced in … such as language modeling [12, 29], translation [26], dialog system [11], time …

Synthesising uncertainty: the interplay of vocal effort and hesitation disfluencies
E Székely, J Mendelson, J Gustafson – 18th Annual Conference of …, 2017 – isca-speech.org
… At the same time, it provides insight into what extent spoken dialogue systems using a synthetic voice would be capable of … system was set up to include 4 feed-forward (TANH) layers each containing 1024 hidden units, followed by a long short-term memory (LSTM) layer with …

End-to-End Speech Recognition with Auditory Attention for Multi-Microphone Distance Speech Recognition
S Kim, I Lane – Proc. Interspeech 2017, 2017 – isca-speech.org
… applications, including teleconferencing, robotics, and in-car spoken dialog systems, must deal … [29] S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural computation … and A.-r. Mohamed, “Hybrid speech recognition with deep bidirectional lstm,” in Automatic …

Diversity driven attention model for query-based abstractive summarization
P Nema, M Khapra, A Laha, B Ravindran – arXiv preprint arXiv …, 2017 – arxiv.org
… To account for the complete history (or all previous context vectors) we also propose an extension of this idea where we pass the sequence of context vectors through a LSTM (Hochreiter and Schmidhuber, 1997) and ensure that the current state produced by the LSTM is …

Early prediction for physical human robot collaboration in the operating room
T Zhou, JP Wachs – Autonomous Robots, 2017 – Springer
… Machine learning techniques have been applied to recognize turn-taking events automatically, mainly for spoken dialog systems … We propose the usage of Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber 1997), a type of recurrent neural network, for early turn …

Challenges in data-to-document generation
S Wiseman, SM Shieber, AM Rush – arXiv preprint arXiv:1707.08052, 2017 – arxiv.org
… similar to Yang et al. (2016)).3 Our source data-records are then represented as ˜s = {˜rj}J j=1. Given ˜s, we use an LSTM decoder with attention and input-feeding, in the style of Luong et al. (2015), to compute the probability …

Learning symmetric collaborative dialogue agents with dynamic knowledge graph embeddings
H He, A Balakrishnan, M Eric, P Liang – arXiv preprint arXiv:1704.07130, 2017 – arxiv.org
… a new symmetric collaborative dialogue setting and a large dialogue corpus that pushes the boundaries of existing dialogue systems; (ii) DynoNet … We embed and generate utterances using Long Short Term Memory (LSTM) networks that take the graph embeddings into account …

Multi-sense based neural machine translation
Z Yang, W Chen, F Wang, B Xu – Neural Networks (IJCNN) …, 2017 – ieeexplore.ieee.org
… Here f and g are nonlinear transform functions, which can be implemented as long short term memory network (LSTM) or gated recurrent unit (GRU), and ci is a distinct context vector at time step i, which is calculated as a weighted sum of the input annotations hj: ci = Σj=1..Tx αij hj …
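The weighted sum of annotations described in this excerpt is the standard attention context vector: score each encoder annotation hj against the current decoder state, softmax the scores into weights αij, and sum. A minimal sketch using an additive (Bahdanau-style) scoring function; all weight shapes and names here are illustrative assumptions:

```python
import numpy as np

def softmax(e):
    e = e - e.max()                          # stabilize before exponentiating
    w = np.exp(e)
    return w / w.sum()

def context_vector(s_prev, H, Wa, Ua, va):
    """Score each annotation h_j against decoder state s_prev, then return
    the attention-weighted sum c_i = sum_j alpha_ij * h_j."""
    scores = np.array([va @ np.tanh(Wa @ s_prev + Ua @ h) for h in H])
    alpha = softmax(scores)                  # attention weights, sum to 1
    return alpha @ H, alpha

rng = np.random.default_rng(2)
Tx, dh, ds = 6, 4, 4
H = rng.normal(size=(Tx, dh))                # encoder annotations h_1..h_Tx
Wa = rng.normal(size=(8, ds))
Ua = rng.normal(size=(8, dh))
va = rng.normal(size=8)
c, alpha = context_vector(rng.normal(size=ds), H, Wa, Ua, va)
print(c.shape)  # (4,)
```

Because αij is recomputed at every decoder step i, each output position gets its own context vector, which is what "distinct context vector at time step i" refers to.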

How may i help you?: Modeling twitter customer serviceconversations using fine-grained dialogue acts
S Oraby, P Gundecha, J Mahmud, M Bhuiyan… – Proceedings of the …, 2017 – dl.acm.org
… and interpretation on intent. Modern intelligent conversational [1, 31] and dialogue systems draw principles from many disciplines, including philosophy, linguistics, computer science, and sociology. In this section, we describe …

The Emotional Impact of Audio-Visual Stimuli
TP Thomas – 2017 – search.proquest.com
… Figure 16: A simple transfer learning implementation. 3.8. Long-Short Term Memory Neural Networks. Long Short-Term Memory (LSTM) neural networks are modern variants of Recurrent Neural Networks (RNNs) that improve the ability to reason over temporal sequences …

Learning to attend, copy, and generate for session-based query suggestion
M Dehghani, S Rothe, E Alfonseca, P Fleury – arXiv preprint arXiv …, 2017 – arxiv.org
… It creates a sequence of hidden states, [h→1, h→2, …, h→n], where h→i = RNN(xi, h→i−1) is a dynamic function for which we can use for example an LSTM [20] or a GRU [7]. The RNN backward pass reads X in the reverse direction, ie h←i = RNN(xi, h←i+1 …
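The forward and backward passes described in this excerpt form a bidirectional encoder: position i is represented by the forward state over x1..xi concatenated with the backward state over xn..xi. A minimal sketch, with a plain tanh cell standing in for the LSTM/GRU options the excerpt mentions and random illustrative weights:

```python
import numpy as np

def rnn_pass(X, W, U, b):
    """Run a simple tanh RNN over the rows of X, returning all hidden states."""
    h = np.zeros(U.shape[0])
    states = []
    for x in X:
        h = np.tanh(W @ x + U @ h + b)
        states.append(h)
    return np.stack(states)

def birnn(X, fwd, bwd):
    """Bidirectional encoding: concatenate forward and backward states
    per position (backward pass reads X in reverse, then is re-aligned)."""
    hf = rnn_pass(X, *fwd)
    hb = rnn_pass(X[::-1], *bwd)[::-1]
    return np.concatenate([hf, hb], axis=1)

rng = np.random.default_rng(3)
n, dx, dh = 5, 3, 4
X = rng.normal(size=(n, dx))
make = lambda: (rng.normal(size=(dh, dx)), rng.normal(size=(dh, dh)), np.zeros(dh))
H = birnn(X, make(), make())
print(H.shape)  # (5, 8): forward and backward states concatenated
```

Each row of H then serves as the annotation hj that attention mechanisms score, giving every position context from both directions of the sequence.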

Label-dependency coding in Simple Recurrent Networks for Spoken Language Understanding
M Dinarelli, V Vukotic, C Raymond – Interspeech, 2017 – hal.inria.fr
… is the concept recognition in the perspective of using SLU in spoken dialog systems … course, better architectures may be easily proposed: for example, using LSTM as hidden … Yu, G. Zweig, and Y. Shi, “Spoken language understanding using long short-term memory neural …

End-to-End Trainable Chatbot for Restaurant Recommendations
A Strigér – 2017 – diva-portal.org
… Despite this, it is unclear whether these results would be the same for goal-oriented dialog systems [3] as these kinds of studies do not seem to exist to the same extent in a goal-oriented … The LSTM learns how its memory should behave by learning the parameters for the gates …

Maximum-likelihood augmented discrete generative adversarial networks
T Che, Y Li, R Zhang, RD Hjelm, W Li, Y Song… – arXiv preprint arXiv …, 2017 – arxiv.org
Maximum-Likelihood Augmented Discrete Generative Adversarial Networks Tong Che * 1 Yanran Li * 2 Ruixiang Zhang * 3 R Devon Hjelm 1 4 Wenjie Li 2 Yangqiu Song 3 Yoshua Bengio 1 Abstract Despite the successes …

Enhancing Backchannel Prediction Using Word Embeddings
R Ruede, M Müller, S Stüker… – Proc. Interspeech …, 2017 – pdfs.semanticscholar.org
… We do this by using Long-short term memory layers (LSTM) instead of dense feed forward layers … All of the results in Table 2 use the following setup: LSTM; Configuration … 5] M. Takeuchi, N. Kitaoka, and S. Nakagawa, “Timing detection for realtime dialog systems using prosodic …

Neural personalized response generation as domain adaptation
W Zhang, T Liu, Y Wang, Q Zhu – arXiv preprint arXiv:1701.02073, 2017 – arxiv.org
… conditioned lstm-based natural language generation for spoken dialogue systems, Computer Science … [24] A. Graves, Long short-term memory, Neural Computation 9 (8) (1997) 1735– 1780 … with lstm, Neural Computation 2 (10) (1999) 2451–71. 17 Page 18 …

Learning proactive behavior for interactive social robots
P Liu, DF Glas, T Kanda, H Ishiguro – Autonomous Robots, 2017 – Springer
… 2010) are often used for tasks like language processing, and Long Short-Term Memory (LSTM) (Hulme et al … This assumption has been made in HRI (Thomaz and Chao 2011; Chao and Thomaz 2011) and other spoken dialogue systems as well (Raux and Eske- nazi 2008) …

Sequential modeling, generative recurrent neural networks, and their applications to audio
S Mehri – 2017 – papyrus.bib.umontreal.ca
… Deep Learning DNN Deep Neural Network FNN Feedforward Neural Network GD Gradient Descent GMM Gaussian Mixture Model HMM Hidden Markov Model GRU Gated Recurrent Unit iid Independent and Identically Distributed LSTM Long Short-Term Memory MDN Mixture …

Convolutional Neural Network using a threshold predictor for multi-label speech act classification
G Xu, H Lee, MW Koo, J Seo – Big Data and Smart Computing …, 2017 – ieeexplore.ieee.org
… I. INTRODUCTION The spoken language understanding (SLU) is one of the core components of an end-to-end dialogue system [1]. The SLU is aimed at extracting semantic meaning of user … A well designed recurrent neural network models like long short term memory (LSTM) is …

Novel alignment method for DNN TTS training using HMM synthesis models
S Suzić, T Delić, D Pekar… – Intelligent Systems and …, 2017 – ieeexplore.ieee.org
… The research was conducted within the project “Development of Dialogue Systems for Serbian and Other South Slavic Languages” (TR32035), financed by … The last hidden layer uses long short-term memory (LSTM) units, while the output layer uses the linear activation function …

Automated Crowdturfing Attacks and Defenses in Online Review Systems
Y Yao, B Viswanath, J Cryan, H Zheng… – Proceedings of the 2017 …, 2017 – dl.acm.org
… 3.2 RNN Training and Text Generation Training Process. For all experiments, we use a Long Short-Term Memory (LSTM) model [16], an RNN variant that has shown better performance in practice [22]. We examine multiple …

Deliberation networks: Sequence generation beyond one-pass decoding
Y Xia, F Tian, L Wu, J Lin, T Qin, N Yu… – Advances in Neural …, 2017 – papers.nips.cc
… promising progress for many sequence generation tasks, including machine translation, text summarization, dialog system, image captioning, etc … with beam size 8. The experimental results of applying deliberation network to the deep LSTM model are … Long short-term memory …

Speech Intention Classification with Multimodal Deep Learning
Y Gu, X Li, S Chen, J Zhang, I Marsic – Canadian Conference on Artificial …, 2017 – Springer
… Using an LSTM to learn contextual features would also better discover features in … J., Zweig, G.: Fast and easy language understanding for dialog systems with Microsoft … R., Manning, CD: Improved semantic representations from tree-structured long short-term memory networks …

Active Learning for Visual Question Answering: An Empirical Study
X Lin, D Parikh – arXiv preprint arXiv:1711.01732, 2017 – arxiv.org
… The model encodes an image into a feature vector using the VGG-net [32] CNN, encodes a question into a feature vector by learning a Long Short Term Memory (LSTM) RNN, and then learns a multi-layer perceptron on top that combines the image feature and the question …

Why We Need New Evaluation Metrics for NLG
J Novikova, O Dušek, AC Curry, V Rieser – arXiv preprint arXiv …, 2017 – arxiv.org
… This is rarely the case, as shown by various studies in NLG (Stent et al., 2005; Belz and Reiter, 2006; Reiter and Belz, 2009), as well as in related fields, such as dialogue systems (Liu et al., 2016), machine translation … (2015) uses a Long Short-term Memory (LSTM) network to …

Edina: Building an Open Domain Socialbot with Self-dialogues
B Krause, M Damonte, M Dobre, D Duma… – arXiv preprint arXiv …, 2017 – arxiv.org
… Data-driven dialogue systems for social agents. In International Workshop on Spoken Dialogue Systems, 2017 … Long short-term memory. Neural computation, 9(8): 1735–1780, 1997. B. Krause, I. Murray, S. Renals, and L. Lu. Multiplicative LSTM for sequence modelling …

Business Applications of Deep Learning
A Vieira – Ubiquitous Machine Learning and Its Applications, 2017 – books.google.com
… RNNs are generally trained with Long Short Term Memory (LSTM) algorithm proposed by Schmidhuber (Schmidhuber … 2016), the authors combined a CNN and a LSTM for jointly … Chatbots Chatbots, also called Conversational Agents or Dialog Systems, are algorithms designed …

Significance of neural phonotactic models for large-scale spoken language identification
BML Srivastava, H Vydana, AK Vuppala… – … Joint Conference on, 2017 – ieeexplore.ieee.org
… module for a wide range of multilingual applications like, call centers, multilingual spoken dialog systems, emergency services … Sequential models like Recurrent Neural Networks (RNN), Long Short Term Memory (LSTM) which have exhibited a superior performance in phone …

Dynamic Time-Aware Attention to Speaker Roles and Contexts for Spoken Language Understanding
PC Chen, TC Chi, SY Su, YN Chen – arXiv preprint arXiv:1710.00165, 2017 – arxiv.org
… attributes shown in Figure 1. We apply a bidirectional long short-term memory (BLSTM) model [21 … Gao, and Asli Celikyilmaz, “End-to-end task-completion neural dialogue systems,” in Proceedings of … joint semantic frame parsing using bi-directional rnn-lstm.,” in Proceedings …

Deep Memory Networks for Natural Conversations
??? – 2017 – s-space.snu.ac.kr

Building a generalized model for multi-lingual vocal emotion conversion
S Vekkot – Affective Computing and Intelligent Interaction (ACII) …, 2017 – ieeexplore.ieee.org
… Requirement of human-like dialogue systems dictate the development of text-to-speech … to-sequence conversion of F0 and spectrum using deep bidirectional Long Short-term Memory method which … Deep Bidirectional LSTM Modeling of Timbre and Prosody for Emotional Voice …

BUILDING GENERALIZE QA SYSTEM, SLR
M Zoaib, H Raza, H Shabbir, M Suleman, HA Asghar – researchgate.net
… S.NO QA system Dataset/Corpus 1 Match-Lstm[43],End-to-End Answer Chunk Extraction [88],Dynamic … ”Long short-term memory recurrent neural network architectures for large scale acoustic modeling.” Fifteenth Annual Conference of the International Speech Communication …

Automatic Neural Question Generation using Community-based Question Answering Systems
T Baghaee – 2017 – uleth.ca
… The longer the gap, the harder it is for the RNN to connect the information. 2.5.2 LSTM Networks To address the problem of long-term dependencies, Long Short-Term Memory or LSTM networks have been introduced by Hochreiter and Schmidhuber (1997). LSTM networks …

Inference is Everything: Recasting Semantic Resources into a Unified Evaluation Framework
AS White, P Rastogi, K Duh, B Van Durme – Proceedings of the Eighth …, 2017 – aclweb.org
… For our model, we use the LSTM-based neural RTE model described by Bowman et al … Then, two LSTM neural networks (Hochreiter and Schmidhuber, 1997) independently encode the text and hypothesis sentences into 100 dimensional vectors …

Exploiting imbalanced textual and acoustic data for training prosodically-enhanced RNNLMs
M Hentschel, A Ogawa, M Delcroix… – Asia-Pacific Signal …, 2017 – ieeexplore.ieee.org
… [2] S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural computation … [3] M. Sundermeyer, R. Schlüter, and H. Ney, “LSTM neural networks … Mori, “Cache neural network language models based on long-distance dependencies for a spoken dialog system,” in 2012 …

Toward Human Parity in Conversational Speech Recognition
W Xiong, J Droppo, X Huang, F Seide… – … on Audio, Speech …, 2017 – ieeexplore.ieee.org
… assessment for conversational speech transcription 2) The description of a novel spatial regularization method which significantly boosts our bidirectional long short-term memory (BLSTM) acoustic model performance 3) The use of long short-term memory (LSTM) rather than …

Building CMU Magnus from User Feedback
S Prabhumoye, F Botros, K Chandu… – Alexa Prize …, 2017 – nzini.com
… 6 Conclusion It is hard to build a spoken dialog system without conversations from real users … Lstm-based deep learning models for non-factoid answer selection … A long short-term memory model for answer sentence selection in question answering …

Distinguishing between facts and opinions for sentiment analysis: Survey and challenges
I Chaturvedi, E Cambria, RE Welsch, F Herrera – Information Fusion, 2017 – Elsevier
… LSTM, Long Short-Term Memory; MPQA, Multi Party Question Answering; BOW, Bag of Words; CNN, Convolutional Neural Network; WSD, Word Sense … 30] forecasting, e-health [31] and e-tourism [32], human communication comprehension [33] and dialogue systems [34], etc …

Scaffolding Networks for Teaching and Learning to Comprehend
A Celikyilmaz, L Deng, L Li, C Wang – arXiv preprint arXiv:1702.08653, 2017 – arxiv.org
… n). The embedding vector at time t is used as input to a long short-term memory (LSTM) [13] model … follows: õ²_t ← o¹_t + h^{q,2}_t ⊙ I^q (8), o²_t ← LSTM(õ²_t … dialog datasets [12] with 5 different tasks of completing a restaurant reservation conversational dialog system between a …

Deep learning based recommender system: A survey and new perspectives
S Zhang, L Yao, A Sun – arXiv preprint arXiv:1707.07435, 2017 – arxiv.org
… remember former computations. Variants such as Long Short Term Memory (LSTM) and Gated Recurrent Unit (GRU) network are often deployed in practice to overcome the vanishing gradient problem. • Deep Semantic Similarity …
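The snippet above attributes the use of LSTM and GRU variants to the vanishing-gradient problem of plain RNNs. A minimal sketch of a single LSTM time step (plain NumPy; all function and variable names here are illustrative, not from the cited survey) shows the gating mechanism responsible:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM time step.

    x: input vector (D,); h_prev, c_prev: previous hidden/cell state (H,)
    W: stacked gate weights (4*H, D+H); b: stacked gate biases (4*H,)
    """
    H = h_prev.shape[0]
    z = W @ np.concatenate([x, h_prev]) + b
    i = sigmoid(z[:H])        # input gate
    f = sigmoid(z[H:2*H])     # forget gate
    o = sigmoid(z[2*H:3*H])   # output gate
    g = np.tanh(z[3*H:])      # candidate cell update
    # The additive update c = f*c_prev + i*g (rather than a fully
    # squashed recurrence) is what lets gradients survive long spans.
    c = f * c_prev + i * g
    h = o * np.tanh(c)
    return h, c
```

With zero-initialized weights the gates all open halfway and the cell state stays at zero, which makes for a quick sanity check on the shapes.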

Learning Algorithms for Broad-Coverage Semantic Parsing
S Swayamdipta – 2017 – cs.cmu.edu
… 2.2.1 Stack Long Short-Term Memory (LSTM) LSTMs are recurrent neural networks equipped with specialized memory components in addition to a hidden state (Hochreiter and Schmidhuber, 1997; Graves, 2013) to model sequences …

Referenceless Quality Estimation for Natural Language Generation
O Dušek, J Novikova, V Rieser – arXiv preprint arXiv:1708.01759, 2017 – arxiv.org
… replacing GRU cells with LSTM (Hochreiter & Schmidhuber, 1997), • using word … However, our work is also related to QE research in other areas, such as MT (Specia et al., 2010), dialogue systems (Lowe et al., 2017) or grammatical error correction (Napoles et al., 2016) …

Bootstrapping Chatbots for Novel Domains
P Babkin, MFM Chowdhury, A Gliozzo… – Workshop at NIPS on …, 2017 – hirzels.com
… While the resulting dialogue system is able to classify user utterances with reasonable accuracy, it only understands a limited vocabulary and forces the … Multi-domain joint semantic frame parsing using bi-directional rnn-lstm … Long short-term memory over recursive structures …

A Sequential Matching Framework for Multi-turn Response Selection in Retrieval-based Chatbots
Y Wu, W Wu, C Xing, C Xu, Z Li, M Zhou – arXiv preprint arXiv:1710.11344, 2017 – arxiv.org
… Dialog systems focus on helping people complete specific tasks in vertical domains (Young et al … (4) where wr,k is the embedding of the k-th word in r, and RNN(·) is either a vanilla RNN (Elman 1990) or an RNN with long short-term memory (LSTM) units (Hochreiter and …

Frame-Semantic Parsing with Softmax-Margin Segmental RNNs and a Syntactic Scaffold
S Swayamdipta, S Thomson, C Dyer… – arXiv preprint arXiv …, 2017 – arxiv.org
… to the token and span embeddings above, we learn an embedding vf for each frame f, and an embedding vl for each lexical unit l. To represent the target in context, we use htok over the target span t, as well as the neighboring token on each side, as an input to a forward LSTM …

Negotiation of Antibiotic Treatment in Medical Consultations: A Corpus based Study
N Wang – Proceedings of ACL 2017, Student Research …, 2017 – aclweb.org
… Current research for dialogue systems offers an alternative approach … Using our corpus, an LSTM model can be trained to achieve the same goal as static classifiers for practice type classification, and to model the sequential relationship between turns … Long short-term memory …

Can We Speculate Running Application With Server Power Consumption Trace?
Y Li, H Hu, Y Wen, J Zhang – IEEE transactions on cybernetics, 2017 – ieeexplore.ieee.org
… be promising. In this paper, we propose a novel distance measurement and build a time series classification algorithm hybridizing nearest neighbor and long short term memory (LSTM) neural network. More specifically, first …

Sentence-Chain Based Seq2seq Model for Corpus Expansion
E Chung, JG Park – ETRI Journal, 2017 – Wiley Online Library
… An enhanced RNN was proposed in [26], which is an LSTM that improves upon the RNN by solving numerous problems that are not solvable by previous RNN methodologies, such as the vanishing gradient issue. The LSTM …

Learning an Executable Neural Semantic Parser
J Cheng, S Reddy, V Saraswat, M Lapata – arXiv preprint arXiv …, 2017 – arxiv.org
… Utterance Encoding. Utterance x is encoded with a bidirectional LSTM architecture. A bidirectional LSTM comprises a forward LSTM and a backward LSTM … For simplicity, we denote the recurrent computation of the forward LSTM as: →h_t = →LSTM(x_t, →h_{t−1}) (4) …
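The forward/backward recurrence quoted above can be sketched in a few lines. This is a generic illustration of bidirectional encoding, not the cited parser's implementation; `step` stands in for any recurrent cell (such as an LSTM step), and all names are hypothetical:

```python
import numpy as np

def run_rnn(xs, step, h0, c0):
    """Run a recurrent step function over a sequence; return all hidden states."""
    h, c, states = h0, c0, []
    for x in xs:
        h, c = step(x, h, c)
        states.append(h)
    return states

def bidirectional_encode(xs, fwd_step, bwd_step, h0, c0):
    """Encode a sequence with a forward pass and a backward pass.

    The backward states are re-reversed so that position t pairs the
    forward state over x[:t+1] with the backward state over x[t:]; the
    two are concatenated, as in a standard bidirectional LSTM encoder.
    """
    forward = run_rnn(xs, fwd_step, h0, c0)
    backward = run_rnn(xs[::-1], bwd_step, h0, c0)[::-1]
    return [np.concatenate([f, b]) for f, b in zip(forward, backward)]
```

Plugging in a real LSTM step with separate forward and backward parameters yields 2H-dimensional context vectors per token.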

Dimensional Affect Recognition from HRV: an Approach Based on Supervised SOM and ELM
LA Bugnon, RA Calvo… – IEEE Transactions on …, 2017 – ieeexplore.ieee.org
… variations of the long short-term memory recurrent neural network (LSTM) [57], [58], [59], [60], [61], relevance vector machines (RVM) [62], ensembles of random forests (RF) and neural networks [63], and more recently, an end-to-end approach using convolutional and recurrent …

Broad Discourse Context for Language Modeling
M Torres Garcia – 2017 – research-collection.ethz.ch
… Another example is dialogue systems, where discourse understanding is needed to produce valid utterances for a given conversation context … This is achieved by modifying the LSTM cell to allow multiple hidden state updates per time step. Inan et al …

Creating New Language and Voice Components for the Updated MaryTTS Text-to-Speech Synthesis Platform
I Steiner, SL Maguer – arXiv preprint arXiv:1712.04787, 2017 – arxiv.org
… or even integrating MaryTTS as a component into more complex applications, such as TTS web services, accessibility software, or spoken dialog systems (SDSs) … Long Short-Term Memory (LSTM)-based G2P module using an approach comparable to that of, e.g., van Esch et al …

Dataset for a Neural Natural Language Interface for Databases (NNLIDB)
F Brad, R Iacob, I Hosu, T Rebedea – arXiv preprint arXiv:1707.03172, 2017 – arxiv.org
… Early solutions proposed using dictionaries, grammars and dialogue systems for guiding the user to articulate the query in natural language on a step by step … Both the encoder and the decoder are long short-term memory (LSTM) cells with two hidden layers and 500 neurons …

Refining Word Embeddings Using Intensity Scores for Sentiment Analysis
LC Yu, J Wang, KR Lai, X Zhang – researchgate.net
… deep averaging network (DAN) [36] for a multi-layer architecture, long-short term memory (LSTM) [37] for a sequential architecture, and Tree-LSTM [38], [39] for a … and synonymy relations into vector representations to improve the capability of dialog systems for distinguishing …

Anjishnu Kumar, Amazon.com, anjikum@amazon.com
S Tucker, B Hoffmeister, M Dreyer, S Peshterliev… – alborz-geramifard.com
… Courville, and Joelle Pineau, “Building end-to-end dialogue systems using generative … Zweig, and Yangyang Shi, “Spoken language understanding using long short-term memory neural networks,” in … joint semantic frame parsing using bi-directional rnn-lstm,” in INTERSPEECH …

Combining Domain Knowledge and Deep Learning Makes NMT More Adaptive
L Ding, Y He, L Zhou, Q Liu – China Workshop on Machine Translation, 2017 – Springer
… Thus, prior knowledge is well retained and helps benefit many NLP tasks, such as dictionary compilation, sentiment classification, machine translation and dialogue systems, etc … The encoder of the NMT system is a Bi-LSTM (Bidirectional Long Short Term Memory) …

Reinforced video captioning with entailment rewards
R Pasunuru, M Bansal – arXiv preprint arXiv:1708.02300, 2017 – arxiv.org
… (2015) architecture, where we encode input frame-level video features {f1:n} via a bi-directional LSTM-RNN and then generate the caption w1:m using an LSTM-RNN with an attention mechanism …

Actionable Email Intent Modeling with Reparametrized RNNs
CC Lin, D Kang, M Gamon, M Khabsa… – arXiv preprint arXiv …, 2017 – arxiv.org
… Thread encoder and predictor. The message embeddings are passed onto the thread-level LSTM to produce a thread embedding vector … For an LSTM model, θR can be formulated as the concatenated vector of input, output, forget and cell gate parameters [Wi, Wo, Wf, Wc] …

Robust lecture speech translation for speech misrecognition and its rescoring effect from multiple candidates
K Sahashi, N Goto, H Seki, K Yamamoto… – Advanced …, 2017 – ieeexplore.ieee.org
… much broader topics and more conventional than that seen in speech translation tasks such as spoken dialog systems for travel … automatically select the best candidate based on the likelihood of 3 different language models (3-gram, 5-gram, LSTM: Long Short-Term Memory based LM …

Obj2text: Generating visually descriptive language from object layouts
X Yin, V Ordonez – arXiv preprint arXiv:1707.07102, 2017 – arxiv.org
… We propose OBJ2TEXT, a sequence-to-sequence model that encodes a set of objects and their locations as an input sequence using an LSTM network, and decodes this representation using an LSTM language model … (c) LSTM Language Model Decoder …

Monday, May 15, 2017
SL Datasets, BD Analytics – ieeexplore.ieee.org
… 839 Yinyan Zhang, Shuai Li, Xin Luo and Ming-sheng Shang LSTM with Working Memory [#0222] …. 845 Andrew Pulver and Siwei Lyu … Learning to Reproduce Stochastic Time Series using Stochastic LSTM [#0416] …

Voice-transformation-based data augmentation for prosodic classification
R Fernandez, A Rosenberg, A Sorin… – … , Speech and Signal …, 2017 – ieeexplore.ieee.org
… we found these to work as well or better than Long Short-Term Memory units [17 … of delta pitch for speaker-change prediction in conversational dialogue systems,” in Proc … FA Gers, NN Schraudolph, and J. Schmidhuber, “Learning precise timing with LSTM Recurrent Networks …

MACA: A Modular Architecture for Conversational Agents
HP Truong, P Parthasarathi, J Pineau – arXiv preprint arXiv:1705.00673, 2017 – arxiv.org
… 'kwargs': { 'n_epochs': 500, 'shuffle_batch': False } }, … 'agent': { 'class': RetrievalModelAgent, 'args': [ 'twitter_dataset/W_twitter_bpe.pkl' ], 'kwargs': { 'model_fname': 'model.pkl', 'mode': system_modes.TRAINING, 'model_params': { 'encoder': 'lstm', ' …

Recommending social platform content using deep learning
J JAXING, A HÅKANSSON, M GORETSKYY… – publications.lib.chalmers.se
… Popular unit implementations are Long Short Term Memory (LSTM) and Gated Recurrent Unit (GRU). The LSTM unit was introduced to solve the problems of inefficiency in the backpropagation for recurrent neural networks [27] …

Rationalization: A Neural Machine Translation Approach to Generating Natural Language Explanations
B Harrison, U Ehsan, MO Riedl – arXiv preprint arXiv:1702.07826, 2017 – arxiv.org
… Encoder-decoder networks, which have primarily been used in machine translation and dialogue systems, are a generative architecture … The encoder and decoder networks are long short-term memory (LSTM) recurrent neural networks where each LSTM node has a hidden …

Maximum-a-Posteriori-Based Decoding for End-to-End Acoustic Models
N Kanda, X Lu, H Kawai – IEEE/ACM Transactions on Audio …, 2017 – ieeexplore.ieee.org
… Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/TASLP.2017.2678162 … recurrent neural networks [10]–[13], long short-term memory networks [14]–[21], and their combinations [22], [23] …

Novel Methods for Natural Language Generation in Spoken Dialogue Systems
O Dušek – 2017 – dspace.cuni.cz
… Ondřej Dušek: Novel Methods for Natural Language Generation in Spoken Dialogue Systems. Institute of Formal and Applied Linguistics. Supervisor: Ing … Title: Novel Methods for Natural Language Generation in Spoken Dialogue Systems. Author: Ondřej Dušek …

Neural Models for Information Retrieval
B Mitra, N Craswell – arXiv preprint arXiv:1705.01509, 2017 – arxiv.org
Neural Models for Information Retrieval. Bhaskar Mitra, Microsoft / UCL, Cambridge, UK (bmitra@microsoft.com); Nick Craswell, Microsoft, Bellevue, USA (nickcr@microsoft.com). Abstract: Neural ranking models for information …

Slim Embedding Layers for Recurrent Neural Language Models
Z Li, R Kulhanek, S Wang, Y Zhao, S Wu – arXiv preprint arXiv:1711.09873, 2017 – arxiv.org
… 2015), but uses a different sharing scheme. Random Parameter Sharing at Input and Output Embedding Layers We use deep Long short-term memory (LSTM) as our neural language model. In each time stamp t, the word vector h0 t is used as the input …

A spoken query system for the agricultural commodity prices and weather information access in Kannada language
TG Yadava, HS Jayanna – International Journal of Speech Technology, 2017 – Springer
… The authors have proposed a method to develop a system which is a combination of deep bidirectional Long Short Term Memory (LSTM) recurrent neural network architecture and the CTC objective function … Tamil market: A spoken dialog system for rural india …

A surprisingly effective out-of-the-box char2char model on the E2E NLG Challenge dataset
S Agarwal, M Dymetman – Proceedings of the 18th Annual SIGdial …, 2017 – aclweb.org
… ACL. Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation 9(8):1735–1780. Andrej Karpathy. 2015 … 2015. Semantically conditioned lstm-based natural language generation for spoken dialogue systems. In Proc. EMNLP …

A class-specific copy network for handling the rare word problem in neural machine translation
F Wang, W Chen, Z Yang, X Zhang… – Neural Networks (IJCNN …, 2017 – ieeexplore.ieee.org
… Here f and g are nonlinear transform functions, which can be implemented as long short term memory network (LSTM) or gated … In the future, we will try to apply the class-specific copy network in other NLP tasks, such as the dialogue system and the question answering …

Can a machine generate humanlike language descriptions for a remote sensing image?
Z Shi, Z Zou – IEEE Transactions on Geoscience and Remote …, 2017 – ieeexplore.ieee.org
… a technique that has been used for various practical applications such as summarization [37] and dialog systems [38]. Some recent works aim to generate sentences with language models automatically learned from image data, such as long short-term memory (LSTM) [12], [13 …

Advanced data exploitation in speech analysis: An overview
Z Zhang, N Cummins, B Schuller – IEEE Signal Processing …, 2017 – ieeexplore.ieee.org
… For speech processing, crowdsourcing has been widely employed for a range of tasks, including speech data collection/acquisition, speech annotation, speech perception, assessment of speech synthesis, and dialog system evaluation [15], [26] …

Computer Vision and Natural Language Processing: Recent Approaches in Multimedia and Robotics
P Wiriyathammabhum, D Summers-Stay… – ACM Computing …, 2017 – dl.acm.org
… Recent methods use a CNN to detect visual features and using Recurrent Neural Networks (RNNs) [Karpathy and Fei-Fei 2015a] or Long-Short Term Memory (LSTM) [Vinyals et al. 2015] to generate the sentence description …

End-to-End Online Speech Recognition with Recurrent Neural Networks
K Hwang – 2017 – s-space.snu.ac.kr
… employed for real-time applications such as spoken dialog systems or real-time automatic captioning … time delays between the input and output. Especially, the long short-term memory (LSTM) RNN is known to solve the problems with long time lag very successfully …

Definition Modeling: Learning to Define Word Embeddings in Natural Language.
T Noraset, C Liang, L Birnbaum, D Downey – AAAI, 2017 – aaai.org
… All of the models utilize the same set of fixed, pre-trained word embeddings from the Word2Vec project,3 and a 2-layer LSTM network as an RNN component (Hochreiter and Schmidhuber 1997). The embedding and LSTM hidden layers have 300 units each …

Towards Natural Language Understanding using Multimodal Deep Learning
S Bos – pdfs.semanticscholar.org
Towards Natural Language Understanding using Multimodal Deep Learning. Steven Bos, Delft University of Technology … THESIS …

Automated Speech Recognition System–A Literature Review
M Manjutha, J Gracy, P Subashini… – COMPUTATIONAL … – researchgate.net
… of the major growing applications are Language Identification, Speech Enhancement, Spoken Dialog System, Speaker Recognition … 206 recognition experimented with Connectionist Temporal Classification (CTC) trained Long Short-Term Memory (LSTM) approach which is …

Neural Wikipedian: Generating Textual Summaries from Knowledge Base Triples
P Vougiouklis, H Elsahar, LA Kaffee, C Gravier… – arXiv preprint arXiv …, 2017 – arxiv.org
… is returned at a user’s query (eg the Google Knowledge Graph1 and the Wikidata Reasonator2), or dialogue systems in commercial … gated Recurrent Neural Network (RNN) variants, such as the Gated Recurrent Unit (GRU) [5] and the Long Short-Term Memory (LSTM) …

Neural machine translation and sequence-to-sequence models: A tutorial
G Neubig – arXiv preprint arXiv:1703.01619, 2017 – arxiv.org
Neural Machine Translation and Sequence-to-sequence Models: A Tutorial. Graham Neubig, Language Technologies Institute, Carnegie Mellon University. 1 Introduction: This tutorial introduces a new and powerful set …

A Neural Language Model for Dynamically Representing the Meanings of Unknown Words and Entities in a Discourse
S Kobayashi, N Okazaki, K Inui – arXiv preprint arXiv:1709.01679, 2017 – arxiv.org
… word w. The function →RNN is often replaced with LSTM (Hochreiter and Schmidhuber, 1997) or GRU (Cho et al., 2014) to improve performance … The baseline model was a typical LSTM RNN language model with 512 units …

Exploring Cells and Context Approaches for RNN Based Conversational Agents
S Johnsrud, S Christensen – 2017 – brage.bibsys.no
… We delve into different RNN architectures and compare the quality of the outputs from nine distinct agents. The baseline is an Encoder-Decoder model using Long Short-Term Memory (LSTM) cells, which is fed with question-response pairs …

Towards efficient Neural Machine Translation for Indian Languages
R Agrawal – 2017 – pdfs.semanticscholar.org
… making neural models an appropriate choice for other tasks like chatbots, speech recognition, dialogue systems, time series … encoder architectures are Hyperbolic tangent (tanh), Convolutional Network (CNN), Gated Recurrent Unit (GRU) or Long Short Term Memory (LSTM) …

Computational Linguistic Creativity: Poetry generation given visual input
M Loller-Andersen – 2017 – brage.bibsys.no
… The poetry generation system consists of a Convolutional Neural Network for image object classification, a module for finding related words and rhyme words, and a Long Short-Term Memory (LSTM) Neural Network trained on a song lyrics data set compiled specifically for this …

Denoised Bottleneck Features From Deep Autoencoders for Telephone Conversation Analysis
K Janod, M Morchid, R Dufour… – … /ACM Transactions on …, 2017 – ieeexplore.ieee.org
IEEE/ACM TRANSACTIONS ON AUDIO, SPEECH, AND LANGUAGE PROCESSING, VOL. 25, NO. 9, SEPTEMBER 2017, p. 1505: Denoised Bottleneck Features From Deep Autoencoders for Telephone Conversation Analysis …

INTERNATIONAL JOURNAL OF ENGINEERING SCIENCES & RESEARCH TECHNOLOGY A CONTENT ANALYSIS OF THE RESEARCH APPROACHES IN …
T Özseven, M Düğenci, A Durmuşoğlu – ijesrt.com
ISSN: 2277-9655 [Ozseven* et al., 7(1): January, 2018] Impact Factor: 4.116 IC™ Value: 3.00 CODEN: IJESS7 http://www.ijesrt.com © International Journal of Engineering Sciences & Research Technology [1] IJESRT …

Advances in Neural Networks-ISNN 2017: 14th International Symposium, ISNN 2017, Sapporo, Hakodate, and Muroran, Hokkaido, Japan, June 21–26, 2017 …
F Cong, A Leung, Q Wei – 2017 – books.google.com
… 223 Bulent Ayhan, Chiman Kwan, and Steven Liang Parameter Estimation of Linear Systems with Quantized Innovations. . . . . 234 Changchang Hu LSTM with Matrix Factorization for Road Speed Prediction …

Deep Reinforcement Learning in Natural Language Scenarios
J He – 2017 – digital.lib.washington.edu
… good/bad endings. Another example is a human-computer dialog system, where the action … binatorial action space. We address the first problem by introducing a bi-directional LSTM … to give the model capacity for controlling when to remember/forget, long short-term memory …

Neural network methods for natural language processing
Y Goldberg – Synthesis Lectures on Human Language …, 2017 – morganclaypool.com
… Semantic Role Labeling Martha Palmer, Daniel Gildea, and Nianwen Xue 2010 Spoken Dialogue Systems Kristiina Jokinen and Michael McTear 2009 Introduction to Chinese Natural Language Processing Kam-Fai Wong, Wenjie Li, Ruifeng Xu, and Zheng-sheng Zhang 2009 …

Deep Learning for Distant Speech Recognition
M Ravanelli – arXiv preprint arXiv:1712.06086, 2017 – arxiv.org
… video classification), machine translation, as well as in natural language processing (for dialogue systems, question answering, image captioning … for instance, the development of real-time speech recognizers or low- latency dialogue systems …

Robust Task Clustering for Deep Many-Task Learning
M Yu, X Guo, J Yi, S Chang, S Potdar… – arXiv preprint arXiv …, 2017 – arxiv.org
… An LSTM-based meta-learner [14] learns the exact optimization algorithm used to train another learner neural-network classifier for the few … 2. Diverse Real-World Tasks: User Intent Classification for Dialog System The second dataset is from an on-line service which trains and …

Linguistic Knowledge Transfer for Enriching Vector Representations
JK Kim – 2017 – rave.ohiolink.edu
… Also, we show that word embeddings enriched with thesauruses can be utilized to improve the performance of Bidirectional Long Short-Term Memory (BLSTM) models … beddings, build bidirectional LSTM for intent detection. Our experiments on ATIS and a …

Modelling semantic context of oov words in large vocabulary continuous speech recognition
I Sheikh, D Fohr, I Illina… – IEEE/ACM Transactions on …, 2017 – ieeexplore.ieee.org
598 IEEE/ACM TRANSACTIONS ON AUDIO, SPEECH, AND LANGUAGE PROCESSING, VOL. 25, NO. 3, MARCH 2017 Modelling Semantic Context of OOV Words in Large Vocabulary Continuous Speech Recognition …
