Neural Turing Machines 2016


Notes:

Neural Turing machines (NTMs) combine the fuzzy pattern matching capabilities of neural networks with the algorithmic power of programmable computers.
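Concretely, the "fuzzy pattern matching" enters through differentiable attention over a memory matrix. Below is a minimal numpy sketch of the content-based read described in Graves et al. (2014); the memory shape, the key, and the sharpness parameter beta are illustrative assumptions, not values taken from any paper listed here.

```python
import numpy as np

def content_read(M, k, beta=10.0):
    """Content-based read over memory M (N slots x W width) with key k (W,):
    cosine similarity -> sharpened softmax -> weighted sum of slots."""
    sim = M @ k / (np.linalg.norm(M, axis=1) * np.linalg.norm(k) + 1e-8)
    w = np.exp(beta * sim)
    w /= w.sum()          # soft attention weights over the N slots
    return w @ M          # read vector of width W

M = np.random.randn(128, 20)   # hypothetical memory: 128 slots of width 20
r = content_read(M, M[3])      # keying with a stored row reads back ~that row
```

Because every step is differentiable, gradients flow from the read vector back into both the key and the memory contents, which is what makes the scheme trainable end to end.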

Resources:

Wikipedia:

References:

See also:

100 Best DeepMind Videos | Neural Network & Dialog Systems 2016 | Neural Network Meta Guide


Meta-learning with memory-augmented neural networks
A Santoro, S Bartunov, M Botvinick… – International …, 2016 – proceedings.mlr.press
… Architectures with augmented memory capacities, such as Neural Turing Machines (NTMs), offer the ability to quickly encode and retrieve new information, and hence can potentially obviate the downsides of conventional models. …

Dynamic memory networks for visual and textual question answering
C Xiong, S Merity, R Socher – arXiv, 2016 – jmlr.org
… Other recent neural architectures with memory or attention which have been proposed include neural Turing machines (Graves et al., 2014), neural GPUs (Kaiser & Sutskever, 2015) and stack-augmented RNNs (Joulin & Mikolov, 2015). …

Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes
C Gulcehre, S Chandar, K Cho, Y Bengio – arXiv preprint arXiv: …, 2016 – arxiv.org
Abstract: In this paper, we extend neural Turing machine (NTM) into a dynamic neural Turing machine (D-NTM) by introducing a trainable memory addressing scheme. This scheme maintains for each memory cell two separate vectors, content and address vectors. This
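Reading only from the abstract above, one way to picture the D-NTM cell layout is the sketch below; the dimensions, the cosine scoring, and the name dntm_address are my own illustrative assumptions, not the paper's implementation.

```python
import numpy as np

N, d_addr, d_content = 64, 8, 16
A = np.random.randn(N, d_addr)      # trainable address vectors (one per cell)
C = np.zeros((N, d_content))        # writable content vectors

def dntm_address(key):
    """Score each cell by comparing the key against [address; content]."""
    cells = np.concatenate([A, C], axis=1)              # (N, d_addr + d_content)
    sim = cells @ key / (np.linalg.norm(cells, axis=1)
                         * np.linalg.norm(key) + 1e-8)  # cosine similarity
    w = np.exp(sim)
    return w / w.sum()                                  # attention over cells

w = dntm_address(np.random.randn(d_addr + d_content))
```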

“Ask, attend and answer: Exploring question-guided spatial attention for visual question answering”
H Xu, K Saenko – European Conference on Computer Vision, 2016 – Springer
… The related Neural Turing Machine (NTM) [18] couples a neural network to external memory and interacts with it by attentional processes to infer simple algorithms such as copying, sorting, and associative recall from input and output examples. …
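The copy task mentioned in this snippet is easy to state precisely. Here is a hedged sketch of one common formulation (the extra delimiter channel and the exact input layout vary between papers):

```python
import numpy as np

def copy_task(seq_len=5, width=8, rng=np.random.default_rng(0)):
    """One (inputs, targets) pair for a copy task: present a random binary
    sequence, flag a delimiter channel, then require the sequence back."""
    seq = rng.integers(0, 2, size=(seq_len, width)).astype(float)
    inputs = np.zeros((2 * seq_len + 1, width + 1))
    inputs[:seq_len, :width] = seq      # presentation phase
    inputs[seq_len, width] = 1.0        # delimiter on its own channel
    targets = np.concatenate([np.zeros((seq_len + 1, width)), seq], axis=0)
    return inputs, targets

x, y = copy_task()   # x: (11, 9) inputs, y: (11, 8) targets
```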

One-shot learning with memory-augmented neural networks
A Santoro, S Bartunov, M Botvinick, D Wierstra… – arXiv preprint arXiv: …, 2016 – arxiv.org
… Architectures with augmented memory capacities, such as Neural Turing Machines (NTMs), offer the ability to quickly encode and retrieve new information, and hence can potentially obviate the downsides of conventional models. …

Associative long short-term memory
I Danihelka, G Wayne, B Uria, N Kalchbrenner… – arXiv preprint arXiv: …, 2016 – arxiv.org
… different dimensions. We need to know O(N_h/N_copies) elements of the key to recover the whole value. • Unlike Neural Turing Machines (Graves et al., 2014), it is not necessary to search for free locations when writing. • It is …

Learning simple algorithms from examples
W Zaremba, T Mikolov, A Joulin, R Fergus – Proceedings of the …, 2016 – jmlr.org
… The Neural Turing Machine (NTM) (Graves et al., 2014) uses a modified LSTM (Hochreiter & Schmidhuber, 1997; Gers et al., 2003) as the controller, and has three interfaces: sequential input, delayed output and a differentiable memory. …

Unitary evolution recurrent neural networks
M Arjovsky, A Shah, Y Bengio – International Conference on Machine …, 2016 – jmlr.org

Learning efficient algorithms with hierarchical attentive memory
M Andrychowicz, K Kurach – arXiv preprint arXiv:1602.03218, 2016 – arxiv.org
… The first versatile and highly successful architecture with this property was the Neural Turing Machine (NTM) (Graves et al., 2014). … An attention mechanism was used to access the memory in Neural Turing Machines (NTMs) (Graves et al., 2014). …

Hybrid computing using a neural network with dynamic external memory
A Graves, G Wayne, M Reynolds, T Harley, I Danihelka… – Nature, 2016 – nature.com
… modification of memory content. An earlier form of DNC, the neural Turing machine [16], had a similar structure, but more limited memory access methods (see Methods for further discussion). Whereas conventional computers …

Adaptive computation time for recurrent neural networks
A Graves – arXiv preprint arXiv:1603.08983, 2016 – arxiv.org
… of the memory cells. For a memory augmented network such as a Neural Turing Machine (NTM) [10], the state contains both the complete state of the controller network and the complete state of the memory. In general some …

Adaptive neural compilation
RR Bunel, A Desmaison, PK Mudigonda… – Advances in Neural …, 2016 – papers.nips.cc
… Recently, Graves et al. [2] introduced a learnable representation of programs, called the Neural Turing Machine (NTM). The … CoRR, 2016. [2] Alex Graves, Greg Wayne, and Ivo Danihelka. Neural turing machines. CoRR, 2014. [3 …

Learning knowledge base inference with neural theorem provers
T Rocktäschel, S Riedel – Proceedings of AKBC, 2016 – anthology.aclweb.org
… 2012. Neural-symbolic learning systems: foundations and applications. Springer. [Graves et al. 2014] Alex Graves, Greg Wayne, and Ivo Danihelka. 2014. Neural turing machines. arXiv preprint arXiv:1410.5401. …

Matching networks for one shot learning
O Vinyals, C Blundell, T Lillicrap… – Advances in Neural …, 2016 – papers.nips.cc
… A key component which allowed for more expressive models was the introduction of “content” based attention in [2], and “computer-like” architectures such as the Neural Turing Machine [4] or Memory Networks [29]. … Neural turing machines. arXiv preprint arXiv:1410.5401, 2014. …

Programming with a differentiable forth interpreter
S Riedel, M Bosnjak… – CoRR, abs/ …, 2016 – pdfs.semanticscholar.org
… The first notable example of an abstract machine was the Neural Turing Machine (NTM) [5], which is able to learn simple algorithmic problems with its differentiable controller and memory access, trained with backpropagation. … Neural Turing Machines. …

“Control of memory, active perception, and action in minecraft”
J Oh, V Chockalingam, S Singh, H Lee – arXiv preprint arXiv:1605.09128, 2016 – arxiv.org
… 2. Related Work: Neural Networks with External Memory. Graves et al. (2014) introduced a Neural Turing Machine (NTM), a differentiable external memory architecture, and showed that it can learn algorithms such as copy and reverse. …

Continual Learning through Evolvable Neural Turing Machines
B Lüders, M Schläger, S Risi – NIPS 2016 Workshop on Continual …, 2016 – sebastianrisi.com
Abstract: Continual learning, i.e. the ability to sequentially learn tasks without catastrophic forgetting of previously learned ones, is an important open challenge in machine learning. In this paper we take a step in this direction by showing that the recently proposed Evolving …

Evolving Neural Turing Machines for Reward-based Learning
RB Greve, EJ Jacobsen, S Risi – Proceedings of the 2016 on Genetic …, 2016 – dl.acm.org
Abstract An unsolved problem in neuroevolution (NE) is to evolve artificial neural networks (ANN) that can store and use information to change their behavior online. While plastic neural networks have shown promise in this context, they have difficulties retaining

Noisy activation functions
C Gulcehre, M Moczulski, M Denil, Y Bengio – arXiv preprint arXiv: …, 2016 – jmlr.org

Lie Access Neural Turing Machine
G Yang – arXiv preprint arXiv:1602.08671, 2016 – arxiv.org
Abstract: Following the recent trend in explicit neural memory structures, we present a new design of an external memory, wherein memories are stored in a Euclidean key space $\mathbb{R}^n$. An LSTM controller performs read and write via specialized read and write …
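From the abstract alone, a read in such a key space might look like the sketch below; the softmax-over-squared-distances weighting is one plausible choice for illustration, not necessarily the weighting used in the paper.

```python
import numpy as np

def key_space_read(keys, values, q, temp=1.0):
    """Read at query point q in the key space R^n: average stored values,
    weighted by a softmax over negative squared key distances."""
    d2 = ((keys - q) ** 2).sum(axis=1)
    w = np.exp(-d2 / temp)
    w /= w.sum()
    return w @ values

keys = np.random.randn(32, 4)       # hypothetical: 32 memories, keys in R^4
values = np.random.randn(32, 16)
r = key_space_read(keys, values, keys[0])   # reads back ~values[0]
```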

Strongly-typed recurrent neural networks
D Balduzzi, M Ghifary – arXiv preprint arXiv:1602.02218, 2016 – arxiv.org

Understanding visual concepts with continuation learning
WF Whitney, M Chang, T Kulkarni… – arXiv preprint arXiv: …, 2016 – arxiv.org
… 2014. “Generative Adversarial Nets.” In Advances in Neural Information Processing Systems, 2672–80. Graves, Alex, Greg Wayne, and Ivo Danihelka. 2014. “Neural Turing Machines.” ArXiv Preprint ArXiv:1410.5401. Hinton, Geoffrey E, and Ruslan R Salakhutdinov. 2006. …

Learning to optimize
K Li, J Malik – arXiv preprint arXiv:1606.01885, 2016 – arxiv.org
… arXiv preprint arXiv:1509.06113, 2015. [11] Alex Graves, Greg Wayne, and Ivo Danihelka. Neural Turing machines. arXiv preprint arXiv:1410.5401, 2014. … [28] Greg Yang. Lie access neural turing machine. arXiv preprint arXiv:1602.08671, 2016. …

End-to-end memory networks with knowledge carryover for multi-turn spoken language understanding
YN Chen, D Hakkani-Tür, G Tur, J Gao… – Proceedings of …, 2016 – microsoft.com
… [23] A. Graves, G. Wayne, and I. Danihelka, “Neural turing machines,” arXiv preprint arXiv:1410.5401, 2014. [24] J. Weston, S. Chopra, and A. Bordes, “Memory networks,” in International Conference on Learning Representations (ICLR), 2015. …

Hierarchical Memory Networks
S Chandar, S Ahn, H Larochelle, P Vincent… – arXiv preprint arXiv: …, 2016 – arxiv.org
… There exist several variants of neural networks with a memory component: Memory Networks [2], Neural Turing Machines (NTM) [1], Dynamic Memory Networks (DMN) [3]. They all share five major components: memory, input module, reader, writer, and output module. …

Neural symbolic machines: Learning semantic parsers on freebase with weak supervision
C Liang, J Berant, Q Le, KD Forbus, N Lao – arXiv preprint arXiv: …, 2016 – arxiv.org
… However, the memories in these models are either low-level (such as in Neural Turing machines [31]), or required to be differentiable so that they can be trained by backpropagation. … [8] Alex Graves, Greg Wayne, and Ivo Danihelka. Neural turing machines. …

Building machines that learn and think like people
BM Lake, TD Ullman, JB Tenenbaum… – arXiv preprint arXiv: …, 2016 – arxiv.org

Memory-enhanced decoder for neural machine translation
M Wang, Z Lu, H Li, Q Liu – arXiv preprint arXiv:1606.02003, 2016 – arxiv.org
… MEMDEC is obviously related to the recent effort on attaching an external memory to neural networks, with the two most salient examples being the Neural Turing Machine (NTM) (Graves et al., 2014) and the Memory Network (Weston et al., 2014). … 2014. Neural turing machines. …

An online sequence-to-sequence model using partial conditioning
N Jaitly, QV Le, O Vinyals, I Sutskever… – Advances in Neural …, 2016 – papers.nips.cc
… [10] Alex Graves, Greg Wayne, and Ivo Danihelka. Neural turing machines. arXiv preprint arXiv:1410.5401, 2014. … [21] Wojciech Zaremba and Ilya Sutskever. Reinforcement learning neural turing machines. arXiv preprint arXiv:1505.00521, 2015. 9

Separating answers from queries for neural reading comprehension
D Weissenborn – arXiv preprint arXiv:1607.03316, 2016 – arxiv.org
… Graves et al. (2014) introduced Neural Turing Machines (NTM). NTMs augment traditional RNNs with external memory that can be written to and read from. The memory is composed of a predefined number of writable slots. …
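To make "writable slots" concrete, here is a minimal sketch of the NTM write rule from Graves et al. (2014): each slot is partially erased, then additively updated, both scaled by the write attention.

```python
import numpy as np

def ntm_write(M, w, erase, add):
    """NTM write: M[i] <- M[i] * (1 - w[i]*erase) + w[i]*add for every slot i.
    w is the (N,) write weighting; erase and add are (W,) vectors."""
    M = M * (1.0 - np.outer(w, erase))   # erase step
    return M + np.outer(w, add)          # add step
```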

End-to-end learning of action detection from frame glimpses in videos
S Yeung, O Russakovsky, G Mori… – Proceedings of the IEEE …, 2016 – cv-foundation.org
… [43] for image caption generation. In a non-visual task, Zaremba et al. [47] learn policies for a Reinforcement Learning Neural Turing Machine. Our method builds on these directions and uses REINFORCE to learn policies addressing the task of action detection. …

End-to-End Answer Chunk Extraction and Ranking for Reading Comprehension
Y Yu, W Zhang, K Hasan, M Yu, B Xiang… – arXiv preprint arXiv: …, 2016 – arxiv.org

Learning online alignments with continuous rewards policy gradient
Y Luo, CC Chiu, N Jaitly, I Sutskever – arXiv preprint arXiv:1608.01281, 2016 – arxiv.org
… For example, the Neural Turing Machine [11] and the Memory Network [19] both use an attention mechanism similar to that of Bahdanau et al. [3] to implement models for learning algorithms and for question answering. … Neural turing machines. …

A neural knowledge language model
S Ahn, H Choi, T Pärnamaa, Y Bengio – arXiv preprint arXiv:1608.00318, 2016 – arxiv.org

Deep fusion LSTMs for text semantic matching
P Liu, X Qiu, J Chen, X Huang – Proceedings of Annual …, 2016 – pdfs.semanticscholar.org
… Therefore, inspired by recent neural memory networks, such as the neural Turing machine (Graves et al., 2014) and the memory network (Sukhbaatar et al., 2015), we introduce two external memories to keep the history information, which can relieve the pressure on low-capacity …

Online segment to segment neural transduction
L Yu, J Buys, P Blunsom – arXiv preprint arXiv:1609.08194, 2016 – arxiv.org
… Our model is general and could be incorporated into any RNN-based encoder-decoder architecture, such as Neural Turing Machines (Graves et al., 2014), memory networks (Weston et al., 2015; Kumar et al., 2016) or stack-based networks (Grefenstette et al., 2015), en …

Attention and Augmented Recurrent Neural Networks
C Olah, S Carter – Distill, 2016 – distill.pub
… Four directions stand out as particularly exciting: Neural Turing Machines have external memory that they can read and write to. … Neural Turing Machines [2] combine an RNN with an external memory bank. …
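Besides content lookup (sketched near the top of this page), NTM addressing includes a location-based step in which the previous weighting is rotated by a soft shift distribution. A minimal sketch, with offsets {-1, 0, +1} as an illustrative choice:

```python
import numpy as np

def shift_weights(w, s):
    """Circularly shift attention w by a distribution s over integer offsets,
    e.g. s = {-1: 0.1, 0: 0.8, +1: 0.1} (NTM location-based addressing)."""
    out = np.zeros_like(w)
    for offset, p in s.items():
        out += p * np.roll(w, offset)
    return out

w = np.zeros(8); w[2] = 1.0
print(shift_weights(w, {-1: 0.1, 0: 0.8, 1: 0.1}))  # mass stays mostly at slot 2
```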

Interactive Attention for Neural Machine Translation
F Meng, Z Lu, H Li, Q Liu – arXiv preprint arXiv:1610.05011, 2016 – arxiv.org
… Attentive Write: Inspired by the writing operation of neural Turing machines (Graves et al., 2014), we define two types of operation on writing to the memory: FORGET and UPDATE. FORGET is similar …
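Read literally, the FORGET/UPDATE pair is close in form to the NTM erase/add write; a hedged numpy sketch under that reading (the gate and update vectors are illustrative names, not the paper's notation):

```python
import numpy as np

def forget_update_write(M, w, forget_gate, update_vec):
    """FORGET then UPDATE at attention w: cells are first attenuated by the
    forget gate, then receive new content, both scaled per-slot by w."""
    M = M * (1.0 - np.outer(w, forget_gate))   # FORGET
    return M + np.outer(w, update_vec)         # UPDATE
```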

Hierarchical attention networks
PH Seo, Z Lin, S Cohen, X Shen, B Han – arXiv preprint arXiv:1606.02393, 2016 – arxiv.org
… For example, they have been used to handle sequences of variable lengths in neural machine translation models [10, 11] and manage memory access mechanisms for memory networks [12] and neural turing machines [13]. …

Artificial intelligence: Deep neural reasoning
H Jaeger – Nature, 2016 – nature.com
… that have a rational reasoning component, such as generating video commentaries or semantic text analysis. A precursor to the DNC, the neural Turing machine [9], certainly sent thrills through the deep-learning community. …

Neural associative memory for dual-sequence modeling
D Weissenborn – arXiv preprint arXiv:1606.03864, 2016 – arxiv.org
… Similar in spirit to Neural Turing Machines (Graves et al., 2014) we decouple the AM from the RNN and restrict the interaction with the AM to read and write operations which we believe to be important. … Neural Turing Machines inspired subsequent work on using different kinds …

“Expanding perspectives on cognition in humans, animals, and machines”
A Gomez-Marin, ZF Mainen – Current opinion in neurobiology, 2016 – Elsevier
Over the past decade neuroscience has been attacking the problem of cognition with increasing vigor. Yet, what exactly is cognition, beyond a general signifier …

Temporal attention model for neural machine translation
B Sankaran, H Mi, Y Al-Onaizan… – arXiv preprint arXiv: …, 2016 – arxiv.org

Protein Secondary Structure Prediction Using Cascaded Convolutional and Recurrent Neural Networks
Z Li, Y Yu – arXiv preprint arXiv:1604.07176, 2016 – arxiv.org

Learning to generate with memory
C Li, J Zhu, B Zhang – Proc. ICML, 2016 – jmlr.org

Hierarchical Memory Networks for Answer Selection on Unknown Words
J Xu, J Shi, Y Yao, S Zheng, B Xu – arXiv preprint arXiv:1609.08843, 2016 – arxiv.org
… Recently, lots of deep learning methods with explicit memory and attention mechanisms have been explored for the Question Answering (QA) task, such as Memory Networks (MemNN) (Sukhbaatar et al., 2015), Neural Machine Translation (NMT) and the Neural Turing Machine (NTM) (Yu et …

Improving Neural Language Models with a Continuous Cache
E Grave, A Joulin, N Usunier – arXiv preprint arXiv:1612.04426, 2016 – arxiv.org
… Speech recognition with deep recurrent neural networks. In ICASSP, 2013. Alex Graves, Greg Wayne, and Ivo Danihelka. Neural turing machines. arXiv preprint arXiv:1410.5401, 2014. Edward Grefenstette, Karl Moritz Hermann, Mustafa Suleyman, and Phil Blunsom. …

LSTM with Working Memory
A Pulver, S Lyu – arXiv preprint arXiv:1605.01988, 2016 – arxiv.org
… 115–143. [7] Alex Graves. “Adaptive Computation Time for Recurrent Neural Networks”. In: arXiv preprint arXiv:1603.08983 (2016). [8] Alex Graves, Greg Wayne, and Ivo Danihelka. “Neural turing machines”. In: arXiv preprint arXiv:1410.5401 (2014). …

Dataset and Neural Recurrent Sequence Labeling Model for Open-Domain Factoid Question Answering
P Li, W Li, Z He, X Wang, Y Cao, J Zhou… – arXiv preprint arXiv: …, 2016 – arxiv.org

“A Simple, Fast Diverse Decoding Algorithm for Neural Generation”
J Li, W Monroe, D Jurafsky – arXiv preprint arXiv:1611.08562, 2016 – arxiv.org

Neural aggregation network for video face recognition
J Yang, P Ren, D Chen, F Wen, H Li, G Hua – arXiv preprint arXiv: …, 2016 – arxiv.org
… representation. The key component of this network is inspired by the Neural Turing Machine [7] and the Orderless Set Network [27], both of which applied an attention mechanism to organize the input through a memory. …

Disentangled representations in neural models
W Whitney – arXiv preprint arXiv:1602.02383, 2016 – arxiv.org

Neural Turing Machines: Convergence of Copy Tasks
J Aleš – arXiv preprint arXiv:1612.02336, 2016 – arxiv.org
Abstract: The architecture of neural Turing machines is differentiable end to end and is trainable with gradient descent methods. Due to their large unfolded depth, Neural Turing Machines are hard to train, and because of their linear access of complete memory they do …

Learning Operations on a Stack with Neural Turing Machines
T Deleu, J Dureau – arXiv preprint arXiv:1612.00827, 2016 – arxiv.org
Abstract: Multiple extensions of Recurrent Neural Networks (RNNs) have been proposed recently to address the difficulty of storing information over long time periods. In this paper, we experiment with the capacity of Neural Turing Machines (NTMs) to deal with these long-

Neural Turing Machine for sequential learning of human mobility patterns
J Tkačík, P Kordík – Neural Networks (IJCNN), 2016 …, 2016 – ieeexplore.ieee.org
Abstract: The capacity of recurrent neural networks to learn complex sequential patterns is improving. Recent developments such as Clockwork RNN, Stack RNN, Memory networks and Neural Turing Machine all aim to increase long-term memory capacity of recurrent

Attention-based Model
H Lee – speech.ee.ntu.edu.tw
… Attention Encode Retrieval … Neural Turing Machine • von Neumann architecture: https://www.quora.com/How-does-the-Von-Neumann-architecture-provide-flexibility-for-program-development … advanced RNN/LSTM. …

Brains as optimal emergent Turing Machines
J Weng – Neural Networks (IJCNN), 2016 International Joint …, 2016 – ieeexplore.ieee.org
… As explained in [1], although the work Graves et al. 2014 [14] used the term neural Turing Machines, the work has not established, or intended to establish, that all their operations are sufficient to simulate any TM. Weng 2015 [1] proved that the control of any TM is an FA. …

Tutorial Workshop on Contemporary Deep Neural Network Models
JL McClelland, S Hansen, A Saxe – mindmodeling.org
… selection settings; RNN-LSTMs have been used for cross-domain mapping, either between different spoken languages or (combined with DNNs) between video and spoken language, and for creating provocative new cognitive models such as the Neural Turing Machine (NTM …

Question Answering with Neural Networks
Y Tian, N Huang, T Li – cs229.stanford.edu
… Explicit memory representations like those used in MemN2N may help mitigate this issue. Another possible approach to tackle the long-term dependency issue is an idea called the Neural Turing Machine (Graves et al., 2014). …

Grid LSTM
E Chu – futureai.media.mit.edu
… on an architecture that is perhaps slightly less interesting and less cognitively-inspired than I would have liked, it was still nice tackling one that appears to do well on both more synthetic tasks such as the algorithmic replication done in the Neural Turing Machine and NRAM …

Probabilistic Neural Programs
KW Murray, J Krishnamurthy – arXiv preprint arXiv:1612.00712, 2016 – arxiv.org
… Church: a language for generative models. In Proc. of Uncertainty in Artificial Intelligence, 2008. [7] Alex Graves, Greg Wayne, and Ivo Danihelka. Neural turing machines. arXiv preprint arXiv:1410.5401, 2014. [8] Jayant Krishnamurthy, Oyvind Tafjord, and Aniruddha Kembhavi. …

ECS 240: Project Final Report Programming with Deep Neural Networks
M Rashid – maheenrashid.com
… 3.3 Neural Turing Machines: Neural Turing Machines (NTMs) [1] are perhaps the most groundbreaking work in this field of study. The paper in the last section was the first step in identifying that neural networks can be used for programming – but only as an executor of programs. …

A Neural Forth Abstract Machine
M Bošnjak, T Rocktäschel, J Naradowsky, S Riedel – people.idsia.ch
… All three memory structures, the data stack, the return stack and the heap, are based on a differentiable flat memory: buffers D, R, H ∈ R^{l×v} of length l and value width v, with well-defined differentiable reading and writing procedures, similarly to the Neural Turing Machine memory (G …

Deep Learning for Natural Language Processing-Research at Noah’s Ark Lab
H Li – 2016 – hangli-hl.com

The Computational Power of Dynamic Bayesian Networks
J Brulé – arXiv preprint arXiv:1603.06125, 2016 – arxiv.org
… 602–611, 1988. [19] S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural computation, vol. 9, no. 8, pp. 1735–1780, 1997. [20] A. Graves, G. Wayne, and I. Danihelka, “Neural turing machines,” arXiv preprint arXiv:1410.5401, 2014. …

Neural Symbolic Machines: Learning Semantic Parsers on Freebase with Weak Supervision (Short Version)
C Liang, J Berant, Q Le, KD Forbus, N Lao – arXiv preprint arXiv: …, 2016 – arxiv.org
… However, the memories in these models are either low-level (such as in Neural Turing machines [19]), or differentiable so that they can be trained by backpropagation. … [7] Alex Graves, Greg Wayne, and Ivo Danihelka. Neural turing machines. …

Guided Neural Machine Translation
F Stahlberg – xilef-software.e8u.de

Extensions and Limitations of the Neural GPU
E Price, W Zaremba, I Sutskever – arXiv preprint arXiv:1611.00736, 2016 – arxiv.org
… There exist a number of neural network architectures that implement this idea: the Neural Turing Machine (NTM) (Graves et al., 2014), the standard LSTM (to some extent) (Zaremba & Sutskever, 2014), the Grid LSTM (Kalchbrenner et al., 2015), the Stack RNN (Joulin & Mikolov …

“Dialog state tracking, a machine reading approach using Memory Network”
J Perez, F Liu – arXiv preprint arXiv:1606.04052, 2016 – arxiv.org
… enhanced inference models. Indeed, we plan to experiment and compare the same approach with the Stack-Augmented Recurrent Neural Network [13] and the Neural Turing Machine [7], which also sound promising for this family of reasoning tasks. … Neural turing machines. …

Utilization of Deep Reinforcement Learning for saccadic-based object visual search
T Kornuta, K Rocki – arXiv preprint arXiv:1610.06492, 2016 – arxiv.org
… More effective analysis also concerns the avoidance of already visited places; one possible solution is to use a Neural Turing Machine (NTM) [19], i.e. an RNN with an external memory, for memorization of the already visited locations. …

Multi-objective symbolic regression using long-term artificial neural network memory (LTANN-MEM) and neural symbolization algorithm (NSA)
AK Deklel, AM Hamdy, EM Saad – Neural Computing and Applications – Springer
… A more complicated addressing mechanism is proposed in the Neural Turing Machine (NTM) introduced in [9]. It proposes two addressing mechanisms: content-based addressing focuses attention on locations based on the similarity between their current values and values emitted …

Summary-TerpreT: A Probabilistic Programming Language for Program Induction
AL Gaunt, M Brockschmidt, R Singh… – arXiv preprint arXiv: …, 2016 – arxiv.org
… In Advances in Neural Information Processing Systems 2, [NIPS Conference, Denver, Colorado, USA, November 27-30, 1989], pages 380–387, 1989. Alex Graves, Greg Wayne, and Ivo Danihelka. Neural turing machines. CoRR, abs/1410.5401, 2014. …

Tracking the World State with Recurrent Entity Networks
M Henaff, J Weston, A Szlam, A Bordes… – arXiv preprint arXiv: …, 2016 – arxiv.org
… al., 2015). Like a Neural Turing Machine or Differentiable Neural Computer (Graves et al., 2014; 2016) it maintains a fixed size memory and can learn to perform location and content-based read and write operations. However …

CogPro: Cognitive Processor for Astronomical Big Data Analysis
AK Mishra – ska.ac.za
… 2849–2856. [14] A. Graves, G. Wayne, and I. Danihelka, “Neural turing machines,” arXiv preprint arXiv:1410.5401, 2014. [15] D. George, “How the brain might work: A hierarchical and temporal model for learning and recognition,” Ph.D. dissertation, Stanford University, 2008. …

Backpropagation of Hebbian plasticity for lifelong learning
T Miconi – arXiv preprint arXiv:1609.02228, 2016 – pdfs.semanticscholar.org
… However, they are generally applied to fixed-weights networks. Several methods have been proposed to make lifelong learning amenable to backpropagation, including most recently neural Turing machines [2, … “Neural Turing Machines”. In: (Oct. 2014). arXiv:1410.5401 [cs.NE]. …

Deep Reinforcement Learning
E Akbaş – 2016 – user.ceng.metu.edu.tr
… Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587), 484-489. • Zaremba, W., & Sutskever, I. (2015). Reinforcement Learning Neural Turing Machines-Revised. arXiv preprint arXiv:1505.00521.

Divide and Conquer with Neural Networks
A Nowak, J Bruna – arXiv preprint arXiv:1611.02401, 2016 – pdfs.semanticscholar.org
… [5] A. Graves, G. Wayne, and I. Danihelka. Neural turing machines. arXiv preprint arXiv:1410.5401, 2014. … arXiv preprint arXiv:1410.4615, 2014. [22] W. Zaremba and I. Sutskever. Reinforcement learning neural turing machines-revised. arXiv preprint arXiv:1505.00521, 2015. 14 …

A Growing Long-term Episodic & Semantic Memory
M Pickett, R Al-Rfou, L Shao, C Tar – arXiv preprint arXiv:1610.06402, 2016 – arxiv.org
… information. There have been recent advances in differentiable memory, such as Neural Turing Machines [13], Memory Networks [35, 21], Differentiable Neural Computers [14], and Memory-based Deep Reinforcement Learning [23]. …

Machine learning applied to crime prediction
M Vaquero Barnadas – 2016 – upcommons.upc.edu

Gradual Program Induction
J Strunc, JR Davidson, S Aum – uclmr.github.io
… memory. In CONFERENCE OF THE COGNITIVE SCIENCE SOCIETY, pages 791–795. Morgan Kaufmann Publishers, 1992. [5] Alex Graves, Greg Wayne, and Ivo Danihelka. Neural turing machines. CoRR, abs/1410.5401, 2014. [6 …

Capacity Visual Attention Networks
M Edel, J Lausch – kurg.org
… The second direction is to use another memory representation in favor of the LSTM based approach, such as an adapted version of the recent proposed Neural Turing Machines [9]. References … [9] Alex Graves, Greg Wayne, and Ivo Danihelka. Neural turing machines. …

“REASONING, ATTENTION AND MEMORY BASED MACHINE LEARNING MODELS”
D Kohli – 2016 – dhruvkohli.me
… Contents: 3.6.2 Vanishing or Exploding Gradient; 4 Neural Turing Machine; 4.1 Architecture; 4.4 Training Neural Turing Machine; 4.5 Experiments …

Deep Multi-Task Learning with Shared Memory
P Liu, X Qiu, X Huang – arXiv preprint arXiv:1609.07222, 2016 – arxiv.org
… Different from the Neural Turing Machine and memory network, we introduce a deep fusion mechanism between internal and external memories, which helps the LSTM units keep them interacting closely without being conflated. …

Attention-based Memory Selection Recurrent Network for Language Modeling
DR Liu, SP Chuang, H Lee – arXiv preprint arXiv:1611.08656, 2016 – arxiv.org
… The attention mechanism has been applied on RNN models. Neural Turing Machine (NTM) [5] is one of the examples. … [5] Alex Graves, Greg Wayne, and Ivo Danihelka, “Neural turing machines,” arXiv preprint arXiv:1410.5401, 2014. …

On the Recursive Teaching Dimension of VC Classes
X Chen, Y Cheng, B Tang – Advances In Neural Information …, 2016 – papers.nips.cc

Neural Functional Programming
JK Feser, M Brockschmidt, AL Gaunt… – arXiv preprint arXiv: …, 2016 – arxiv.org
… However, these models are usually tightly coupled to the idea of a differentiable interpretation of computer hardware, as names such as Neural Turing Machine (Graves et al., 2014), Neural Random-Access Machine (Kurach et al., 2016), and Neural GPU (Kaiser & Sutskever …

Neural Semantic Encoders
T Munkhdalai, H Yu – arXiv preprint arXiv:1607.04315, 2016 – arxiv.org
… One of the pioneering works that attempt to extend deep neural networks with an external memory is Neural Turing Machines (NTM) [5]. NTM implements a centralized controller and a fixed-size random-access memory. … Neural turing machines. …

Multimodal Memory Modelling for Video Captioning
J Wang, W Wang, Y Huang, L Wang, T Tan – arXiv preprint arXiv: …, 2016 – arxiv.org
… Similar to Neural Turing Machines [10], the proposed M3 attaches an external memory to store and retrieve both visual and textual information by interacting with video and sentence with multiple read and write operations. …

Condensed Memory Networks for Clinical Diagnostic Inferencing
A Prakash, S Zhao, SA Hasan, V Datla, K Lee… – arXiv preprint arXiv: …, 2016 – arxiv.org
… Memory Networks Memory Networks (MemNNs) (Weston, Chopra, and Bor- des 2014) and Neural Turing Machines (NTMs) (Graves, Wayne, and Danihelka 2014) are the two classes of neu- ral network models with an external memory component. … Neural turing machines. …

Escaping the Local Minimum
K Friedman – Artificial Intelligence, 2016 – futureai.media.mit.edu
… This is done by encoding symbols and logic terms as vectors of real numbers. Second, Alex Graves’ “Neural Turing Machine” [27] is a neural net… [27] Alex Graves, Greg Wayne, and Ivo Danihelka. Neural turing machines. arXiv preprint arXiv:1410.5401, 2014. …

Implicit ReasoNet: Modeling Large-Scale Structured Relationships with Shared Memory
Y Shen, PS Huang, MW Chang, J Gao – arXiv preprint arXiv:1611.04642, 2016 – arxiv.org
… Comparing IRNs to Memory Networks (MemNN) (Weston et al., 2014; Sukhbaatar et al., 2015) and Neural Turing Machines (NTM) (Graves et al., 2014; 2016), the biggest difference between our model and the existing frameworks is the search controller and the use of the …

Deep Gate Recurrent Neural Network
Y Gao, D Glowacka – arXiv preprint arXiv:1604.02910, 2016 – jmlr.org
… Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013. Alex Graves, Greg Wayne, and Ivo Danihelka. Neural Turing Machines. pages 1–26, October 2014. URL http://arxiv.org/abs/1410.5401. Sepp Hochreiter and Jürgen Schmidhuber. …

Review of state-of-the-arts in artificial intelligence with application to AI safety problem
V Shakirov – arXiv preprint arXiv:1605.04232, 2016 – arxiv.org
… So what separates us from human-level AI? Recently, a flow of articles about memory networks and neural Turing machines has made it possible to use arbitrarily large memories while preserving a reasonable number of model parameters. …

AGI and reflexivity
P Faudemay – arXiv preprint arXiv:1604.05557, 2016 – arxiv.org
… Abstract: We define a property of intelligent systems, which we call Reflexivity. In human beings it is one aspect of consciousness, and an element of deliberation. We propose …

Learning Continuous Semantic Representations of Symbolic Expressions
M Allamanis, P Chanthirasegaran, P Kohli… – arXiv preprint arXiv: …, 2016 – arxiv.org

Learning Dynamic Programming with Split-Merge Networks
A Nowak, J Bruna – arXiv preprint arXiv:1611.02401, 2016 – arxiv.org
… [5] A. Graves, G. Wayne, and I. Danihelka. Neural turing machines. arXiv preprint arXiv:1410.5401, 2014. … arXiv preprint arXiv:1410.4615, 2014. [21] W. Zaremba and I. Sutskever. Reinforcement learning neural turing machines-revised. arXiv preprint arXiv:1505.00521, 2015. 13 …

Learning to learn with backpropagation of Hebbian plasticity
T Miconi – arXiv preprint arXiv:1609.02228, 2016 – arxiv.org
… Several methods have been proposed to make lifelong learning amenable to backpropagation, including most recently neural Turing machines [2, 3] and memory networks [5]. However, it would be useful to incorporate the powerful, well-studied principle of Hebbian plasticity in …

Gaussian Attention Model and Its Application to Knowledgebase Embedding and Question Answering
L Zhang, J Winn, R Tomioka – arXiv preprint arXiv:1611.02266, 2016 – arxiv.org
… location addressable. Neural Turing machines (Graves et al., 2014) implement memory slots that can be read and written as in Turing machines (Turing, 1938), but through a differentiable attention mechanism. Each memory …

MuFuRU: The Multi-Function Recurrent Unit
D Weissenborn, T Rocktäschel – arXiv preprint arXiv:1606.03002, 2016 – arxiv.org
… arXiv preprint arXiv:1603.08983, 2016. [5] A. Graves, G. Wayne, and I. Danihelka. Neural turing machines. arXiv preprint arXiv:1410.5401, 2014. [6] K. Greff, R. K. Srivastava, J. Koutník, B. R. Steunebrink, and J. Schmidhuber. LSTM: A search space odyssey. …

On One Approach of Solving Sentiment Analysis Task for Kazakh and Russian Languages Using Deep Learning
NS Sakenovich, AS Zharmagambetov – International Conference on …, 2016 – Springer
… extracting syntax relations. Neural Turing Machines and adversarial neural networks will be considered instead of, or jointly with, recurrent relations. Moreover, the aspect-based sentiment classification task will be studied. …

Differentiable Programming
AG Baydin – cs.nuim.ie
… Neural Turing Machine (Graves et al.): can infer algorithms such as copy, sort, recall. Stack-augmented RNN (Joulin & Mikolov); End-to-end memory network (Sukhbaatar et al.); Stack, queue, deque (Grefenstette et al.); Discrete interfaces (Zaremba & Sutskever) …

Low-rank passthrough neural networks
AVM Barone – arXiv preprint arXiv:1603.03116, 2016 – arxiv.org

Programming with a Differentiable Forth Interpreter
M Bošnjak, T Rocktäschel, J Naradowsky… – arXiv preprint arXiv: …, 2016 – arxiv.org
… akin to the Neural Turing Machine (NTM) memory (Graves et al., 2014), where ⊗ is the outer product, ⊙ is the Hadamard product, and a is the address pointer. In addition to the memory buffers D and R, the data stack and the return stack contain pointers to the current top-of-the …
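A rough sketch of the soft buffer write this snippet alludes to, under my own notational assumptions (a is a soft address distribution over the l cells, x a value of width v): a write softly overwrites the addressed cell, and a push first rotates the pointer.

```python
import numpy as np

def buffer_write(D, a, x):
    """Softly overwrite buffer D (l cells x width v) at address distribution a:
    D <- D ⊙ (1 - a ⊗ 1) + (a ⊗ x)."""
    return D * (1.0 - a)[:, None] + np.outer(a, x)

def push(D, a, x):
    """Stack push: advance the soft top-of-stack pointer, then write x there."""
    a = np.roll(a, 1)
    return buffer_write(D, a, x), a
```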

An Attentional Neural Conversation Model with Improved Specificity
K Yao, B Peng, G Zweig, KF Wong – arXiv preprint arXiv:1606.01292, 2016 – arxiv.org

Stock market forecasting using recurrent neural network
Q Gao – 2016 – mospace.umsystem.edu

PROGRESSIVE ATTENTION NETWORKS FOR VISUAL ATTRIBUTE PREDICTION
PH Seo, Z Lin, S Cohen, X Shen… – arXiv preprint arXiv: …, 2016 – pdfs.semanticscholar.org
… they have been used to handle sequences of variable lengths in neural machine translation models (Bahdanau et al., 2015; Luong et al., 2015) and manage memory access mechanisms for memory networks (Weston et al., 2015) and neural turing machines (Graves et al., 2014 …

The VQA-Machine: Learning How to Use Existing Vision Algorithms to Answer New Questions
P Wang, Q Wu, C Shen, A Hengel – arXiv preprint arXiv:1612.05386, 2016 – arxiv.org

Neural Paraphrase Generation with Stacked Residual LSTM Networks
A Prakash, SA Hasan, K Lee, V Datla, A Qadir… – arXiv preprint arXiv: …, 2016 – arxiv.org
… In Automatic Speech Recognition and Understanding (ASRU), 2013 IEEE Workshop on, pages 273–278. IEEE. A. Graves, G. Wayne, and I. Danihelka. 2014. Neural Turing Machines. In arXiv:1410.5401. A. Graves. 2013. Generating Sequences with Recurrent Neural Networks. …

Review of state-of-the-arts in artificial intelligence. Present and future of AI.
V Shakirov – alpha.sinp.msu.ru
… So what separates us from human-level AI? Recently, a flow of articles about memory networks and neural Turing machines has made it possible to use arbitrarily large memories while preserving a reasonable number of model parameters. …

# HashtagWars: Learning a Sense of Humor
P Potash, A Romanov, A Rumshisky – arXiv preprint arXiv:1612.03216, 2016 – arxiv.org
… 2004. LexRank: Graph-based lexical centrality as salience in text summarization. Journal of Artificial Intelligence Research, pages 457–479. Alex Graves, Greg Wayne, and Ivo Danihelka. 2014. Neural turing machines. arXiv preprint arXiv:1410.5401. Alex Graves. 2013. …

A Context-aware Attention Network for Interactive Question Answering
H Li, MR Min, Y Ge, A Kadav – arXiv preprint arXiv:1612.07411, 2016 – arxiv.org
… summarize these input sentences. The Neural Turing Machine (NTM) (Graves et al., 2014), a model with content- and location-based memory addressing mechanisms, has also been used for QA tasks recently. There is other recent …

Scaling Memory-Augmented Neural Networks with Sparse Reads and Writes
J Rae, JJ Hunt, I Danihelka, T Harley… – Advances In Neural …, 2016 – papers.nips.cc

A Cheap Linear Attention Mechanism with Fast Lookups and Fixed-Size Representations
A de Brébisson, P Vincent – arXiv preprint arXiv:1609.05866, 2016 – arxiv.org
… Bahdanau, Dzmitry, Cho, Kyunghyun, and Bengio, Yoshua. Neural machine translation by jointly learning to align and translate. In ICLR’2015, arXiv:1409.0473, 2015. Graves, Alex, Wayne, Greg, and Danihelka, Ivo. Neural turing machines. arXiv preprint arXiv:1410.5401, 2014. …

A convolutional attention network for extreme summarization of source code
M Allamanis, H Peng, C Sutton – arXiv preprint arXiv:1602.03001, 2016 – jmlr.org

Attend in groups: a weakly-supervised deep learning framework for learning from web data
B Zhuang, L Liu, Y Li, C Shen, I Reid – arXiv preprint arXiv:1611.09960, 2016 – arxiv.org
… layer for classification. For the testing phase, the input is a single image and the output is the predicted class label. … manage memory access mechanisms for memory networks [43] and neural Turing machines [16]. Different from the above …

Genome Dreaming
A Maheshwari, B Wu, OH Elibol – cs229.stanford.edu
… In this faux-genome setting, models other than LSTMs such as Generative Adversarial Networks with attention or Neural Turing Machines, which may be better suited for genome dreaming than LSTMs, could be rapidly prototyped. …

Deep reinforcement learning for dialogue generation
J Li, W Monroe, A Ritter, M Galley, J Gao… – arXiv preprint arXiv: …, 2016 – arxiv.org

Emergence of foveal image sampling from learning to attend in visual scenes
B Cheung, E Weiss, B Olshausen – arXiv preprint arXiv:1611.09430, 2016 – arxiv.org
… Models of overt attention. Oxford handbook of eye movements, pp. 439–454, 2011. Alex Graves, Greg Wayne, and Ivo Danihelka. Neural turing machines. arXiv preprint arXiv:1410.5401, 2014. Karol Gregor, Ivo Danihelka, Alex Graves, and Daan Wierstra. …

Concrete problems in AI safety
D Amodei, C Olah, J Steinhardt, P Christiano… – arXiv preprint arXiv: …, 2016 – arxiv.org

“Scan, attend and read: End-to-end handwritten paragraph recognition with mdlstm attention”
T Bluche, J Louradour, R Messina – arXiv preprint arXiv:1604.03286, 2016 – arxiv.org
… Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013. [15] Alex Graves, Greg Wayne, and Ivo Danihelka. Neural turing machines. arXiv preprint arXiv:1410.5401, 2014. [16] Karol Gregor, Ivo Danihelka, Alex Graves, and Daan Wierstra. …

Variational inference for Monte Carlo objectives
A Mnih, DJ Rezende – arXiv preprint arXiv:1602.06725, 2016 – arxiv.org

Research on attention memory networks as a model for learning natural language inference
Z Liu, D Huang, J Zhang, K Huang – EMNLP 2016, 2016 – aclweb.org
… optimization. The Journal of Machine Learning Research, (12): 2121–2159. Alex Graves, Greg Wayne, and Ivo Danihelka. 2014. Neural turing machines. arXiv preprint arXiv:1410.5401. Sanda Harabagiu and Andrew Hickl. 2006. …

Neuro-Symbolic Program Synthesis
E Parisotto, A Mohamed, R Singh, L Li, D Zhou… – arXiv preprint arXiv: …, 2016 – arxiv.org

End-to-end reinforcement learning of dialogue agents for information access
B Dhingra, L Li, X Li, J Gao, YN Chen, F Ahmed… – arXiv preprint arXiv: …, 2016 – arxiv.org

Machine Transliteration: Attention Mechanism
P Jain – cse.iitb.ac.in

Classify or Select: Neural Architectures for Extractive Document Summarization
R Nallapati, B Zhou, M Ma – arXiv preprint arXiv:1611.04244, 2016 – arxiv.org

Recurrent memory networks for language modeling
K Tran, A Bisazza, C Monz – arXiv preprint arXiv:1601.01272, 2016 – arxiv.org

Machine Learning for Robotic Manipulation in cluttered environments
F Alet Puig – 2016 – upcommons.upc.edu
… Some examples are the geometric algorithms to compute the features, algorithms for scene generation, research done in recursive Neural Networks, LSTMs and Neural Turing Machines, Gaussian Processes for 1-object scenes, other pure planning algorithms such as Rapidly …

Imposing higher-level Structure in Polyphonic Music Generation using Convolutional Restricted Boltzmann Machines and Constraints
S Lattner, M Grachten, G Widmer – arXiv preprint arXiv:1612.04742, 2016 – arxiv.org

Designing Regularizers and Architectures for Recurrent Neural Networks
D Krueger – 2016 – papyrus.bib.umontreal.ca

Challenges in Deep Learning
P Angelov, A Sperduti – … of the 24th European Symposium on …, 2016 – elen.ucl.ac.be
… [15] J. Weston, S. Chopra, and A. Bordes. Memory networks. CoRR, abs/1410.3916, 2014. [16] A. Graves, G. Wayne, and I. Danihelka. Neural turing machines. CoRR, abs/1410.5401, 2014. [17] S. Sukhbaatar, A. Szlam, J. Weston, and R. Fergus. End-to-end memory networks. …

Bidirectional decoder networks for attention-based end-to-end offline handwriting recognition
P Doetsch, A Zeyer, H Ney – Frontiers in Handwriting …, 2016 – ieeexplore.ieee.org
… [22] R. Kiros, Y. Zhu, R. Salakhutdinov, RS Zemel, A. Torralba, R. Urtasun, and S. Fidler, “Skip-thought vectors,” 2015. [Online]. Available: http://arxiv.org/abs/1506.06726 [23] A. Graves, G. Wayne, and I. Danihelka, “Neural turing machines,” CoRR, vol. abs/1410.5401, 2014. …

Multiresolution Recurrent Neural Networks: An Application to Dialogue Response Generation
IV Serban, T Klinger, G Tesauro… – arXiv preprint arXiv: …, 2016 – arxiv.org
… In NIPS, pages 2962–2970. [8] Graves, A. (2013). Generating sequences with recurrent neural networks. arXiv:1308.0850. [9] Graves, A., Wayne, G., and Danihelka, I. (2014). Neural turing machines. arXiv:1410.5401. [10] Hinton, G. et al. (2012). …

Match memory recurrent networks
S Samothrakis, T Vodopivec, M Fasli… – … Joint Conference on, 2016 – ieeexplore.ieee.org
… These mechanisms have been presented under various auspices and names (e.g., “Memory Networks” [1] and “Neural Turing Machines” [2]). Though not explicitly stated, the internal structure of these models is (at least partially) inspired by Reinforcement Learning (RL). …

Lifelong Perceptual Programming By Example
AL Gaunt, M Brockschmidt, N Kushman… – arXiv preprint arXiv: …, 2016 – arxiv.org
… CoRR, abs/1608.04428, 2016. URL http://arxiv.org/abs/1608.04428. Alex Graves, Greg Wayne, and Ivo Danihelka. Neural turing machines. CoRR, abs/1410.5401, 2014. URL http://arxiv.org/ abs/1410.5401. Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihel

Towards robust ego-centric hand gesture analysis for robot control
H Song, W Feng, N Guan, X Huang… – Signal and Image …, 2016 – ieeexplore.ieee.org
… 529, no. 7587, pp. 484–489, 2016. [14] A. Graves, G. Wayne, and I. Danihelka, “Neural turing machines,” Eprint Arxiv, 2014. [15] V. Vezhnevets, V. Sazonov, and A. Andreeva, “A survey on pixel-based skin color detection techniques,” pp. 85–92, 2003. …

Categorical Reparameterization with Gumbel-Softmax
E Jang, S Gu, B Poole – arXiv preprint arXiv:1611.01144, 2016 – arxiv.org
… Hybrid computing using a neural network with dynamic external memory. Nature, 538(7626):471–476, 2016. Alex Graves, Greg Wayne, and Ivo Danihelka. Neural turing machines. CoRR, abs/1410.5401, 2014. K. Gregor, I. Danihelka, A. Mnih, C. Blundell, and D. Wierstra. …

Neuromorphic Deep Learning Machines
E Neftci, C Augustine, S Paul… – arXiv preprint arXiv: …, 2016 – pdfs.semanticscholar.org

Learning to superoptimize programs
R Bunel, A Desmaison, MP Kumar, PHS Torr… – arXiv preprint arXiv: …, 2016 – arxiv.org
… Eliminating branches using a superoptimizer and the GNU C compiler. ACM SIGPLAN Notices, 1992. Alex Graves, Greg Wayne, and Ivo Danihelka. Neural turing machines. CoRR, 2014. Sumit Gulwani, Susmit Jha, Ashish Tiwari, and Ramarathnam Venkatesan. …

Aspect level sentiment classification with deep memory network
D Tang, B Qin, T Liu – arXiv preprint arXiv:1605.08900, 2016 – arxiv.org
… Abstract: We introduce a deep memory network for aspect level sentiment classification. …

IMAGE CAPTIONING WITH RECURRENT NEURAL NETWORKS
BJ KVITA – dspace.vutbr.cz
… Machines [48]. Different architectures are trying to connect RNNs with an external memory resource, which can be a tape in the case of Neural Turing Machines [19], a stack in Neural network Pushdown Automata [50], etc. During …

Decoupled neural interfaces using synthetic gradients
M Jaderberg, WM Czarnecki, S Osindero… – arXiv preprint arXiv: …, 2016 – arxiv.org

Deep Recurrent and Convolutional Neural Networks for Automated Behavior Classification
Z Nado – pdfs.semanticscholar.org

Differentiable Functional Program Interpreters
JK Feser, M Brockschmidt… – arXiv preprint arXiv: …, 2016 – pdfs.semanticscholar.org
… Abstract: Programming by Example (PBE) is the task …

Visualizing and Understanding Curriculum Learning for Long Short-Term Memory Networks
V Cirik, E Hovy, LP Morency – arXiv preprint arXiv:1611.06204, 2016 – arxiv.org
… Pattern Analysis and Machine Intelligence, IEEE Transactions on 31(5):855–868. Graves, A.; Wayne, G.; and Danihelka, I. 2014. Neural turing machines. arXiv preprint arXiv:1410.5401. Grefenstette, E.; Hermann, KM; Suleyman, M.; and Blunsom, P. 2015. …

Neural network computing using on-chip accelerators
S Eldridge – 2016 – open.bu.edu

Improving Policy Gradient by Exploring Under-appreciated Rewards
O Nachum, M Norouzi, D Schuurmans – arXiv preprint arXiv:1611.09321, 2016 – arxiv.org

Memory-augmented Attention Modelling for Videos
R Fakoor, A Mohamed, M Mitchell, SB Kang… – arXiv preprint arXiv: …, 2016 – arxiv.org
… In ICML-14, pp. 1764–1772, 2014. Alex Graves, Greg Wayne, and Ivo Danihelka. Neural turing machines. CoRR, abs/1410.5401, 2014. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recog- nition. In CVPR, June 2016. …

Recurrent Deep Q-Learning for PAC-MAN
K Ranjan, A Christensen, B Ramos – cs231n.stanford.edu

Teaching Machines to Paint
M Jaques – project-archive.inf.ed.ac.uk

Self-critical Sequence Training for Image Captioning
SJ Rennie, E Marcheret, Y Mroueh, J Ross… – arXiv preprint arXiv: …, 2016 – arxiv.org

Investigating Neural-Based Learning Algorithms for Control
Y Gao – 2016 – dspace2.lib.helsinki.fi

A Parallel-Distributed Processing Approach to Mathematical Cognition
JL McClelland, K Mickey, S Hansen, A Yuan, Q Lu – 2016 – cseweb.ucsd.edu
A Parallel-Distributed Processing Approach to Mathematical Cognition. James L. McClelland, Kevin Mickey, Steven Hansen, Arianna Yuan and Qihong Lu. Stanford University, February 18, 2016 …

Dialogue Learning With Human-In-The-Loop
J Li, AH Miller, S Chopra, MA Ranzato… – arXiv preprint arXiv: …, 2016 – arxiv.org
Dialogue Learning With Human-In-The-Loop. Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc’Aurelio Ranzato, Jason Weston. Facebook AI Research, New …

Ontology Learning in the Deep
G Petrucci, C Ghidini, M Rospocher – Knowledge Engineering and …, 2016 – Springer
… 245–267. Springer, Heidelberg (2009). 6. Graves, A., Wayne, G., Danihelka, I.: Neural turing machines. CoRR abs/1410.5401 (2014). 7. Grefenstette, E., Hermann, K.M., Suleyman, M., Blunsom, P.: Learning to transduce with unbounded memory. …

Optimization of image description metrics using policy gradient methods
S Liu, Z Zhu, N Ye, S Guadarrama, K Murphy – arXiv preprint arXiv: …, 2016 – arxiv.org
Optimization of image description metrics using policy gradient methods. Siqi Liu, Zhenhai Zhu, Ning Ye, Sergio Guadarrama, and Kevin Murphy. siqi.liu@cs.ox.ac.uk, {zhenhai,nye,sguada,kpmurphy}@google …

On Reinforcement Learning for Deep Neural Architectures
E Bengio – 2016 – folinoid.com
On Reinforcement Learning for Deep Neural Architectures: Conditional Computation with Stochastic Computation Policies. Emmanuel Bengio. Computer Science, McGill University, Montreal, October 26, 2016. A thesis submitted …

“Deep Neural Networks for Visual Reasoning, Program Induction, and Text-to-Image Synthesis”
SE Reed – 2016 – deepblue.lib.umich.edu
Deep Neural Networks for Visual Reasoning, Program Induction, and Text-to-Image Synthesis, by Scott Ellison Reed. A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy …

DeepCoder: Learning to Write Programs
M Balog, AL Gaunt, M Brockschmidt, S Nowozin… – arXiv preprint arXiv: …, 2016 – arxiv.org
… CoRR, abs/1608.04428, 2016. URL http://arxiv.org/abs/1608.04428. … Alex Graves, Greg Wayne, and Ivo Danihelka. Neural turing machines. CoRR, abs/1410.5401, 2014. URL http://arxiv.org/abs/1410.5401. …

Drawing and Recognizing Chinese Characters with Recurrent Neural Network
XY Zhang, F Yin, YM Zhang, CL Liu… – arXiv preprint arXiv: …, 2016 – arxiv.org
Drawing and Recognizing Chinese Characters with Recurrent Neural Network. Xu-Yao Zhang, Fei Yin, Yan-Ming Zhang, Cheng-Lin Liu, Yoshua Bengio. Abstract: Recent deep learning based approaches have achieved great success on handwriting recognition. …

Deep learning in bioinformatics
S Min, B Lee, S Yoon – Briefings in Bioinformatics, 2016 – Oxford Univ Press

Learning Typographic Style
S Baluja – arXiv preprint arXiv:1603.04000, 2016 – arxiv.org
Learning Typographic Style. Shumeet Baluja (shumeet@google.com), Google, Inc. Abstract: Typography is a ubiquitous art form that affects our understanding, perception, and trust in what we read. Thousands of different …

The SP theory of intelligence: distinctive features and advantages
JG Wolff – IEEE Access, 2016 – ieeexplore.ieee.org
… The potential advantage of this approach is that it can help us avoid old tramlines, and open doors to new ways of thinking ([76, Sec. 2], Appendix I-E1). An apparent exception is the concept of a “neural Turing machine” [14]. …

Differentiable Genetic Programming
D Izzo, F Biscani, A Mereta – arXiv preprint arXiv:1611.04766, 2016 – arxiv.org
Differentiable Genetic Programming. Dario Izzo, Francesco Biscani, and Alessio Mereta. Advanced Concepts Team, European Space Agency, Noordwijk 2201AZ, The Netherlands. dario.izzo@esa.int. Abstract: We …

What are the computational correlates of consciousness?
JA Reggia, G Katz, DW Huang – Biologically Inspired Cognitive …, 2016 – Elsevier
Cognitive phenomenology refers to the idea that our subjective experiences include deliberative thought processes and high-level cognition. The recent ascendancy …

Neural Information Retrieval: A Literature Review
Y Zhang, MM Rahman, A Braylan, B Dang… – arXiv preprint arXiv: …, 2016 – arxiv.org
Neural Information Retrieval: A Literature Review. Ye Zhang, Md Mustafizur Rahman, Alex Braylan, Brandon Dang, Heng-Lu Chang, Henna Kim, Quinten McNamara, Aaron Angert, Edward Banner, Vivek Khetan, Tyler …

Dynamic Key-Value Memory Network for Knowledge Tracing
J Zhang, X Shi, I King, DY Yeung – arXiv preprint arXiv:1611.08108, 2016 – arxiv.org
Dynamic Key-Value Memory Network for Knowledge Tracing. Jiani Zhang, Xingjian Shi, Irwin King, Dit-Yan Yeung. {jnzhang, king}@cse.cuhk.edu.hk, Department of Computer Science and Engineering, The Chinese …

Generalizing to Unseen Entities and Entity Pairs with Row-less Universal Schema
P Verga, A Neelakantan, A McCallum – arXiv preprint arXiv:1606.05804, 2016 – arxiv.org
Generalizing to Unseen Entities and Entity Pairs with Row-less Universal Schema. Patrick Verga, Arvind Neelakantan, & Andrew McCallum. College of Information and Computer Sciences, University of Massachusetts Amherst. {pat, arvind, mccallum}@cs.umass.edu …

Deep Amortized Inference for Probabilistic Programs
D Ritchie, P Horsfall, ND Goodman – arXiv preprint arXiv:1610.05735, 2016 – arxiv.org
Deep Amortized Inference for Probabilistic Programs. Daniel Ritchie, Paul Horsfall, Noah D. Goodman (Stanford University). Abstract: Probabilistic programming languages (PPLs) are …

Prediction with a Short Memory
S Kakade, P Liang, V Sharan, G Valiant – arXiv preprint arXiv:1612.02526, 2016 – arxiv.org
… complexity of the distribution). At a time where increasingly complex models such as recurrent neural networks and neural Turing machines [8] are in vogue, Proposition 1 serves as a baseline theoretical result. After all, the Markov …

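The baseline the excerpt above contrasts with RNNs and neural Turing machines is the plain order-k Markov (n-gram) predictor. A minimal sketch of that baseline follows; the helper names are ours, not the paper's:

```python
from collections import Counter, defaultdict

def fit_markov(sequence, k=3):
    """Count order-k transitions: a context of k symbols -> next symbol."""
    counts = defaultdict(Counter)
    for i in range(len(sequence) - k):
        counts[tuple(sequence[i:i + k])][sequence[i + k]] += 1
    return counts

def predict_next(counts, context):
    """Most likely next symbol given the last k symbols, or None if unseen."""
    dist = counts.get(tuple(context))
    return dist.most_common(1)[0][0] if dist else None

# e.g. predict_next(fit_markov("abcabcabx", k=2), "ab") -> 'c'
```
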
What learning systems do intelligent agents need? complementary learning systems theory updated
D Kumaran, D Hassabis, JL McClelland – Trends in Cognitive Sciences, 2016 – Elsevier
We update complementary learning systems (CLS) theory, which holds that intelligent agents must possess two learning systems, instantiated in mammals in the neoc…

Assessment and analysis of the applicability of recurrent neural networks to natural language understanding with a focus on the problem of coreference resolution
FD Kaumanns – 2016 – edoc.ub.uni-muenchen.de
Assessment and Analysis of the Applicability of Recurrent Neural Networks to Natural Language Understanding with a Focus on the Problem of Coreference Resolution. Inaugural dissertation for the degree of Doctor of Philosophy …

Learning Through Dialogue Interactions
J Li, AH Miller, S Chopra, MA Ranzato… – arXiv preprint arXiv: …, 2016 – arxiv.org
Learning Through Dialogue Interactions. Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc’Aurelio Ranzato, Jason Weston. Facebook AI Research, New York, USA. {jiwel,ahm,spchopra,ranzato,jase}@fb.com …

Common-Description Learning: A Framework for Learning Algorithms and Generating Subproblems from Few Examples
BG El-Barashy – arXiv preprint arXiv:1605.00241, 2016 – arxiv.org
… sequence generators. Neural Turing Machine (Graves et al., 2014) used a modified version of LSTM to learn a set of tasks, such as copying and sorting data sequences, and did not overfit the length of the training sequences. CDL …

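The copy task described in the excerpt above is easy to reproduce as data. Here is a minimal sketch in the spirit of Graves et al. (2014); the delimiter-channel layout and default sizes are illustrative assumptions, not the paper's exact format:

```python
import numpy as np

def copy_task_batch(batch_size=32, seq_len=10, width=8, rng=np.random):
    """Random binary sequences plus a delimiter channel; the target is
    the same sequence, to be emitted after the delimiter appears."""
    seq = rng.randint(0, 2, size=(batch_size, seq_len, width)).astype(np.float32)
    inputs = np.zeros((batch_size, 2 * seq_len + 1, width + 1), dtype=np.float32)
    inputs[:, :seq_len, :width] = seq
    inputs[:, seq_len, width] = 1.0        # end-of-input delimiter flag
    targets = np.zeros((batch_size, 2 * seq_len + 1, width), dtype=np.float32)
    targets[:, seq_len + 1:, :] = seq      # reproduce the sequence verbatim
    return inputs, targets
```

Length generalization, the property the excerpt highlights, is then probed by training with small seq_len values and evaluating with larger ones.
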
Deep Reinforcement Learning From Raw Pixels in Doom
D Hafner – arXiv preprint arXiv:1610.02164, 2016 – arxiv.org
Deep Reinforcement Learning From Raw Pixels in Doom. Danijar Hafner, July 2016. A thesis submitted for the degree of Bachelor of Science, Hasso Plattner Institute, Potsdam. Supervisor: Prof. Dr. Tobias Friedrich. Abstract: …

“Ground truth data, content, metrics, and analysis”
S Krig – Computer Vision Metrics, 2016 – Springer

Image Pre-Processing
S Krig – Computer Vision Metrics, 2016 – Springer

Learning Algorithms from Data
W Zaremba – 2016 – cs.nyu.edu
… pressing algorithms, and the results presented are based on the papers “Reinforcement learning neural Turing machines” [146], “Learning Simple Algorithms from Examples” [144] and “Sequence Level Training with Recurrent Neural Networks” [100]. …

Global and Regional Features
S Krig – Computer Vision Metrics, 2016 – Springer

Taxonomy of Feature Description Attributes
S Krig – Computer Vision Metrics, 2016 – Springer

Feature Learning Architecture Taxonomy and Neuroscience Background
S Krig – Computer Vision Metrics, 2016 – Springer

Local Feature Design Concepts
S Krig – Computer Vision Metrics, 2016 – Springer

Feature Learning and Deep Learning Architecture Survey
S Krig – Computer Vision Metrics, 2016 – Springer

Efficient Probabilistic Inference in Generic Neural Networks Trained with Non-Probabilistic Feedback
AE Orhan, WJ Ma – arXiv preprint arXiv:1601.03060, 2016 – arxiv.org
Efficient Probabilistic Inference in Generic Neural Networks Trained with Non-Probabilistic Feedback. A. Emin Orhan (eorhan@cns.nyu.edu), Wei Ji Ma (weijima@nyu.edu). Center for Neural Science and Department …

Neural Networks with Memory [Neuronové sítě s pamětí]
O Kužela – 2016 – dspace.cvut.cz
… learn. In recent years a new trend has arisen: connecting additional memory to classic neural networks. The memory can be either internal (LSTM, Liquid State Machine) or external (Memory Networks, Neural Turing Machine). …

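The internal/external distinction drawn in the excerpt above can be made concrete with a toy sketch; both classes below are illustrative assumptions, not any cited system's code. The recurrent cell packs all state into its hidden vector, while the external memory is a separate matrix accessed by soft content lookup, as in Memory Networks and Neural Turing Machines:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class RNNCell:
    """Internal memory: all state lives in the recurrent hidden vector."""
    def __init__(self, n_in, n_hidden, rng=np.random):
        self.W = rng.randn(n_hidden, n_in + n_hidden) * 0.1
        self.h = np.zeros(n_hidden)

    def step(self, x):
        self.h = np.tanh(self.W @ np.concatenate([x, self.h]))
        return self.h

class ExternalMemory:
    """External memory: a separate matrix read and written by soft lookup."""
    def __init__(self, n_slots, width):
        self.M = np.zeros((n_slots, width))

    def read(self, key):
        w = softmax(self.M @ key)      # content-based attention over slots
        return w @ self.M              # blended read vector

    def write(self, key, value):
        w = softmax(self.M @ key)
        self.M += np.outer(w, value)   # soft, differentiable update
```
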
Survey on the attention based RNN model and its applications in computer vision
F Wang, DMJ Tax – arXiv preprint arXiv:1601.06823, 2016 – arxiv.org
Survey on the attention based RNN model and its applications in computer vision. Feng Wang (f.wang-6@student.tudelft.nl), Pattern Recognition …

A Framework for Searching for General Artificial Intelligence
M Rosa, J Feyereisl, TGAI Collective – arXiv preprint arXiv:1611.00685, 2016 – arxiv.org
A Framework for Searching for General Artificial Intelligence, version 1. M. Rosa, J. Feyereisl & the GoodAI team. Versions: 1.0 (current version), initial release for the general public …

Reinforcement Learning in Complex Environments: Evaluating Algorithms on Image Classification
D Steckelmacher, T Lenaerts – steckdenis.be
… 3.3 Incremental solution construction; 3.4 Neural Turing machines; II Contributions; 4 Incremental Gaussian Mixture Model; 4.1 Introduction …

The computational origin of representation and conceptual change
ST Piantadosi – 2016 – colala.bcs.rochester.edu
The computational origin of representation and conceptual change. Steven T. Piantadosi, October 3, 2016. Abstract: Each of our theories of mental representation provides some insight into how the mind works. However …

Theory and Practice of Computing with Excitable Dynamics
A Goudarzi – 2016 – search.proquest.com
… application. To address this problem, neural Turing machines [191] have been proposed, which are networks augmented with virtually infinite memory and a specialized training algorithm to teach the network how to access the memory. …

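The "specialized training algorithm" the excerpt above refers to is ordinary gradient descent, made possible because every memory access is soft and differentiable. A minimal sketch of NTM-style content addressing (cosine similarity sharpened by a key strength beta, after Graves et al., 2014):

```python
import numpy as np

def content_address(memory, key, beta):
    """Return a normalized weighting over memory slots.

    memory : (n_slots, width) matrix M_t
    key    : (width,) emitted key k_t
    beta   : scalar key strength >= 0; larger values sharpen the focus
    """
    eps = 1e-8  # guards against zero-norm rows
    sim = memory @ key / (np.linalg.norm(memory, axis=1)
                          * np.linalg.norm(key) + eps)
    scores = beta * sim
    w = np.exp(scores - scores.max())  # numerically stable softmax
    return w / w.sum()
```

Because the weighting is smooth in both the key and the memory contents, gradients flow through the addressing, which is what makes the access pattern itself learnable end to end.
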
Reinforcement learning with natural language signals
S Sidor – 2016 – dspace.mit.edu
Reinforcement Learning with Natural Language Signals, by Szymon Sidor (BA, University of Cambridge, 2013). Submitted to the Department of Electrical Engineering and Computer Science …

txt2calories: Nutrition Estimation via Natural Languages
SQ Liu – siqi.fr
txt2calories: Nutrition Estimation via Natural Languages. Si-Qi Liu, Keble College, University of Oxford. Submitted in partial fulfillment of the MSc in Computer Science, Trinity 2016. Abstract: Accurate nutrition estimation has …

End-to-End Speech Recognition Models
W Chan – 2016 – williamchan.ca
End-to-End Speech Recognition Models. Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the Department of Electrical and Computer Engineering. William Chan, BASc Computer …

Effective Connectivity Analysis in Brain Networks: A GPU-Accelerated Implementation of the Cox Method
V Andalibi, F Christophe, T Laukkarinen… – IEEE Journal of …, 2016 – ieeexplore.ieee.org
IEEE Journal of Selected Topics in Signal Processing, Vol. 10, No. 7, October 2016, p. 1226. Effective Connectivity Analysis in Brain Networks: A GPU-Accelerated Implementation of the Cox Method. Vafa …

Learning structured representations for perception and control
TD Kulkarni – 2016 – dspace.mit.edu
Learning Structured Representations for Perception and Control, by Tejas Dattatraya Kulkarni (BS, Purdue University, 2010). Submitted to the Department of Brain and Cognitive Science …

Toward an integration of deep learning and neuroscience
AH Marblestone, G Wayne… – Frontiers in Computational …, 2016 – ncbi.nlm.nih.gov

Extracting Cognition out of Images for the Purpose of Autonomous Driving
C Chen – 2016 – search.proquest.com
Extracting Cognition out of Images for the Purpose of Autonomous Driving. Abstract: Autonomous driving is a broadly recognized solution to serious traffic problems such as accidents and congestion. It is a very broad topic that …