Neural Language Models 2018


Notes:

A CNN (convolutional neural network) is a type of feed-forward artificial neural network and a variation of the multilayer perceptron, designed to minimize preprocessing.
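
The note above can be illustrated with a minimal sketch (not from the source text): a single 1-D convolution layer plus a ReLU, applied directly to a raw signal, showing how a CNN's learned filters replace hand-engineered preprocessing. The filter weights here are hand-set purely for illustration.

```python
import numpy as np

def conv1d(x, kernel):
    """Valid-mode 1-D convolution of a signal with a single filter."""
    k = len(kernel)
    return np.array([np.dot(x[i:i + k], kernel) for i in range(len(x) - k + 1)])

def relu(z):
    return np.maximum(z, 0.0)

signal = np.array([0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0])
edge_filter = np.array([-1.0, 0.0, 1.0])   # hypothetical hand-set weights

# The filter responds where the signal rises, zero elsewhere after the ReLU.
features = relu(conv1d(signal, edge_filter))
print(features)  # -> [2. 2. 0. 0. 0.]
```

In a real CNN the filter weights are learned by backpropagation rather than fixed by hand; the feed-forward structure is the same.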

  • Bigram neural language model
  • Class-based neural language model
  • Conditional neural language model
  • Context-aware neural language model
  • Factored neural language model
  • Feature-based neural language model
  • Hierarchical neural language model
  • Log-bilinear neural language model
  • Multimodal neural language model
  • Neural language modeling
  • Neural language modelling
  • Neural network language model
  • Probabilistic neural language model
  • Recurrent neural language model
  • Structure-content neural language model
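
The first variant in the list above, a bigram neural language model, can be sketched in a few lines: an embedding lookup for the previous word, a linear projection, and a softmax over the vocabulary. All names, sizes, and the toy vocabulary here are illustrative assumptions, not taken from any of the papers below.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["<s>", "the", "cat", "sat"]
V, d = len(vocab), 8

E = rng.normal(scale=0.1, size=(V, d))   # word embeddings
W = rng.normal(scale=0.1, size=(d, V))   # output projection
b = np.zeros(V)

def softmax(z):
    z = z - z.max()                      # numerical stability
    e = np.exp(z)
    return e / e.sum()

def next_word_probs(prev_word):
    """P(w_t | w_{t-1}) from a single-layer bigram model."""
    h = E[vocab.index(prev_word)]
    return softmax(h @ W + b)

p = next_word_probs("the")
assert abs(p.sum() - 1.0) < 1e-9         # a valid distribution over the vocabulary
print(dict(zip(vocab, p.round(3))))
```

The richer variants in the list (class-based, factored, hierarchical, etc.) mainly change how the context is encoded or how the output softmax is structured.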

Resources:

  • rwthlm: a toolkit for feedforward and long short-term memory (LSTM) neural network language modeling

See also:

100 Best Deep Learning Videos | 100 Best GitHub: Deep Learning | Deep Learning & Dialog Systems | DNLP (Deep Natural Language Processing) | Natural Language Image Recognition | Word2vec Neural Network


Empower sequence labeling with task-aware neural language model
L Liu, J Shang, X Ren, FF Xu, H Gui, J Peng… – Thirty-Second AAAI …, 2018 – aaai.org
Linguistic sequence labeling is a general approach encompassing a variety of problems, such as part-of-speech tagging and named entity recognition. Recent advances in neural networks (NNs) make it possible to build reliable models without handcrafted features …

An analysis of neural language modeling at multiple scales
S Merity, NS Keskar, R Socher – arXiv preprint arXiv:1803.08240, 2018 – arxiv.org
Many of the leading approaches in language modeling introduce novel, complex and specialized architectures. We take existing state-of-the-art word level language models based on LSTMs and QRNNs and extend them to both larger vocabularies as well as …

Sharp nearby, fuzzy far away: How neural language models use context
U Khandelwal, H He, P Qi, D Jurafsky – arXiv preprint arXiv:1805.04623, 2018 – arxiv.org
We know very little about how neural language models (LM) use prior linguistic context. In this paper, we investigate the role of context in an LSTM LM, through ablation studies. Specifically, we analyze the increase in perplexity when prior context words are shuffled …

Adaptive input representations for neural language modeling
A Baevski, M Auli – arXiv preprint arXiv:1809.10853, 2018 – arxiv.org
We introduce adaptive input representations for neural language modeling which extend the adaptive softmax of Grave et al.(2017) to input representations of variable capacity. There are several choices on how to factorize the input and output layers, and whether to model  …

Memory architectures in recurrent neural network language models
D Yogatama, Y Miao, G Melis, W Ling, A Kuncoro… – 2018 – openreview.net
We compare and analyze sequential, random access, and stack memory architectures for recurrent neural network language models. Our experiments on the Penn Treebank and Wikitext-2 datasets show that stack-based memory architectures consistently achieve the …

Neural network language modeling with letter-based features and importance sampling
H Xu, K Li, Y Wang, J Wang, S Kang… – … on acoustics, speech …, 2018 – ieeexplore.ieee.org
In this paper we describe an extension of the Kaldi software toolkit to support neural-based language modeling, intended for use in automatic speech recognition (ASR) and related tasks. We combine the use of subword features (letter n-grams) and one-hot encoding of …

Darkembed: Exploit prediction with neural language models
N Tavabi, P Goyal, M Almukaynizi, P Shakarian… – Thirty-Second AAAI …, 2018 – aaai.org
Software vulnerabilities can expose computer systems to attacks by malicious actors. With the number of vulnerabilities discovered in the recent years surging, creating timely patches for every vulnerability is not always feasible. At the same time, not every vulnerability will be …

Grammar induction with neural language models: An unusual replication
PM Htut, K Cho, SR Bowman – arXiv preprint arXiv:1808.10000, 2018 – arxiv.org
A substantial thread of recent work on latent tree learning has attempted to develop neural network models with parse-valued latent variables and train them on non-parsing tasks, in the hope of having them discover interpretable tree structure. In a recent paper, Shen et …

Neural network models for paraphrase identification, semantic textual similarity, natural language inference, and question answering
W Lan, W Xu – arXiv preprint arXiv:1806.04330, 2018 – arxiv.org
In this paper, we analyze several neural network designs (and their variations) for sentence pair modeling and compare their performance extensively across eight datasets, including paraphrase identification, semantic textual similarity, natural language inference, and …

Pivot based language modeling for improved neural domain adaptation
Y Ziser, R Reichart – … for Computational Linguistics: Human Language …, 2018 – aclweb.org
Abstract Representation learning with pivot-based methods and with Neural Networks (NNs) have led to significant progress in domain adaptation for Natural Language Processing. However, most previous work that follows these approaches does not explicitly exploit the …

Whole sentence neural language models
Y Huang, A Sethy, K Audhkhasi… – … on Acoustics, Speech …, 2018 – ieeexplore.ieee.org
Recurrent neural networks have become increasingly popular for the task of language modeling achieving impressive gains in state-of-the-art speech recognition and natural language processing (NLP) tasks. Recurrent models exploit word dependencies over a …

A deep neural network language model with contexts for source code
AT Nguyen, TD Nguyen, HD Phan… – 2018 IEEE 25th …, 2018 – ieeexplore.ieee.org
Statistical language models (LMs) have been applied in several software engineering applications. However, they have issues in dealing with ambiguities in the names of program and API elements (classes and method calls). In this paper, inspired by the success …

Lightweight adaptive mixture of neural and n-gram language models
A Bakhtin, A Szlam, MA Ranzato, E Grave – arXiv preprint arXiv …, 2018 – arxiv.org
It is often the case that the best performing language model is an ensemble of a neural language model with n-grams. In this work, we propose a method to improve how these two models are combined. By using a small network which predicts the mixture weight between …
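
The combination the entry above describes can be sketched as linear interpolation of the two models' next-word probabilities. In the paper the mixture weight comes from a small learned network; here a fixed sigmoid output and two stand-in probabilities replace it for illustration, so every number below is an assumption.

```python
import math

def mix(p_neural, p_ngram, weight):
    """Linear interpolation of two LM probabilities, weight in [0, 1]."""
    return weight * p_neural + (1.0 - weight) * p_ngram

p_neural, p_ngram = 0.30, 0.10               # hypothetical next-word probabilities
weight = 1.0 / (1.0 + math.exp(-0.5))        # stand-in for the small network's output

p = mix(p_neural, p_ngram, weight)
print(round(p, 4))                           # lies between the two component probabilities
```

Because the weight depends on the context in the actual method, the ensemble can lean on the n-gram model for rare patterns and the neural model elsewhere.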

Recurrent neural network language models for open vocabulary event-level cyber anomaly detection
AR Tuor, R Baerwolf, N Knowles, B Hutchinson… – Workshops at the Thirty …, 2018 – aaai.org
Automated analysis methods are crucial aids for monitoring and defending a network to protect the sensitive or confidential data it hosts. This work introduces a flexible, powerful, and unsupervised approach to detecting anomalous behavior in computer and network logs; …

Neural lattice language models
J Buckman, G Neubig – Transactions of the Association for Computational …, 2018 – MIT Press
In this work, we propose a new language modeling paradigm that has the ability to perform both prediction and moderation of information flow at multiple granularities: neural lattice language models. These models construct a lattice of possible paths through a sentence …

Unsupervised cross-lingual word embedding by multilingual neural language models
T Wada, T Iwata – arXiv preprint arXiv:1809.02306, 2018 – arxiv.org
We propose an unsupervised method to obtain cross-lingual embeddings without any parallel data or pre-trained word embeddings. The proposed model, which we call multilingual neural language models, takes sentences of multiple languages as an input …

Groupreduce: Block-wise low-rank approximation for neural language model shrinking
P Chen, S Si, Y Li, C Chelba, CJ Hsieh – Advances in Neural …, 2018 – papers.nips.cc
Abstract Model compression is essential for serving large deep neural nets on devices with limited resources or applications that require real-time responses. For advanced NLP problems, a neural language model usually consists of recurrent layers (eg, using LSTM …

Feature memory-based deep recurrent neural network for language modeling
H Deng, L Zhang, X Shu – Applied Soft Computing, 2018 – Elsevier
Recently, deep recurrent neural networks (DRNNs) have been widely proposed for language modeling. DRNNs can learn higher-level features of input data by stacking multiple recurrent layers, making them achieve better performance than single-layer models …

Exploring the functional and geometric bias of spatial relations using neural language models
S Dobnik, M Ghanimifard, J Kelleher – 2018 – arrow.dit.ie
The challenge for computational models of spatial descriptions for situated dialogue systems is the integration of information from different modalities. The semantics of spatial descriptions are grounded in at least two sources of information:(i) a geometric …

Navigating with graph representations for fast and scalable decoding of neural language models
M Zhang, W Wang, X Liu, J Gao, Y He – Advances in Neural …, 2018 – papers.nips.cc
Neural language models (NLMs) have recently gained a renewed interest by achieving state-of-the-art performance across many natural language processing (NLP) tasks. However, NLMs are very computationally demanding largely due to the computational cost of the …

Recurrent Neural Network Language Model Adaptation for Conversational Speech Recognition.
K Li, H Xu, Y Wang, D Povey, S Khudanpur – Interspeech, 2018 – danielpovey.com
We propose two adaptation models for recurrent neural network language models (RNNLMs) to capture topic effects and longdistance triggers for conversational automatic speech recognition (ASR). We use a fast marginal adaptation (FMA) framework to adapt a …

On-device neural language model based word prediction
S Yu, N Kulkarni, H Lee, J Kim – … of the 27th International Conference on …, 2018 – aclweb.org
Recent developments in deep learning with application to language modeling have led to success in tasks of text processing, summarizing and machine translation. However, deploying huge language models on mobile devices for on-device keyboards poses computation as a …

Interpreting recurrent and attention-based neural models: a case study on natural language inference
R Ghaeini, XZ Fern, P Tadepalli – arXiv preprint arXiv:1808.03894, 2018 – arxiv.org
Deep learning models have achieved remarkable success in natural language inference (NLI) tasks. While these models are widely explored, they are hard to interpret and it is often unclear how and why they actually work. In this paper, we take a step toward explaining …

Unsupervised Word Discovery with Segmental Neural Language Models
K Kawakami, C Dyer, P Blunsom – arXiv preprint arXiv:1811.09353, 2018 – arxiv.org
We propose a segmental neural language model that combines the representational power of neural networks and the structure learning mechanism of Bayesian nonparametrics, and show that it learns to discover semantically meaningful units (eg, morphemes and words) …

Slim embedding layers for recurrent neural language models
Z Li, R Kulhanek, S Wang, Y Zhao, S Wu – Thirty-Second AAAI Conference …, 2018 – aaai.org
Recurrent neural language models are the state-of-the-art models for language modeling. When the vocabulary size is large, the space taken to store the model parameters becomes the bottleneck for the use of recurrent neural language models. In this paper, we introduce a …

Disfluency detection using a noisy channel model and a deep neural language model
PJ Lou, M Johnson – arXiv preprint arXiv:1808.09091, 2018 – arxiv.org
This paper presents a model for disfluency detection in spontaneous speech transcripts called LSTM Noisy Channel Model. The model uses a Noisy Channel Model (NCM) to generate n-best candidate disfluency analyses and a Long Short-Term Memory (LSTM) …

Using morphological knowledge in open-vocabulary neural language models
A Matthews, G Neubig, C Dyer – … Language Technologies, Volume 1 …, 2018 – aclweb.org
Languages with productive morphology pose problems for language models that generate words from a fixed vocabulary. Although character-based models allow any possible word type to be generated, they are linguistically naïve: they must discover that words exist and …

Neural language codes for multilingual acoustic models
M Müller, S Stüker, A Waibel – arXiv preprint arXiv:1807.01956, 2018 – arxiv.org
Multilingual Speech Recognition is one of the most costly AI problems, because each language (7,000+) and even different accents require their own acoustic models to obtain best recognition performance. Even though they all use the same phoneme symbols, each …

Learning neural trans-dimensional random field language models with noise-contrastive estimation
B Wang, Z Ou – … Conference on Acoustics, Speech and Signal …, 2018 – ieeexplore.ieee.org
Trans-dimensional random field language models (TRF LMs) where sentences are modeled as a collection of random fields, have shown close performance with LSTM LMs in speech recognition and are computationally more efficient in inference. However, the training …

Accelerating recurrent neural network language model based online speech recognition system
K Lee, C Park, N Kim, J Lee – 2018 IEEE International …, 2018 – ieeexplore.ieee.org
This paper presents methods to accelerate recurrent neural network based language models (RNNLMs) for online speech recognition systems. Firstly, a lossy compression of the past hidden layer outputs (history vector) with caching is introduced in order to reduce the …

A tri-partite neural document language model for semantic information retrieval
GH Nguyen, L Tamine, L Soulier, N Souf – European Semantic Web …, 2018 – Springer
Previous work in information retrieval have shown that using evidence, such as concepts and relations, from external knowledge sources could enhance the retrieval performance. Recently, deep neural approaches have emerged as state-of-the art models for capturing …

Dual neural network model of speech and language evolution: new insights on flexibility of vocal production systems and involvement of frontal cortex
SR Hage – Current opinion in behavioral sciences, 2018 – Elsevier
Human speech vastly outperforms primate vocal behavior in scope and flexibility making the elucidation of speech evolution one of biology’s biggest challenges. A proposed dual-network model including a volitional articulatory motor network originating in the prefrontal …

Progress and tradeoffs in neural language models
R Tang, J Lin – arXiv preprint arXiv:1811.00942, 2018 – arxiv.org
In recent years, we have witnessed a dramatic shift towards techniques driven by neural networks for a variety of NLP tasks. Undoubtedly, neural language models (NLMs) have reduced perplexity by impressive amounts. This progress, however, comes at a substantial …

Analysing Dropout and Compounding Errors in Neural Language Models
J O'Neill, D Bollegala – arXiv preprint arXiv:1811.00998, 2018 – arxiv.org
This paper carries out an empirical analysis of various dropout techniques for language modelling, such as Bernoulli dropout, Gaussian dropout, Curriculum Dropout, Variational Dropout and Concrete Dropout. Moreover, we propose an extension of variational dropout to …

Personalized neural language models for real-world query auto completion
N Fiorini, Z Lu – arXiv preprint arXiv:1804.06439, 2018 – arxiv.org
Query auto completion (QAC) systems are a standard part of search engines in industry, helping users formulate their query. Such systems update their suggestions after the user types each character, predicting the user’s intent using various signals-one of the most …

A comparison of character neural language model and bootstrapping for language identification in multilingual noisy texts
W Adouane, S Dobnik, JP Bernardy… – … /Character LEvel Models, 2018 – aclweb.org
This paper seeks to examine the effect of including background knowledge in the form of character pre-trained neural language model (LM), and data bootstrapping to overcome the problem of unbalanced limited resources. As a test, we explore the task of language  …

The hierarchical developmental model: Neural control, natural language, and the recurrent organization of the brain
FM Levin, JE Gedo – Mapping the Mind, 2018 – taylorfrancis.com
Chapter 5 formally introduces the reader to the developmental hierarchical model of Gedo and Goldberg (as recently modified by Gedo), as well as to Gedo’s theorizing on the subject of development. I discuss some details and implications of Gedo and Goldberg’s model  …

Ulmfit at germeval-2018: A deep neural language model for the classification of hate speech in german tweets
K Rother, M Allee, A Rettberg – … Conference on Natural Language …, 2018 – ids-pub.bsz-bw.de
Offensive Language. For this task, German tweets were classified as either offensive or non-offensive. The entry employs a task-specific classifier built on top of a medium-specific language model which is built on top of a universal language model. The approach uses a …

Improving neural morphological tagging using language models
A Sorokin – … and Intellectual Technologies: Proceedings of the …, 2018 – researchgate.net
Morphological tagging determines morphological labels of words in text, e.g. a determiner; a noun with case=Nom, gender=Neut, number=Sing; a noun with case=Gen, gender=Fem, number=Sing; an auxiliary with mood=Ind, tense=Past, aspect=Imp (the Russian example words themselves are garbled in this excerpt) …

Enhancing recurrent neural network-based language models by word tokenization
HM Noaman, SS Sarhan… – … -centric Computing and …, 2018 – biomedcentral.com
Different approaches have been used to estimate language models from a given corpus. Recently, researchers have used different neural network architectures to estimate the language models from a given corpus using unsupervised learning neural networks …

Unsupervised and efficient vocabulary expansion for recurrent neural network language models in asr
Y Khassanov, ES Chng – arXiv preprint arXiv:1806.10306, 2018 – arxiv.org
In automatic speech recognition (ASR) systems, recurrent neural network language models (RNNLM) are used to rescore a word lattice or N-best hypotheses list. Due to the expensive training, the RNNLM’s vocabulary set accommodates only small shortlist of most frequent …

Do Character-Level Neural Network Language Models Capture Knowledge of Multiword Expression Compositionality?
AH Parizi, P Cook – Proceedings of the Joint Workshop on Linguistic …, 2018 – aclweb.org
In this paper, we propose the first model for multiword expression (MWE) compositionality prediction based on character-level neural network language models. Experimental results on two kinds of MWEs (noun compounds and verb-particle constructions) and two …

Limited-memory bfgs optimization of recurrent neural network language models for speech recognition
X Liu, S Liu, J Sha, J Yu, Z Xu, X Chen… – … on Acoustics, Speech …, 2018 – ieeexplore.ieee.org
Recurrent neural network language models (RNNLM) have become an increasingly popular choice for state-of-the-art speech recognition systems. RNNLMs are normally trained by minimizing the cross entropy (CE) using the stochastic gradient descent (SGD) algorithm …

Automatic speech recognition for launch control center communication using recurrent neural networks with data augmentation and custom language model
K Yun, J Osborne, M Lee, T Lu… – … in Information Sciences, 2018 – spiedigitallibrary.org
Transcribing voice communications in NASA’s launch control center is important for information utilization. However, automatic speech recognition in this environment is particularly challenging due to the lack of training data, unfamiliar words in acronyms …

Multi-language neural network model with advance preprocessor for gender classification over social media
K Raiyani, T Gonçalves, P Quaresma, V Nogueira – 2018 – dspace.uevora.pt
This paper describes approaches for the Author Profiling Shared Task at PAN 2018. The goal was to classify the gender of a Twitter user solely by their tweets. Paper explores a simple and efficient Multi-Language model for gender classification. The approach consists …

Large Margin Neural Language Model
J Huang, Y Li, W Ping, L Huang – arXiv preprint arXiv:1808.08987, 2018 – arxiv.org
We propose a large margin criterion for training neural language models. Conventionally, neural language models are trained by minimizing perplexity (PPL) on grammatical sentences. However, we demonstrate that PPL may not be the best metric to optimize in …

Multi-cell LSTM Based Neural Language Model
T Cherian, A Badola, V Padmanabhan – arXiv preprint arXiv:1811.06477, 2018 – arxiv.org
Language models, being at the heart of many NLP problems, are always of great interest to researchers. Neural language models come with the advantage of distributed representations and long range contexts. With its particular dynamics that allow the cycling …

Interactive Dictionary Expansion using Neural Language Models.
A Alba, D Gruhl, P Ristoski, S Welch – HumL@ ISWC, 2018 – pdfs.semanticscholar.org
Dictionaries and ontologies are foundational elements of systems extracting knowledge from unstructured text. However, as new content arrives keeping dictionaries up-to-date is a crucial operation. In this paper, we propose a human-in-the-loop (HumL) dictionary …

Adaptive Pruning of Neural Language Models for Mobile Devices
R Tang, J Lin – arXiv preprint arXiv:1809.10282, 2018 – arxiv.org
Neural language models (NLMs) exist in an accuracy-efficiency tradeoff space where better perplexity typically comes at the cost of greater computation complexity. In a software keyboard application on mobile devices, this translates into higher power consumption and …

A Conversational Neural Language Model for Speech Recognition in Digital Assistants
E Cho, S Kumar – … on Acoustics, Speech and Signal Processing …, 2018 – ieeexplore.ieee.org
Speech recognition in digital assistants such as Google Assistant can potentially benefit from the use of conversational context consisting of user queries and responses from the agent. We explore the use of recurrent, Long Short-Term Memory (LSTM), neural language  …

Decipherment of substitution ciphers with neural language models
N Kambhatla – 2018 – summit.sfu.ca
The decipherment of homophonic substitution ciphers using language models (LMs) is a well-studied task in Natural Language Processing (NLP). Previous work in this topic score short local spans of possible plaintext decipherments using n-gram LMs. The most widely …

Detecting Domain Generation Algorithms with Convolutional Neural Language Models
J Huang, P Wang, T Zang, Q Qiang… – 2018 17th IEEE …, 2018 – ieeexplore.ieee.org
To evade detection, botnets apply DNS domain fluxing for Command and Control (C&C) servers. In this way, each bot generates a large number of domain names with Domain Generation Algorithms (DGAs) and the botmaster registers only one of them as the domain …

Estimating Marginal Probabilities of n-grams for Recurrent Neural Language Models
T Noraset, D Downey, L Bing – … Empirical Methods in Natural Language …, 2018 – aclweb.org
Recurrent neural network language models (RNNLMs) are the current standard-bearer for statistical language modeling. However, RNNLMs only estimate probabilities for complete sequences of text, whereas some applications require context-independent phrase …

Cross Entropy of Neural Language Models at Infinity—A New Bound of the Entropy Rate
S Takahashi, K Tanaka-Ishii – Entropy, 2018 – mdpi.com
Neural language models have drawn a lot of attention for their strong ability to predict natural language text. In this paper, we estimate the entropy rate of natural language with state-of-the-art neural language models. To obtain the estimate, we consider the cross …
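
The quantities this entry (and the perplexity discussions in several entries above) rely on can be computed directly: given a model's per-token probabilities on held-out text, cross-entropy is the average negative log2-probability and perplexity is two raised to that value. The probabilities below are made-up numbers chosen to give round results.

```python
import math

token_probs = [0.25, 0.5, 0.125, 0.25]   # hypothetical P(w_t | context) on held-out text

cross_entropy = -sum(math.log2(p) for p in token_probs) / len(token_probs)
perplexity = 2 ** cross_entropy

print(cross_entropy)  # 2.0 bits per token
print(perplexity)     # 4.0
```

A lower entropy-rate estimate means the model assigns higher probability to the text, which is why entropy-rate bounds of the kind studied here translate directly into perplexity claims.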

Structured Word Embedding for Low Memory Neural Network Language Model.
K Shi, K Yu – Interspeech, 2018 – isca-speech.org
Neural network language model (NN LM), such as long short term memory (LSTM) LM, has been increasingly popular due to its promising performance. However, the model size of an uncompressed NN LM is still too large to be used in embedded or portable devices. The …

Semi-supervised neural machine translation with language models
I Skorokhodov, A Rykachevskiy… – Proceedings of the …, 2018 – aclweb.org
Training neural machine translation models is notoriously slow and requires abundant parallel corpora and computational power. In this work we propose an approach of transferring knowledge from separately trained language models to translation systems, also …

A Discourse Parser Language Model Based on Improved Neural Network in Machine Translation
C Xue – … Conference on Intelligent Transportation, Big Data & …, 2018 – ieeexplore.ieee.org
The development of statistical machine translation technology is so fast and there are many new models and methods. This obtains great achievement in translation of simple sentence or fixed sentence with certain applications. However, there still exists poor coherence and …

Building neural network language model with POS-based negative sampling and stochastic conjugate gradient descent
J Liu, L Lin, H Ren, M Gu, J Wang, G Youn, JU Kim – Soft Computing, 2018 – Springer
Traditional statistical language model is a probability distribution over sequences of words. It has the problem of curse of dimensionality incurred by the exponentially increasing number of possible sequences of words in training text. To solve this issue, neural network language  …

Efficient Embedded Decoding of Neural Network Language Models in a Machine Translation System
F Zamora-Martinez, MJ Castro-Bleda – International journal of neural …, 2018 – World Scientific
Neural Network Language Models (NNLMs) are a successful approach to Natural Language Processing tasks, such as Machine Translation. We introduce in this work a Statistical Machine Translation (SMT) system which fully integrates NNLMs in the decoding stage …

Improved training of neural trans-dimensional random field language models with dynamic noise-contrastive estimation
B Wang, Z Ou – 2018 IEEE Spoken Language Technology …, 2018 – ieeexplore.ieee.org
A new whole-sentence language model-neural trans-dimensional random field language model (neural TRF LM), where sentences are modeled as a collection of random fields, and the potential function is defined by a neural network, has been introduced and successfully …

Language Model Domain Adaptation Via Recurrent Neural Networks with Domain-Shared and Domain-Specific Representations
T Moriokal, N Tawara, T Ogawa… – … , Speech and Signal …, 2018 – ieeexplore.ieee.org
Training recurrent neural network language models (RNNLMs) requires a large amount of data, which is difficult to collect for specific domains such as multiparty conversations. Data augmentation using external resources and model adaptation, which adjusts a model  …

Recurrent Neural Networks with Python Quick Start Guide: Sequential learning and language modeling with TensorFlow
S Kostadinov – 2018 – books.google.com
Learn how to develop intelligent applications with sequential learning and apply modern methods for language modeling with neural network architectures for deep learning with Python’s most popular TensorFlow framework. Key Features Train and deploy Recurrent …

Deep-speare: A joint neural model of poetic language, meter and rhyme
JH Lau, T Cohn, T Baldwin, J Brooke… – arXiv preprint arXiv …, 2018 – arxiv.org
In this paper, we propose a joint architecture that captures language, rhyme and meter for sonnet modelling. We assess the quality of generated poems using crowd and expert judgements. The stress and rhyme models perform very well, as generated poems are …

textTOvec: Deep Contextualized Neural Autoregressive Models of Language with Distributed Compositional Prior
P Gupta, Y Chaudhary, F Buettner… – arXiv preprint arXiv …, 2018 – arxiv.org
We address two challenges of probabilistic topic modelling in order to better estimate the probability of a word in a given context, ie, P (word| context):(1) No Language Structure in Context: Probabilistic topic models ignore word order by summarizing a given context as a” …

Structured neural models for natural language processing
M Ma – 2018 – ir.library.oregonstate.edu
In sentence modeling and classification, convolutional neural network approaches have recently achieved state-of-the-art results, but all such efforts process word vectors sequentially and neglect long-distance dependencies. To combine deep learning with …

A Potent Model to Recognize Bangla Sign Language Digits Using Convolutional Neural Network
S Islam, SSS Mousumi, AKMSA Rabby… – Procedia computer …, 2018 – Elsevier
Hearing impaired people have own language called Sign Language but it is difficult for understanding to general people. Sign language is the basic method of communication for deaf people during their everyday of life. Sign digits are also a major part of sign language …

Modular Mechanistic Networks: On Bridging Mechanistic and Phenomenological Models with Deep Neural Networks in Natural Language Processing
S Dobnik, JD Kelleher – arXiv preprint arXiv:1807.09844, 2018 – arxiv.org
Natural language processing (NLP) can be done using either top-down (theory driven) and bottom-up (data driven) approaches, which we call mechanistic and phenomenological respectively. The approaches are frequently considered to stand in opposition to each other …

Neural Network Models for Natural Language Inference Fail to Capture the Semantics of Inference
A Talman, S Chatzikyriakidis – arXiv preprint arXiv:1810.09774, 2018 – arxiv.org
Neural network models have been very successful for natural language inference, with the best models reaching 90% accuracy in some tasks. However, the success of these models turns out to be largely task specific. We show that models trained on one inference task fail …

Stress-Testing Neural Models of Natural Language Inference with Multiply-Quantified Sentences
A Geiger, I Cases, L Karttunen, C Potts – arXiv preprint arXiv:1810.13033, 2018 – arxiv.org
Standard evaluations of deep learning models for semantics using naturalistic corpora are limited in what they can tell us about the fidelity of the learned representations, because the corpora rarely come with good measures of semantic complexity. To overcome this …

Translating Arabic as low resource language using distribution representation and neural machine translation models
EH Almansor – 2018 – opus.lib.uts.edu.au
Rapid growth in social media platforms makes the communication between users easier. According to that, the communication increased the importance of translating human languages. Machine translation technology has been widely used for translating several …

Multi-Language Neural Network Model with Advance Preprocessor for Gender Classification over Social Media: Notebook for PAN at CLEF 2018.
K Raiyani, T Gonçalves, P Quaresma… – CLEF (Working …, 2018 – pan.webis.de
Multi-Language Neural Network Model with Advance Preprocessor for Gender Classification over Social Media. Notebook for PAN at CLEF 2018. Kashyap Raiyani, Teresa Gonçalves, Paulo Quaresma and Vitor Beires Nogueira, Computer Science Department …

Invited Talk# 2 Vietnamese Neural Language Model for NLP Tasks With Limited Resources
QT Tho – 2018 5th NAFOSTED Conference on Information and …, 2018 – ieeexplore.ieee.org
A statistical language model is a probability distribution over sequences of words. Language modeling is used in various computing tasks such as speech recognition, machine translation, optical character and handwriting recognition and information retrieval and other …

Neural Language Models
S Skansi – Introduction to Deep Learning, 2018 – Springer
This chapter revisits language processing, this time equipped with deep learning. Recurrent neural networks and autoencoders are needed for this chapter, but the exposition is clear and uses them mainly in a conceptual rather than computational sense. The idea of word …

Weight Initialization in Neural Language Models
A Deshpande, V Somani – arXiv preprint arXiv:1805.06503, 2018 – arxiv.org
Semantic Similarity is an important application which finds its use in many downstream NLP applications. Though the task is mathematically defined, semantic similarity’s essence is to capture the notions of similarity impregnated in humans. Machines use some heuristics to …

Visual grounding of spatial relations in recurrent neural language models
M Ghanimifard, S Dobnik – dobnik.net
The task of automatically describing an image with natural language requires techniques to associate visual representations with their corresponding linguistic units. In the state of the art techniques, most commonly, a pre-trained convolutional neural networks extracts visual …

Analysing Dropout and Compounding Errors in Neural Language Models
J O’Neill, D Bollegala – arXiv preprint arXiv:1811.00998, 2018 – adsabs.harvard.edu
This paper carries out an empirical analysis of various dropout techniques for language modelling, such as Bernoulli dropout, Gaussian dropout, Curriculum Dropout, Variational Dropout and Concrete Dropout. Moreover, we propose an extension of variational dropout to …

Learning Private Neural Language Modeling with Attentive Aggregation
S Ji, S Pan, G Long, X Li, J Jiang, Z Huang – arXiv preprint arXiv …, 2018 – arxiv.org
Mobile keyboard suggestion is typically regarded as a word-level language modeling problem. Centralized machine learning technique requires massive user data collected to train on, which may impose privacy concerns for sensitive personal typing data of users …

Reusing Weights in Subword-aware Neural Language Models
Z Assylbekov, R Takhanov – arXiv preprint arXiv:1802.08375, 2018 – arxiv.org
We propose several ways of reusing subword embeddings and other weights in subword-aware neural language models. The proposed techniques do not benefit a competitive character-aware model, but some of them improve the performance of syllable-and …

Representation of Word Meaning in the Intermediate Projection Layer of a Neural Language Model
S Derby, P Miller, B Murphy, B Devereux – … Neural Networks for NLP, 2018 – aclweb.org
In this work, we evaluate latent semantic knowledge present in the LSTM activation patterns produced before and after the word of interest. We evaluate whether these activations predict human similarity ratings, human-derived property knowledge, and brain imaging …

Improving Neural Language Models with Weight Norm Initialization and Regularization
C Herold, Y Gao, H Ney – Proceedings of the Third Conference on …, 2018 – aclweb.org
Embedding and projection matrices are commonly used in neural language models (NLM) as well as in other sequence processing networks that operate on large vocabularies. We examine such matrices in fine-tuned language models and observe that a NLM learns word …

Inspecting and Directing Neural Language Models
T Noraset – 2018 – search.proquest.com
The ability of a machine to synthesize textual output in a form of human language is a long-standing goal in a field of artificial intelligence and has wide-range of applications such as spell correction, speech recognition, machine translation, abstractive summarization, etc …

A Neural Language Model for Multi-Dimensional Textual Data based on CNN-LSTM Network
S Park, JH Song, Y Kim – 2018 19th IEEE/ACIS International …, 2018 – ieeexplore.ieee.org
Language Modeling (LM) is a subtask in Natural Language Processing (NLP), and the goal of LM is to build a statistical language model that can learn and estimate a probability distribution of natural language over sentences of terms. Recently, many recurrent neural  …

Dual Fixed-Size Ordinally Forgetting Encoding (FOFE) for Competitive Neural Language Models
S Watcharawittayakul, M Xu, H Jiang – … Methods in Natural Language …, 2018 – aclweb.org
In this paper, we propose a new approach to employ the fixed-size ordinally-forgetting encoding (FOFE) (Zhang et al., 2015b) in neural language modelling, called dual-FOFE. The main idea of dual-FOFE is that it allows the use of two different forgetting factors so that it …

A Neural Language Model with a Modified Attention Mechanism for Software Code
X Zhang, K Ben – 2018 IEEE 9th International Conference on …, 2018 – ieeexplore.ieee.org
The language model, which has its roots in statistical natural language processing, has been shown to successfully capture the predictable regularities of source code, and help with many software tasks, such as code suggestion, code porting, and bug detection …

GroupReduce: Block-Wise Low-Rank Approximation for Neural Language Model Shrinking-Supplementary
PH Chen, S Si, Y Li, C Chelba, CJ Hsieh – papers.nips.cc
In our method, the number of clusters is a hyperparameter for groupReduce. We experimented with different numbers of clusters on the PTB-Large setup with 6.6 times compression (eg, using only 15% of the memory compared to the original matrices) of both …

Correcting writing errors in turkish with a character-level neural language model
B Benligiray – 2018 26th Signal Processing and …, 2018 – ieeexplore.ieee.org
A large part of the written content on the Internet is composed of social media posts, articles written for content platforms and user comments. In contrast to the content prepared for print media, these types of texts include a large number of writing errors. Automating the detection …
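The error-detection idea behind the entry above can be illustrated with a count-based character model: words whose character sequences are improbable under a model trained on correctly spelled text are likely to contain writing errors. A toy sketch (the cited paper uses a neural character-level LM; the word list, trigram order, and scoring function here are illustrative assumptions):

```python
from collections import Counter

# Tiny list of correctly spelled words (an assumption for illustration).
train_words = ["yazmak", "yazar", "yazi", "yazilim", "yaz"]

# Collect character trigram counts over boundary-padded words.
trigrams = Counter()
for w in train_words:
    padded = f"^{w}$"
    trigrams.update(padded[i:i + 3] for i in range(len(padded) - 2))

def score(word):
    # Average trigram count: higher means the spelling looks more familiar.
    padded = f"^{word}$"
    grams = [padded[i:i + 3] for i in range(len(padded) - 2)]
    return sum(trigrams[g] for g in grams) / len(grams)

print(score("yazar"), score("yxzar"))  # → 2.6 0.4
```

A real system would use a neural model and a tuned threshold, but the principle is the same: flag low-scoring words as candidate errors.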

Improving neural language models on low-resource creole languages
S Schieferstein – 2018 – ideals.illinois.edu
When using neural models for NLP tasks, like language modelling, it is difficult to utilize a language with little data, also known as a low-resource language. Creole languages are frequently low-resource and as such it is difficult to train neural language models for them …

Neural language models: Dealing with large vocabularies
M Labeau – 2018 – theses.fr
Abstract: The work presented in this thesis explores the practical methods used to facilitate training and improve the performance of language models equipped with very large vocabularies. The main limitation to the use of neural language models …

Training Neural Language Models with SPARQL queries for Semi-Automatic Semantic Mapping
G Futia, A Vetro, A Melandri, JC De Martin – Procedia Computer Science, 2018 – Elsevier
Abstract Knowledge graphs are labeled and directed multi-graphs that encode information in the form of entities and relationships. They are gaining attention in different areas of computer science: from the improvement of search engines to the development of virtual …

Hierarchical Coordinate Structure Analysis for Japanese Statutory Sentences Using Neural Language Models
T Yamakoshi, T Ohno, Y Ogawa… – … of Natural Language …, 2018 – jstage.jst.go.jp
We propose a method for analyzing the hierarchical coordinate structure of Japanese statutory sentences using neural language models (NLMs). Our method deterministically identifies hierarchical coordinate structures according to their rigorously defined descriptive …

Information extraction using neural language models for the case of online job listings analysis
DS Botov, JD Klenin… – Yugra State University …, 2018 – journals.eco-vector.com
In this article we discuss the approach to information extraction (IE) using neural language models. We provide a detailed overview of modern IE methods: both supervised and unsupervised. The proposed method allows to achieve a high quality solution to the problem …

Conditional Neural Language Models for Multimodal Learning and Natural Language Understanding
JR Kiros – 2018 – tspace.library.utoronto.ca
In this thesis we introduce conditional neural language models based on log-bilinear and recurrent neural networks with applications to multimodal learning and natural language understanding. We first introduce a LSTM encoder for learning visual-semantic embeddings …

Disentangled Representation Learning for Stylistic Variation in Neural Language Models
V John – 2018 – uwspace.uwaterloo.ca
The neural network has proven to be an effective machine learning method over the past decade, prompting its usage for modelling language, among several other domains. However, the latent representations learned by these neural network function approximators …

Deep Neural Language Model for Text Classification Based on Convolutional and Recurrent Neural Networks
A Hassan – 2018 – scholarworks.bridgeport.edu
The evolution of the social media and the e-commerce sites produces a massive amount of unstructured text data on the internet. Thus, there is a high demand to develop an intelligent model to process it and extract a useful information from it. Text classification plays an …

An empirical study of statistical language models: n-gram language models vs. neural network language models
F Mezzoudj, A Benyettou – International Journal of …, 2018 – inderscienceonline.com
Statistical language models are an important module in many areas of successful applications such as speech recognition and machine translation, and n-gram models are basically the state of the art. However, due to sparsity of data, the modelled language …

Stacked Neural Networks With Parameter Sharing For Multilingual Language Modeling
BK Khonglah, S Madikeri, N Rekabsaz, N Pappas… – navid-rekabsaz.com
Neural language models are useful in Automatic Speech Recognition (ASR) due to their superior re-scoring capabilities over N-gram language models. Recently, multilingual neural language modeling based on recurrent neural networks has gained traction for the same …

Syntax-Aware Language Modeling with Recurrent Neural Networks
D Blythe, A Akbik, R Vollgraf – arXiv preprint arXiv:1803.03665, 2018 – arxiv.org
Neural language models (LMs) are typically trained using only lexical features, such as surface forms of words. In this paper, we argue this deprives the LM of crucial syntactic signals that can be detected at high confidence using existing parsers. We present a simple …

Persian Language Modeling Using Recurrent Neural Networks
SHH Saravani, M Bahrani, H Veisi… – 2018 9th International …, 2018 – ieeexplore.ieee.org
In this paper, recurrent neural networks are applied to language modeling of Persian, using word embedding as word representation. To this aim, unidirectional and bidirectional Long Short-Term Memory (LSTM) networks are used, and the perplexity of Persian language  …

Unsupervised Neural Word Segmentation for Chinese via Segmental Language Modeling
Z Sun, ZH Deng – arXiv preprint arXiv:1810.03167, 2018 – arxiv.org
Previous traditional approaches to unsupervised Chinese word segmentation (CWS) can be roughly classified into discriminative and generative models. The former uses the carefully designed goodness measures for candidate segmentation, while the latter focuses on …

A Recurrent Neural Network Language Model Based on Word Embedding
S Li, J Xu – Asia-Pacific Web (APWeb) and Web-Age Information …, 2018 – Springer
Abstract Language model is one of the basic research issues of natural language processing, and which is the premise for realizing more complicated tasks such as speech recognition, machine translation and question answering system. In recent years, neural  …

Neural Random Projections for Language Modelling
D Nunes, L Antunes – arXiv preprint arXiv:1807.00930, 2018 – arxiv.org
Neural network-based language models deal with data sparsity problems by mapping the large discrete space of words into a smaller continuous space of real-valued vectors. By learning distributed vector representations for words, each training sample informs the …
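The mapping described above, from a large discrete space of words into a smaller continuous space, can be sketched with a deterministic random projection: each word is assigned a pseudo-random ±1 vector, and word vectors are summed into context representations. A minimal sketch (the hashing scheme and dimensionality are illustrative assumptions, not the paper's construction):

```python
import hashlib

DIM = 8  # target dimensionality (an illustrative choice)

def project(word, dim=DIM):
    # Deterministic pseudo-random +1/-1 component per (word, dimension),
    # derived from a hash so no projection matrix needs to be stored.
    vec = []
    for i in range(dim):
        digest = hashlib.md5(f"{word}:{i}".encode()).digest()
        vec.append(1.0 if digest[0] % 2 == 0 else -1.0)
    return vec

def context_vec(words, dim=DIM):
    # Bag-of-words context representation: sum of word projections.
    total = [0.0] * dim
    for w in words:
        for i, v in enumerate(project(w, dim)):
            total[i] += v
    return total

print(project("language"))
```

Because the projection is a pure function of the word, the same word always maps to the same vector, so an arbitrarily large vocabulary fits in a fixed-size continuous space.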

Information-Weighted Neural Cache Language Models for ASR
L Verwimp, J Pelemans… – … IEEE Spoken Language …, 2018 – ieeexplore.ieee.org
Neural cache language models (LMs) extend the idea of regular cache language models by making the cache probability dependent on the similarity between the current context and the context of the words in the cache. We make an extensive comparison of 'regular' cache …

Efficient Transfer Learning for Neural Network Language Models
J Skryzalin, H Link, J Wendt, R Field… – 2018 IEEE/ACM …, 2018 – ieeexplore.ieee.org
We apply transfer learning techniques to create topically and/or stylistically biased natural language models from small data samples, given generic long short-term memory (LSTM) language models trained on larger data sets. Although LSTM language models are powerful …

Character-level Language Modeling with Gated Hierarchical Recurrent Neural Networks.
I Choi, J Park, W Sung – Interspeech, 2018 – isca-speech.org
Recurrent neural network (RNN)-based language models are widely used for speech recognition and translation applications. We propose a gated hierarchical recurrent neural network (GHRNN) and apply it to the character-level language modeling. GHRNN consists …

Language Modeling With Recurrent Neural Networks
A Fischer, C Bauckhage – 2018 – alpopkes.com
Motivated by the potential benefits of a system that accelerates the process of writing radiological reports, we present a Recurrent Neural Network Language Model for modeling radiological language. We show that Recurrent Neural Network Language Models can be …

Neural Error Corrective Language Models for Automatic Speech Recognition.
T Tanaka, R Masumura, H Masataki, Y Aono – Interspeech, 2018 – isca-speech.org
We present novel neural network based language models that can correct automatic speech recognition (ASR) errors by using speech recognizer output as a context. These models, called neural error corrective language models (NECLMs), utilize ASR hypotheses of a …

A Mongolian Language Model based on Recurrent Neural Networks.
Z Ma, L Zhang, R Yang, T Li – International Journal of …, 2018 – search.ebscohost.com
In view of data sparsity and long-range dependence when training the N-Gram Mongolian language model, a Mongolian Language Model based on Recurrent Neural Networks (MLMRNN) is proposed. The Mongolian classified word vector is designed and used as the …

Fast Oov Words Incorporation Using Structured Word Embeddings for Neural Network Language Model
R Chen, K Yu – … Conference on Acoustics, Speech and Signal …, 2018 – ieeexplore.ieee.org
Recently, deep learning approaches have been widely used in language modeling and achieved great success. However, the out-of-vocabulary (OOV) words are often estimated in a rather crude way using only one special symbol, which ignores the linguistic information …

Factorised Hidden Layer Based Domain Adaptation for Recurrent Neural Network Language Models
M Hentschel, M Delcroix, A Ogawa… – 2018 Asia-Pacific …, 2018 – ieeexplore.ieee.org
Language models, which are used in various tasks including speech recognition and sentence completion, are usually used with texts covering various domains. Therefore, domain adaptation has been a long-ongoing challenge in language model research …

DSM: a specification mining tool using recurrent neural network based language model
TDB Le, L Bao, D Lo – Proceedings of the 2018 26th ACM Joint Meeting …, 2018 – dl.acm.org
Formal specifications are important but often unavailable. Furthermore, writing these specifications is time-consuming and requires skills from developers. In this work, we present Deep Specification Miner (DSM), an automated tool that applies deep learning to …

Recurrent Neural Networks with Pre-trained Language Model Embedding for Slot Filling Task
L Qiu, Y Ding, L He – arXiv preprint arXiv:1812.05199, 2018 – arxiv.org
In recent years, Recurrent Neural Networks (RNNs) based models have been applied to the Slot Filling problem of Spoken Language Understanding and achieved the state-of-the-art performances. In this paper, we investigate the effect of incorporating pre-trained language  …

Neural Speech-to-Text Language Models for Rescoring Hypotheses of DNN-HMM Hybrid Automatic Speech Recognition Systems
T Tanaka, R Masumura, T Moriya… – 2018 Asia-Pacific Signal …, 2018 – ieeexplore.ieee.org
In this paper, we propose to leverage end-to-end automatic speech recognition (ASR) systems for assisting deep neural network-hidden Markov model (DNN-HMM) hybrid ASR systems. The DNN-HMM hybrid ASR system, which is composed of an acoustic model, a …

Improving Russian LVCSR Using Deep Neural Networks for Acoustic and Language Modeling
I Kipyatkova – International Conference on Speech and Computer, 2018 – Springer
In the paper, we present our very large vocabulary continuous Russian speech recognition system based on various neural networks. We employed neural networks on both acoustic and language modeling stages. For training hybrid acoustic models, we experimented with …

Discriminatively trained continuous Hindi speech recognition system using interpolated recurrent neural network language modeling
M Dua, RK Aggarwal, M Biswas – Neural Computing and Applications, 2018 – Springer
This paper implements and evaluates the performance of a discriminatively trained continuous Hindi language speech recognition system. The system uses maximum mutual information and minimum phone error discriminative techniques with various numbers of …

Phrase2VecGLM: Neural generalized language model–based semantic tagging for complex query reformulation in medical IR
M Das, E Fosler-Lussier, S Lin… – Proceedings of the …, 2018 – aclweb.org
In this work, we develop a novel, completely unsupervised, neural language model-based document ranking approach to semantic tagging of documents, using the document to be tagged as a query into the GLM to retrieve candidate phrases from top-ranked related …

Irony Detection Using Neural Network Language Model, Psycholinguistic Features and Text Mining
K Ravi, V Ravi – 2018 IEEE 17th International Conference on …, 2018 – ieeexplore.ieee.org
Irony is a form of figurative language, which is often used to make fun of an entity. We employ paragraph vector to capture syntactic and semantic features of the figurative language to subsequently detect satiric and ironic content in a given text using data mining …

Recurrent neural network language models in the context of under-resourced South African languages
A Scarcella – 2018 – open.uct.ac.za
Over the past five years neural network models have been successful across a range of computational linguistic tasks. However, these triumphs have been concentrated in languages with significant resources such as large datasets. Thus, many languages, which …

Outperforming Neural Readers Using Broad-Context Discriminative Language Modeling on the LAMBADA Word Prediction Task
N Nabizadeh, M Singh, D Klakow – oeaw.ac.at
Abstract Since Hinton and Salakhutdinov published their landmark science paper in 2006 ending the previous neural-network winter, research in neural networks has increased dramatically. Researchers have applied neural networks seemingly successfully to various …

On Modelling Uncertainty in Neural Language Generation for Policy Optimisation in Voice-Triggered Dialog Assistants
S Wang, T Gunter, D VanDyke – alborz-geramifard.com
While much effort has gone into user-modelling in the context of simulation for dialog policy training through reinforcement-learning (RL), the majority of this research has focused on matching user behaviour, with relatively little work dedicated to accurately replicating system …

CLUF: a Neural Model for Second Language Acquisition Modeling
S Xu, J Chen, L Qin – Proceedings of the Thirteenth Workshop on …, 2018 – aclweb.org
Abstract Second Language Acquisition Modeling is the task to predict whether a second language learner would respond correctly in future exercises based on their learning history. In this paper, we propose a neural network based system to utilize rich contextual, linguistic …

Modeling Evolution of Language Through Time with Neural Networks
E Delasalles, S Lamprier, L Denoyer – 2018 – openreview.net
Language evolves over time with trends and shifts in technological, political, or cultural contexts. Capturing these variations is important to develop better language models. While recent works tackle temporal drifts by learning diachronic embeddings, we instead propose …

Neural Networks for Language Modeling and Related Tasks in Low-Resourced Domains and Languages
O TILK – 2018 – digi.lib.ttu.ee
Survival of small languages largely depends on their utility in modern use cases like voice interfaces for computer systems, automatic transcription, chatbots, automatic translation and summarization, predictive keyboards, optical character recognition and handwritten text …

Speech Recognition Model for Assamese Language Using Deep Neural Network
MT Singh, PP Barman, R Gogoi – researchgate.net
The work presents a speech recognition model for the Assamese language of the state of Assam of India. We experimented the model on the digits of Assamese language. The Deep Neural Network is used to make the recognition model. The Long Short-Term Memory …

Implementation of Recurrent Neural Network with Sequence to Sequence Model to Translate Language Based on TensorFlow
SKY Donzia, HK Kim – Proceedings of the World Congress on …, 2018 – iaeng.org
LM, a type of deep neural network for dealing with sequential data, has been proposed and achieved remarkable results. Most deep learning frameworks support the GPU to build fast models, in particular the execution of these models on several GPUs. In this work, an …

textTOvec: Deep Contextualized Neural Autoregressive Topic Models Of Language With Distributed Compositional Prior
P Gupta, Y Chaudhary, F Buettner, H Schuetze – 2018 – openreview.net
We address two challenges of probabilistic topic modelling in order to better estimate the probability of a word in a given context, i.e., P(word|context): (1) No Language Structure in Context: probabilistic topic models ignore word order by summarizing a given context as a …

Sign Language Recognition Using Modified Convolutional Neural Network Model
H Gunawan, N Thiracitta… – … Association for Pattern …, 2018 – ieeexplore.ieee.org
Sign Language is an interesting topic and similar to Action Recognition. Especially along with the great development of Deep Learning. Video-based Sign Language Recognition is our concern because we want to recognize a sign not only by the shape but also by the …

A Potent Model to Recognize Bangla Sign Language Digits Using Convolutional Neural Network
SSS Mousumi, AKM Rabby, SA Hossain, S Abujar – 2018 – dspace.daffodilvarsity.edu.bd
Hearing impaired people have own language called Sign Language but it is difficult for understanding to general people. Sign language is the basic method of communication for deaf people during their everyday of life. Sign digits are also a major part of sign language …

Natural Language Generation with Neural Variational Models
H Bahuleyan – arXiv preprint arXiv:1808.09012, 2018 – arxiv.org
In this thesis, we explore the use of deep neural networks for generation of natural language. Specifically, we implement two sequence-to-sequence neural variational models-variational autoencoders (VAE) and variational encoder-decoders (VED). VAEs for text …

Time Series Neural Network Model for Part-of-Speech Tagging Indonesian Language
T Tanadi – IOP Conference Series: Materials Science and …, 2018 – iopscience.iop.org
Part-of-speech tagging (POS tagging) is an important part in natural language processing. Many methods have been used to do this task, including neural network. This paper models a neural network that attempts to do POS tagging. A time series neural network is modelled …

A Syntax-Guided Neural Model for Natural Language Interfaces to Databases
F Brad, R Iacob, I Hosu, S Ruseti… – 2018 IEEE 30th …, 2018 – ieeexplore.ieee.org
Recent advances in neural code generation have incorporated syntax to improve the generation of the target code based on the user’s request in natural language. We adapt the model of [1] to the Natural Language Interface to Databases (NLIDB) problem by taking into …

Improving Neural Models of Language with Input-Output Tensor Contexts
E Mizraji, A Pomi, J Lin – International Conference on Speech and …, 2018 – Springer
Tensor contexts enlarge the performances and computational powers of many neural models of language by generating a double filtering of incoming data. Applied to the linguistic domain, its implementation enables a very efficient disambiguation of polysemous …

A Study of Neural Networks Models applied to Natural Language Inference
VG Noronha, JCP da Silva – researchgate.net
Natural Language Inference is a task that given two sentences, a premise P and hypothesis H, try to establish entailment, contradiction or neutral relationships between them. The Stanford Natural Language Inference (SNLI) is a corpus that due to its size allows that …

How Spoken and Signed Language Structure Space Differently: A Neural Model
L Talmy – Ten Lectures on Cognitive Semantics, 2018 – brill.com
NB: Assumed here from my past work is the notion that, in spoken language, closed-class forms represent conceptual structure, while open-class forms represent conceptual content—ie, that the two subsystems have a functional division of labor. Thus, to examine how spatial …

Dialogue Act Classification Model Based on Deep Neural Networks for a Natural Language Interface to Databases in Korean
M Kim, H Kim – 2018 IEEE International Conference on Big …, 2018 – ieeexplore.ieee.org
Dialogue act classification is an essential task for implementing a natural language interface to databases because speakers’ intentions can be represented by dialogue acts (domain-independent speech act and domain-dependent predicator pairs). To resolve ambiguities in …

Research on multi-leaf collimator fault prediction model of Varian Novalis Tx medical linear accelerator based on BP Neural Network realized by R language
Y Deng, Z Xiao, B Ouyang, Z Wang… – Chinese Journal of …, 2018 – wprim.whocc.org.cn
Objective To construct and investigate the multi-leaf collimator (MLC) fault prediction model of Varian NovalisTx medical linear accelerator based on BP neural network. Methods The MLC fault data applied in clinical trial for 18 months were collected and analyzed. The total …

A Neural Network model for the Evaluation of Text Complexity in Italian Language: a Representation Point of View
GL Bosco, G Pilato, D Schicchi – Procedia computer science, 2018 – Elsevier
The goal of a text simplification system (TS) is to create a new text suited to the characteristics of a reader, with the final goal of making it more understandable. The building of an Automatic Text Simplification System (ATS) cannot be separated from a correct …

Teach Your Robot Your Language! Trainable Neural Parser for Modelling Human Sentence Processing: Examples for 15 Languages
X Hinaut, J Twiefel – 2018 – hal.inria.fr
We present a Recurrent Neural Network (RNN) that performs thematic role assignment and can be used for Human-Robot Interaction (HRI). The RNN is trained to map sentence structures to meanings (eg predicates). Previously, we have shown that the model is able to …

Natural Language Processing with Java: Techniques for building machine learning and neural network models for NLP
RM Reese, AS Bhatia – 2018 – books.google.com
Explore various approaches to organize and extract useful text from unstructured data using Java Key Features Use deep learning and NLP techniques in Java to discover hidden insights in text Work with popular Java libraries such as CoreNLP, OpenNLP, and Mallet …
