IRSTLM (IRST Language Modeling) Toolkit 2015


Notes:

A statistical language model is a probability distribution over sequences of words.
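As a toy illustration of that definition (not IRSTLM itself; the corpus and boundary symbols below are invented), a maximum-likelihood bigram model assigns a sentence the product of its conditional bigram probabilities:

```python
from collections import Counter
from math import prod

# Invented toy corpus; <s>/</s> mark sentence boundaries.
corpus = [["<s>", "the", "cat", "sat", "</s>"],
          ["<s>", "the", "dog", "sat", "</s>"]]

bigram_counts = Counter(b for s in corpus for b in zip(s, s[1:]))
context_counts = Counter(w for s in corpus for w in s[:-1])

def p_mle(word, context):
    """Maximum-likelihood estimate P(word | context) = c(context, word) / c(context)."""
    return bigram_counts[(context, word)] / context_counts[context]

def p_sentence(sent):
    """Probability of a sentence as the product of its bigram probabilities."""
    return prod(p_mle(w, c) for c, w in zip(sent, sent[1:]))

print(p_mle("cat", "the"))                               # 0.5: "the" is followed by "cat" once out of twice
print(p_sentence(["<s>", "the", "cat", "sat", "</s>"]))  # 0.5
```

Real toolkits such as IRSTLM additionally smooth these estimates (e.g. Witten-Bell or modified Kneser-Ney) so that unseen n-grams do not receive zero probability.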

Resources:

  • iwslt.org – International Workshop on Spoken Language Translation

See also:

IRSTLM (IRST Language Modeling) Toolkit 2013 | IRSTLM (IRST Language Modeling) Toolkit 2014


Language localisation of Tamil using Statistical Machine Translation Y Achchuthan, K Sarveswaran – Advances in ICT for Emerging …, 2015 – ieeexplore.ieee.org … There are several tools available to carry out these tasks. However, in this research GIZA++, KenLM, IRSTLM are used with the Moses framework to carryout above tasks. … Two language modelling tools that work with Moses IRSTLM [8] and KenLM [9] are used. … Related articles

Real-time direct translation system for Sinhala and Tamil languages S Rajpirathap, S Sheeyam… – Computer Science …, 2015 – ieeexplore.ieee.org … development we have used some modules from some tools such as IRSTLM, GIZA++ and Moses. The IRST Language Modeling Toolkit features algorithms and data structures suitable to estimate, store, and access very large n-gram language models. … Related articles All 4 versions

Augmenting Performance of SMT Models by Deploying Fine Tokenization of the Text and Part-of-Speech Tag AT Nedjo, H Degen – Computer and Information Science, 2015 – search.proquest.com … If ‘T’ is the target language, the LM computes P(T) and feeds this into the decoder software. IRSTLM of M. Federico et al. (2008) was used for language modeling. IRSTLM is an open-source language modeling toolkit and is hosted on sourceforge. … Related articles All 6 versions

Development of Indonesian-Japanese statistical machine translation using lemma translation and additional post-process MA Sulaeman, A Purwarianti – Electrical Engineering and …, 2015 – ieeexplore.ieee.org … B. Building Baseline System We built a baseline statistical machine translation system using IRSTLM [11] as language model builder, GIZA++ [12] for sentence alignment, and Moses [2] as decoder and phrase table builder. We … Related articles

Towards a hybrid NLG system for Data2Text in Portuguese JC Pereira, A Teixeira, JS Pinto – 2015 10th Iberian Conference …, 2015 – ieeexplore.ieee.org … The target language corresponds with the messages to be sent, in natural language, to the end user. The heavy work is done by MOSES 1 , GIZA++ 2 and IRSTLM 3 tools. … 1 http://www.statmt.org/moses/ 2 https://code.google.com/p/giza-pp/ 3 http://sourceforge.net/projects/irstlm/ … Cited by 2 Related articles All 4 versions

Deep belief neural networks and bidirectional long-short term memory hybrid for speech recognition L Brocki, K Marasek – Archives of Acoustics, 2015 – degruyter.com … A standard system based on GMM models was also trained in HTK toolkit (Young, 2000) and tested in the Julius (Lee, 2001) decoder using a language model created by IRSTLM toolkit (Federico, 2008). … Julius+IRSTLM GMM 120 60K 276 47% … Cited by 2 Related articles All 8 versions

An Improved Hierarchical Word Sequence Language Model Using Directional Information X Wu, Y Matsumoto – 2015 – aclweb.org … Models (+smoothing), BLEU/TER: IRSTLM(+MKN) 31.2/49.1; SRILM(+MKN) 31.3/48.9; 3-gram(+MKN) 31.3/49.1; 3-gram(+GLM) 31.3/49.2; HWS-3-gram(+MKN) 31.2/48.6; HWS-3-gram(+GLM) 31.2/48.7; DHWS-3-gram(+MKN) 31.2/48.6; DHWS-3-gram(+GLM) 31.3/48.6 … Related articles All 8 versions

An Improved Hierarchical Word Sequence Language Model Using Word Association X Wu, Y Matsumoto, K Duh, H Shindo – International Conference on …, 2015 – Springer … As shown in Table 2, since the results performed by our implementation (3-gram+MKN) are almost the same as those performed by the existing language model toolkits IRSTLM and SRILM, we believe that our implementation is correct. … Related articles

A Binarized Neural Network Joint Model for Machine Translation J Zhang, M Utiyama, E Sumita, G Neubig, S Nakamura – aclweb.org … We used the default parameters for Moses, and a 5-gram language model was trained on the target side of the training corpus using the IRSTLM Toolkit with improved Kneser-Ney smoothing. Feature weights were tuned by MERT (Och, 2003). … Cited by 2 All 10 versions
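Many entries in this list estimate their n-gram models with (improved or modified) Kneser-Ney smoothing. As a rough sketch of the idea only, here is an interpolated bigram Kneser-Ney model with a single fixed discount; the toy corpus and the discount value 0.75 are invented for illustration, and real implementations such as IRSTLM's use count-dependent discounts and higher orders:

```python
from collections import Counter

# Invented toy corpus.
sentences = ["the cat sat", "the dog sat", "a cat ran"]
bigrams = Counter(b for s in sentences for b in zip(s.split(), s.split()[1:]))

d = 0.75  # fixed absolute discount (assumption; real KN estimates this from counts)

# Continuation counts: in how many distinct bigram types does w appear as the second word?
continuation = Counter(w for (_, w) in bigrams)
total_types = len(bigrams)  # total number of distinct bigram types

ctx_total = Counter()   # c(u): token count of context u in bigram positions
followers = Counter()   # N1+(u,*): number of distinct words following u
for (u, w), c in bigrams.items():
    ctx_total[u] += c
    followers[u] += 1

def p_kn(w, u):
    """Interpolated Kneser-Ney: discounted MLE plus back-off to continuation probability."""
    backoff_mass = d * followers[u] / ctx_total[u]
    p_cont = continuation[w] / total_types
    return max(bigrams[(u, w)] - d, 0) / ctx_total[u] + backoff_mass * p_cont

vocab = {w for s in sentences for w in s.split()}
print(sum(p_kn(w, "the") for w in vocab))  # sums to 1.0: a proper distribution
```

The continuation probability is what distinguishes Kneser-Ney from plain absolute discounting: a word is rewarded for appearing after many different contexts, not merely for being frequent.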

A Hybrid Model for Enhancing Lexical Statistical Machine Translation (SMT) AGM ElSayed, AS Salama… – arXiv preprint arXiv: …, 2015 – arxiv.org … These models are combined to enhance the performance of statistical machine translation (SMT). Many implementation tools have been used in this work such as Moses, GIZA++, IRSTLM, KenLM, and BLEU. … These tools are as follows: • IRSTLM Toolkit for language modeling. … Related articles All 7 versions

Bengali to Assamese Statistical Machine Translation using Moses (Corpus Based) NJ Kalita, B Islam – arXiv preprint arXiv:1504.01182, 2015 – arxiv.org … Other translation tools like IRSTLM for Language Model and GIZA-PP-V1.0.7 for Translation model are utilized within this framework which is accessible in Linux situations. … The IRSTLM documentation gives a full explanation of the command-line option [8]. … Related articles All 5 versions

Adapting Machine Translation Models toward Misrecognized Speech with Text-to-Speech Pronunciation Rules and Acoustic Confusability N Ruiz, Q Gao, W Lewis… – … of Interspeech, Dresden …, 2015 – research.microsoft.com … Our baseline system features a statistical log-linear model including a phrase-based translation model (TM) and a lexicalized phrase-based reordering model (RM), both trained on TED data, a 5-gram language model (LM) trained with IRSTLM [13] and converted into … Cited by 2 Related articles All 5 versions

Boosting English-Chinese Machine Transliteration via High Quality Alignment and Multilingual Resources Y Shao, J Tiedemann, J Nivre – Proceedings of NEWS 2015 …, 2015 – anthology.aclweb.org … We use IRSTLM (Federico et al., 2008) to build language models with order 6. For English to Chinese transliteration, we build two systems with different transliteration units on the English side. … 2008. Irstlm: an open source toolkit for handling large scale language models. … Cited by 3 Related articles All 8 versions

TED-MWE: a bilingual parallel corpus with MWE annotation J Monti, F Sangati, M Arcan – CLiC it, 2015 – iris.unito.it … The IRSTLM toolkit (Federico et al., 2008) was used to build the 5-gram language model. … Trento, Italy. Marcello Federico, Nicola Bertoldi, and Mauro Cettolo. 2008. IRSTLM: an open source toolkit for handling large scale language models. … Related articles All 3 versions

Experimenting the use of catenae in Phrase-Based SMT M Sanguinetti – CLiC it, 2015 – iris.unito.it … For language modeling, we opted for the trigram option using the IRSTLM toolkit (Federico et al., 2008). The translation model was computed using the default settings provided by the system guidelines. … Irstlm: an open source toolkit for handling large scale language models. … Related articles All 2 versions

Refining Kazakh Word Alignment Using Simulation Modeling Methods for Statistical Machine Translation A Kartbayev – National CCF Conference on Natural Language …, 2015 – Springer … The system parameters were optimized with the minimum error rate training (MERT) algorithm [16], and we trained 5-gram language models with the IRSTLM toolkit [17] and then were converted to binary form using KenLM for a faster execution [18]. … Related articles

The FBK Participation in the WMT15 Automatic Post-editing Shared Task R Chatterjee, M Turchi, M Negri – Proceedings of the Tenth …, 2015 – anthology.aclweb.org … using the IRSTLM toolkit (Federico et al., 2008) having order of 5-gram with Kneser-Ney smoothing. … Marcello Federico, Nicola Bertoldi, and Mauro Cettolo. 2008. Irstlm: an open source toolkit for handling large scale language models. In Interspeech, pages 1618–1621. … Cited by 2 Related articles All 11 versions

A Probabilistic Feature-Based Fill-up for SMT J Zhang, L Li, A Way, Q Liu – mt-archive.info … The data used for SVM training, language model training and SVM tuning are summarized in Table 2. The SVM-tuned parameters are presented in Table 3. We use the open source IRSTLM toolkit (Federico et al., 2008) for language model training and KenLM (Heafield, 2011 … Related articles

Normalising orthographic and dialectal variants for the automatic processing of Swiss German T Samardzic, Y Scherrer, E Glaser – ltc.amu.edu.pl … In both cases, we create trigram language models using IRSTLM (Federico et al., 2008). … Federico, Marcello, Nicola Bertoldi, and Mauro Cettolo, 2008. IRSTLM: an open source toolkit for handling large scale language models. In Proceedings of Interspeech 2008. Brisbane. … Cited by 1 Related articles

SMT: A case study of Kazakh-English word alignment A Kartbayev – International Conference on Web Engineering, 2015 – Springer … All language models were trained with the IRSTLM toolkit [18] and then were converted to binary form using KenLM for a faster execution [19]. … 160–167 (2003) 18. Federico, M., Bertoldi, N., Cettolo, M.: IRSTLM: an open source toolkit for handling large scale language models. … Cited by 1 Related articles

A Hybrid System for Chinese-English Patent Machine Translation H Li, K Zhao, R Hu, Y Zhu, Y Jin – … of 6th Workshp on Patent and …, 2015 – researchgate.net … The phrase-based SMT baseline system Moses is built on the basis of freely available state-of- the-art tools: the GIZA++ toolkit (Och 2003) to estimate word alignments, the IRST Language Modelling toolkit (IRSTLM) (Federico, et al., 2008) with modified Kneser-Ney smoothing … Related articles

Learning Word Alignment Models for Kazakh-English Machine Translation A Kartbayev – International Symposium on Integrated Uncertainty in …, 2015 – Springer … All 5-gram language models were trained with the IRSTLM toolkit [21] and then were converted to binary form using KenLM for a faster execution [22]. … Federico, M., Bertoldi, N., Cettolo, M.: IRSTLM: an open source toolkit for handling large scale language models. … Related articles

Automatic speech recognition based Odia system B Karan, J Sahoo, PK Sahu – Microwave, Optical and …, 2015 – ieeexplore.ieee.org … Kaldi use FST based framework [1]. We can use any language model but it should represent in FST. For Building LM from raw text in Kaldi, IRSTLM toolkit is available [1]. Bi-gram language model is employed for our Odia ASR system. Example: Consider a simple sentence …

UGENT-LT3 SCATE system for machine translation quality estimation A Tezcan, V Hoste, B Desmet… – Tenth Workshop on …, 2015 – biblio.ugent.be … For the PoS LM, we used IRSTLM with Witten-Bell smoothing (Federico et al., 2008) as the modified Kneser-Ney smoothing, which is used by KENLM, is not well defined when there are no singletons (Chen and Goodman 1999), which leads to modeling issues in the PoS … Cited by 3 Related articles All 12 versions

TECHLIMED @QALB-Shared Task 2015: a hybrid Arabic Error Correction System D Mostefa, J Abualasal, M Gzawi, O Asbayou… – ANLP Workshop …, 2015 – oma-project.com … corpora we collected from various online newspapers for a total of 300M words. The language model was created with the IRSTLM toolkit (Federico, 2008). SMT System TECH-1 TECH-2 TECH-3 MADAMIRA Yes Yes No Training … Cited by 3 Related articles All 10 versions

Improving Word Alignment Through Morphological Analysis V Van Bui, TT Tran, NBT Nguyen, TD Pham… – … Integrated Uncertainty in …, 2015 – Springer … Together with the word alignment, a language model for the target language, Vietnamese in this case, is also trained by the popular tool IRSTLM [3] on a Vietnamese corpus, particularly the Vietnamese training part in our experiment. … Related articles All 2 versions

Improving Word Alignment Through Morphological Analysis TD Pham, AN Le, CA Le – … 2015, Nha Trang, Vietnam, October 15 …, 2015 – books.google.com … Together with the word alignment, a language model for the target language, Vietnamese in this case, is also trained by the popular tool IRSTLM [3] on a Vietnamese corpus, particularly the Vietnamese training part in our experiment. … Related articles

Linguistically-augmented perplexity-based data selection for language models A Toral, P Pecina, L Wang, J van Genabith – Computer Speech & Language, 2015 – Elsevier … All the LMs used in the experiments are built with IRSTLM 5.80.01 (Federico et al., 2008), they consider n-grams up to order 4 and they are smoothed using a simplified version of the modified Kneser-Ney method ( Chen and Goodman, 1996). … Cited by 4 Related articles All 4 versions
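The entry above selects LM training data by perplexity; the classic form of this idea is the cross-entropy difference of Moore and Lewis (2010), which prefers sentences that look in-domain but not general-domain. A minimal unigram sketch (the toy sentences, add-one smoothing, and helper functions are illustrative assumptions, not the paper's implementation):

```python
import math
from collections import Counter

def logprob(counts, total, w, vocab_size):
    # Add-one smoothing so unseen words get nonzero probability.
    return math.log((counts[w] + 1) / (total + vocab_size))

def cross_entropy(sent, counts, total, vocab_size):
    words = sent.split()
    return -sum(logprob(counts, total, w, vocab_size) for w in words) / len(words)

in_domain = ["the patent claims a method", "the method comprises steps"]
general   = ["the cat sat on the mat", "dogs run in the park"]
candidates = ["a method for patent translation", "the cat chased the dog"]

vocab = {w for s in in_domain + general + candidates for w in s.split()}
V = len(vocab)
cin = Counter(w for s in in_domain for w in s.split()); tin = sum(cin.values())
cgen = Counter(w for s in general for w in s.split()); tgen = sum(cgen.values())

# Moore-Lewis score: H_in(s) - H_gen(s); lower is more in-domain.
scored = sorted(candidates,
                key=lambda s: cross_entropy(s, cin, tin, V) - cross_entropy(s, cgen, tgen, V))
print(scored[0])  # the patent-like candidate ranks first
```

In practice the same ranking is done with full n-gram LMs (e.g. built with IRSTLM) and a selection threshold tuned on held-out data.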

Head Finalization And Morphological Analysis In Factored Phrase-Based Statistical Machine Translation From English To … H IMREN – 2015 – etd.lib.metu.edu.tr … TUBITAK = Scientific and Technological Research Council of Turkey; HPSG = Head-Driven Phrase Structure Grammar; POS = Part of Speech; IRSTLM = IRST Language Modeling; LM = Language Model; MERT = Minimum Error Rate Training; EU = European Union; UN = United Nations … Related articles

Hierarchical Phrase-based Translation Model vs. Classical Phrase-based Translation Model for Spanish-English Statistical Machine Translation System AE para Español-Inglés, B Ahmadnia, J Serrano – researchgate.net … The language model in both systems was smoothed with a modified Kneser-Ney algorithm (Pickhardt et al., 2014), and implemented in IRSTLM (Federico, Bertoldi, and Cettolo, 2008). … 2008. Irstlm: an open source toolkit for handling large scale language models. … Related articles All 5 versions

Experiment on a phrase-based statistical machine translation using PoS Tag information for Sundanese into Indonesian AA Suryani, DH Widyantoro… – 2015 International …, 2015 – ieeexplore.ieee.org … In the translation language training, we utilize the word alignment results using GIZA ++ (Och & Ney, 2003), while in the language model training we use IRSTLM, which apply the n-gram language model (Federico, Bertoldi, & Cettolo, 2008). … Related articles

Solving Data Sparsity by Morphology Injection in Factored SMT S Sreelekha, P Dungarwal, P Bhattacharyya, D Malathi – ltrc.iiit.ac.in … 2 To use null when a particular word cannot have that factor. 3 http://www.statmt.org/moses/ 4 https://hlt.fbk.eu/technologies/irstlm-irst-language-modelling-toolkit 5 http://nlp.stanford.edu/software/tagger.shtml … Related articles

Translating Literary Text between Related Languages using SMT A Toral, A Way – on Computational Linguistics for Literature, 2015 – aclweb.org … We train an LM of order 3 and improved Kneser-Ney smoothing (Chen and Goodman, 1996) with IRSTLM (Federico et al., 2008). … 2008. Irstlm: an open source toolkit for handling large scale language models. In INTERSPEECH, pages 1618–1621. Christian Federmann. 2012. … Cited by 1 Related articles All 13 versions

Splitting Compounds by Semantic Analogy J Daiber, L Quiroz, R Wechsler, S Frank – arXiv preprint arXiv:1509.04473, 2015 – arxiv.org … Word alignment is performed with Giza++ (Och and Ney, 2003). We use a 3rd order language model estimated using IRSTLM (Federico et al., 2008), as well as lexicalized reordering. … 2008. IRSTLM: An open source toolkit for handling large scale language models. … Cited by 2 Related articles All 8 versions

Learning word reorderings for hierarchical phrase-based statistical machine translation J Zhang, M Utiyama, E Sumita, H Zhao – … of the 53rd annual meeting of …, 2015 – aclweb.org … Why did the improvement level off quickly? 3 http://hlt.fbk.eu/en/irstlm … (a) Our model, sub-models M1–M4: CE 93.9/92.8/92.2/91.2, JE 92.9/91.3/90.1/89.3; (b) Hayashi model, reordering distances 1–4: CE 90.1/88.3/87.0/85.6, JE 85.3/81.9/80.6/78.8 … Cited by 2 Related articles All 7 versions

The IWSLT 2015 Evaluation Campaign M Cettolo, J Niehues, S Stüker… – Proceedings of the …, 2015 – workshop2015.iwslt.org … defined at InterBEST 2009. Translation and lexicalized reordering models were trained on the parallel training data by means of the Moses toolkit; 5-gram LMs with improved Kneser-Ney smoothing were estimated on the target side of the training data with the IRSTLM toolkit [33]. … Cited by 3 Related articles

Improving Semantic Parsing with Enriched Synchronous Context-Free Grammar J Li, M Zhu, W Lu, G Zhou – nlp.suda.edu.cn … Then for each fold, we use it as the tuning data while the other 540 instances and the NP list are used as training data.7 We use IRSTLM toolkit (Federico et al., 2008) to train a 5-gram LM on the MRL side of the training data, using modified Kneser-Ney smoothing. … Cited by 1 Related articles All 9 versions

Relation Extraction for Matrix (type) entities in Introductory programing problems H Shukla, K Gaurav – 2015 – cse.iitk.ac.in … corpus. Note that the language model was built using KENLM which comes inbuilt with mosesdecoder, one can as well use SRILM and IRSTLM for better results. The picture shows the phrase-table generated by mosesdecoder for the train data. … Related articles All 6 versions

Machine Translation Development For Indian Languages And Its Approaches A Godase, S Govilkar – academia.edu … 2. Model for English- Urdu Statistical Machine Translation [31] 2013 English-Urdu General Statistical Approach The model is trained on TrainSet using Moses with language modeling toolkit IRSTLM. TestSet gives the BLEU score of 32.11. … Related articles

Indian Language to Indian Language Machine Translation and Semantics NR Prabhugaonkar – 2015 – nlp.unigoa.ac.in Page 1. GOA UNIVERSITY Master Thesis Indian Language to Indian Language Machine Translation and Semantics Author: Neha Raghuvir Prabhugaonkar Supervisors: Dr. Jyoti D. Pawar Prof. Pushpak Bhattacharyya A thesis submitted in partial fulfilment of the requirements …

A monolingual approach to detection of text reuse in Russian-English collection O Bakhteev, R Kuznetsova, A Romanov… – … , Social Media and …, 2015 – ieeexplore.ieee.org … gather as much sentences covering the topic of sociology, law and philosophy, as possible, in order to remain in the domain of our experiment dataset (the details of the experiment and dataset preparation are described in Section V). For Moses training we used IRSTLM [36] 3 … Related articles

Language Identification and Modeling in Specialized Hardware K Heafield, R Kshirsagar, S Barona – Volume 2: Short Papers – research.ed.ac.uk … Marcello Federico, Nicola Bertoldi, and Mauro Cettolo. 2008. IRSTLM: an open source toolkit for handling large scale language models. In Proceedings of Interspeech, Brisbane, Australia. David Hall, Taylor Berg-Kirkpatrick, John Canny, and Dan Klein. 2014. … Related articles All 10 versions

Translating without in-domain corpus: Machine translation post-editing with online learning techniques AL Lagarda, D Ortiz-Martínez, V Alabau… – Computer Speech & …, 2015 – Elsevier Globalization has dramatically increased the need of translating information from one language to another. Frequently, such translation needs should be satisfie. Cited by 2 Related articles All 6 versions

A Review of the Various Approaches for Text to Text Machine Translations OB Abiola, AO Adetunmbi… – International Journal of …, 2015 – search.proquest.com … The system is implemented and evaluated using BLEU score and precision measure and the hybrid approach is found to improve the performance of the translator. Open source tools such as IRSTLM, GIZA++, Moses decoder etc. … Cited by 1 Related articles All 7 versions

The MGB Challenge: Evaluating multi-genre broadcast media recognition P Bell, MJF Gales, T Hain, J Kilgour… – … IEEE Workshop on …, 2015 – ieeexplore.ieee.org … Table 3 shows the performance of the baseline speaker segmentation and clustering systems for unlinked speaker … 3 http://xmlstar.sourceforge.net/ 4 http://www.speech.sri.com/projects/srilm/ 5 https://hlt.fbk.eu/technologies/irstlm … Cited by 14 Related articles All 6 versions

Syllabification and parameter optimisation in Zulu to English machine translation G Kotzé, F Wolff – 2015 – uir.unisa.ac.za … There are several language modeling tools available, such as SRILM (Stolcke, 2002), IRSTLM (Federico, Bertoldi & Cettolo, 2008), RandLM (Talbot & Osborne, 2007) and KenLM (Heafield, 2011; Heafield, Pouzyrevsky, Clark & Koehn, 2013). … Related articles All 4 versions

Non-projective Dependency-based Pre-Reordering with Recurrent Neural Network for Machine Translation AV Miceli-Barone, G Attardi – Syntax, Semantics and …, 2015 – anthology.aclweb.org … The baseline system is phrase-based Moses in a default configuration with maximum distortion distance equal to 6 and lexicalized reordering enabled. Maximum phrase size is equal to 7. The language model is a 5-gram IRSTLM/KenLM. … Cited by 2 Related articles All 13 versions

Harmonised Shape Grammar in Design Practice A Kunkhet – 2015 – eprints.staffs.ac.uk Page 1. Harmonised Shape Grammar in Design Practice By Arus Kunkhet A Thesis Submitted in Partial Fulfilment of the Requirements of Staffordshire University for a Degree of Doctor of Philosophy Faculty of Computing, Engineering and Sciences April 2015 Page 2. i Abstract … Related articles All 4 versions

Quality estimation-guided supplementary data selection for domain adaptation of statistical machine translation P Banerjee, R Rubino, J Roturier, J van Genabith – Machine Translation, 2015 – Springer … 2002). The IRSTLM toolkit (Federico et al. 2008) is used for training all the 5-gram language models as well as for learning the linear interpolation weights using EM. The same toolkit is also used to learn the interpolation weights for combining the phrase tables. … Cited by 2 Related articles All 3 versions

Modernising historical Slovene words Y Scherrer, T Erjavec – Natural Language Engineering, 2015 – Cambridge Univ Press … 4 https://code.google.com/p/giza-pp/. 5 http://www.statmt.org/moses/. 6 http://hlt.fbk.eu/technologies/irstlm-irst-language-modelling-toolkit. … Table 5. Baseline and upper bound performances on Lfoo. … Cited by 5 Related articles All 3 versions

Application 2: Machine Translation C Ramisch – Multiword Expressions Acquisition, 2015 – Springer … The corpus was word-aligned using GIZA++ and the phrase tables were extracted using the grow-diag-final heuristic. Language models were estimated from the French part of the parallel training corpus using 5-grams with IRSTLM. … Related articles

Boosted acoustic model learning and hypotheses rescoring on the CHiME-3 task S Jalalvand, D Falavigna, M Matassoni… – … IEEE Workshop on …, 2015 – ieeexplore.ieee.org … of ICASSP, 1995, pp. 181–184. [20] M. Federico, N. Bertoldi, and M. Cettolo, “IRSTLM: an Open Source Toolkit for Handling Large Scale Language Models,” in Proc. of Interspeech, Brisbane, Australia, September 2008, pp. 1618–1621. … Cited by 3 Related articles

Speech recognition based confidence measures for building voices from untranscribed speech TS Godambe – 2015 – web2py.iiit.ac.in Page 1. Speech recognition based confidence measures for building voices from untranscribed speech Thesis submitted in partial fulfillment of the requirements for the degree of MS by Research in Electronics and Communication Engineering by … Related articles

Enhancing Machine Translation for English-Japanese Using Syntactic Pattern Recognition Methods T McMahon – 2015 – curve.carleton.ca … 108 6.4.1 Moses Statistical Machine Translation System . . . . . 108 6.4.2 IRST Language Modeling (IRSTLM) . . . . 109 6.4.3 GIZA++ Word Alignment Tool . . . . . 109 6.4.4 Baseline Results . . . . . 109 … Related articles

Disambiguating discourse connectives for statistical machine translation T Meyer, N Hajlaoui… – IEEE/ACM Transactions …, 2015 – ieeexplore.ieee.org … 2329-9290 (c) 2015 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information. … Cited by 5 Related articles All 6 versions

Latent semantics in language models T Brychcín, M Konopík – 2015 – researchgate.net … Latent semantics in language models Tomáš Brychcín, Miloslav Konopík – Department of Computer Science and Engineering, Faculty of Applied Sciences, University of West Bohemia, Univerzitni 8, 306 14 Plzen … Cited by 4 Related articles

User Behavioral Modeling of Web-based Systems for Continuous User Authentication LC Milton – 2015 – drum.lib.umd.edu Page 1. ABSTRACT Title of dissertation: USER BEHAVIORAL MODELING OF WEB-BASED SYSTEMS FOR CONTINUOUS USER AUTHENTICATION Leslie C. Milton, Doctor of Philosophy, 2015 Dissertation directed by: Professor …

Latent semantics in language models T Brychcín, M Konopík – Computer Speech & Language, 2015 – Elsevier This paper investigates three different sources of information and their integration into language modelling. Global semantics is modelled by Latent Dirichlet a. Cited by 1 Related articles All 2 versions

Community post-editing of machine-translated user-generated content L Mitchell – 2015 – doras.dcu.ie Page 1. Dublin City University Community Post-Editing of Machine-Translated User-Generated Content Author: Linda Mitchell, BA, MA Supervisors: Dr. Sharon O’Brien Dr. Johann Roturier Dr. Fred Hollowood Thesis submitted for the degree of the Doctor of Philosophy in the … Related articles