Notes:
YodaQA is an open-source factoid question answering system that produces answers from both structured databases and unstructured text corpora using information extraction. It is composed of multiple natural language processing and machine learning components, including data sources that are searched for candidate answers, parsers, and other text processing tools. Given an input question, YodaQA returns a ranked list of answers; its architecture is primarily inspired by information retrieval pipelines. It answers a wide range of open-domain questions and has been adapted to the biomedical domain. It can also be extended to execute task-specific actions for individual commands and has served as the basis of a chatbot. YodaQA has been compared with other QA systems such as QuASE, AskMSR, and DeepQA.
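To illustrate the ranked-answer interface mentioned above, the sketch below queries a YodaQA-style REST endpoint and prints the ordered answer list; the base URL, the /q path, the query parameter, and the JSON field names are assumptions made for illustration, not YodaQA's documented API.

    # Minimal sketch of querying a YodaQA-style REST endpoint for ranked answers.
    # NOTE: the base URL, the "/q" path, and the JSON field names ("answers",
    # "text", "confidence") are assumptions for illustration, not the documented API.
    import requests

    def ask(question, base_url="http://localhost:4567"):
        # Submit the question; a YodaQA-style service returns candidate answers
        # together with confidence scores.
        resp = requests.get(f"{base_url}/q", params={"text": question}, timeout=120)
        resp.raise_for_status()
        payload = resp.json()
        # Sort candidates by confidence to obtain the ordered answer list.
        answers = sorted(payload.get("answers", []),
                         key=lambda a: a.get("confidence", 0.0),
                         reverse=True)
        return [(a.get("text"), a.get("confidence")) for a in answers]

    if __name__ == "__main__":
        for text, conf in ask("What is the capital of the Czech Republic?"):
            print(f"{conf:.3f}\t{text}")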
Resources:
- ailao.eu/yodaqa
- github.com/brmson/yodaqa (YodaQA REST API)
- pasky.or.cz (Petr Baudis)
- qald.sebastianwalter.org (Question Answering over Linked Data)
See also:
YodaQA: a modular question answering system pipeline. P. Baudiš – POSTER 2015, 19th International Student Conference …, 2015 – ailao.eu. Abstract: This is a preprint, submitted on 2015-03-22. Question Answering as a sub-field of information retrieval and information extraction is recently enjoying renewed popularity, triggered by the publicized success of IBM Watson in the Jeopardy! competition. But … Cited by 10.
Modeling of the question answering task in the YodaQA system. P. Baudiš, J. Šedivý – International Conference of the Cross-Language …, 2015 – Springer. Abstract: We briefly survey the current state of art in the field of Question Answering and present the YodaQA system, an open source framework for this task and a baseline pipeline with reasonable performance. We take a holistic approach, reviewing and aiming to … Cited by 4.
Biomedical Question Answering using the YodaQA System: Prototype Notes. P. Baudiš, J. Šedivý – 2015 – ceur-ws.org. Abstract: We briefly outline the YodaQA open domain question answering system and its initial adaptation to the Biomedical domain for the purposes of the BIOASQ challenge (question answering task 3b) on CLEF2015. Keywords: Question answering, linked data, … Cited by 1.
QALD Challenge and the YodaQA System: Prototype Notes. P. Baudiš, J. Šedivý – ailao.eu. Abstract: We briefly outline the YodaQA open domain question answering system and its initial application on the Question Answering over Linked Data challenge QALD5 on CLEF2015. Since YodaQA is focused on QA over unstructured data and has been only …
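For context on what the QALD setting involves (mapping a natural-language question to a structured query over linked data), here is a small hand-written sketch assuming the public DBpedia SPARQL endpoint and the SPARQLWrapper Python package; the question-to-query mapping is written by hand for illustration, not produced by YodaQA's pipeline.

    # Illustrative only: a QALD-style task maps a natural-language question
    # ("What is the capital of Germany?") to a structured query over linked data.
    # Assumes the public DBpedia endpoint and the SPARQLWrapper package.
    from SPARQLWrapper import SPARQLWrapper, JSON

    sparql = SPARQLWrapper("https://dbpedia.org/sparql")
    sparql.setQuery("""
        SELECT ?capital WHERE {
          <http://dbpedia.org/resource/Germany>
              <http://dbpedia.org/ontology/capital> ?capital .
        }
    """)
    sparql.setReturnFormat(JSON)
    results = sparql.query().convert()
    for row in results["results"]["bindings"]:
        print(row["capital"]["value"])  # expected: the DBpedia resource for Berlin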
Sentence Pair Scoring: Towards Unified Framework for Text Comprehension. P. Baudiš, J. Šedivý – arXiv preprint arXiv:1603.06127, 2016 – arxiv.org. … To alleviate the problems listed above, we are introducing a new dataset yodaqa/curatedv2 based on the curatedv2 question dataset (introduced in (Baudiš and Šedivý, 2015), further denoisified by Mechanical Turkers) with candidate sentences as retrieved by the YodaQA … Cited by 1.
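As a rough illustration of sentence-pair scoring (ranking retrieved candidate sentences by their relevance to a question), here is a classical TF-IDF baseline sketch using scikit-learn; the paper itself studies neural sentence-embedding models, so this is only a stand-in, and the example sentences are invented.

    # Toy sentence-pair scoring: rank candidate sentences by TF-IDF cosine
    # similarity to the question. A classical stand-in for the neural models
    # studied in the paper; the sentences below are invented examples.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    question = "When was the Eiffel Tower built?"
    candidates = [
        "The Eiffel Tower was constructed from 1887 to 1889.",
        "The Louvre is the world's largest art museum.",
        "Gustave Eiffel's company designed the tower.",
    ]

    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform([question] + candidates)
    scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()

    for score, sentence in sorted(zip(scores, candidates), reverse=True):
        print(f"{score:.3f}  {sentence}")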
Systems and Approaches for Question Answering (DRAFT). P. Baudiš – 2015 – sharelatex.com. Snippet shows the table of contents and Chapter 3, "The YodaQA System" (Section 3.1, "YodaQA Pipeline Architecture"); the pipeline components named are Question Analysis, Full-text Search, Title Text Search, Answer Analysis, Answer Merging, and Answer Scoring, and a footnote points to a demo at http://45.55.181.170/defgen/.
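To make the stage names in the excerpt above concrete, here is a minimal Python skeleton of a modular QA pipeline arranged along those stages; all class and function names are illustrative only, not YodaQA's actual components (YodaQA itself is written in Java on Apache UIMA).

    # Illustrative skeleton of a modular QA pipeline using the stage names from
    # the excerpt above. Names and logic are invented for illustration.
    from dataclasses import dataclass, field

    @dataclass
    class Candidate:
        text: str
        score: float = 0.0
        features: dict = field(default_factory=dict)

    def question_analysis(question):
        # Derive search clues (e.g., keywords) from the question.
        return {"clues": question.rstrip("?").split()}

    def fulltext_search(analysis):
        # Placeholder: would query a full-text index with the clues.
        return [Candidate("candidate from full-text search")]

    def title_text_search(analysis):
        # Placeholder: would search article titles matching the clues.
        return [Candidate("candidate from title search")]

    def answer_analysis(candidates):
        # Placeholder: would attach type-checking and other features.
        for c in candidates:
            c.features["length"] = len(c.text)
        return candidates

    def answer_merging(candidates):
        # Deduplicate candidates with identical surface text.
        return list({c.text: c for c in candidates}.values())

    def answer_scoring(candidates):
        # Placeholder scoring; a real system would use a trained classifier.
        for c in candidates:
            c.score = 1.0 / (1 + c.features.get("length", 0))
        return sorted(candidates, key=lambda c: c.score, reverse=True)

    def answer(question):
        analysis = question_analysis(question)
        candidates = fulltext_search(analysis) + title_text_search(analysis)
        return answer_scoring(answer_merging(answer_analysis(candidates)))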
Question Answering Survey (Preliminary). P. Baudiš – 2015 – sharelatex.com. Snippet: "TREC F1 improvement was 3.5%. (Absolute numbers are not comparable due to sub-sampling and re-evaluation.)" The excerpt also cites Baudiš (2015), "YodaQA: A Modular Question Answering System Pipeline".
Text Analytics: the convergence of Big Data and Artificial Intelligence. A. Moreno, T. Redondo – International Journal of Interactive Multimedia and …, 2016 – ijimai.org. Snippet: "Unfortunately, OpenEphyra has been discontinued and some alternatives have appeared, such as YodaQA [24], which is a general purpose QA system, that is, an 'open domain' …" Cited by 1.
The Fudan participation in the 2015 BioASQ Challenge: Large-scale Biomedical Semantic Indexing and Question Answering. S. Peng, R. You, Z. Xie, Y. Zhang… – Working Notes for the …, 2015 – pdfs.semanticscholar.org. Snippet is a flattened results table ranking BioASQ Task 3b systems; "YodaQA base" appears among the ranked entries (with scores such as 0.1631 and 0.2045) alongside "main system", "fdu", "fa1", and "oaqa-3b" runs. Cited by 2.
Benchmarking Question Answering Systems. R. Usbeck, M. Röder, C. Unger, M. Hoffmann, C. Demmler… – svn.aksw.org. Snippet: "… SPARQL generation process. YodaQA [1] is a modular, open source, hybrid approach built on top of the Apache UIMA framework that is part of the Brmson platform and is inspired by DeepQA. Yoda's pipeline is divided …"
Results of the BioASQ tasks of the Question Answering Lab at CLEF 2015. G. Balikas, A. Kosmopoulos, A. Krithara… – CLEF …, 2015 – hal.archives-ouvertes.fr. Snippet: "… Several versions of the system were examined, which differ on the features and training data that was used. The YodaQA system, described in [4], is a pipeline question answering system that was altered in order to make it compatible with the BioASQ task. …" Cited by 18.
When a Knowledge Base Is Not Enough: Question Answering over Knowledge Bases with External Text Data. D. Savenkov, E. Agichtein – mathcs.emory.edu. Snippet: "… The result reported for YodaQA system is F1 score at position 1. As we can see, Text2KB significantly improves over the baseline system and reaches the current best published result – STAGG [31]. …" The accompanying results table lists OpenQA [16] at 0.35 and YodaQA [4] at 0.343. Cited by 1.
Answering Controlled Natural Language Questions on RDF Knowledge Bases. G.M. Mazzeo, C. Zaniolo – openproceedings.org. Snippet is Figure 3, results on the QALD-5 benchmark (49 questions; columns: processed, right, partial, F over processed, F global): CANaLI 46/44/1, 0.98, 0.92; Xser 42/26/7, 0.73, 0.63; QAnswer 37/9/4, 0.40, 0.30; APEQ 26/8/5, 0.44, 0.23; SemGraphQA 31/7/3, 0.31, 0.20; YodaQA 33/8/2, 0.26, 0.18.
SemGraphQA@QALD-5: LIMSI participation at QALD-5@CLEF. R. Beaumont, B. Grau, A.L. Ligozat – 2015 – ceur-ws.org. Snippet is Table 1, QALD-5 test evaluation (processed/right/partial, recall, precision, F1): Xser 42/26/7, 0.72, 0.74, 0.73; APEQ 26/8/5, 0.48, 0.40, 0.44; QAnswer 37/9/4, 0.35, 0.46, 0.40; SemGraphQA 31/7/3, 0.32, 0.31, 0.31; YodaQA 33/8/2, 0.25, 0.28, 0.26. Cited by 2.
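As a quick check of the F1 column in this and the related QALD-5 tables: for YodaQA, F1 = 2·P·R/(P+R) = 2·0.28·0.25/(0.28+0.25) ≈ 0.264, which rounds to the reported 0.26.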
[BOOK] Experimental IR Meets Multilinguality, Multimodality, and Interaction: 6th International Conference of the CLEF Association, CLEF'15, Toulouse, France, … J. Mothe, J. Savoy, J. Kamps, K. Pinel-Sauvagnat… – 2015 – books.google.com. The table of contents includes "Modeling of the Question Answering Task in the YodaQA System" by Petr Baudiš and Jan Šedivý (p. 222). Cited by 2.
Survey on Challenges of Question Answering in the Semantic Web. K. Höffner, S. Walter, E. Marx, R. Usbeck… – Submitted to the …, 2016 – svn.aksw.org. Snippet: "… IBM Watson [63], which was able to win the Jeopardy! challenge against human experts. YodaQA [12] is a modular open source hybrid approach built on top of the Apache UIMA framework that is part of the Brmson platform and is inspired by DeepQA. …" Cited by 1.
QAnswer – enhanced entity matching for question answering over linked data. S. Ruseti, A. Mirea, T. Rebedea, S. Trausan-Matu – 2015 – pdfs.semanticscholar.org. Snippet is a QALD-5 results table (processed/right/partial, recall, precision, F1, global F1): Xser (en) 42/26/7, 0.72, 0.74, 0.73, 0.63; QAnswer (en) 37/9/4, 0.35, 0.46, 0.40, 0.30; APEQ (en) 26/8/5, 0.48, 0.40, 0.44, 0.23; SemGraphQA (en) 31/7/3, 0.32, 0.31, 0.31, 0.20; YodaQA (en) 33/8/2, 0.25, 0.28, 0.26, 0.18. "Out of the …" Cited by 4.
Joint Learning of Sentence Embeddings for Relevance and Entailment. P. Baudiš, S. Stanko, J. Šedivý – arXiv preprint arXiv:1605.04655, 2016 – arxiv.org. The snippet shows only the preprint's title-page header (arXiv:1605.04655v1 [cs.CL], 16 May 2016; FEE CTU Prague, Department of Cybernetics).
AskNow: A Framework for Natural Language Query Formalization in SPARQL. M. Dubey, S. Dasgupta, A. Sharma, K. Höffner… – International Semantic …, 2016 – Springer.
CANaLI: A System for Answering Controlled Natural Language Questions on RDF Knowledge Bases. G.M. Mazzeo, C. Zaniolo – fmdb.cs.ucla.edu. Snippet is part (c) of Figure 8, results on the QALD-5 DBpedia test set (49 questions; processed/right/partial, recall, precision, F1, global F1): APEQ 26/8/5, 0.48, 0.40, 0.44, 0.23; SemGraphQA 31/7/3, 0.32, 0.31, 0.31, 0.20; YodaQA 33/8/2, 0.25, 0.28, 0.26, 0.18. Figure 8 covers the QALD-3, QALD-4, and QALD-5 test sets, containing 99, 50, and 49 questions, respectively.
Core techniques of ontology-based question answering systems: a survey. D. Diefenbach, K. Singh, P. Maret – Semantic Web Journal – wtlab.um.ac.ir. Snippet: "… SWIP [26] was excluded since it uses templates adapted to a specific ontology so that it is not an open-domain QA system. YodaQA [3] was excluded since it mainly uses information retrieval techniques. …" A results row lists YodaQA with values 50, 33, 8, 2, 0.28, 0.25, 0.26.
Overcoming challenges of semantic question answering in the semantic web. K. Höffner, S. Walter, E. Marx, J. Lehmann… – Semantic Web … – semantic-web-journal.net. Snippet: "… However, it is yet limited to Korean for the moment." A footnote gives the verbalization example "123"^^<http://dbpedia.org/datatype/squareKilometre> → "123 square kilometres"; other footnotes point to http://brmlab.cz/project/brmson, http://ailao.eu/yodaqa/, and http://exobrain.kr/.