Natural Language Interfaces to Conceptual Models by Danica D. Damljanovic

Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy at The University of Sheffield Department of Computer Science July 2011

Contents
Abstract
Acknowledgements
Publications

I What are Natural Language Interfaces to Conceptual Models?

1 Introduction
1.1 Motivation
1.2 Challenges
1.3 Contribution

2 Conceptual Models
2.1 What are Conceptual Models?
2.2 Browsing Conceptual Models

3 Natural Language Interfaces: a Brief Overview
3.1 Natural Language Interfaces to Relational Databases
3.2 Open-domain Question-Answering Systems
3.3 Interactive Natural Language Interface Systems
3.4 Summary and Discussion

4 Evaluation of Natural Language Interfaces
4.1 Habitability
4.2 Usability
4.2.1 Effectiveness
4.2.2 Efficiency
4.2.3 User Satisfaction
4.3 Summary

II Usability of Natural Language Interfaces to Conceptual Models: State of the Art

5 Portability of Natural Language Interfaces to Structured Data
5.1 Introduction
5.2 ORAKEL
5.3 AquaLog and PowerAqua
5.4 E-librarian
5.5 PANTO
5.6 Querix
5.7 NLP-Reduce
5.8 CPL
5.9 Attempto Controlled English (ACE)
5.10 Summary and Discussion

6 Usability Enhancement Methods
6.1 Language Restriction
6.2 Feedback
6.3 Guided Interfaces
6.4 Extending the Vocabulary
6.5 How to Deal with Ambiguities?
6.5.1 Automatically Solving Ambiguities
6.5.2 Clarification Dialogs
6.5.3 Query Refinement
6.6 Summary and Discussion

III Building Natural Language Interfaces to Conceptual Models

7 QuestIO
7.1 Building the Domain Lexicon
7.2 Query Processing
7.2.1 Query Interpretation
7.2.2 Query Analysis
7.3 Coverage
7.4 Qualitative and Quantitative Evaluation
7.4.1 Correctness and Coverage
7.4.2 Portability and Scalability
7.5 User-centric Evaluation
7.5.1 QuestIO Prototype
7.5.2 Dataset
7.5.3 Evaluation Scope
7.5.4 Experimental Setup
7.5.5 Tasks
7.5.6 Results
7.6 Summary and Discussion

8 Towards Better Usability with FREyA: Part I
8.1 Feedback
8.1.1 Hiding Complexities
8.1.2 Identified Context and Tree-based View
8.1.3 Linearised List of Concepts
8.2 Evaluation
8.2.1 Evaluation Scope
8.2.2 Experimental Setup
8.2.3 Dataset
8.2.4 Tasks
8.2.5 Participants
8.2.6 Results
8.2.7 Summary and Discussion

9 Towards Better Usability with FREyA: Part II
9.1 FREyA Workflow
9.1.1 Ontology-based Lookup
9.1.2 Syntactic Parsing and Analysis
9.1.3 Consolidation
9.1.4 The Disambiguation Dialog
9.1.5 The Mapping Dialog
9.1.6 Combining Ontology Concepts into Triples and Generating SPARQL
9.1.7 An Illustrative Example
9.2 Answer Type Identification
9.2.1 QA Detector
9.2.2 FOC Finder
9.2.3 Consolidation
9.2.4 Generating Suggestions
9.2.5 An Illustrative Example
9.3 What to Show: Presentation of Results to the User
9.3.1 Display the Concise Answer
9.3.2 Feedback: the Graph-based View
9.4 Enriching Lexicon through User Interaction
9.5 Learning from the User’s Selection
9.5.1 Environment
9.5.2 Reinforcement Function
9.5.3 Value Function
9.5.4 Generalisation of the Learning Model
9.6 Portability
9.7 Evaluation
9.7.1 Correctness
9.7.2 Learning
9.7.3 Ranked Suggestions
9.7.4 Answer Type
9.7.5 Querying Linked Data with FREyA
9.8 Summary

IV Conclusion

10 Summary of Findings

11 Future Challenges
11.1 Scalability
11.2 What to Show?
11.3 Learning
11.4 Personalised Vocabulary
11.5 Using FREyA in the Open-Domain Scenario

Appendices

A User-centric Evaluation with QuestIO
B User-centric Evaluation with FREyA
C Using Large Ontologies
Bibliography