SemEval


The SemEval-2007 WePS evaluation: Establishing a benchmark for the Web People Search task. J Artiles, J Gonzalo, et al. Proceedings of the 4th International Workshop on Semantic Evaluations (SemEval-2007), 2007. Abstract: This paper presents the task definition, resources, participation, and comparative results for the Web People Search task, which was organized as part of the SemEval-2007 evaluation exercise. This task consists of clustering a set of documents that mention an … Cited by 132.

SemEval-2007 Task 14: Affective Text. C Strapparava, et al. Proceedings of the 4th International Workshop on Semantic Evaluations (SemEval-2007), 2007. Abstract: The "Affective Text" task focuses on the classification of emotions and valence (positive/negative polarity) in news headlines, and is meant as an exploration of the connection between emotions and lexical semantics. In this paper, we describe the data … Cited by 93.

SemEval-2007 Task 15: TempEval temporal relation identification. M Verhagen, R Gaizauskas, F Schilder, et al. Proceedings of the 4th International Workshop on Semantic Evaluations (SemEval-2007), 2007. Abstract: The TempEval task proposes a simple way to evaluate automatic extraction of temporal relations. It avoids the pitfalls of evaluating a graph of inter-related labels by defining three sub tasks that allow pairwise evaluation of temporal relations. The task not … Cited by 88.

SemEval-2007 Task 04: Classification of semantic relations between nominals. R Girju, P Nakov, V Nastase, S Szpakowicz, et al. Proceedings of the 4th International Workshop on Semantic Evaluations (SemEval-2007), 2007. Abstract: The NLP community has shown a renewed interest in deeper semantic analyses, among them automatic recognition of relations between pairs of words in a text. We present an evaluation task designed to provide a framework for comparing different approaches to … Cited by 85.

SemEval-2007 Task-17: English lexical sample, SRL and all words. S Pradhan, E Loper, D Dligach, et al. Proceedings of the 4th International Workshop on Semantic Evaluations (SemEval-2007), 2007. Abstract: This paper describes our experience in preparing the data and evaluating the results for three subtasks of SemEval-2007 Task-17: Lexical Sample, Semantic Role Labeling (SRL) and All-Words, respectively. We tabulate and analyze the results of … Cited by 86.

SemEval-2007 Task 10: English lexical substitution task. D McCarthy, et al. Proceedings of the 4th International Workshop on Semantic Evaluations (SemEval-2007), 2007. Abstract: In this paper we describe the English Lexical Substitution task for SemEval. In the task, annotators and systems find an alternative substitute word or phrase for a target word in context. The task involves both finding the synonyms and disambiguating the context. … Cited by 69.

SemEval-2007 Task 02: Evaluating word sense induction and discrimination systems. E Agirre, et al. Proceedings of the 4th International Workshop on Semantic Evaluations (SemEval-2007), 2007. Abstract: The goal of this task is to allow for comparison across sense-induction and discrimination systems, and also to compare these systems to other supervised and knowledge-based systems. In total there were 6 participating systems. We reused the … Cited by 60.

SemEval'07 Task 19: Frame semantic structure extraction. C Baker, M Ellsworth, et al. Proceedings of the 4th International Workshop on Semantic Evaluations (SemEval-2007), 2007. Abstract: This task consists of recognizing words and phrases that evoke semantic frames as defined in the FrameNet project (http://framenet.icsi.berkeley.edu), and their semantic dependents, which are usually, but not always, their syntactic dependents (including … Cited by 52.

SemEval-2007 Task 07: Coarse-grained English all-words task. R Navigli, KC Litkowski, et al. Proceedings of the 4th International Workshop on Semantic Evaluations (SemEval-2007), 2007. Abstract: This paper presents the coarse-grained English all-words task at SemEval-2007. We describe our experience in producing a coarse version of the WordNet sense inventory and preparing the sense-tagged corpus for the task. We present the results of participating … Cited by 49.

SemEval-2007 Task 06: Word-sense disambiguation of prepositions. K Litkowski, et al. Proceedings of the 4th International Workshop on Semantic Evaluations (SemEval-2007), 2007. Abstract: The SemEval-2007 task to disambiguate prepositions was designed as a lexical sample task. A set of over 25,000 instances was developed, covering 34 of the most frequent English prepositions, with two-thirds of the instances for training and one-third as the test … Cited by 26.

SemEval-2007 Task 08: Metonymy resolution at SemEval-2007. K Markert, et al. Proceedings of the 4th International Workshop on Semantic Evaluations (SemEval-2007), 2007. Abstract: We provide an overview of the metonymy resolution shared task organised within SemEval-2007. We describe the problem, the data provided to participants, and the evaluation measures we used to assess performance. We also give an overview of the … Cited by 24.

SemEval-2007 Task 09: Multilevel semantic annotation of Catalan and Spanish. MA Martí, et al. 2007, mt-archive.info. Abstract: In this paper we describe SemEval-2007 task number 9 (Multilevel Semantic Annotation of Catalan and Spanish). In this task, we aim at evaluating and comparing automatic systems for the annotation of several semantic linguistic levels for Catalan and … Cited by 23.

SemEval-2007 Task 05: Multilingual Chinese-English lexical sample. P Jin, Y Wu, et al. Proceedings of the 4th International Workshop on Semantic Evaluations (SemEval-2007), 2007. Abstract: The Multilingual Chinese-English lexical sample task at SemEval-2007 provides a framework to evaluate Chinese word sense disambiguation and to promote research. This paper reports on the task preparation and the results of six participants. Cited by 23.

SemEval-2010 Task 8: Multi-way classification of semantic relations between pairs of nominals. I Hendrickx, SN Kim, Z Kozareva, et al. Proceedings of the 5th International Workshop on Semantic Evaluation (SemEval-2010), 2010. Abstract: SemEval-2 Task 8 focuses on Multi-way classification of semantic relations between pairs of nominals. The task was designed to compare different approaches to semantic relation classification and to provide a standard testbed for future research. This … Cited by 22.

SemEval-2010 Task 3: Cross-lingual word sense disambiguation. E Lefever, et al. Proceedings of the 5th International Workshop on Semantic Evaluation (SemEval-2010), 2010. Abstract: The goal of this task is to evaluate the feasibility of multilingual WSD on a newly developed multilingual lexical sample data set. Participants were asked to automatically determine the contextually appropriate translation of a given English noun in five … Cited by 21.

SemEval-2010 Task 17: All-words word sense disambiguation on a specific domain. E Agirre, OL de Lacalle, C Fellbaum, et al. Proceedings of the 5th International Workshop on Semantic Evaluation (SemEval-2010), 2010. Abstract: Domain portability and adaptation of NLP components and Word Sense Disambiguation systems present new challenges. The difficulties found by supervised systems to adapt might change the way we assess the strengths and weaknesses of … Cited by 24.

SemEval-2010 Task 10: Linking events and their participants in discourse. J Ruppenhofer, C Sporleder, R Morante, et al. Proceedings of the 5th International Workshop on Semantic Evaluation (SemEval-2010), 2010. Abstract: We describe the SemEval-2010 shared task on "Linking Events and Their Participants in Discourse". This task is an extension to the classical semantic role labeling task. While semantic role labeling is traditionally viewed as a sentence-internal task, local … Cited by 21.

SemEval-2010 Task 13: Evaluating events, time expressions, and temporal relations (TempEval-2). J Pustejovsky, et al. Proceedings of the Workshop on Semantic Evaluations …, 2009. Abstract: We describe the TempEval-2 task which is currently in preparation for the SemEval-2010 evaluation exercise. This task involves identifying the temporal relations between events and temporal expressions in text. Six distinct subtasks are defined, ranging from … Cited by 18.

SemEval-2010 Task 13: TempEval-2. M Verhagen, R Saurí, T Caselli, et al. Proceedings of the 5th International Workshop on Semantic Evaluation (SemEval-2010), 2010. Abstract: TempEval-2 comprises evaluation tasks for time expressions, events and temporal relations, the latter of which was split up in four sub tasks, motivated by the notion that smaller subtasks would make both data preparation and temporal relation extraction … Cited by 20.

SemEval-2010 Task 1: Coreference resolution in multiple languages. M Recasens, L Màrquez, E Sapena, et al. Proceedings of the 5th International Workshop on Semantic Evaluation (SemEval-2010), 2010. Abstract: This paper presents the SemEval-2010 task on Coreference Resolution in Multiple Languages. The goal was to evaluate and compare automatic coreference resolution systems for six different languages (Catalan, Dutch, English, German, Italian, and Spanish … Cited by 24.

SemEval-2010 Task 5: Automatic keyphrase extraction from scientific articles. SN Kim, O Medelyan, MY Kan, et al. Proceedings of the 5th International Workshop on Semantic Evaluation (SemEval-2010), 2010. Abstract: This paper describes Task 5 of the Workshop on Semantic Evaluation 2010 (SemEval-2010). Systems are to automatically assign keyphrases or keywords to given scientific articles. The participating systems were evaluated by matching their extracted … Cited by 18.

SemEval-2010 Task 14: Word sense induction & disambiguation. S Manandhar, IP Klapaftis, D Dligach, et al. Proceedings of the 5th International Workshop on Semantic Evaluation (SemEval-2010), 2010. Abstract: This paper presents the description and evaluation framework of the SemEval-2010 Word Sense Induction & Disambiguation task, as well as the evaluation results of 26 participating systems. In this task, participants were required to induce the senses of 100 … Cited by 14.

SemEval-2007 Task 01: Evaluating WSD on cross-language information retrieval. E Agirre, O Lopez de Lacalle, B Magnini, et al. … in Multilingual and …, Springer, 2008. This paper presents a first attempt of an application-driven evaluation exercise of WSD. We used a CLIR testbed from the Cross Lingual Evaluation Forum. The expansion, indexing and retrieval strategies were fixed by the organizers. The participants had to return both the … Cited by 13.

SemEval-2010 Task 12: Parser evaluation using textual entailments. D Yuret, A Han, et al. Proceedings of the 5th International Workshop on Semantic Evaluation (SemEval-2010), 2010. Abstract: Parser Evaluation using Textual Entailments (PETE) is a shared task in the SemEval-2010 Evaluation Exercises on Semantic Evaluation. The task involves recognizing textual entailments based on syntactic information alone. PETE introduces a new parser … Cited by 16.

SemEval-2010 Task 2: Cross-lingual lexical substitution. R Sinha, D McCarthy, et al. SEW, 2009. Abstract: In this paper we describe the SemEval-2010 Cross-Lingual Lexical Substitution task, which is based on the English Lexical Substitution task run at SemEval-2007. In the English version of the task, annotators and systems had to find an alternative substitute … Cited by 12.

SWAT-MP: The SemEval-2007 systems for Task 5 and Task 14. P Katz, M Singleton, et al. Proceedings of the 4th International Workshop on Semantic Evaluations (SemEval-2007), 2007. Abstract: In this paper, we describe our two SemEval-2007 entries. Our first entry, for Task 5: Multilingual Chinese-English Lexical Sample Task, is a supervised system that decides the most appropriate English translation of a Chinese target word. This system uses a … Cited by 13.

SemEval-2010 Task 9: The interpretation of noun compounds using paraphrasing verbs and prepositions. C Butnariu, SN Kim, P Nakov, et al. Proceedings of the 5th International Workshop on Semantic Evaluation (SemEval-2010), 2010. Abstract: Previous research has shown that the meaning of many noun-noun compounds N1 N2 can be approximated reasonably well by paraphrasing clauses of the form 'N2 that … N1', where '…' stands for a verb with or without a preposition. For example, malaria mosquito … Cited by 11.

SemEval-2007 Task 11: English lexical sample task via English-Chinese parallel text. HT Ng, et al. Proceedings of the 4th International Workshop on Semantic Evaluations (SemEval-2007), 2007. Abstract: We made use of parallel texts to gather training and test examples for the English lexical sample task. Two tracks were organized for our task. The first track used examples gathered from an LDC corpus, while the second track used examples gathered from a … Cited by 10.

LCC-WSD: System description for English coarse grained all words task at SemEval 2007. A Novischi, M Srikanth, et al. Proceedings of the 4th International Workshop on Semantic Evaluations (SemEval-2007), 2007. Abstract: This document describes the Word Sense Disambiguation system used by Language Computer Corporation at the English Coarse Grained All Word Task at SemEval 2007. The system is based on two supervised machine learning algorithms: Maximum … Cited by 10.

UCB: System description for SemEval Task #4. PI Nakov, et al. Proceedings of the 4th International Workshop on Semantic Evaluations (SemEval-2007), 2007. Abstract: The UC Berkeley team participated in the SemEval 2007 Task #4, with an approach that leverages the vast size of the Web in order to build lexically-specific features. The idea is to determine which verbs, prepositions, and conjunctions are used in sentences … Cited by 10.

SemEval-2010 Task 2: Cross-lingual lexical substitution. R Mihalcea, R Sinha, et al. Proceedings of the 5th International Workshop on Semantic Evaluation (SemEval-2010), 2010. Abstract: In this paper we describe the SemEval-2010 Cross-Lingual Lexical Substitution task, where given an English target word in context, participating systems had to find an alternative substitute word or phrase in Spanish. The task is based on the English Lexical … Cited by 10.

SemEval-2007 Task 16: Evaluation of wide coverage knowledge resources. M Cuadros, G Rigau. Proceedings of the 4th International Workshop on Semantic Evaluations (SemEval-2007), 2007. Abstract: This task tries to establish the relative quality of available semantic resources (derived by manual or automatic means). The quality of each large-scale knowledge resource is indirectly evaluated on a Word Sense Disambiguation task. In particular, we … Cited by 8.

UPC: Experiments with joint learning within SemEval Task 9. L Màrquez, L Padró, M Surdeanu, et al. Proceedings of the 4th International Workshop on Semantic Evaluations (SemEval-2007), 2007 (surdeanu.name). Excerpt: Simple features: length, lexical and POS head information, strong NE information (number and type, NP count in path to strong NE), syntactic function; bag of content words inside the candidate; pattern-based features codify the sequence of tokens inside the … Cited by 8.

Metonymy resolution at SemEval I: Guidelines for participants. K Markert, et al. Proceedings of the ACL 2007 …, 2007 (nlp.cs.swarthmore.edu). This task is organised by Katja Markert, School of Computing, University of Leeds, UK (markert@comp.leeds.ac.uk) and Malvina Nissim, University of Bologna, Italy and Institute for Cognitive Science and Technology, CNR, Italy (malvina.nissim@unibo.it). Please … Cited by 7.

SemEval-2010 Task 14: Evaluation setting for word sense induction & disambiguation systems. S Manandhar, et al. DEW, 2009. Abstract: This paper presents the evaluation setting for the SemEval-2010 Word Sense Induction (WSI) task. The setting of the SemEval-2007 WSI task consists of two evaluation schemes, i.e. unsupervised evaluation and supervised evaluation. The first one evaluates … Cited by 7.

SemEval-2010 Task: Japanese WSD. M Okumura, K Shirai, K Komiya, et al. Proceedings of the 5th International Workshop on Semantic Evaluation (SemEval-2010), 2010. Abstract: An overview of the SemEval-2 Japanese WSD task is presented. It is a lexical sample task, and word senses are defined according to a Japanese dictionary, the Iwanami Kokugo Jiten. This dictionary and a training corpus were distributed to participants. The … Cited by 8.

SemEval-2007 Task 18: Arabic semantic labeling. M Diab, M Alkhalifa, S ElKateb, et al. Proceedings of the 4th International Workshop on Semantic Evaluations (SemEval-2007), 2007. Abstract: In this paper, we present the details of the Arabic Semantic Labeling task. We describe some of the features of Arabic that are relevant for the task. The task comprises two subtasks: Arabic word sense disambiguation and Arabic semantic role labeling. The task … Cited by 6.

SemEval-2010 Task 18: Disambiguating sentiment ambiguous adjectives. Y Wu, et al. Proceedings of the 5th International Workshop on Semantic Evaluation (SemEval-2010), 2010. Abstract: Sentiment ambiguous adjectives cause major difficulties for existing algorithms of sentiment analysis. We present an evaluation task designed to provide a framework for comparing different approaches in this problem. We define the task, describe the data … Cited by 6.

Duluth-WSI: SenseClusters applied to the sense induction task of SemEval-2. T Pedersen. Proceedings of the 5th International Workshop on Semantic Evaluation (SemEval-2010), 2010. Abstract: The Duluth-WSI systems in SemEval-2 built word co-occurrence matrices from the task test data to create a second order co-occurrence representation of those test instances. The senses of words were induced by clustering these instances, where the number of … Cited by 5.

USP-IBM-1 and USP-IBM-2: The ILP-based systems for lexical sample WSD in SemEval-2007. L Specia, MGV Nunes, A Srinivasan, et al. Proceedings of the 4th International Workshop on Semantic Evaluations (SemEval-2007), 2007. Abstract: We describe two systems participating of the English Lexical Sample task in SemEval-2007. The systems make use of Inductive Logic Programming for supervised learning in two different ways: (a) to build Word Sense Disambiguation (WSD) models from … Cited by 5.

LCC-SRN: LCC's SRN system for SemEval-2007 Task 4. A Badulescu, et al. Proceedings of the 4th International Workshop on Semantic Evaluations (SemEval-2007), 2007. Abstract: This document provides a description of the Language Computer Corporation (LCC) SRN system that participated in the SemEval-2007 Semantic Relation between Nominals task. The system combines the outputs of different binary and multi-class … Cited by 5.

SemEval-2010 Task 7: Argument selection and coercion. J Pustejovsky, A Rumshisky, A Plotnick, et al. Proceedings of the 5th International Workshop on Semantic Evaluation (SemEval-2010), 2010. Abstract: We describe the Argument Selection and Coercion task for the SemEval-2010 evaluation exercise. This task involves characterizing the type of compositional operation that exists between a predicate and the arguments it selects. Specifically, the goal is to … Cited by 5.

The SemEval English lexical substitution task: Results. D McCarthy, et al. Proceedings of the ACL SemEval …, 2007. Abstract: In this document we show precision (P) and recall (R) and mode precision (mode P) and mode recall (mode R) as described in our scoring documentation. In tables 1 to 4 we have ordered systems according to recall on the best task, and in tables 5 to 8 according … Cited by 3.

Classification of Semantic Relations between Nominals: Dataset for Task 4, SemEval-2007. R Girju, M Hearst, P Nakov, V Nastase, et al. 4th International Workshop on Semantic Evaluations (SemEval-2007), 2007. Cited by 2.

SemEval Task 9: The Interpretation of Noun Compounds Using Paraphrasing Verbs and Prepositions. C Butnariu, SN Kim, P Nakov, D Ó Séaghdha, et al. Proc. of the NAACL HLT …, 2009. Cited by 2.

SemEval-2007: Proceedings of the 4th International Workshop on Semantic Evaluations. E Agirre, L Màrquez, R Wicentowski (eds.). Prague, Czech Republic, June 2007. Cited by 2.

JU-SKNSB: Extended WordNet based WSD on the English all-words task at SemEval-1. SK Naskar, et al. Proc. SemEval-2007 Workshop, 2007. Abstract: This paper presents an Extended WordNet based word sense disambiguation system using a major modification to the Lesk algorithm. The algorithm tries to disambiguate nouns, verbs and adjectives. The algorithm relies on the POS-sense tagged synset … Cited by 1.

COLEUR and COLSLM: A WSD approach to multilingual lexical substitution, Tasks 2 and 3, SemEval-2010. W Guo, et al. Proceedings of the 5th International Workshop on Semantic Evaluation (SemEval-2010), 2010. Abstract: In this paper, we present a word sense disambiguation (WSD) based system for multilingual lexical substitution. Our method depends on having a WSD system for English and an automatic word alignment method. Crucially the approach relies on having parallel … Cited by 1.

HERMIT: Flexible clustering for the SemEval-2 WSI task. D Jurgens, et al. Proceedings of the 5th International Workshop on Semantic Evaluation (SemEval-2010), 2010. Abstract: A single word may have multiple unspecified meanings in a corpus. Word sense induction aims to discover these different meanings through word use, and knowledge-lean algorithms attempt this without using external lexical resources. We propose a new … Cited by 1.

SemEval-2010 Task 11: Event detection in Chinese news sentences. Q Zhou. Proceedings of the 5th International Workshop on Semantic Evaluation (SemEval-2010), 2010. Abstract: The goal of the task is to detect and analyze the event contents in real world Chinese news texts. It consists of finding key verbs or verb phrases to describe these events in the Chinese sentences after word segmentation and part-of-speech tagging, selecting … Cited by 1.

RACAI: Unsupervised WSD experiments @ SemEval-2, Task #17. R Ion, et al. Proceedings of the 5th International Workshop on Semantic Evaluation (SemEval-2010), 2010. Abstract: This paper documents the participation of the Research Institute for Artificial Intelligence of the Romanian Academy (RACAI) in Task 17, All-words Word Sense Disambiguation on a Specific Domain, of the SemEval-2 competition. We describe three … Cited by 1.

SemEval-2 Task 15: Infrequent sense identification for Mandarin text to speech systems. P Jin, et al. Proceedings of the 5th International Workshop on Semantic Evaluation (SemEval-2010), 2010. Abstract: There are seven cases of grapheme to phoneme in a text to speech system (Yarowsky, 1997). Among them, the most difficult task is disambiguating the homograph word, which has the same POS but different pronunciation. In this case, different …

OpAL: Applying opinion mining techniques for the disambiguation of sentiment ambiguous adjectives in SemEval-2 Task 18. A Balahur, et al. Proceedings of the 5th International Workshop on Semantic Evaluation (SemEval-2010), 2010. Abstract: The task of extracting the opinion expressed in text is challenging due to different reasons. One of them is that the same word (in particular, adjectives) can have different polarities depending on the context. This paper presents the experiments carried out by … Cited by 1.

SemEval-2007 Task 12: Turkish lexical sample task. Z Orhan, E Çelik, et al. Proceedings of the 4th International Workshop on Semantic Evaluations (SemEval-2007), 2007. Abstract: This paper presents the task definition, resources, and the single participant system for Task 12: Turkish Lexical Sample Task (TLST), which was organized in the SemEval-2007 evaluation exercise. The methodology followed for developing the specific linguistic … Cited by 1.

ISTI@SemEval-2 Task #8: Boosting-based multiway relation classification. A Esuli, D Marcheggiani, et al. Proceedings of the 5th International Workshop on Semantic Evaluation (SemEval-2010), 2010. Abstract: We describe a boosting-based supervised learning approach to the "Multi-Way Classification of Semantic Relations between Pairs of Nominals" task (Task #8) of SemEval-2. Participants were asked to determine which relation, from a set of nine relations plus …

Computational semantic analysis of language: SemEval-2007 and beyond. E Agirre, L Màrquez, et al. Language Resources and Evaluation, 2009 (Springer). SemEval-2007, the Fourth International Workshop on Semantic Evaluations (Agirre et al. 2007) took place on June 23-24, 2007, as a co-located event with the 45th Annual Meeting of the ACL. It was the fourth semantic evaluation exercise, continuing on from the series of … Cited by 1.

KP-Miner: Participation in SemEval-2. SR El-Beltagy, et al. Proceedings of the 5th International Workshop on Semantic Evaluation (SemEval-2010), 2010. Abstract: This paper briefly describes the KP-Miner system, which is a system developed for the extraction of keyphrases from English and Arabic documents, irrespective of their nature. The paper also outlines the performance of the system in the "Automatic Keyphrase …

FCC: Modeling probabilities with GIZA++ for Tasks #2 and #3 of SemEval-2. D Vilarino, C Balderas, D Pinto, et al. Proceedings of the 5th International Workshop on Semantic Evaluation (SemEval-2010), 2010. Abstract: In this paper we present a naïve approach to tackle the problem of cross-lingual WSD and cross-lingual lexical substitution, which correspond to Tasks #2 and #3 of the SemEval-2 competition. We used a bilingual statistical dictionary, which is calculated with …

HR-WSD: System description for all-words word sense disambiguation on a specific domain at SemEval-2010. MH Shih. Proceedings of the 5th International Workshop on Semantic Evaluation (SemEval-2010), 2010. Abstract: This document describes the knowledge-based Domain-WSD system using heuristic rules (knowledge-base). This HR-WSD system delivered the best performance (55.9%) among all Chinese systems in SemEval-2010 Task 17: All-words WSD on a …

PengYuan@PKU: Extracting infrequent sense instance with the same N-gram pattern for the SemEval-2010 Task 15. PY Liu, S Liu, SW Yu, et al. Proceedings of the 5th International Workshop on Semantic Evaluation (SemEval-2010), 2010. Abstract: This paper describes our infrequent sense identification system participating in the SemEval-2010 Task 15 on Infrequent Sense Identification for Mandarin Text to Speech Systems. The core system is a supervised system based on the ensembles of Naïve …

[Japanese-language journal article related to SemEval-2; title and authors are garbled in the source listing.] Journal of Natural Language Processing, 2011. ci.nii.ac.jp.

Semi-supervised Japanese word sense disambiguation based on two-stage classification of unlabeled data and ensemble … [in Japanese]. 2011. ci.nii.ac.jp.

Effectiveness of automatic expansion of training data for Japanese word sense disambiguation [in Japanese]. 2011. ci.nii.ac.jp.

On SemEval-2010 Japanese WSD Task. M Okumura, K Shirai, K Komiya, et al. Information and Media Technologies, 2011 (J-STAGE). An overview of the SemEval-2 Japanese WSD task is presented. The new characteristics of our task are (1) the task will use the first balanced Japanese sense-tagged corpus, and (2) the task will take into account not only the instances that have a sense in the given set but …

SemEval-2010 Task 1: Coreference resolution in multiple languages. M Recasens Potau, T Martí, M Taulé, et al. 2009. upcommons.upc.edu. This paper presents the task "Coreference Resolution in Multiple Languages" to be run in SemEval-2010 (5th International Workshop on Semantic Evaluations). This task aims to evaluate and compare automatic coreference resolution systems for three different …

Construction of context models for Word Sense Disambiguation. B Brosseau-Villeneuve, N Kando, et al. 2011. ci.nii.ac.jp.

On SemEval-2010 Japanese WSD task. M Okumura, K Shirai, et al. 2011. ci.nii.ac.jp.
