Please use this link to cite or link to this publication: https://hdl.handle.net/10419/309005
Year of publication:
2023
Citation:
[Journal:] International Journal on Digital Libraries [ISSN:] 1432-1300 [Volume:] 25 [Issue:] 2 [Publisher:] Springer [Place:] Berlin, Heidelberg [Year:] 2023 [Pages:] 273-285
Publisher:
Springer, Berlin, Heidelberg
Abstract:
Due to the growing number of scholarly publications, finding relevant articles is becoming increasingly difficult. Scholarly knowledge graphs can be used to organize the scholarly knowledge presented within those publications and represent it in machine-readable formats. Natural language processing (NLP) provides scalable methods to automatically extract knowledge from articles and populate scholarly knowledge graphs. However, NLP extraction is generally not sufficiently accurate and, thus, fails to generate high-quality, high-granularity data. In this work, we present TinyGenius, a methodology to validate NLP-extracted scholarly knowledge statements using crowdsourced microtasks. TinyGenius is employed to populate a paper-centric knowledge graph using five distinct NLP methods. We extend our previous work on the TinyGenius methodology in several ways. Specifically, we discuss the NLP tasks in more detail and include an explanation of the data model. Moreover, we present a user evaluation in which participants validate the generated NLP statements. The results indicate that employing microtasks for statement validation is a promising approach, despite varying participant agreement across the different microtasks.
Keywords:
Crowdsourcing microtasks
Knowledge graph validation
Scholarly knowledge graphs
User interface evaluation
Persistent identifier of the first edition:
Creative Commons License:
cc-by
Document type:
Article
Document version:
Published Version

Publications in EconStor are protected by copyright.