Use this URL to cite this publication: https://cris.mruni.eu/cris/handle/007/48029
Self-Evaluation of Generative AI Prompts for Linguistic Linked Open Data Modelling in Diachronic Analysis
Type of publication
Straipsnis konferencijos medžiagoje Scopus duomenų bazėje / Article in conference proceedings in Scopus database (P1a2)
Author(s)
Author | Affiliation
---|---
Armaselu, Florentina |
Title [en]
Self-Evaluation of Generative AI Prompts for Linguistic Linked Open Data Modelling in Diachronic Analysis
Publisher
European Language Resources Association (ELRA), 2024
Date Issued
2024
Extent
p. 86-91.
Series/Report no.
LREC-COLING; 2024
Abstract
This article addresses the question of evaluating generative AI prompts designed for specific tasks such as linguistic linked open data modelling and the refinement of word embedding results. The prompts were created to assist the pre-modelling phase in the construction of LLODIA, a linguistic linked open data model for diachronic analysis. We present a self-evaluation framework based on the method known in the literature as LLM-Eval. As a proof of concept, the discussion includes prompts related to the RDF-XML conception of the model, as well as neighbour list refinement, dictionary alignment and contextualisation for the term revolution in French, Hebrew and Lithuanian.
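The self-evaluation approach described in the abstract follows the general LLM-Eval idea of scoring an output along several dimensions with a single evaluation prompt. The sketch below only illustrates that pattern; it is not the prompts or framework actually used for LLODIA, and the criteria, the 0-5 scale and the `call_llm` stub are assumptions introduced here for illustration.

```python
"""Minimal sketch of an LLM-Eval-style self-evaluation step (illustrative only)."""
import json
from textwrap import dedent

# Hypothetical evaluation dimensions for a prompt/output pair produced during
# LLOD pre-modelling (e.g. an RDF-XML draft or a refined neighbour list).
CRITERIA = ["relevance", "accuracy", "completeness", "consistency"]

def build_eval_prompt(task_prompt: str, model_output: str) -> str:
    """Compose one scoring prompt asking the evaluator LLM to return a single
    JSON object with an integer score (0-5) per criterion."""
    return dedent(f"""\
        You are evaluating the output of a generative AI prompt.
        Task prompt:
        {task_prompt}

        Output to evaluate:
        {model_output}

        Rate the output on {', '.join(CRITERIA)} using integers from 0 to 5.
        Answer with a single JSON object, e.g. {{"relevance": 4, ...}}.
        """)

def parse_scores(raw_reply: str) -> dict[str, int]:
    """Extract the JSON object from the evaluator's reply."""
    start, end = raw_reply.find("{"), raw_reply.rfind("}") + 1
    return json.loads(raw_reply[start:end])

def call_llm(prompt: str) -> str:
    """Stub for whatever chat-completion client is available; returns a canned
    reply so the sketch runs as-is."""
    return '{"relevance": 4, "accuracy": 3, "completeness": 4, "consistency": 5}'

if __name__ == "__main__":
    eval_prompt = build_eval_prompt(
        task_prompt="Model the diachronic senses of 'révolution' as RDF-XML.",
        model_output="<rdf:RDF>...</rdf:RDF>",
    )
    scores = parse_scores(call_llm(eval_prompt))
    print(scores)  # e.g. {'relevance': 4, 'accuracy': 3, ...}
```

In practice the stub would be replaced by a real model call, and the criteria tailored to LLOD-specific requirements such as RDF well-formedness or dictionary alignment quality.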
Part Of
DLnLD: Deep Learning and Linked Data (DLnLD) @ LREC-COLING 2024 : Workshop Proceedings, 21 May 2024, Torino, Italia / editors Gilles Sérasset, Hugo Gonçalo Oliveira and Giedre Valunaite Oleskeviciene.
Type of document
text::conference output::conference proceedings::conference paper
ISBN (of the container)
9782493814166
ISSN (of the container)
2951-2093
2522-2686
SCOPUS
2-s2.0-85195220486
eLABa
198363416
Coverage Spatial
Italija / Italy (IT)
Language
Anglų / English (en)
Bibliographic Details
18
Date Reporting
2024
Access Rights
Atviroji prieiga / Open Access
Journal | CiteScore | SNIP | SJR | Year | Quartile
---|---|---|---|---|---
Proceedings - International Conference on Computational Linguistics, COLING | 6.6 | 1.702 | 0 | 2024 | Q1