
The Lexical Data Masterclass – An Overview

From 4 to 8 December 2017, 21 participants met with 8 trainers and 2 keynote speakers to work jointly on improving their digital dictionary projects.

The meeting, co-organized by the Centre Marc Bloch, DARIAH-EU, the Berlin-Brandenburg Academy of Sciences (BBAW), Inria (Paris, France) and the Belgrade Center for Digital Humanities (BCDH, Serbia), with the support of the German Ministry of Education and Research (BMBF), CLARIN, DARIAH-DE and the EU H2020 project Humanities at Scale (HaS), was conceived as a master class, i.e. a series of training and working sessions in which most of the knowledge transfer takes place through concrete work on the participants’ projects. We want to reflect here on what everyone clearly considered a very successful meeting by providing an overview of the instructional sessions and of the projects brought by the participants, as reflected in the final symposium that took place on 8 December.

A specific outcome of the meeting is the set of potential contributions to the standardisation landscape, in particular to the TEI guidelines, identified during the very lively discussions that took place over the week.

Instructional sessions

These sessions were conceived as a way to introduce specific methods that would in turn be used during the workshops.

  • Toma Tasovac, XPath for searching dictionaries
  • Laurent Romary, Overview of lexical models and introduction to the TEI dictionary chapter
  • Benoit Sagot and Axel Herold, Representing etymological processes
  • Laurent Romary, Querying and presenting TEI dictionary data with XSLT
  • Alexander Geyken, The DWDS workflow
  • Marie Puren, Data management practices and recommendations
  • Piotr Banski, The CLARIN infrastructure for lexical data
  • Toma Tasovac, Customizing oXygen for lexicographic work

An additional training thread, pursued throughout the week, was dedicated to experimenting with Grobid-dictionary to extract lexical entries from legacy print dictionaries available as PDF documents (cf. Khemakhem et alii, 2017).
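To fix ideas, here is a minimal sketch of the kind of structured entry targeted both in the TEI dictionary sessions and in the Grobid-dictionary experiments; the headword, definition and example are purely illustrative and not taken from any participant project:

    <entry xml:lang="en">
      <form type="lemma">
        <orth>house</orth>
      </form>
      <gramGrp>
        <pos>noun</pos>
      </gramGrp>
      <sense n="1">
        <def>a building in which people live</def>
        <cit type="example">
          <quote>They bought a small house by the river.</quote>
        </cit>
      </sense>
    </entry>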

Overview of project presentations in the final symposium

The dictionary projects on which the participants worked over the week are presented below in the order in which they were presented at the final symposium, grouped into sessions according to similarities in objectives or activities as we observed them during the master class. Where available, we point to the corresponding blog posts written by both trainees and trainers.

Born digital projects

Three projects were actual lexical developments that did not result from the encoding of an existing lexical resource or dictionary but fulfilled a specific role in the context of a larger linguistic or literary project.

Diehr Franziska, Text Database and Dictionary of Classic Mayan
Beyond the difficulty that Mayan characters are not described in a standard way (e.g. in ISO 10646 – Unicode), the project is characterised by its tight articulation with the underlying corpus of the Mayan language. The master class was an opportunity to explore issues related to this relation to sources, as well as to the description of etymological processes. Questions arose concerning the appropriate use of the <gloss> element, as opposed to <def> and <cit>, in eliciting senses. Another important issue was the ability to refer precisely to an underlying corpus: from this emerged a request to allow <citedRange> within <ref>.
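A minimal sketch of the kind of sense structure discussed for this project is given below; the content is a placeholder, and the <citedRange> inside <ref> reflects the requested extension rather than the current TEI content model:

    <sense>
      <gloss>sky</gloss>
      <def>the celestial sphere as referred to in the inscriptions</def>
      <cit type="example">
        <quote><!-- transliterated passage from the underlying corpus --></quote>
        <!-- requested extension: <citedRange> within <ref> for precise corpus pointers -->
        <ref target="#text-K1234">K1234, <citedRange unit="block">A3</citedRange></ref>
      </cit>
    </sense>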

Gabay Simon, Glossary of Mme de Sévigné’s correspondence
The project deals with the specific case of a lexicon of words and expressions occurring in a literary source, covering for instance etymological phenomena such as intertextual influences. One proposal that emerged from the work is the need for a @status attribute on <form>, in order to qualify the form with regard to the source (e.g. whether it has actually been observed). The issue of hosting the data on the Nakala server also emerged, for a project that is located not in France but in Switzerland.
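A minimal sketch of the proposal follows; @status on <form> is not part of the current TEI Guidelines, and the forms shown are invented for the purpose of illustration:

    <entry>
      <!-- proposed attribute: @status qualifies the form with regard to the source -->
      <form type="lemma" status="attested">
        <orth>exemple</orth>
      </form>
      <form type="variant" status="inferred">
        <orth>essample</orth>
      </form>
    </entry>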

Batinic Dolores, Lexicographic resource with information about salient terms in everyday spoken German
The work on a dictionary of spoken German was an opportunity to relate the actual TEI-based lexicographical representation to complementary standards, in particular ISO 24617-2 (see Bunt et alii, 2010) for communicative functions. This provided a first opportunity to discuss a better formalisation of the TEI <usg> element for such applications. The project also applied (Mörth et alii, 2015) for the representation of frequency and ranking information from the corpus, as well as (Ide et alii, 2000) for the inheritance of communicative functions in lexical entries. See https://digilex.hypotheses.org/286
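The discussion around <usg> can be illustrated with a small sketch; the @subtype attribute depends on the proposed membership of <usg> in att.typed, and the value taken from ISO 24617-2 (“inform”) is only one possible mapping:

    <sense>
      <def>used when the speaker provides a piece of information</def>
      <!-- proposed: make <usg> a member of att.typed so that @subtype becomes available -->
      <usg type="communicativeFunction" subtype="inform">typically realised as a declarative turn</usg>
    </sense>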

Specific resources (manuscript, etymology, dialectology)

Several projects were related to special lexicographic or applicative objectives, which allowed the group to go in depth into precise dictionary encoding in TEI.

Kovalenko Kira, Russian Manuscript Lexicons: Digital Representation
The project aims at transforming manuscript lexical descriptions into a model as conformant to the TEI guidelines as possible. The relation to the source material (original layout) is important, and therefore the use of the more flexible <entryFree> element has been considered. All resources are intended to be made available in open access.
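A minimal sketch of how <entryFree> can stay closer to the manuscript layout (line breaks, loose ordering of components) is given below; the content is a placeholder:

    <entryFree>
      <form><orth><!-- headword as written in the manuscript --></orth></form>
      <lb/><def><!-- explanation, possibly running over several manuscript lines --></def>
      <lb/><note><!-- marginal or interlinear comment --></note>
    </entryFree>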

Tadjou Lionel, Lexical content for the Yemba language
In the context of a larger project producing learning resources for Yemba as a first language (the language being superseded by a majority language, in this case French), Lionel Tadjou experimented with the generation of a TEI-conformant dictionary on the basis of an existing online resource by means of Java conversion routines.

Molochieva Zarina, Bilingual talking Luruuli/Lunyara-English dictionary
The project raises the general question of the re-use of less structured lexical information, as it is available from legacy descriptions produced with tools such as Toolbox. It also aims at producing a glossary of reference data categories that could be made widely available.

Kramaric Martina, Guidelines for creating a data model for encoding the Miklosic Old Slavonic – Greek / Latin Dictionary
With a focus on etymological descriptions, an important component of the work carried out during the master class was dedicated to the description of complex sources (abbreviations, implicit etymological processes, etc.). The work also raised the issue of a better model for the <cit> element, so that a variety of text elements could be used as alternatives to <quote>. The introduction of a model class has been suggested for this purpose.
Source: Miklosic Old Slavonic – Greek / Latin Dictionary (Lexicon Palaeoslovenico-Graeco-Latinum), published in 1865 by F. Miklosic, in TEI
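As a minimal sketch, the current practice and the suggested evolution of <cit> can be contrasted as follows; the spoken variant with <u> would only become valid once a model class governs the content of <cit>:

    <!-- current practice: the quoted material goes into <quote> -->
    <cit type="example">
      <quote>an example sentence from the source</quote>
      <bibl>abbreviated source reference</bibl>
    </cit>

    <!-- suggested evolution: other text elements, e.g. <u> for spoken data, in place of <quote> -->
    <cit type="example">
      <u>an utterance transcribed from a spoken source</u>
      <bibl>recording reference</bibl>
    </cit>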

Petrovic Snezana & Ana Tesic, Etymological Dictionary of the Serbian Language
The work on the Etymological Dictionary of the Serbian Language was an opportunity to address etymological issues at large, and in particular to identify how to represent folk etymology, for which there is no standardised category at present. This was seen as a possible contribution to the new part on etymology in the context of the revision of ISO 24613:2008 as a multi-part standard. See Encoding the Etymological Dictionary of the Serbian Language
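A minimal sketch of what such an encoding could look like is given below; the @type value “folk” is not a standardised category (that is precisely the gap identified), @type on <etym> may require a customisation, and the lexical material is a placeholder:

    <entry>
      <form type="lemma"><orth><!-- headword --></orth></form>
      <!-- assumes @type on <etym> or an equivalent customisation -->
      <etym type="folk">
        Popularly associated with <mentioned><!-- similar-sounding word --></mentioned>,
        although the attested source is a <lang>Turkish</lang> etymon <mentioned><!-- etymon --></mentioned>.
      </etym>
    </entry>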

Savvidou Paraskevi, Digitization of a part of the ‘Great Dictionary of the Greek Language’
The project is at the intersection of word-formation study, corpus linguistics and lexicography. It had quite a strong etymological orientation in relation to the word-formation rules of Greek, with emphasis on derivation and compounding. A specific lexicographic (and encoding) issue was raised regarding morphemes and how far they should be considered acceptable dictionary entries.
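One possible way of treating a morpheme as a first-class entry is sketched below; whether such entries belong in the dictionary at all was precisely the open question, and the prefix shown is only an illustration:

    <entry type="affix">
      <form type="prefix"><orth>para-</orth></form>
      <gramGrp><pos>prefix</pos></gramGrp>
      <sense>
        <def>beside, alongside; also marks deviation from a norm</def>
      </sense>
    </entry>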

Retro conversion from existing sources

PDF sources

Three projects dedicated their activities to exploring the possibilities of automatically extracting structured lexical entries from legacy “paper” dictionaries. To this purpose, two technical settings were prepared by the trainers:

  • Mohamed Khemakhem made a self-installing version of Grobid-dictionary available on Docker, so that participants could run their own tests on their machines;
  • Axel Herold prepared a configuration of the author mode of the oXygen XML editor to facilitate the creation of training data for Grobid.

Lindemann David, Linking Basque Lexical Resources
David wrote an introduction to the Grobid-dictionary experiment in a blog post. See GROBID Dictionaries: Experiments with the General Basque Dictionary

Koutsombogera Maria & Vakalopoulou Anna, English-Greek Dictionary
The experiment on this dictionary was an opportunity to address several issues related to multilinguality and character encoding. It also confirmed that a more refined model for <sense> should be one of the main priorities for the further development of the tool.

Wahl Sabine, Dictionary of Bavarian Dialects in Austria
Although the dictionary is an onomasiological form of lexical content, the application of Grobid-dictionary produced very useful results. The one issue that will require some evolution of the software is the presence of column numbering, instead of or in addition to page numbering, in the source dictionary, which was not covered by the existing machine learning models.

XML sources

Johannsson Ellert & Battista Simonetta, Dictionary of Old Norse Prose
The project corresponds to the typical case of a dictionary that was developed as a database from its initial conception. During the master class, the work thus focused on the conversion of an export from the database (available in a proprietary XML format) into a TEI-conformant output. Since a reference corpus of Old Norse prose also exists, we also experimented with the automatic extraction of examples from the corpus, as described in https://digilex.hypotheses.org/276.
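A minimal sketch of such a conversion is given below; the source element names (article, headword, definition) are hypothetical and do not reproduce the actual export format of the Dictionary of Old Norse Prose:

    <xsl:stylesheet version="2.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
        xmlns="http://www.tei-c.org/ns/1.0">
      <!-- map one proprietary <article> record onto a TEI <entry> -->
      <xsl:template match="article">
        <entry>
          <form type="lemma">
            <orth><xsl:value-of select="headword"/></orth>
          </form>
          <xsl:for-each select="definition">
            <sense>
              <def><xsl:value-of select="."/></def>
            </sense>
          </xsl:for-each>
        </entry>
      </xsl:template>
    </xsl:stylesheet>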

Emrah Özcan, Turkish Dictionary
The work of the week was to transform an existing Turkish dictionary, available in a legacy proprietary XML format, into a TEI-conformant representation using XSLT. The complete transformation was achieved over the week! The main issue that remains to be solved concerns the actual rights attached to the material, so that it can be disseminated openly. See https://digilex.hypotheses.org/275

Rueter Jack & Hamalainen Mika, Work with multilingual facilitation through XML-based dictionaries of Uralic minority languages
This project has all the characteristics of a heritage dictionary project, requiring the transformation of a legacy format, available as an XML representation, coupled with a corpus from which examples could be derived (the work described in https://digilex.hypotheses.org/276 was also reused here). The additional feature that had to be dealt with was the reference to audio recordings of native speakers.
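One possible encoding of such pointers is sketched below, assuming a schema customisation that tolerates <media> within the form block; the file path is invented:

    <form type="lemma">
      <orth><!-- headword --></orth>
      <pron notation="ipa"><!-- phonetic transcription --></pron>
      <!-- assumed customisation: pointer to the native-speaker recording inside <form> -->
      <media mimeType="audio/wav" url="recordings/entry-0001.wav"/>
    </form>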

Mechura Michal Boleslav, Retro-digitization of three dictionaries of the Irish language
Not only did the project lead to the definition of a fully TEI-compliant representation of the existing source dictionaries, coupled with an XSLT transformation, but Michal Boleslav Mechura also initiated the creation of a reference XSLT stylesheet for displaying TEI dictionaries as highly readable HTML files.
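A minimal sketch of such a display stylesheet follows; it is not Michal Boleslav Mechura’s actual stylesheet, and the HTML class names are arbitrary:

    <xsl:stylesheet version="2.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
        xmlns:tei="http://www.tei-c.org/ns/1.0">
      <xsl:output method="html" indent="yes"/>
      <!-- one HTML block per TEI entry -->
      <xsl:template match="tei:entry">
        <div class="entry">
          <span class="headword">
            <xsl:value-of select="tei:form[@type='lemma']/tei:orth"/>
          </span>
          <xsl:apply-templates select="tei:sense"/>
        </div>
      </xsl:template>
      <xsl:template match="tei:sense">
        <p class="sense"><xsl:value-of select="tei:def"/></p>
      </xsl:template>
    </xsl:stylesheet>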

Contribution to the standardisation landscape

The following items were identified during the master class as action points to be contemplated in order to bring about changes to the standardisation landscape (ISO or TEI):

  • TEI ticket to describe the content model of <cit> on the basis of a model class for possible text elements (not just <q> or <quote>, but also <s>, <u> etc.)
  • For etymological description, and following up on Bowers and Romary 2017, we need a TEI ticket for having <oRef> in <etym>
  • TEI ticket to have @status on <form> (cf. Simon Gabay)
  • Have a category “folk etymology” in the future ISO 24613-3 project (etymology/diachrony)
  • TEI ticket: allow <citedRange> within <ref> to account for precise references within a corpus (cf. Maya project)
  • TEI ticket: make <usg> member of att.typed to allow the availability of @subtype

Discussion points and further steps

The list above already shows how successful the lexical master class has been in contributing to the assessment of the standardisation landscape, so that it better reflects the constraints occurring in concrete lexical projects. Many more topics were raised, and I would only mention here the importance of raising awareness of data management issues in order to increase the amount of openly available resources. A few aspects of this topic can be quickly summarised here:

  • Which re-use conditions bear upon the various dictionaries, and how can we ensure a wide dissemination of the results, including the encoded sources?
  • Which licence should be attached to such resources, and how can we ensure a fluid dissemination when the recommended Creative Commons CC-BY licence is not applicable?
  • How to go towards a network of available lexica in the context of stable hosting capacities offered by European infrastructures such as DARIAH and CLARIN?
  • How to build up infrastructures that also allow the correction, improvement or enrichment of existing resources?
  • How to deploy a reference subset of the TEI guidelines (TEI Lex) that would serve as a target deployment format readily usable by a variety of tools (presentation, query, hosting/LTA)?

At the end of the week, it was clear to all that the format of a master class is particularly appropriate for lexical projects, where comparing practices and improving one’s own technical skills directly contribute to the quality of the end result. A series of Grobid-dictionary workshops has already been planned for 2018.

References
Jack Bowers, Laurent Romary. Deep encoding of etymological information in TEI. Journal of the Text Encoding Initiative, TEI Consortium, 2017. https://jtei.revues.org/1643, doi:10.4000/jtei.1643, hal-01296498v2.
Harry Bunt, Jan Alexandersson, Jean Carletta, Jae-Woong Choe, Alex Chengyu Fang, et al. Towards an ISO Standard for Dialogue Act Annotation. Seventh International Conference on Language Resources and Evaluation (LREC’10), May 2010, La Valletta, Malta. inria-00544997.

