Notes from the participants’ symposium, December 7, 2018
After a whole week of intense work, the participants of the DARIAH Lexical Data Masterclass presented their projects and results, and discussed a variety of encoding and transformation issues, which are summarized here.
Specialized dictionaries
Claudia Bonsi – Encoding an Italian meta-dictionary
The project provided a good illustration of the tension between the typographic, editorial and lexical views of dictionaries encoded in TEI. Since the meta-dictionary is a dictionary about another dictionary, an editorial view preserving the original source seemed to be the best choice. Claudia Bonsi created a taxonomy of concepts (<taxonomy>) to organize the content. The dictionary is encoded as a combination of actual entries (<entry>) and descriptive content (<div>), with entries embedded wherever the description itself turns into a lexicographic description. Claudia Bonsi encoded a sample of the dictionary and created a first XSLT transform to produce an HTML visualisation. The next step may be to create a resource of all persons mentioned in the meta-dictionary.
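As a purely illustrative sketch (the element names follow the TEI dictionary module, but the content is invented and may differ from the project's actual choices), a descriptive section with an embedded entry could look like this:

<div type="section">
  <head>Discussion of an entry in the source dictionary</head>
  <p>Descriptive prose commenting on the original lexicographic treatment.</p>
  <!-- the description turns lexicographic at this point, hence an embedded entry -->
  <entry xml:id="sample-entry">
    <form type="lemma"><orth>esempio</orth></form>
    <sense><def>an invented definition, used only for illustration</def></sense>
  </entry>
</div>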
Marija Zarkovic – Spanish Legal Terms Through Time: Digitization
Marija Zarkovic analysed legal documents to extract legal terms according to the first Spanish dictionary published by the Royal Academy. The specific edition is important to consider for both lexicographic and historical-legal reasons. Marija created a large corpus (<teiCorpus>) from the various resources used in the project, each represented as a single TEI document (<TEI>). XSLT was used to produce an HTML output in which the various entries for a given lemma appear side by side across the sources and are organized along a timeline, allowing an immediate comparison of lexicographic practices over time. This representation is a good basis for the future extension of the project towards additional resources.
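The overall corpus structure, sketched here with invented placeholders rather than the project's actual metadata, would be along these lines:

<teiCorpus xmlns="http://www.tei-c.org/ns/1.0">
  <teiHeader><!-- corpus-level metadata for the whole project --></teiHeader>
  <TEI>
    <teiHeader><!-- metadata for the first source (edition, date) --></teiHeader>
    <text><body><!-- entries extracted from that source --></body></text>
  </TEI>
  <TEI>
    <teiHeader><!-- metadata for the next source --></teiHeader>
    <text><body><!-- entries extracted from that source --></body></text>
  </TEI>
</teiCorpus>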
Martin Wynne (Bodleian Library, Oxford) – Enhancing a lexicon of variant word forms in seventeenth-century French
The underlying corpus is a resource of seventeenth-century correspondence. The dictionary is intended to record variant spellings and to help tune a dedicated POS tagger for the corpus. The source data was a basic spreadsheet, which was transformed by means of an XSLT transform into a TEI representation that will allow richer content in the future (inflections, citations, etc.), something which would not have been possible with a spreadsheet. The encoding makes intensive use of variant forms (<form type="variant">) within inflected form groups (<form type="inflected">). The TEI representation is also converted back to the original format, because that is the format expected by TreeTagger. The masterclass was the opportunity to make contact with a team working on similar linguistic data at ATILF (in Nancy, contact: Gilles Souvay). The next step will be to put the LGeRM dictionary (from Nancy) in the same format to compare content and linguistic coverage.
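A minimal sketch of such a form group, with invented word forms that are not taken from the actual lexicon:

<entry>
  <form type="lemma"><orth>roi</orth></form>
  <form type="inflected">
    <!-- the attested spellings of this inflected form -->
    <form type="variant"><orth>roy</orth></form>
    <form type="variant"><orth>roi</orth></form>
  </form>
</entry>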
Joanna Aleksandra Bilińska – Slavic Corpora Terminology Dictionary
Joanna Aleksandra Bilińska has been working on a project involving corpus linguistics terminology in several Slavic languages, including Slovenian, with appropriate English equivalents. The dictionary combines several definitions to explore how the concepts are actually understood in various sources. The dictionary has been encoded in TEI with many citations (<cit type="original">) together with translations (<cit type="translation">). When the dictionary is extended to the additional languages, separate TEI documents will be maintained, with sample entries that future editors will be able to use as templates. Joanna used XPath to test the actual content and provide some quality-control mechanisms. Future work may consist in creating an independent bibliographic resource that could be referenced from the various dictionaries.
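A hedged sketch of such a citation pair, with placeholder content instead of real dictionary material:

<cit type="original">
  <quote xml:lang="sl"><!-- definition quoted from a Slovenian source --></quote>
  <bibl><!-- reference to that source --></bibl>
</cit>
<cit type="translation">
  <quote xml:lang="en"><!-- English translation of the quoted definition --></quote>
</cit>

For quality control, an XPath expression such as //cit[@type='original'][not(following-sibling::cit[@type='translation'])] could, for instance, flag originals that lack a translation (a hypothetical check, not necessarily the one Joanna used).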
From PDF to TEI using GROBID-Dictionaries
Nikolche Mickoski (Lexicographic Centre at Macedonian Academy of Sciences and Arts) – Using GROBID for OCR-ized multilingual dictionary
Mohamed Khemakhem acted as the voice of Nikolche, who had to leave early, and reported on the week's work on automatically extracting TEI dictionary entries from a multilingual dictionary in six languages (Macedonian together with Serbian, English, French, Russian and German) by means of the GROBID-Dictionaries software. Nikolche trained all the GROBID models: the “Dictionary Segmentation” model reached a 100% score, and the two further models reached 98.85% (entry segmentation: form, sense, etc.) and 93.49% (subfield analysis, e.g. lemma, examples, translation equivalents). The annotation of training data was done in the Oxygen XML Editor's Author Mode with a dedicated CSS stylesheet and RNG schema. The work was an opportunity to identify a few fixes that remain to be made to the annotation and training process. These good final results were obtained from the manual annotation of 12 pages.
Emrah Özcan (Yildiz Technical University) – Retro-digitizing Turkish dictionaries using GROBID-dictionaries and XSLT
The project was based on a dictionary produced by the Turkish Language Institute which was only available in print format. One important issue explored was identifying an OCR process that produces sufficiently reliable data from the original source, with the possibility that the original document may have to be re-OCRed. Emrah experienced some difficulties providing enough annotations for the fine-grained models.
Interestingly, the levels of the models in GROBID-Dictionaries reflect the actual levels of the TEI in Libraries recommendation.
Emrah also took another dictionary available as an SQL database and transformed it automatically into a full-blown TEI representation!
Biljana Lazić – Using GROBID-Dictionaries to encode the German-Serbian Mining Dictionary
Biljana Lazić presented how she used GROBID-Dictionaries for her bilingual legacy dictionary project. The entry-segmentation level was very good, but the next stage (field level) generated a lot of wrongly represented content. Even with 30 annotated pages there were still many issues, which means that GROBID-Dictionaries has to be extended with additional features that would help the models converge better.
Marija Gmitrović (Institute of Serbian Language) – Using GROBID-Dictionaries to encode the Dictionary of the Jablanica Dialect
The first lesson gathered by Marija Gmitrović with GROBID is that Windows 7 can be a pain :-}. The original Serbian dialect dictionary contained 322 pages of entries. The results were very good as a whole for the first level but dropped at the entry-segmentation and field levels. After more annotation (14 pages in total), the results improved and related entries were also well recognized.
The experiment was an opportunity to pin down the fine line between a sense and a related entry.
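In TEI terms, this distinction comes down to choosing between an additional <sense> inside the entry and a related entry (<re>); a minimal sketch with invented glosses:

<entry>
  <form type="lemma"><orth>kuća</orth></form>
  <sense n="1"><def>house (illustrative gloss)</def></sense>
  <!-- a derived word treated as a related entry rather than a further sense -->
  <re type="derived">
    <form type="lemma"><orth>kućica</orth></form>
    <sense><def>little house (illustrative gloss)</def></sense>
  </re>
</entry>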
SSK update
Lionel Tadjou (Inria, team ALMAnaCH) – Current state of the lexical scenario in the SSK
Lionel Tadjou presented the main features of the Standardisation Survival Kit (SSK – http://ssk.huma-num.fr) and focused on the specific scenario developed together with Charly Moerth (our keynote speaker) and Charles Riondet from Inria, with the help of several participants in the masterclass; it is available at: https://ssk.parthenos-project.eu/ssk/#/scenarios/SSK_sc_dictionaryInTei/1
TEI based dictionary projects
Boris Lehečka (Czech Language Institute) – Electronic Dictionary of Old Czech: TEI Modeling and Transforming
Old Czech covers the period from 1300 to 1500, and Boris Lehečka worked on a dictionary project started in 2006 which is still being enriched with new entries, citations, etc. The source is encoded in MS Word and was transformed into a TEI representation, then presented as an HTML output. Boris wanted to keep to the editorial view of his original source out of respect for the original lexicographers. Following a close study of the TEI Guidelines, Boris put together a TEI-based model covering all the necessary features of the source and updated a first encoding attempt from 2015. An important lesson learned from the masterclass is to document one's model by means of a TEI ODD specification, which may in turn feed new features into the TEI Guidelines as a whole.
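As an illustration only (not the project's actual specification), a minimal ODD customization pulling in the dictionary module could start like this, embedded in a regular TEI document:

<schemaSpec ident="oldCzechDictionary" xmlns="http://www.tei-c.org/ns/1.0">
  <moduleRef key="tei"/>
  <moduleRef key="header"/>
  <moduleRef key="core"/>
  <moduleRef key="textstructure"/>
  <moduleRef key="dictionaries"/>
  <!-- elementSpec and classSpec customizations documenting the model would follow -->
</schemaSpec>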
Nikola Zdravković – Deutsches Wörterbuch – Refurbished (Letter A)
Nikola Zdravković came to the masterclass to improve his XML skills, and the actual results went beyond his personal expectations! The work was carried out on the actual source data of the Deutsches Wörterbuch (with the personal and efficient support of Axel Herold from the BBAW). By means of XSLT transforms (and XPath), Nikola explored the actual content of the dictionary through a variety of presentational profiles and identified various lexicographic characteristics of it.
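As a purely illustrative sketch (assuming a TEI dictionary encoding with <entry>, <form type="lemma"> and <sense>, which may not match the Deutsches Wörterbuch's actual markup), such an exploratory transform could be as simple as listing each lemma with its number of senses:

<xsl:stylesheet version="2.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:tei="http://www.tei-c.org/ns/1.0">
  <xsl:output method="html"/>
  <xsl:template match="/">
    <ul>
      <!-- one list item per entry: lemma plus its number of senses -->
      <xsl:for-each select="//tei:entry">
        <li>
          <xsl:value-of select="tei:form[@type='lemma']/tei:orth"/>
          (<xsl:value-of select="count(tei:sense)"/> senses)
        </li>
      </xsl:for-each>
    </ul>
  </xsl:template>
</xsl:stylesheet>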
Language documentation projects
Jonas Lau – Advancements in the Àbèsàbèsì Dictionary
Jonas Lau worked jointly on the dictionary and the grammar of Àbèsàbèsì, both encoded in TEI. The dictionary data was obtained by converting the LIFT format into TEI and then searching and presenting the data by means of XQuery (on top of an eXist database). A specific difficulty was sorting the content according to its IPA representation. The trick was to use the German collation profile (!).
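Jonas did this in XQuery; purely as an analogy (not his actual code), the same collation trick expressed as an XSLT sort would look like this:

<xsl:stylesheet version="2.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:tei="http://www.tei-c.org/ns/1.0">
  <xsl:template match="/">
    <list>
      <xsl:for-each select="//tei:entry">
        <!-- sort entries by their pronunciation, using German collation rules -->
        <xsl:sort select="tei:form/tei:pron" lang="de"/>
        <item><xsl:value-of select="tei:form/tei:orth"/></item>
      </xsl:for-each>
    </list>
  </xsl:template>
</xsl:stylesheet>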
Simona Olivieri (Humboldt Research Fellow – FU Berlin) – TEI-encoding of linguistic terminology and Qurʾānic quotations in the Kitāb Sībawayhi
The project is about extracting grammatical terminology from a grammar of Classical Arabic (1500 pages so far) and linking the terms to citations (most of them from the Qurʾān). Simona Olivieri created a cluster of TEI documents corresponding to the grammar, the lexical entries and the Qurʾānic quotations. Simona discovered the strength of the TEI header for documenting the project precisely and keeping track of actual encoding choices, for instance. Finally, XSLT was used to provide a presentation of the grammar.
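A hedged sketch of how such cross-document linking could look (the identifiers, file name and gloss are invented, not taken from the project):

<entry xml:id="term-idafa">
  <form type="lemma"><orth>iḍāfa</orth></form>
  <sense>
    <def>annexation construct (illustrative gloss)</def>
    <!-- pointer into the separate TEI document holding the Qurʾānic quotations -->
    <ptr target="quotations.xml#q-002-037"/>
  </sense>
</entry>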
From legacy formats and databases to TEI
Fraser Dallachy – A TEI XML Version of the “Historical Thesaurus of English” Legacy Database
The Historical Thesaurus of English was available as a relational database with a specific model combining dictionary (semasiological) and thesaurus (onomasiological) characteristics. The workflow was based on an initial dump of the database, followed by a transformation of the corresponding flat tables into a TEI-conformant (dictionary) structure. The complexity of the source model forced Fraser to stretch the semantics of some tags, such as <form>.
Ana de Castro Salgado (FCSH, NOVA, CLUNL, Lisbon, Portugal / Academia das Ciências de Lisboa, Lisbon, Portugal) – Switching the Academy of Sciences Portuguese Dictionary to TEI Lex-0
Ana Salgado worked on the normalization of a pre-TEI export from the database in which the Academy of Sciences Portuguese Dictionary was encoded. Ana identified and documented a number of procedural steps needed to carry the project forward. The output comes with a full-fledged TEI header and a precise typology of dictionary entries. One specific working area was the representation of collocations, which are now treated as genuine entries nested within the main ones. Another issue was recording the forms affected by the Portuguese spelling reform.
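A minimal, invented sketch of a collocation nested as an entry within its main entry, following the TEI Lex-0 practice of nested entries (the exact attributes and structure used in the project may differ):

<entry xml:id="agua">
  <form type="lemma"><orth>água</orth></form>
  <sense><def>water (illustrative gloss)</def></sense>
  <!-- the collocation is itself a nested entry rather than a mere example -->
  <entry type="relatedEntry" xml:id="agua-doce">
    <form type="lemma"><orth>água doce</orth></form>
    <sense><def>fresh water (illustrative gloss)</def></sense>
  </entry>
</entry>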
Zara Kancheva and Ivaylo Radev (IICT-BAS) – BTB-WordNet. From LMF to TEI with XSLT
Zara and Ivaylo worked on transforming a legacy representation of the Bulgarian WordNet, expressed in the old LMF serialization proposal, to align it with the TEI Guidelines. Since the original model clearly reflected a semasiological view of the data, with forms pointing to one or more senses, each mapped onto a WordNet synset, the process was relatively straightforward and was carried out as an XSLT transform.
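A hedged sketch of what one converted entry might look like (the lemma is ordinary Bulgarian, but the synset identifier and gloss are placeholders, not the project's actual values):

<entry>
  <form type="lemma"><orth>куче</orth></form>
  <gramGrp><pos>noun</pos></gramGrp>
  <!-- the sense points to the corresponding WordNet synset -->
  <sense corresp="#synset-placeholder">
    <def>domestic dog (illustrative gloss)</def>
  </sense>
</entry>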
Acknowledgements
Last but not least, we should not forget all the wonderful people who made it possible to have such a successful event. Beyond my team-twin @digilex and myself, a whole group of trainers jumped in to bring their skills: Mohamed Khemakhem coordinated the GROBID-Dictionaries workshops, Jack Bowers brought his TEI skills, Lionel Tadjou and Charles Riondet went around compiling an SSK scenario, and last but not least, beyond his own TEI literacy and intimate knowledge of the Grimm dictionary, Axel was the one thanks to whom we managed to have the perfect setting (and catering) at the Berlin-Brandenburg Academy of Sciences.