
From Àbèsàbèsì to XPath: An Overview of the Lexical Data Masterclass 2018

Notes from the participants’ symposium, December 7, 2018

After a whole week of intense work, the participants of the DARIAH Lexical Data Masterclass presented their projects and results, and discussed a variety of encoding and transformation issues, which are summarized here.

Specialized dictionaries

Claudia Bonsi – Encoding an Italian meta-dictionary

The project provided a good illustration of the tension between the typographic, editorial and lexical views of dictionaries encoded in TEI. Since a meta-dictionary is a dictionary about another dictionary, an editorial view preserving the original source seemed to be the best choice. Claudia Bonsi created a taxonomy of concepts to organize the content (<taxonomy>). The dictionary is encoded as a combination of actual entries (<entry>) and descriptive content (<div>), with entries embedded wherever the description is itself conceived as a lexicographic description. Claudia encoded a sample of the dictionary and created a first XSLT transform to produce an HTML visualisation. The next step may be to create a resource of all the persons mentioned in the meta-dictionary.
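
As an illustration, an embedded entry of this kind could look like the following minimal sketch (with invented content; the element choices beyond <taxonomy>, <entry> and <div> are assumptions, not Claudia’s actual encoding):

<div type="commentary">
   <p>The source dictionary discusses the word at length…</p>
   <entry>
      <form type="lemma">
         <orth>parola</orth>
      </form>
      <sense>
         <def>the lexicographic description embedded within the editorial discussion</def>
      </sense>
   </entry>
</div>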

Marija Zarkovic – Spanish Legal Terms Through Time: Digitization

Marija Zarkovic analysed legal documents to extract legal terms recorded in the first Spanish dictionary published by the Royal Academy. The actual edition consulted is important to consider, both for lexicographic and historical-legal reasons. Marija created a large corpus (<teiCorpus>) from the various resources used in the project, each represented as a single TEI document (<TEI>). XSLT was used to produce an HTML output in which the various entries for a given lemma can be seen side by side, organized along a timeline, thus allowing an immediate comparison of lexicographic practices over time. This representation is a good basis for the future extension of the project towards additional resources.
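
Schematically, such a corpus of dictionary snapshots could be organized as follows (a sketch with placeholder comments rather than Marija’s actual files):

<teiCorpus xmlns="http://www.tei-c.org/ns/1.0">
   <teiHeader><!-- corpus-level metadata for the whole project --></teiHeader>
   <TEI>
      <teiHeader><!-- metadata for the earliest dictionary edition --></teiHeader>
      <text><body><entry><!-- entries from this source --></entry></body></text>
   </TEI>
   <TEI>
      <teiHeader><!-- metadata for a later edition --></teiHeader>
      <text><body><entry><!-- entries from this source --></entry></body></text>
   </TEI>
</teiCorpus>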

Martin Wynne (Bodleian Library, Oxford) – Enhancing a lexicon of variant word forms in seventeenth-century French

The underlying corpus is a collection of seventeenth-century French correspondence. The dictionary is intended to record variant spellings and to help tune a dedicated POS tagger for the corpus. The source data was a basic spreadsheet, which was transformed by means of an XSLT transform into a TEI representation that will allow richer content in the future (inflections, citations etc.) — something which would not have been possible with a spreadsheet. The encoding makes intensive use of variant forms (<form type="variant">) within inflected form groups (<form type="inflected">). The TEI representation is also converted back to the original format, since that is the format expected by TreeTagger. The masterclass was the opportunity to make contact with a team working on similar linguistic data at ATILF (in Nancy, contact: Gilles Souvay). The next step will be to put the LGeRN dictionary (from Nancy) in the same format in order to compare content and linguistic coverage.
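
An entry following this pattern might look like the sketch below (the seventeenth-century spellings are given for illustration only):

<entry>
   <form type="lemma">
      <orth>être</orth>
   </form>
   <form type="inflected">
      <orth>était</orth>
      <form type="variant">
         <orth>estoit</orth>
      </form>
      <form type="variant">
         <orth>étoit</orth>
      </form>
   </form>
</entry>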

Joanna Aleksandra Bilińska – Slavic Corpora Terminology Dictionary

Joanna Aleksandra Bilińska has been working on a project involving corpus linguistics terminology in several Slavic languages, including Slovenian, with appropriate English equivalents. The dictionary combines several definitions to explore how the concepts are actually understood in various sources. It has been encoded in TEI with numerous citations (<cit type="original">) together with their translations (<cit type="translation">). When the dictionary is extended to additional languages, separate TEI documents will be maintained, with sample entries that future editors will be able to use as templates. Joanna used XPath to test the actual content and provide some quality-control mechanisms. Future work may consist in creating an independent bibliographic resource that could be referenced from the various dictionaries.
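
The citation-plus-translation pattern looks roughly as follows (a minimal sketch with invented Slovenian content, not an actual entry from the project):

<entry>
   <form type="lemma">
      <orth>korpus</orth>
   </form>
   <sense>
      <cit type="original">
         <quote xml:lang="sl">zbirka besedil v elektronski obliki</quote>
         <bibl>source reference</bibl>
      </cit>
      <cit type="translation">
         <quote xml:lang="en">a collection of texts in electronic form</quote>
      </cit>
   </sense>
</entry>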

From PDF to TEI using GROBID-Dictionaries

Nikolche Mickoski (Lexicographic Centre at the Macedonian Academy of Sciences and Arts) – Using GROBID-Dictionaries for an OCR-ized multilingual dictionary

Mohamed Khemakhem was the voice of Nikolche, who had to leave early, and reported on the work carried out over the week on automatically extracting TEI dictionary entries from a multilingual dictionary in six languages (Macedonian together with Serbian, English, French, Russian and German) by means of the GROBID-Dictionaries software. Nikolche trained all the models of GROBID-Dictionaries. The “Dictionary Segmentation” model reached a 100% score, and the two further models reached 98.85% (entry segmentation: form, sense etc.) and 93.49% (subfield analysis, e.g. lemma, examples, translation equivalents). The annotation of training data was done in the Oxygen XML Editor’s Author Mode with a dedicated CSS stylesheet and RNG schema. The work was an opportunity to identify a few fixes that remain to be done on the annotation and training process. These good final results correspond to the manual annotation of 12 pages.

Emrah Özcan (Yildiz Technical University) – Retro-digitizing Turkish dictionaries using GROBID-dictionaries and XSLT

The project was based on a dictionary produced by the Turkish Language Institute which was only available in print. One important issue explored was how to set up an OCR process producing sufficiently reliable data from the original source, with the possibility that the original document would have to be re-OCRed. Emrah experienced some difficulties providing enough annotations for the fine-grained models.

Interestingly, the levels of the models in GROBID-Dictionaries reflect the actual levels of the TEI in Libraries recommendation.

Emrah also took another dictionary, available as an SQL database, and transformed it automatically into a full-blown TEI representation!

Biljana Lazić – Using GROBID-Dictionaries to encode the German-Serbian Mining Dictionary

Biljana Lazić presented how she used GROBID-Dictionaries for her bilingual legacy dictionary project. The entry-segmentation level performed very well, but the next stage (field level) generated a lot of wrongly represented content. Even with 30 annotated pages there were still many issues, which suggests that GROBID-Dictionaries needs additional features to help its models converge better.

Marija Gmitrović (Institute of Serbian Language) – Using GROBID-Dictionaries to encode the Dictionary of the Jablanica Dialect

The first lesson Marija Gmitrović learned about GROBID-Dictionaries is that Windows 7 can be a pain :-}. The original Serbian dialect dictionary contained 322 pages of entries. The results were very good as a whole for the first level, but dropped at the entry-segmentation and field levels. After more annotation (14 pages in total), the results improved and related entries were also well recognized.

The experiment was also an opportunity to pin down the subtle difference between a sense and a related entry.

SSK update

Lionel Tadjou (Inria, team ALMAnaCH) – Current state of the lexical scenario in the SSK

Lionel Tadjou presented the main features of the Standardisation Survival Kit (SSK – http://ssk.huma-num.fr) and focused on the specific scenario that was developed together with Charly Moerth (our keynote speaker) and Charles Riondet from Inria, with the help of several participants of the masterclass. The scenario is available at: https://ssk.parthenos-project.eu/ssk/#/scenarios/SSK_sc_dictionaryInTei/1

TEI based dictionary projects

Boris Lehečka (Czech Language Institute) – Electronic Dictionary of Old Czech: TEI Modeling and Transforming

Old Czech covers the period from 1300 to 1500, and Boris Lehečka worked on a dictionary project, started in 2006, which is now being enriched with new entries, citations etc. The source is encoded in MS Word and was transformed into a TEI representation, then presented as an HTML output. Boris wanted to keep to the editorial view of his original source out of respect for the original lexicographers. Following a precise study of the TEI Guidelines, Boris put together a TEI-based model that covers all the necessary features of the source and updated a first encoding attempt from 2015. The important lesson learned from the masterclass is to document one’s model by means of a TEI ODD specification; this may in turn suggest new features for the TEI Guidelines as a whole.
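
For readers unfamiliar with ODD, a minimal specification selecting just the TEI modules needed for a dictionary could start as follows (a generic sketch, not Boris’s actual customization):

<schemaSpec ident="oldCzechDictionary" start="TEI">
   <moduleRef key="tei"/>
   <moduleRef key="header"/>
   <moduleRef key="core"/>
   <moduleRef key="textstructure"/>
   <moduleRef key="dictionaries"/>
</schemaSpec>

Processed with the TEI stylesheets (or Roma), such a specification yields both a validation schema and human-readable documentation of the model.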

Nikola Zdravković – Deutsches Wörterbuch – Refurbished (Letter A)

Nikola Zdravković came to the masterclass to improve his XML skills, and the actual results went beyond his personal expectations! The work was carried out on the actual source data of the Deutsches Wörterbuch (with the personal and efficient support of Axel Herold from the BBAW). By means of XSLT transforms (and XPath), Nikola explored the actual content of the dictionary through a variety of presentational profiles and identified various lexicographic characteristics thereof.

Language documentation projects

Jonas Lau – Advancements in the Àbèsàbèsì Dictionary

Jonas Lau worked jointly on the dictionary and the grammar of Àbèsàbèsì, both encoded in TEI. The dictionary data was obtained by converting the LIFT format into TEI; the data is then searched and presented by means of XQuery (on top of an eXist database). A specific difficulty was sorting the content according to its IPA representation. The trick was to use the German collation profile (!).
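
Expressed in XSLT for consistency with the rest of this post (Jonas actually worked in XQuery), the trick amounts to something like the following sketch, where the path to the pronunciation element is an assumption:

<xsl:for-each select="//entry">
   <!-- sort the IPA strings using the German collation profile -->
   <xsl:sort select="form/pron" lang="de"/>
   …
</xsl:for-each>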

Simona Olivieri (Humboldt Research Fellow – FU Berlin) – TEI-encoding of linguistic terminology and Qurʾānic quotations in the Kitāb Sībawayhi

The project is about extracting grammatical terminology from a grammar of classical Arabic (1,500 pages so far) and linking it to citations (most of them from the Qurʾān). Simona Olivieri created a cluster of TEI documents corresponding to the grammar, the lexical entries and the Qurʾānic quotations. Simona discovered the strength of the TEI header for documenting the project precisely and keeping track of actual encoding choices, for instance. Finally, XSLT was used to provide a presentation of the grammar.
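
A plausible linking pattern between the grammar and the terminology documents, sketched here with invented identifiers and file names:

<!-- in the grammar document -->
<p>… <term ref="terminology.xml#t-fail">fāʿil</term> …</p>

<!-- in the terminology document -->
<entry xml:id="t-fail">
   <form type="lemma">
      <orth>fāʿil</orth>
   </form>
   <sense>
      <def>agent of the verbal action</def>
   </sense>
</entry>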

From legacy formats and databases to TEI

Fraser Dallachy – A TEI XML Version of the “Historical Thesaurus of English” Legacy Database

The Historical Thesaurus of English was available as a relational database with a specific model combining dictionary (semasiological) and thesaurus (onomasiological) characteristics. The workflow was based on the provision of an initial dump of the database, followed by a transformation of the corresponding flat tables into a TEI conformant (dictionary) structure. The complexity of the source model forced Fraser to stretch the semantics of some tags such as <form>.

Ana de Castro Salgado (FCSH, NOVA, CLUNL, Lisbon, Portugal / Academia das Ciências de Lisboa, Lisbon, Portugal) – Switching the Academy of Sciences Portuguese Dictionary to TEI Lex-0

Ana Salgado worked on the normalization of a pre-TEI export from the database in which the Academy of Sciences Portuguese Dictionary is maintained. Ana identified and documented a number of procedural steps needed to carry the project forward. The output comes with a full-fledged TEI header and a precise typology of dictionary entries. One specific working area was the representation of collocations, which are now treated as real entries nested within the main ones. Another issue was recording the forms that have been affected by the Portuguese spelling reform.
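
In TEI Lex-0, such a collocation can be encoded as an entry nested inside its parent entry, along the following lines (a sketch with invented Portuguese content; the @type value is an assumption, since the exact typology is project-specific):

<entry xml:id="casa">
   <form type="lemma">
      <orth>casa</orth>
   </form>
   <sense><!-- senses of the headword --></sense>
   <entry type="relatedEntry" xml:id="casa-de-banho">
      <form type="lemma">
         <orth>casa de banho</orth>
      </form>
      <sense><!-- sense of the collocation --></sense>
   </entry>
</entry>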

Zara Kancheva and Ivaylo Radev (IICT-BAS) – BTB-WordNet: From LMF to TEI with XSLT

Zara and Ivaylo worked on transforming a legacy representation of the Bulgarian WordNet, expressed in the old LMF serialization proposal, in order to align it with the TEI Guidelines. Since the original model clearly reflected a semasiological view of the data, with forms pointing to one or more senses and each sense mapped onto a WordNet synset, the process was relatively straightforward and was carried out as an XSLT transform.
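
The core of such a transform might look like the sketch below, assuming the <feat att="…" val="…"/> style of the old LMF DTD serialization (the actual element and attribute names in the BTB-WordNet dump may differ):

<xsl:template match="LexicalEntry">
   <entry>
      <form type="lemma">
         <orth><xsl:value-of select="Lemma/feat[@att = 'writtenForm']/@val"/></orth>
      </form>
      <!-- one TEI sense per LMF sense, preserving the synset mapping -->
      <xsl:for-each select="Sense">
         <sense corresp="#{@synset}">
            <def><xsl:value-of select="Definition/feat[@att = 'text']/@val"/></def>
         </sense>
      </xsl:for-each>
   </entry>
</xsl:template>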

Acknowledgements

Last but not least, we should not forget all the wonderful people who made it possible to have such a successful event. Beyond my team-twin @digilex and myself, a whole group of trainers jumped in to bring their skills: Mohamed Khemakhem coordinated the GROBID-Dictionaries workshops, Jack Bowers brought his TEI skills, Lionel Tadjou and Charles Riondet went around compiling an SSK scenario, and finally, beyond his own TEI literacy and intimate knowledge of the Grimm dictionary, Axel Herold was the one thanks to whom we managed to have the perfect setting (and catering) at the Berlin Brandenburg Academy of Sciences.

The Lexical Data Masterclass is back!

Co-organized by DARIAH-EU, the Berlin Brandenburg Academy of Sciences (BBAW), Inria and the Belgrade Center for Digital Humanities, with the support of the French Ministry of Higher Education and Research (MESRI), CLARIN and the European Lexicographic Infrastructure (ELEXIS), the 2018 edition of the Lexical Data Masterclass will take place in Berlin at the BBAW from 3 to 7 December.

LexMC2018 will bring 20 advanced trainees together with experts to share experiences, methods and techniques for the creation, management and use of lexical data.

The masterclass will cover a number of topics ranging from general models for lexical content and TEI-based representation of lexical data to working efficiently with XML editors. The participants will have a chance to attend different sessions, consult with experts on their own dictionary projects and get to know and test TEI Lex-0, a newly proposed baseline encoding for lexicographic data.

The masterclass will also feature keynotes by Karlheinz Moerth, Director of the Austrian Centre for Digital Humanities at the Austrian Academy of Sciences and Benoit Sagot, head of the ALMAnaCH research team at Inria.

Potential applicants should submit a short proposal presenting their background and interest in the field together with a description of a concrete project involving lexical data that they would like to pursue during the masterclass.

Participation is free of charge. Travel costs and accommodation will be covered for all participants up to a maximum of 600€.

Applications should be made via the Lexical Master Class website: https://lexmc18.sciencesconf.org

Application deadline: 26th October 2018
Notification of acceptance: 5th November 2018

Further inquiries can be made to: lexmc18@sciencesconf.org

For an overview of the last year’s masterclass, see https://digilex.hypotheses.org/386.

The Lexical Data Masterclass – An Overview

From 4 to 8 December 2017, 21 participants met together with 8 trainers and 2 keynote speakers to work jointly on improving their digital dictionary projects.

The meeting, co-organized by the Centre Marc Bloch, DARIAH-EU, the Berlin Brandenburg Academy of Sciences (BBAW), Inria (Paris, France) and the Belgrade Center for Digital Humanities (BCDH, Serbia), with the support of the German Ministry of Education and Research (BMBF), CLARIN, DARIAH-DE and the EU H2020 project Humanities at Scale (HaS), was construed as a masterclass, i.e. a series of training and working sessions where most of the knowledge transfer takes place through concrete work on the participants’ projects. We want to reflect here on what everyone has obviously considered a very successful meeting by providing an overview of the instructional sessions and the actual projects brought by the participants, as reflected in the final symposium that took place on 8 December.

Simple XSLT experiment: building up a concordance dictionary from a corpus


During the Lexical Data Masterclass in Berlin, we worked together with Simonetta Battista and Ellert Johannsson on the Dictionary of Old Norse Prose. One of the little exercises we did was to use XSLT to generate lexical entries automatically from an existing annotated textual corpus of Old Norse.

The corpus

The starting point is a corpus encoded according to the TEI Guidelines, annotated down to the token level. The core unit of the corpus is the sentence (available under /TEI/text/body/div/p):

<s><w lemma="ljúka" type="sfg3en">Lýkur</w><w lemma="hér" type="aa">hér</w><w lemma="saga" type="nveo">sögu</w><w lemma="grettir" type="nkee-s">Grettis</w><w lemma="ásmundarsonar" type="nkee-s">Ásmundarsonar</w><pc>,</pc><w lemma="vor" type="fekee">vors</w><w lemma="samlandi" type="nkfe">samlanda</w><pc>.</pc></s>

where each token (<w>) is associated with a reference lemma (@lemma attribute).

The task at hand

The objective is, for each lemma occurring in the annotated corpus, to create a TEI lexical entry that groups together all the sentences from the corpus in which that lemma appears. Going one step further, we create a specific TEI document for each entry.

The XSLT stylesheet

Although quite simple, the stylesheet demonstrates several XSLT techniques, which we describe in the following sections.

Architecture of the stylesheet and first variables

The stylesheet relies on a single template associated with the root element of our corpus:

<xsl:template match="/">…</xsl:template>

We first create the following variables (by means of the XSLT element <xsl:variable/>):

  • theRoot: stores the root element of the source document so that it can be used later in the loop on lemmas, which loses track of the context node;
<xsl:variable name="theRoot" select="."/>
  • folderName: the name of the directory where we will store all the dictionary entry documents;
  • allLemmas: the sequence of all the distinct lemmas encountered in the corpus, obtained by means of a concise XPath instruction:
<xsl:variable name="allLemmas" select="distinct-values(descendant::w/@lemma)"/>

The XPath expression operates in two steps:

  • descendant::w/@lemma: extracts all lemma attributes from all the words which are descendants of the current node (i.e. the root of our document in the current template);
  • the powerful XPath function distinct-values(), which creates a sequence of all distinct values occurring in the argument sequence (see the example below).
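
For instance, applied to the single sample sentence quoted earlier, the expression yields each lemma exactly once:

distinct-values(descendant::w/@lemma)
→ ("ljúka", "hér", "saga", "grettir", "ásmundarsonar", "vor", "samlandi")

Across the whole corpus, lemmas recurring in many sentences are likewise collapsed into a single occurrence.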

Creating each entry

The process of creating entries is based upon a main loop iterating over the different lemmas, within which we use <xsl:result-document/> to create an output document for each lemma.

<xsl:for-each select="$allLemmas">
   <xsl:sort/>
   <xsl:result-document href="{$folderName}/{.}.xml" method="xml">
   …
   </xsl:result-document>
</xsl:for-each>

We have kept a sorting instruction (<xsl:sort/>) here as an illustration of how to use it, although it is less useful when entries are created in separate files.

Note the use of the {} notation (attribute value templates) in attributes to resolve the values of specific XPath fragments which, put together, build up the name of the output file.

For each entry, we generate:

  • a <form> element whose <orth> child element contains the lemma
  • a series of <cit> elements, one for each corpus sentence in which the lemma occurs (see below)
<entry>
   <form type="lemma">
      <orth>
         <xsl:value-of select="."/>
      </orth>
   </form>
   <xsl:for-each select="$theRoot/descendant::w[@lemma = current()]">
      <cit>
         <quote>
            <xsl:copy-of select="parent::s"/>
         </quote>
      </cit>
   </xsl:for-each>
</entry>

The XPath expression “$theRoot/descendant::w[@lemma = current()]” reads: find all the <w> elements descended from the root element (of the corpus) whose @lemma attribute equals the current context item (i.e. the text value of the lemma from the encompassing <xsl:for-each/>). Note that the example sentence is simply obtained by looking for the parent <s> element of the current <w>.

The full XSLT

Below you can find the full XSLT, where you will notice how the header is built up by taking over some parts of the input corpus.

The full set of resources can be found in the DARIAH lexical working group GitHub space. Note the @xpath-default-namespace attribute, which makes unprefixed names in XPath expressions resolve against the TEI namespace, and the default namespace declaration (xmlns), which ensures that all generated elements are in the TEI namespace:

<?xml version="1.0" encoding="UTF-8"?>

<xsl:stylesheet
   xmlns:xsl="http://www.w3.org/1999/XSL/Transform"    
   version="2.0" xmlns="http://www.tei-c.org/ns/1.0" 
   xpath-default-namespace="http://www.tei-c.org/ns/1.0">

<xsl:output method="xml" indent="yes"/>

<xsl:template match="/">
   <xsl:variable name="theRoot" select="."/>
   <xsl:variable name="folderName" select="'Entries'"/>
   <xsl:variable name="allLemmas"
      select="distinct-values(descendant::w/@lemma)"/>
   <xsl:for-each select="$allLemmas">
      <xsl:sort/>
      <xsl:result-document href="{$folderName}/{.}.xml" method="xml">
         <TEI>
            <teiHeader>
               <fileDesc>
                  <titleStmt>
                     <title>Concordance lexical entry for lemma: 
                        <xsl:value-of select="."/>
                     </title>
                  </titleStmt>
                  <publicationStmt>
                      <xsl:copy-of select="$theRoot/descendant::publicationStmt/distributor"/>
                      <xsl:copy-of select="$theRoot/descendant::publicationStmt/address"/>
                      <xsl:copy-of select="$theRoot/descendant::publicationStmt/availability"/>
                 </publicationStmt>
                 <sourceDesc>
                    <bibl>
                       <xsl:copy-of select="$theRoot/descendant::titleStmt/title"/>
                   </bibl>
                 </sourceDesc>
              </fileDesc>
            </teiHeader>
            <text>
               <body>
                  <entry>
                     <form type="lemma">
                        <orth>
                           <xsl:value-of select="."/>
                        </orth>
                     </form>
                     <xsl:for-each select="$theRoot/descendant::w[@lemma = current()]">
                        <cit>
                           <quote>
                              <xsl:copy-of select="parent::s"/>
                           </quote>
                        </cit>
                     </xsl:for-each>
                  </entry>
               </body>
            </text>
         </TEI>
      </xsl:result-document>
   </xsl:for-each>
</xsl:template>
</xsl:stylesheet>