The Lexical Data Masterclass – An Overview

From 4 to 8 December 2017, 21 participants met with 8 trainers and 2 keynote speakers to work jointly on improving their digital dictionary projects.

The meeting, co-organized by the Centre Marc Bloch, DARIAH-EU, the Berlin-Brandenburg Academy of Sciences (BBAW), Inria (Paris, France) and the Belgrade Center for Digital Humanities (BCDH, Serbia), with the support of the German Ministry of Education and Research (BMBF), CLARIN, DARIAH-DE and the EU H2020 project Humanities at Scale (HaS), was conceived as a masterclass, i.e. a series of training and working sessions in which most of the knowledge transfer takes place through concrete work on the participants’ own projects. We want to reflect here on what everyone clearly considered a very successful meeting by providing an overview of the instructional sessions and of the projects brought along by the participants, as presented in the final symposium that took place on 8 December.

Encoding the Etymological Dictionary of the Serbian Language

We came to the Lexical Data Masterclass in Berlin with a clear idea in our minds – to explore the possibilities of retrodigitizing the Etymological Dictionary of the Serbian Language, which is a project both of us are working on, together with other colleagues at the Etymological department of the Institute for the Serbian Language of SASA. Little did we know it would turn out to be such a challenging task, yet extremely exciting at the same time.

We came to Berlin hoping to begin our work on the retrodigitization of the Etymological Dictionary of the Serbian Language by encoding several entries as test models. In this blog post, we will present one of them – безбатан. Even though it is a shorter entry, it has all the important structural elements – grammatical information, several attestations, a semantic definition, examples, complex etymological information etc.
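A skeletal TEI sketch of how such an entry could be organised is given below; the element choices and placeholder comments are ours for illustration and do not represent the project’s final encoding:

<entry xml:lang="sr">
    <form type="lemma">
        <orth>безбатан</orth>
    </form>
    <gramGrp>
        <pos><!-- grammatical information --></pos>
    </gramGrp>
    <sense>
        <def><!-- semantic definition --></def>
        <cit type="example">
            <quote><!-- attested example --></quote>
            <bibl><!-- source of the attestation --></bibl>
        </cit>
    </sense>
    <etym><!-- etymological discussion --></etym>
</entry>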


An XML Version of Turkish Dictionary

In order to anchor international or multinational lexicographic projects on existing Turkish dictionaries, we need a common understanding of the way we make reference resources available, as is the case for the digital version of our Turkish dictionary project. Although some work has been done on digitizing Turkish dictionaries (both older dictionaries and their current editions), these few examples do not follow a standard way of encoding the source file. To overcome this obstacle, during the Lexical Data Masterclass in Berlin on December 4-8, 2017, I worked on an XSLT transformation that processes an existing dictionary into output conformant with the TEI standard. Since almost all current Turkish dictionaries give the same categories of lexical information in a very similar page layout, this XSLT could also work on other digitized or OCRed Turkish dictionaries. Even for dictionaries without a digital version, GROBID-based projects can transform OCRed PDFs into a structured digital format, as in the work of Khemakhem et al. 2017.
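To give a rough idea of the kind of mapping involved, here is a minimal sketch of one such transformation rule; the input element names (article, hw, def) are hypothetical stand-ins for the actual source format, and the output is reduced to headword and definition:

<xsl:template match="article">
    <!-- hypothetical flat input: one dictionary article per <article>,
         with the headword in <hw> and the definition text in <def> -->
    <entry xmlns="http://www.tei-c.org/ns/1.0">
        <form type="lemma">
            <orth><xsl:value-of select="hw"/></orth>
        </form>
        <sense>
            <def><xsl:value-of select="normalize-space(def)"/></def>
        </sense>
    </entry>
</xsl:template>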

Creating a prototype for lexicographic entries for spoken German

Since dictionaries are mostly based on written language data, creating a dictionary of spoken language requires new types of lexicographic descriptions and an elaborate microstructure. When analyzing spoken language material, a considerable part of lexicographic work consists in analyzing interactional contributions of one or more speakers, focusing on the lexicalized units used for organizing conversation as well as for expressing one’s attitude and reacting to other speakers’ turns. In the project Lexicon of Spoken German (LeGeDe: Lexikon des gesprochenen Deutsch), we are creating a prototype for a lexical resource that aims to describe common practices and preferences in spoken German by exploring lexicographic representations for interjections, multiword expressions such as passt schon and mal gucken, and delexicalized verb forms. None of these have been extensively elaborated in the German lexicographic tradition.

Representing pragmatic information

One of the biggest challenges of describing spoken language data is determining the function of lexical units in interactional settings. For instance, the expression oh Gott (en: oh my God) can have multiple functions in an interaction: it can express surprise, astonishment, horror, indignation, annoyance, pain, excitement, etc., and it can be used, for example, as a means of confirming or agreeing with the interlocutor’s position.

In an attempt to describe these functions in a TEI representation during the Lexical Data Masterclass in Berlin in December 2017, I defined a <usg> element with the type “commFunc” for every piece of information regarding word usage or communicative function described in my entry drafts. For the sake of example, I categorized talk organization, speaker alignment and speaker attitude as some of the possible subtypes of communicative functions. Given that the attribute @subtype is not yet allowed on <usg> in this specification, I used the @value attribute to specify the subcategories of communicative functions.

<entry>
     <form type="lemma">
         <orth>Gott</orth>
     </form>
     <sense n="2">
         <gramGrp>
             <pos>NG</pos>
         </gramGrp>
         <usg type="commFunc" value="speakerAtt">Aufregung</usg>
         <usg type="commFunc" value="speakerAtt">Überraschung</usg>
         <usg type="commFunc" value="speakerAtt">Entsetzen</usg>
         <usg type="commFunc" value="speakerAtt">Erstaunen</usg>
         <usg type="commFunc" value="speakerAtt">Ungeduld</usg>
         <usg type="commFunc" value="speakerAlign">Kooperation</usg>
         <usg type="commFunc" value="talkOrg">Response</usg>
         <usg type="commFunc" value="talkOrg">Diskursmarker</usg>
     </sense>
</entry>

Since each <usg> element contains only one item, this structure would allow querying the dictionary according to onomasiological features. For future reference, the conventions in ISO 24617-2 on dialogue acts (see Bunt et al., 2010) may be a good basis for further work on systematizing communicative functions.
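For illustration, a single XPath expression along the following lines (assuming the TEI namespace is set as the default, as in the stylesheets further down) would retrieve all senses described as expressing surprise:

//sense[usg[@type = 'commFunc'][@value = 'speakerAtt'] = 'Überraschung']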

An immediate issue that arises in this type of representation is that of inheritance and cross-reference. On the one hand, one must consider representing the uses of multiword expressions that are related to a particular sense but do not inherit all of its functions. On the other hand, one must also represent those which inherit all of the communicative functions defined for the parent node and which may have additional functions of their own. One way of resolving the issue of inheritance would be to specify whether the features in the dictionary are to be interpreted as cumulative, overwriting or local (Ide, Kilgarriff, Romary 2000).

Although the questions of inheritance and of representing communicative functions become apparent when modelling the dictionary in XML, they must be settled in the conceptual part of the lexicographic work. The same can be said of the descriptions of multiword expressions and of conversions (schauen, verb > schau, interjection). In TEI terms, the latter can be represented as related entries as well as entries in their own right, and deciding how to model them in the initial phase of dictionary creation is indispensable for the sustainable development of that dictionary.
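A related-entry representation of such a conversion might look roughly as follows; this is only a sketch with illustrative part-of-speech values, not the LeGeDe model itself:

<entry>
    <form type="lemma">
        <orth>schauen</orth>
    </form>
    <gramGrp>
        <pos>VERB</pos>
    </gramGrp>
    <!-- senses of the verb -->
    <re type="conversion">
        <form type="lemma">
            <orth>schau</orth>
        </form>
        <gramGrp>
            <pos>NG</pos>
        </gramGrp>
        <!-- senses and communicative functions of the interjection -->
    </re>
</entry>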

Integrating frequency information

Working with corpora of spoken language that are annotated on multiple levels allows lexicographic descriptions to draw on metadata such as pronunciation, geographical information, speaker gender, age, etc. Annotating and sorting the dictionary entries by these features could be the next development for born-digital dictionaries of spoken language.

Corpora of spoken German are still too small to allow a fine-grained description of particular lexical units (for instance, FOLK, the largest corpus of spoken German in interaction, contains fewer than 2 million tokens). However, since interjections are highly frequent in spoken language, a quantitative lexicographic description of their frequency distribution would be something to consider.

Beyond the quantitative description of metadata in dictionary entries, sorting the entries according to the most frequent senses is an issue that can be addressed with the TEI element <f>, which has also been used for representing corpus frequencies in dictionaries (Mörth et al., 2015). Besides storing absolute frequencies or ranks derived from corpus counts, the <f> element can be used to represent the rank of the senses or uses within an entry.


<fs type="corpFreq">
    <f name="rank"><numeric value="1"/></f>
</fs>

          
<fs type="corpFreq">
    <f name="rank"><numeric value="2"/></f>
</fs>
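Combined with the entry structure shown earlier, such a feature structure could be attached directly to the sense it ranks, roughly as sketched below (whether <fs> is permitted at this exact position depends on the schema customization in use):

<sense n="2">
    <gramGrp>
        <pos>NG</pos>
    </gramGrp>
    <fs type="corpFreq">
        <f name="rank"><numeric value="1"/></f>
    </fs>
    <!-- usg elements as in the example above -->
</sense>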

Thanks to the extended POS tags developed for spoken German and integrated into the FOLK corpus (Westpfahl 2014), disambiguating between certain senses, as well as checking their corpus frequency, has become possible. For instance, Gott is tagged either as a noun (NN) or as an interjection (non-grammatical element: NG), and the frequency of Gott as an interjection in FOLK far exceeds its frequency as a noun. Hence, we can set Gott (NG) as the most frequent sense (rank=1). However, for many lexical units with the same part of speech but different senses, the ranking cannot be automated as easily, and it would depend largely on the sample size considered in the lexicographic analysis.

Simple XSLT experiment: building up a concordance dictionary from a corpus

During the Lexical Data Masterclass in Berlin, we worked together with Simonetta Battista and Ellert Johannsson on the Dictionary of Old Norse Prose. One of the little exercises we did was to use XSLT to generate lexical entries automatically from an existing annotated textual corpus of Old Norse.

The corpus

The starting point is a corpus encoded according to the TEI guidelines and annotated down to the token level. The core unit in the corpus is the sentence (available under /TEI/text/body/div/p):

<s>
   <w lemma="ljúka" type="sfg3en">Lýkur</w>
   <w lemma="hér" type="aa">hér</w>
   <w lemma="saga" type="nveo">sögu</w>
   <w lemma="grettir" type="nkee-s">Grettis</w>
   <w lemma="ásmundarsonar" type="nkee-s">Ásmundarsonar</w>
   <pc>,</pc>
   <w lemma="vor" type="fekee">vors</w>
   <w lemma="samlandi" type="nkfe">samlanda</w>
   <pc>.</pc>
</s>

where each token (<w>) is associated with a reference lemma (the @lemma attribute).

The task at hand

The objective is, for each lemma occurring in the annotated corpus, to create a TEI lexical entry that groups together all the sentences from the corpus in which that lemma occurs. Going one step further, we create a separate TEI document for each entry.

The XSLT stylesheet

Although quite simple, the stylesheet illustrates several XSLT techniques, which we describe in the following sections.

Architecture of the stylesheet and first variables

The stylesheet relies on a single template associated with the root element of our corpus:

<xsl:template match="/">…</xsl:template>

We first create the following variables (by means of the XSLT element <xsl:variable/>):

  • theRoot: stores the root element of the source document so that it can be used later in the loop over lemmas, which loses track of the context node;
<xsl:variable name="theRoot" select="."/>
  • folderName: the name of the directory where we will store all the dictionary entry documents;
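<xsl:variable name="folderName" select="'Entries'"/>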
  • allLemmas: contains the sequence of all the different lemmas encountered in the corpus, obtained by means of a concise XPath instruction:
<xsl:variable name="allLemmas" select="distinct-values(descendant::w/@lemma)"/>

The XPath expression operates in two steps:

  • descendant::w/@lemma: extracts all lemma attributes from all words that are descendants of the current node (i.e. the root of our document in the current template);
  • the powerful XPath function distinct-values() then creates a sequence of all distinct values occurring in the extracted sequence.

Creating each entry

The process of creating entries is based on a main loop iterating over the different lemmas, within which we use <xsl:result-document/> to create an output document for each lemma.

<xsl:for-each select="$allLemmas">
   <xsl:sort/>
   <xsl:result-document href="{$folderName}/{.}.xml" method="xml">
   …
   </xsl:result-document>
</xsl:for-each>

We have kept a sorting instruction (<xsl:sort/>) here as an illustration of how to use it, even though it is less useful when entries are created in separate files.

Note the use of the {} notation (attribute value templates) to resolve the values of specific XPath fragments which, put together, build up the name of the output file.

For each entry, we generate:

  • a <form> element whose <orth> child element contains the lemma
  • a series of <cit> elements, one for each sentence of the corpus in which the lemma occurs (see below)
<entry>
   <form type="lemma">
      <orth>
         <xsl:value-of select="."/>
      </orth>
   </form>
   <xsl:for-each select="$theRoot/descendant::w[@lemma = current()]">
      <cit>
         <quote>
            <xsl:copy-of select="parent::s"/>
         </quote>
      </cit>
   </xsl:for-each>
</entry>

The XPath expression $theRoot/descendant::w[@lemma = current()] reads: find all the <w> elements descendant of the root element (of the corpus) whose @lemma attribute equals the current context node (i.e. the text value of the lemma from the encompassing <xsl:for-each/>). Note that the example sentence is simply obtained by selecting the parent <s> element of the current <w>.

The full XSLT

Below you can find the full XSLT, in which you will notice how the header is built up by copying over parts of the input corpus header.

The full set of resources can be found in the DARIAH lexical working group GitHub space (note the @xpath-default-namespace attribute, which makes unprefixed names in XPath expressions address the source corpus in the TEI namespace, while the default namespace declaration ensures that all generated elements are placed in the TEI namespace):

<?xml version="1.0" encoding="UTF-8"?>

<xsl:stylesheet
   xmlns:xsl="http://www.w3.org/1999/XSL/Transform"    
   version="2.0" xmlns="http://www.tei-c.org/ns/1.0" 
   xpath-default-namespace="http://www.tei-c.org/ns/1.0">

<xsl:output method="xml" indent="yes"/>

<xsl:template match="/">
   <xsl:variable name="theRoot" select="."/>
   <xsl:variable name="folderName" select="'Entries'"/>
   <xsl:variable name="allLemmas"
      select="distinct-values(descendant::w/@lemma)"/>
   <xsl:for-each select="$allLemmas">
      <xsl:sort/>
      <xsl:result-document href="{$folderName}/{.}.xml" method="xml">
         <TEI>
            <teiHeader>
               <fileDesc>
                  <titleStmt>
                     <title>Concordance lexical entry for lemma: 
                        <xsl:value-of select="."/>
                     </title>
                  </titleStmt>
                  <publicationStmt>
                      <xsl:copy-of select="$theRoot/descendant::publicationStmt/distributor"/>
                      <xsl:copy-of select="$theRoot/descendant::publicationStmt/address"/>
                      <xsl:copy-of select="$theRoot/descendant::publicationStmt/availability"/>
                 </publicationStmt>
                 <sourceDesc>
                    <bibl>
                       <xsl:copy-of select="$theRoot/descendant::titleStmt/title"/>
                   </bibl>
                 </sourceDesc>
              </fileDesc>
            </teiHeader>
            <text>
               <body>
                  <entry>
                     <form type="lemma">
                        <orth>
                           <xsl:value-of select="."/>
                        </orth>
                     </form>
                     <xsl:for-each select="$theRoot/descendant::w[@lemma = current()]">
                        <cit>
                           <quote>
                              <xsl:copy-of select="parent::s"/>
                           </quote>
                        </cit>
                     </xsl:for-each>
                  </entry>
               </body>
            </text>
         </TEI>
      </xsl:result-document>
   </xsl:for-each>
</xsl:template>
</xsl:stylesheet>

GROBID Dictionaries: Experiments with the General Basque Dictionary (OEH) PDF

GROBID Dictionaries is a tool for structuring dictionaries (conversion from PDF format to TEI XML), with a supervised machine learning approach (CRF models). Details are explained in (Khemakhem et al. 2017), which is also the paper to cite in relation to GROBID Dictionaries.

At LexMC, I took part in the GROBID Dictionaries tutorial and workshop; Mohamed Khemakhem, the developer of GROBID Dictionaries, was our instructor. Here you can find the initial guidelines for the tutorial sessions.

After a tutorial on installing and running the tool from a Docker image prepared by Mohamed, we familiarised ourselves with the tool and with the annotation and training workflow, using test data samples from a random paper dictionary (in PDF form).

We then repeated the steps using our own data, in my case sample pages from a CC-BY-NC-SA-licensed PDF version of the General Basque Dictionary “Orotariko Euskal Hiztegia” (Mitxelena & Sarasola 1986), which runs to more than 16,000 pages and, owing to its structure, has to be considered a hard case for structured representation in XML.

OEH dictionary on the shelf and as PDF

I created the XML files to be manually annotated and then used as training data with the “create training data” function in GROBID Dictionaries. The annotation was then carried out using oXygen’s author mode. The manually annotated XML files are put into a folder where GROBID will find them and use them for training; the commands for that are executed in the terminal (see guidelines).

In this screenshot, we see training data to be annotated in oXygen, and the terminal after running the command that creates this XML from the original PDF, together with the other files GROBID needs.

The files created by GROBID Dictionaries are (1) the raw text file from the pdf-to-text conversion, (2) the features file, and (3) the XML file. The vertical features file contains, for each token of the raw text, a set of values derived from the presentational markup present in the PDF.

Screenshot: Features File and PDF

The cascaded parsing performed by GROBID follows this path:

  1. Dictionary segmentation: Isolates the dictionary page content (body) from headers, footers and irrelevant material present in the pdf2text output (the so-called DictScrap).
  2. Dictionary body segmentation: Marks up the entries inside the body.
  3. Lexical entry segmentation: Marks up the blocks for form, sense and related entries (re) inside each entry.
  4. Further inner segmentation of the form and the sense blocks.
  5. […] (see Annotation Guidelines).

Cascaded segmentation in GROBID

Screenshot: Annotated segments of entries (form and sense blocks), next to the original PDF
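Schematically, the structure targeted by the first three segmentation levels looks roughly like this; it is a hand-written sketch with placeholder comments, not actual tool output, with element names following the annotation guidelines:

<body>
    <entry>
        <form><!-- headword block: lemma, variants, grammatical information --></form>
        <sense><!-- first sense block --></sense>
        <sense><!-- second sense block --></sense>
    </entry>
    <entry>
        <!-- next entry, segmented in the same way -->
    </entry>
</body>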

For the OEH dictionary data, I trained GROBID on manually annotated XML files, for the three models “Dictionary Segmentation”, “Dictionary Body Segmentation”, and “Lexical Entry Segmentation”. After every step, I re-trained GROBID with the annotated training data.

GROBID Dictionaries comes with a RESTful web service that offers a user-friendly way to run the annotation on the basis of the training performed beforehand; it is started from the Docker image’s shell prompt and is then available at localhost:8080.

We found some issues that could partly be solved:

  • In one of the versions of the OEH PDF, the pages contained no headers and no footers, so nothing could be marked up as such; this produced an error (the expected headers were missing), which Mohamed solved by adapting the code accordingly.
  • The size of OEH entries ranges from 3 words on a single line to several pages. At the moment, GROBID would not accept entries spanning more than two page breaks.
  • Some special characters are not recognized by the pdf-to-text script, so they do not appear in the features file either. This is due to non-Unicode encodings of some special characters in certain PDFs, and it should be solved, because these special characters are used as separators inside entries (e.g. at the beginning of each sense description). In the case of OEH, the inner segmentation of entries did not yield very good results, in spite of our having annotated about 100 entries. We think that recognizing the separating symbol would help significantly: in our case, sense blocks are very often preceded by a diamond symbol (see the screenshot below), and square bullets and stars appear with similar functions.

All in all, we now know how GROBID Dictionaries works. We have clearly seen that the precision of the output improves as more annotated training material is provided to the tool. In my case, I did not find a parsing error in the Dictionary Body Segmentation step until I had looked at a large number of examples; after going back, annotating more material and re-training the tool, the parsing error was not repeated.

Screenshot: Noisy segmentation of entries (left), and PDF original (right)

We will now parse the whole dictionary (Dictionary Segmentation and Dictionary Body Segmentation), then isolate the headword and infra-headword strings and compare this list to an OEH lemma list produced beforehand by other means, in order to evaluate this step. We will then, where necessary, annotate more pages (examples that yield noisy results) and re-evaluate the results, in an iterative process, until we reach full precision.

Then we will move on to the next step and annotate, re-train and evaluate until we get a noise-free result. As the dictionary PDF I want to structure has a very rich microstructure, evaluating GROBID’s performance for every step will take some time.

In the workshop, we also became familiar with the development workflow of GROBID Dictionaries, and we will certainly cause the developer more than one headache by writing tickets on GitHub.

Thanks for this very interesting workshop, and for your patience, Mohamed!

From a Legacy Dictionary to New Lexica: An Alternative Time-Machine to Discover Neologisms

The history of Greek lexicography has many examples of exceptional and quite “crazy” pioneering researchers, amateur lexicographers and linguist authors who travelled all over the country to collect data for their lexica and dictionaries. Nikos Kazantzakis travelled through most areas of Greece to collect “beautiful” words for his works, especially for his epic saga Odyssey, a sequel to Homer’s Odyssey. He recounted the following story.

Kazantzakis was taking pictures of rare flowers and trees in the countryside of Crete. On the outskirts of a village, he was stunned by a beautiful flower. He asked the children who were playing nearby what the name of the flower was. They did not know it; however, they told him that only one person, a very old lady of the village, would know the name of the flower. Kazantzakis went to the house of that lady to ask her about the name. Unfortunately, the neighbors informed him that she had just passed away. Kazantzakis grew sad and then said: “Our language has lost a glorious member, since a word has just died with that lady…”.

Digitised and Born-Digital in One Application: Dutch Historical Dictionaries Online

Dutch historical language has been described in four separate comprehensive dictionaries: the Woordenboek der Nederlandsche Taal (WNT, Dictionary of the Dutch Language, 1500-1976), the Middelnederlandsch Woordenboek (MNW, Dictionary of Middle Dutch, ca. 1250-1550), the Vroegmiddelnederlands Woordenboek (VMNW, Dictionary of Early Middle Dutch, 1200-1300) and the Oudnederlands Woordenboek (ONW, Dictionary of Old Dutch, ca. 500-1200). Both the WNT and the MNW were paper dictionaries that were digitised by keying and made available on CD-ROM. The ONW and the VMNW were born-digital dictionaries. To provide a comprehensive view of the Dutch language, it was decided to put the four dictionaries online in one portal. Making it possible for users to query one or more dictionaries simultaneously was a logical step to take, because these dictionaries complement each other. The challenge was not only to give the user optimal access to the dictionary information, but also to do so without compromising the uniqueness of each individual dictionary.

Digitising 150 Years of the Swiss German Dictionary

The scholars of the Swiss German Dictionary (Schweizerisches Idiotikon) have collected more than 15,000 pages of highly concentrated information over the last 150 years. When we began retro-digitising the dictionary a few years ago, we were unsure if we were up to the task of dealing with such a massive amount of data. Of course, it was not a hardware problem – the 100 million characters will easily fit into any storage device sold nowadays. The real challenge is how to translate a 19th-century information structure into one which today’s computers can handle.

One of the major concerns of dictionary makers before the electronic age was how to save space. They achieved this by using abbreviations, by using typography (styles and special characters) to mark the different parts of an article, and by relying on context to convey information. This specifically means that certain elements can either be omitted depending on the context or bear a different meaning.

A New Life for an Old Dictionary: Notes on Digitizing the Dictionary of Russia

I was a PhD student just finishing my thesis when my doctoral studies advisor, Professor Viktor Kabakchi, a grey-haired Russian scholar with many years of university teaching experience, invited me to take part in updating his brainchild – The Dictionary of Russia. The first print edition of this dictionary about Russian cultural terms in English, which was published more than 10 years ago, required a thorough editorial revision. My excitement to meet new challenges as well as new perspectives in life knew no bounds.

However, at the time I didn’t have the slightest idea where to start in order to digitize a print dictionary and produce its electronic version. To my great joy, the eLex 2013 conference in Tallinn, organized by the Institute of Estonian Language and the Trojina Institute of Applied Slovene Studies, shed some light on my understanding of what e-lexicography is. But that was a mere introduction to the field.

Soon afterwards, I was lucky to start discussing the digitization of The Dictionary of Russia with K Dictionaries from Tel Aviv, who agreed to run the project and provide all the necessary technical support and guidance. I wrote this post to share some personal notes on a retro-digitization project undertaken by someone who is not a computer geek. So please do not expect to find too much ‘technical stuff’ in it, as my idea was to give an overview of the main milestones and to show how far we have come.

How Can I OCR My Dictionary?

One way to digitise a dictionary is to use Optical Character Recognition, or OCR. But is OCR feasible at all for my dictionary? And if so, which OCR program should I use: a trainable or an omnifont one? What about the workflow: should I train the OCR engine or not? And, finally, what should the output format of my OCR be? For those wanting to embark on the OCR adventure, here is a very brief introduction.

To OCR or not to OCR

Some texts are totally un-OCR-able.


Legacy Dictionaries Reloaded: Why Should We Bother? 

The closest I’ve ever come to glimpsing hell was several years ago, reading an article in the New York Times entitled “Justices Turning More Frequently to Dictionary, and Not Just for Big Words.” The article cited the example of a certain Chief Justice John G. Roberts Jr., who had apparently parsed the meaning of a federal law by consulting the usual legal precedents (X vs Y) — and no fewer than five dictionaries. One of the words he looked up was “of”. He discovered, lo and behold, that its meaning had something to do with belonging or possession.

Dictionaries lie at the core of the human ability to conceptualize, systematize and convey meaning. But they are hardly positivistic, objective repositories of knowledge or truth about language, let alone life.


From Books to Bytes: Turning Paper Dictionaries into Digital Format

When working on the Deutsches Wörterbuch, Jacob Grimm felt depressed by his life as a lexicographer. In the preface to the first volume, published in 1854, Jacob wrote:

“As if for days fine, dense flakes had been falling from the sky and the whole area were soon covered in deep snow, I am, as it were, snowed in under the mass of words pressing in on me from all corners and crevices. Sometimes I want to get up and be rid of it all.”

Dorothea Grimm, his brother’s wife, observed Jacob and Wilhelm anxiously. They have to be liberated, otherwise they will get mouldy, she thought, watching them work on the dictionary from the early morning until late at night.
