
Creating a prototype for lexicographic entries for spoken German

Since dictionaries are mostly based on written language data, creating a dictionary of spoken language requires new types of lexicographic descriptions and an elaborate microstructure. When analyzing spoken language material, a substantial part of the lexicographic work consists in analyzing the interactional contributions of one or more speakers and focusing on the lexicalized units used for organizing conversation, expressing one’s attitude and reacting to other speakers’ turns. In the project Lexicon of Spoken German (LeGeDe: Lexikon des gesprochenen Deutsch), we are creating a prototype for a lexical resource that describes common practices and preferences in spoken German by exploring lexicographic representations for interjections, multiword expressions such as passt schon and mal gucken, and delexicalized verb forms. None of these have been extensively elaborated in the German lexicographic tradition.

Representing pragmatic information

One of the biggest challenges in describing spoken language data is determining the function of lexical units in an interactional setting. For instance, the expression oh Gott (en: oh my God) can have multiple functions in an interaction: it can express surprise, astonishment, horror, indignation, annoyance, pain, excitement, etc., and it can be used, for example, as a means of confirming or agreeing with the interlocutor’s position.

In an attempt to describe these functions in a TEI representation within the Lexical Data Masterclass in Berlin in December 2017, I defined a <usg> element with the type “commFunc” for every piece of information regarding word usage or communicative function that was described in my entry drafts. For the sake of example, I categorized talk organization, speaker alignment and speaker attitude as some of the possible subtypes of communicative functions. Given that the TEI attribute @subtype is not yet allowed in this specification, I used the @value attribute to specify the subcategories of communicative functions.

<entry>
     <form type="lemma">
         <orth>Gott</orth>
     </form>
     <sense n="2">
         <gramGrp>
             <pos>NG</pos>
         </gramGrp>
         <usg type="commFunc" value="speakerAtt">Aufregung</usg>
         <usg type="commFunc" value="speakerAtt">Überraschung</usg>
         <usg type="commFunc" value="speakerAtt">Entsetzen</usg>
         <usg type="commFunc" value="speakerAtt">Erstaunen</usg>
         <usg type="commFunc" value="speakerAtt">Ungeduld</usg>
         <usg type="commFunc" value="speakerAlign">Kooperation</usg>
         <usg type="commFunc" value="talkOrg">Response</usg>
         <usg type="commFunc" value="talkOrg">Diskursmarker</usg>
     </sense>
</entry>

Since each <usg> element contains only one item, this structure would allow querying the dictionary according to onomasiological features. For future reference, the conventions in ISO 24617-2 on dialogue acts (see Bunt et al., 2010) may be a good basis for further work on systematizing communicative functions.
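
To illustrate, such a structure could be queried directly with XPath; the following expressions are purely illustrative and reuse the element names from the draft above. The first retrieves all entries in which some sense expresses surprise, the second all entries with a talk-organizing function:

//entry[.//usg[@type = 'commFunc'][. = 'Überraschung']]
//entry[.//usg[@type = 'commFunc'][@value = 'talkOrg']]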

An immediate issue that arises in this type of representation is that of inheritance and cross-reference. On the one hand, one must consider representing the uses of multiword expressions that are related to a particular sense but do not inherit all of its functions. On the other hand, it is also inevitable to represent those which inherit all of the communicative functions defined for the parent node and can have further functions of their own. One way of resolving the issue of inheritance would be to specify whether the features in the dictionary are to be interpreted as cumulative, overwriting or local (Ide, Kilgarriff, Romary 2000).
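
One way of making this explicit in the XML itself would be to encode the multiword expression as a related entry pointing to the parent sense and to state how its functions are to be interpreted. The fragment below is only a sketch: the @corresp target, the inheritance comment and the added function are assumptions, not project conventions, and it presupposes that sense 2 of Gott carries xml:id="Gott-sense2".

<re type="multiwordExpression">
    <form><orth>oh Gott</orth></form>
    <sense corresp="#Gott-sense2">
        <!-- assumption: the commFunc values of the parent sense are inherited cumulatively -->
        <usg type="commFunc" value="speakerAtt">Empörung</usg>
    </sense>
</re>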

Although the questions of inheritance and the representation of communicative functions become apparent when attempting to model the dictionary in XML, they must be discussed in the conceptual part of the lexicographic work. The same can be said of descriptions of multiword expressions and lexicographic descriptions of conversions (schauen, verb > schau, interjection). In TEI terms, the latter can be represented as related entries as well as entries in their own right, and deciding how to model them in the starting phase of dictionary creation is indispensable for the sustainable development of that dictionary.
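
The two modelling options for the conversion example can be sketched as follows; this is illustrative only, and the identifiers, @type values and POS tags are placeholders rather than decisions taken in the project:

<!-- option 1: schau as a related entry inside the verb entry -->
<entry xml:id="schauen">
    <form type="lemma"><orth>schauen</orth></form>
    <re type="conversion">
        <form><orth>schau</orth></form>
        <gramGrp><pos>NG</pos></gramGrp>
    </re>
</entry>

<!-- option 2: schau as an entry in its own right, cross-referencing the verb -->
<entry xml:id="schau">
    <form type="lemma"><orth>schau</orth></form>
    <gramGrp><pos>NG</pos></gramGrp>
    <xr type="derivedFrom"><ref target="#schauen">schauen</ref></xr>
</entry>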

Integrating frequency information

Working with corpora of spoken language that are annotated on multiple levels allows lexicographic descriptions to draw on metadata such as pronunciation, geographical information, gender, age, etc. Annotating and sorting the dictionary entries with these features could be the next development for born-digital dictionaries of spoken language.
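
In TEI terms, some of this information could be attached with existing elements such as <pron> and further typed <usg> elements. The fragment below is a sketch only; the @type values "geo" and "socio" and the placement inside the sense are assumptions rather than settled conventions of the project:

<form type="lemma">
    <orth>Gott</orth>
    <pron notation="ipa">ɡɔt</pron>
</form>
<sense n="2">
    <usg type="geo"><!-- regional distribution derived from speaker metadata --></usg>
    <usg type="socio"><!-- age and gender distribution of the speakers --></usg>
</sense>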

Corpora of spoken German are still relatively small for a fine-grained description of particular lexical units (for instance, FOLK, the largest corpus of spoken German in interaction, contains fewer than 2 million tokens). However, since interjections are highly frequent in spoken language, a quantitative lexicographic description of their frequency distribution would be something to consider.

Apart from the quantitative description of metadata in dictionary entries, sorting the entries according to the most frequent senses is an issue that can be addressed with the TEI tag <f>, which has also been used for representing corpus frequencies in dictionaries (Mörth et al., 2015). Besides storing absolute frequencies or ranks based on corpus counts, the <f> tag can be used to represent the rank of the senses or uses in the entries.


<fs type="corpFreq">
    <f name="rank"><numeric value="1"/></f>
</fs>

<fs type="corpFreq">
    <f name="rank"><numeric value="2"/></f>
</fs>

Thanks to the extended POS tags developed for spoken German and integrated into the FOLK corpus (Westpfahl 2014), disambiguating between certain senses as well as checking their corpus frequency has become possible. For instance, Gott is tagged either as a noun (NN) or as an interjection (non-grammatical element: NG), and the frequency of Gott as an interjection in FOLK far exceeds its frequency as a noun. Hence, we can set Gott (NG) as the most frequent sense (rank=1). However, for many lexical units with the same part of speech but different senses, the ranking cannot be automated as easily and would depend largely on the sample size considered in the lexicographic analysis.
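
To make the connection to the entry structure explicit, the rank features shown above could be embedded directly in the senses. The following sketch is a hypothetical layout reusing the Gott entry from the first section; it marks the interjectional sense (NG) as rank 1 and the noun sense as rank 2:

<entry>
    <form type="lemma"><orth>Gott</orth></form>
    <sense n="1">
        <gramGrp><pos>NN</pos></gramGrp>
        <fs type="corpFreq">
            <f name="rank"><numeric value="2"/></f>
        </fs>
    </sense>
    <sense n="2">
        <gramGrp><pos>NG</pos></gramGrp>
        <fs type="corpFreq">
            <f name="rank"><numeric value="1"/></f>
        </fs>
    </sense>
</entry>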

Simple XSLT experiment: building up a concordance dictionary from a corpus


During the Lexical Data Masterclass in Berlin, we worked together with Simonetta Battista and Ellert Johannsson on the Dictionary of Old Norse Prose. One of the small exercises we did was to use XSLT to generate lexical entries automatically from an existing annotated textual corpus of Old Norse.

The corpus

The starting point is a corpus encoded according to the TEI Guidelines down to the token level. The core unit in the corpus is the sentence (available under /TEI/text/body/div/p):

<s><w lemma="ljúka" type="sfg3en">Lýkur</w><w lemma="hér" type="aa">hér</w><w lemma="saga" type="nveo">sögu</w><w lemma="grettir" type="nkee-s">Grettis</w><w lemma="ásmundarsonar" type="nkee-s">Ásmundarsonar</w><pc>,</pc><w lemma="vor" type="fekee">vors</w><w lemma="samlandi" type="nkfe">samlanda</w><pc>.</pc></s>

where each token (<w>) is associated with a reference lemma (@lemma attribute).

The task at hand

The objective is, for each lemma occurring in the annotated corpus, to create a TEI lexical entry that groups together all sentences from the corpus in which it occurs. Going one step further, we create a specific TEI document for each entry.

The XSLT stylesheet

Although quite simple, the stylesheet illustrates several XSLT techniques, which we describe in the following sections.

Architecture of the stylesheet and first variables

The stylesheet relies on a single template associated with the root element of our corpus:

<xsl:template match="/">…</xsl:template>

We first create the following variables (by means of the XSLT element <xsl:variable/>):

  • theRoot: stores the root element of the source document so that it can be used later in the loop on lemmas, which loses track of the context node;
<xsl:variable name="theRoot" select="."/>
  • folderName: the name of the directory where we will store all the dictionary entry documents;
  • allLemmas: contains the sequence of all the different lemmas encountered in the corpus, by means of a concise XPath instruction:
<xsl:variable name="allLemmas" select="distinct-values(descendant::w/@lemma)"/>

The XPath expression operates in two steps:

  • descendant::w/@lemma: extracts all lemma attributes from all words that are descendants of the current node (i.e. the root of our document in the current template);
  • the powerful XPath function distinct-values(), which creates a sequence of all distinct values occurring in the argument sequence.

Creating each entry

The process of creating entries is based upon a main loop iterating over the different lemmas, within which we use <xsl:result-document/> to create an output document for each lemma.

<xsl:for-each select="$allLemmas">
   <xsl:sort/>
   <xsl:result-document href="{$folderName}/{.}.xml" method="xml">
   …
   </xsl:result-document>
</xsl:for-each>

We have left a sorting instruction (<xsl:sort/>) here as an illustration of how to use it, knowing that it is less useful when entries are created in separate files.

Note the use of the {} notation in attributes to resolve the values of specific XPath fragments which, put together, build up the name of the output file.

For each entry, we generate:

  • a <form> element whose <orth> child element contains the lemma
  • a series of <cit> elements, one for each sentence in the corpus containing the lemma (see below)
<entry>
   <form type="lemma">
      <orth>
         <xsl:value-of select="."/>
      </orth>
   </form>
   <xsl:for-each select="$theRoot/descendant::w[@lemma = current()]">
      <cit>
         <quote>
            <xsl:copy-of select="parent::s"/>
         </quote>
      </cit>
   </xsl:for-each>
</entry>

The XPath expression “$theRoot/descendant::w[@lemma = current()]” reads: find all the <w> elements descendant of the root element (of the corpus) whose @lemma attribute equals the current context node (i.e. the text value of the lemma from the encompassing <xsl:for-each/>). Note that the example sentence is simply obtained by looking for the parent <s> element of the current <w>.

The full XSLT

Below you can find the full XSLT, where you will notice how the header is built up by taking over some parts of the input corpus.

The full set of resources can be found in the DARIAH lexical working group GitHub space (note the @xpath-default-namespace attribute, which ensures that all generated elements are placed in the TEI namespace):

<?xml version="1.0" encoding="UTF-8"?>

<xsl:stylesheet
   xmlns:xsl="http://www.w3.org/1999/XSL/Transform"    
   version="2.0" xmlns="http://www.tei-c.org/ns/1.0" 
   xpath-default-namespace="http://www.tei-c.org/ns/1.0">

<xsl:output method="xml" indent="yes"/>

<xsl:template match="/">
   <xsl:variable name="theRoot" select="."/>
   <xsl:variable name="folderName" select="'Entries'"/>
   <xsl:variable name="allLemmas"
      select="distinct-values(descendant::w/@lemma)"/>
   <xsl:for-each select="$allLemmas">
      <xsl:sort/>
      <xsl:result-document href="{$folderName}/{.}.xml" method="xml">
         <TEI>
            <teiHeader>
               <fileDesc>
                  <titleStmt>
                     <title>Concordance lexical entry for lemma: 
                        <xsl:value-of select="."/>
                     </title>
                  </titleStmt>
                  <publicationStmt>
                      <xsl:copy-of select="$theRoot/descendant::publicationStmt/distributor"/>
                      <xsl:copy-of select="$theRoot/descendant::publicationStmt/address"/>
                      <xsl:copy-of select="$theRoot/descendant::publicationStmt/availability"/>
                 </publicationStmt>
                 <sourceDesc>
                    <bibl>
                       <xsl:copy-of select="$theRoot/descendant::titleStmt/title"/>
                   </bibl>
                 </sourceDesc>
              </fileDesc>
            </teiHeader>
            <text>
               <body>
                  <entry>
                     <form type="lemma">
                        <orth>
                           <xsl:value-of select="."/>
                        </orth>
                     </form>
                     <xsl:for-each select="$theRoot/descendant::w[@lemma = current()]">
                        <cit>
                           <quote>
                              <xsl:copy-of select="parent::s"/>
                           </quote>
                        </cit>
                     </xsl:for-each>
                  </entry>
               </body>
            </text>
         </TEI>
      </xsl:result-document>
   </xsl:for-each>
</xsl:template>
</xsl:stylesheet>


GROBID Dictionaries: Experiments with the General Basque Dictionary (OEH) PDF

GROBID Dictionaries is a tool for structuring dictionaries (converting them from PDF to TEI XML) with a supervised machine learning approach (CRF models). Details are explained in Khemakhem et al. (2017), which is also the paper to cite in relation to GROBID Dictionaries.

At LexMC, I took part in the GROBID Dictionaries tutorial and workshop; Mohamed Khemakhem, the developer of GROBID Dictionaries, was our instructor. Here you can find the initial guidelines for the tutorial sessions.

After a tutorial on installing and running the tool using a Docker image prepared by Mohamed, we familiarized ourselves with the tool and with the annotation and training workflow, using test data samples stemming from a random paper dictionary (PDF version).

We then repeated the steps using our own data, in my case sample pages from a CC-BY-NC-SA licensed PDF version of the General Basque Dictionary “Orotariko Euskal Hiztegia” (Mitxelena & Sarasola 1986), which comprises more than 16,000 pages and, due to its structure, has to be considered a hard case for structured representation in XML.

OEH dictionary on the shelf and as PDF

I created the XML files to be manually annotated and then used as training data with the “create training data” function in GROBID Dictionaries. The annotation was then carried out using oXygen’s Author mode. The manually annotated XML files are put into a folder where GROBID will find them and use them for training; the commands for that are executed in the terminal (see guidelines).

In this screenshot, we see training data to be annotated in oXygen, and the terminal after running the command that created that XML from the PDF original, together with the other files necessary for GROBID.

The files created by GROBID Dictionaries are (1) the PDF-to-text conversion raw text file, (2) the features file, and (3) the XML file. The vertical features file contains, for each token of the raw text, a set of values derived from the presentational markup present in the PDF.

Screenshot: Features File and PDF

The cascaded parsing performed by GROBID follows this path:

  1. Dictionary segmentation: Isolates the dictionary page content (body) from headers, footers and irrelevant material present in the pdf2text output (so-called DictScrap).
  2. Dictionary body segmentation: Marks up the entries inside the body.
  3. Lexical entry segmentation: Marks up the blocks for form, sense and related entries (re) inside each entry.
  4. Further inner segmentation of the form and the sense blocks.
  5. […] (see Annotation Guidelines).

Cascaded segmentation in GROBID

Screenshot: Annotated segments of entries (form and sense blocks), next to the original PDF

For the OEH dictionary data, I trained GROBID on manually annotated XML files for the three models “Dictionary Segmentation”, “Dictionary Body Segmentation”, and “Lexical Entry Segmentation”. After every step, I re-trained GROBID with the annotated training data.

GROBID Dictionaries comes with a RESTful web service for a user-friendly execution of the annotation algorithm on the basis of the training performed beforehand; it is started from the Docker image shell prompt and is then available at localhost:8080.

We have found some issues that partly could be solved:

  • In one of the versions of the OEH PDF, the pages contained no headers and no footers, so nothing could be marked up as such; this produced an error (the expected headers were missing). Mohamed solved this by fixing the code in that respect.
  • The size of OEH entries ranges from three words on a single line to several pages. At the moment, GROBID will not accept entries that contain more than two page breaks.
  • Some special characters are not recognized by the PDF-to-text script, and they do not appear in the features file either. This is due to non-Unicode encodings of some special characters in some PDFs, and it should be solved, because these special characters are used as separators inside entries (e.g. at the beginning of each sense description). In the case of OEH, the inner segmentation of entries did not yield very good results, in spite of our having annotated about 100 entries. We think that recognizing the separating symbol (a diamond) could help significantly: in our case, sense blocks are very often preceded by a diamond symbol (see screenshot below), and square bullets and stars also appear with similar functions.

All in all, we now know how GROBID Dictionaries works. We have clearly seen that the precision of the output improves as more annotated training material is provided to the tool. In my case, I did not find a parsing error in the Dictionary Body Segmentation step until I had seen a large number of examples. After going back and annotating more material, I re-trained the tool, and the parsing error did not recur.

Screenshot: Noisy segmentation of entries (left), and PDF original (right)

We will now parse the whole dictionary (Dictionary Segmentation and Dictionary Body Segmentation), then isolate the headword and infra-headword strings and compare this list to another OEH lemma list produced beforehand by other means, in order to evaluate this step. We will then annotate more pages where needed (examples that contain noisy results) and re-evaluate the results, in an iterative process until we reach full precision.
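
Assuming the structured output is TEI with entry/form/orth elements like the entries discussed earlier in this post (an assumption about the output, not a documented GROBID interface), a small stylesheet in the spirit of the concordance experiment above could extract a sorted headword list for that comparison:

<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
   version="2.0"
   xpath-default-namespace="http://www.tei-c.org/ns/1.0">
   <!-- one headword per line, to be compared with the independently produced OEH lemma list -->
   <xsl:output method="text"/>
   <xsl:template match="/">
      <xsl:for-each select="//entry/form[@type = 'lemma']/orth">
         <xsl:sort/>
         <xsl:value-of select="normalize-space(.)"/>
         <xsl:text>&#10;</xsl:text>
      </xsl:for-each>
   </xsl:template>
</xsl:stylesheet>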

Then we will move on to the next step and annotate, re-train and evaluate until we get a noise-free result. As the dictionary PDF I want to structure has a very rich microstructure, the evaluation of GROBID’s performance for every step will take some time.

In the workshop, we also got familiar with the development workflow of GROBID Dictionaries, and we will certainly cause the developer more than one headache by writing tickets on GitHub.

Thanks for this very interesting workshop, and for your patience, Mohamed!