
Lexical resources for processing 18th-century French correspondence with NLP tools

Background

Electronic Enlightenment is a scholarly digital edition of letters and correspondence. The collection started with Voltaire and Rousseau, grew to include other Enlightenment thinkers and writers, and has since expanded into other eras, languages and domains. EE now contains more than 80,000 letters, involving more than 15,000 people as writers or recipients.


Access to EE is via institutional subscription managed by Oxford University Press. While downloads of the full texts remain behind the paywall, the plan is to make online search and exploration of this text collection freely available. In order to search historical French texts effectively, users need to be able to find inflected forms and variant spellings. How can we make that search possible?

I have been developing lexical resources to enable, test, refine and improve the automatic lemmatization and wordclass tagging of (mostly) eighteenth-century French correspondence in the Electronic Enlightenment collection. As part of this work, I gratefully accepted a place on the Lexical Data Masterclass, where my aim was to transform and enhance our lexical resources to make them more easily reusable.

There was also a secondary aim: to find out more about existing work in this area, and to bring our lexical information together with existing datasets, or with lexical information such as wordclasses and lemmas extracted from existing resources, to produce a re-usable, open-access lexical resource for historical French. The resulting lexicon could be a customized TEI document, shared with colleagues, used for tagging the texts, refined and improved through that process, and made available to other scholars.

I have been using natural language processing (NLP) tools to annotate the texts with dictionary headword (also known as lemma) and part of speech (or word class). Experiments with the widely-used programme TreeTagger (trained on 20th- and 21st-century French) have confirmed that there is, not surprisingly, a need for enhanced lexical resources to deal with the vocabulary found in the Electronic Enlightenment collections. While we could probably reach our short-term goals by some manual tagging, re-training the tools, and hacking the lexicon, we would prefer to create, or contribute to the creation of, high-quality, standards-conformant, and re-usable lexical resources.

Initial research has shown that many of the digital texts of works in French from the 17th and 18th centuries are based on editions with modernized spellings, and as a result most software applications for working with French texts have not been adapted to the original spellings of these periods. We have also not yet been able to find computer-usable lexicons with older variants of word forms.

Most freely available NLP software and lexical resources that we have discovered are, like TreeTagger, designed for modern (21st-century) French. The texts that we are working with exhibit the grammar, orthography and lexis of (mostly) learned writing of the eighteenth century, but also some variation from the standard language as found in published works. The characteristics of the texts include the following:

  • non-standard spelling, including omitted accents and doubled consonants (“tolerance”, “tollerance” for “tolérance”)
  • quotations in a number of languages, especially Latin and Greek
  • writing by non-native speakers (e.g. Hobbes)
  • spelling mistakes (“convervation”)

In some more extreme cases, there is phonetic writing. This seems to be typical of women writers who received only a partial formal education, for example Catherine Dorothée de Saint-Pierre, who writes with non-standard orthography, capitalization, tokenization, and often with little or no punctuation, e.g. “Etant da cor je me rendis avecque mon nésessere che luy je presentay mon premier cartieé de pension”. These forms might prove too idiosyncratic to merit inclusion in the lexicon.

We have made a start on this project. Starting from the initial (imperfect) tagging with TreeTagger, I made a wordlist to deal with the most common erroneous forms in the output, such as forms of the verbs ‘être’ and ‘avoir’: some 472 words in all. TreeTagger was then re-run using the additional customized lexicon, with much improved results. The resulting corpus with POS tags and lemmas was then put into CQPweb, with good results for improved searching for the words which had been corrected. The example below shows a search for all forms of the verb ‘pouvoir’.
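In CQPweb, such a search is written as a CQP query over the lemma annotation. A minimal illustration, assuming the corpus has been indexed with the lemma attribute produced by the tagging:

    [lemma="pouvoir"]

This matches every token whose lemma is ‘pouvoir’, so that older spellings corrected via the lexicon (for instance ‘pouvoit’ for modern ‘pouvait’) are retrieved alongside the standard inflected forms.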

The customized wordlist, or lexicon, has a simple form: a tab-separated file with the original word form as attested in the corpus, a modernized version of that word form, a POS tag, and a lemma. This is the format required as input to TreeTagger.
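A few lines in this format (tab-separated, with entries drawn from the TEI fragment shown later in this post) look like this:

    tems	temps	NOM	temps
    tolerence	tolérance	NOM	tolérance
    tolerans	tolérants	ADJ	tolérant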

There are two problems with this format. Firstly, it will not be adequate for dealing with more complex forms of polysemy, or for adding additional information such as senses, translations, or citations. Secondly, since we want this lexical data to be re-usable, multipurpose, and shared, we need to have it in a standardized, well-documented format that others can easily understand, process and transform into other formats. People have probably addressed this problem before, but their lexical resources are not being shared and cannot be re-used. We don’t want to make that mistake with our lexicon. In order to share data, expertise and know-how, we want to make a re-usable dictionary of variant forms in 16th- to 19th-century French in appropriate formats.

Therefore, I want to find out about using the TEI dictionary format as a better model for a sharable, sustainable and re-usable lexicon.
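To give a flavour of what the TEI format would allow, an entry could eventually be extended with sense-level information. The fragment below is a purely hypothetical sketch using standard TEI dictionary elements (sense, def, cit, quote); neither the sense division nor the citation is part of the current lexicon:

     <entry>
        <form type="lemma">
           <orth>temps</orth>
           <gramGrp>
              <pos>NOM</pos>
           </gramGrp>
        </form>
        <sense n="1">
           <def>time</def>
           <cit type="example">
              <quote>le tems est précieux</quote>
           </cit>
        </sense>
     </entry>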

We will also be interested in pursuing a similar project with German, in order to process a large collection of the letters of Immanuel Kant, and with the large numbers of letters in English and Italian already in the EE collections.

What happened in the workshop

The Lexical Data Masterclass has proved extremely valuable in a number of ways. Firstly, I have been able to transform and enhance the wordlist, turning it into a TEI dictionary, thanks to having the time to spend on the project and to the help of Europe’s leading experts in the field. The results are shown below.

Just as valuable has been the help that the organisers, especially Laurent Romary, have given me in finding out about previous work and existing resources in this area. Laurent put me in touch with Gilles Souvay at ATILF in Nancy, who worked on the ANR/DFG Presto project (http://presto.ens-lyon.fr/) and the European project Impact (http://www.impact-project.eu/), and who created the LGeRM lexicon for 16th- and 17th-century French, an adaptation of the Morphalou dictionary with older words and word forms. Gilles has sent me a copy of the dictionary, and I am working on adapting it to the same format as my dictionary, converting the POS tags to the TreeTagger tagset, and merging the resources. While extremely valuable for lemmatization, LGeRM doesn’t have POS tags for all inflected forms, so there is work to do to merge and enhance the dictionary!

A third area in which the workshop was valuable was in finding out about the work of the other participants and making new academic contacts, which will certainly prove extremely useful. Finally, and by no means least, the decision to locate the workshop at the Berlin-Brandenburg Akademie der Wissenschaften was an excellent one. It gave me the opportunity to talk to researchers here who were not directly involved in the workshop about collaborating on aspects of my project, such as putting the EE corpus onto the BBAW platforms for use with tools like DStar and DiaCollo, and about possible collaborations with CLARIN partners to share and spread expertise in adapting NLP tools to work with historical varieties of languages.

The Results

Now, what you’ve all been waiting for.

I was able to use XSLT to transform my custom lexicon into the TEI dictionary format. A fragment of this is shown below:

     <entry>
        <form type="lemma">
           <orth>temps</orth>
           <gramGrp>
              <pos>NOM</pos>
           </gramGrp>
        </form>
        <form type="inflected">
           <form type="variant">
              <orth norm="temps">tems</orth>
           </form>
        </form>
     </entry>
     <entry>
        <form type="lemma">
           <orth>tête</orth>
           <gramGrp>
              <pos>NOM</pos>
           </gramGrp>
        </form>
        <form type="inflected">
           <form type="variant">
              <orth norm="tête">tete</orth>
           </form>
        </form>
     </entry>
     <entry>
        <form type="lemma">
           <orth>théatre</orth>
           <gramGrp>
              <pos>NOM</pos>
           </gramGrp>
        </form>
        <form type="inflected">
           <form type="variant">
              <orth norm="théatre">theatre</orth>
           </form>
        </form>
     </entry>
     <entry>
        <form type="lemma">
           <orth>tolérable</orth>
           <gramGrp>
              <pos>ADJ</pos>
           </gramGrp>
        </form>
        <form type="inflected">
           <form type="variant">
              <orth norm="tolérable">tolerable</orth>
           </form>
        </form>
     </entry>
     <entry>
        <form type="lemma">
           <orth>tolérance</orth>
           <gramGrp>
              <pos>NOM</pos>
           </gramGrp>
        </form>
        <form type="inflected">
           <form type="variant">
              <orth norm="tolérance">tolerance</orth>
           </form>
        </form>
        <form type="inflected">
           <form type="variant">
              <orth norm="tolérance">tolerence</orth>
           </form>
        </form>
        <form type="inflected">
           <form type="variant">
              <orth norm="tolérance">tollerançe</orth>
           </form>
        </form>
        <form type="inflected">
           <form type="variant">
              <orth norm="tolérance">tollérance</orth>
           </form>
        </form>
        <form type="inflected">
           <form type="variant">
              <orth norm="tolérance">tollerence</orth>
           </form>
        </form>
     </entry>
     <entry>
        <form type="lemma">
           <orth>tolérant</orth>
           <gramGrp>
              <pos>ADJ</pos>
           </gramGrp>
        </form>
        <form type="inflected">
           <form type="variant">
              <orth norm="tolérants">tolerans</orth>
           </form>
        </form>
        <form type="inflected">
           <form type="variant">
              <orth norm="tolérants">tolérans</orth>
           </form>
        </form>
        <form type="inflected">
           <form type="variant">
              <orth norm="tolérants">tolerants</orth>
           </form>
        </form>
        <form type="inflected">
           <form type="variant">
              <orth norm="tolérant">tolerant</orth>
           </form>
        </form>
        <form type="inflected">
           <form type="variant">
              <orth norm="tolérants">tollérans</orth>
           </form>
        </form>
     </entry>

Before the XSLT was run, it was necessary to put the lexicon into a basic but well-formed XML format, which XSLT requires as input. Then, as well as developing a re-usable XSLT script for this transformation, I have also written an XSLT script for transforming the dictionary back into the TreeTagger format. This is not entirely circular: it means that additional lexical information, such as citations, can be added to the TEI version, which will be the master dictionary, and versions can be extracted in different formats for various purposes when necessary. Below is the XSLT used for the main conversion of the flat content to TEI:

<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xpath-default-namespace="http://www.tei-c.org/ns/1.0"
    version="2.0">
    
  <xsl:output method="xml" indent="yes"/>
  
  <xsl:template name="main" match="/TEI">
  <TEI xmlns="http://www.tei-c.org/ns/1.0">
    <teiHeader>
      <fileDesc>
        <titleStmt>
          <title>Dictionnaire de formes de français du dix-septième siècle</title>
        </titleStmt>
        <publicationStmt>
          <p>Work in progress</p>
        </publicationStmt>
        <sourceDesc>
          <p>Born digital</p>
        </sourceDesc>
      </fileDesc>
    </teiHeader>
    <text>
      <body>
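    <!-- group the flat input entries by lemma: one TEI entry is emitted per lemma -->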
        
    <xsl:for-each-group select="body/entry" group-by='lemma'>
     
      <entry>
        <form type="lemma">
        <orth><xsl:value-of select="current-grouping-key()"/></orth>
            
           <gramGrp>
              <pos><xsl:value-of select="pos"/></pos>
           </gramGrp>
        </form>
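        <!-- emit one inflected/variant block for each attested original form in this group -->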
        <xsl:for-each select="current-group()">
          
        <form type="inflected">
          <form type="variant">
            <orth> <xsl:attribute name="norm"><xsl:value-of select="norm"/></xsl:attribute><xsl:value-of select="orig"/></orth>
          </form>
        </form>
        </xsl:for-each>
      </entry>
    </xsl:for-each-group>
      </body></text></TEI>
  </xsl:template>
</xsl:stylesheet>
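
For reference, the flat but well-formed XML input that this stylesheet expects can be reconstructed from its XPath expressions: a TEI root whose body contains one entry row per attested form, with orig, norm, pos and lemma fields. An illustrative fragment, using the same entries as the tab-separated examples above:

<TEI xmlns="http://www.tei-c.org/ns/1.0">
  <body>
    <entry>
      <orig>tems</orig>
      <norm>temps</norm>
      <pos>NOM</pos>
      <lemma>temps</lemma>
    </entry>
    <entry>
      <orig>tolerence</orig>
      <norm>tolérance</norm>
      <pos>NOM</pos>
      <lemma>tolérance</lemma>
    </entry>
  </body>
</TEI>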

The LGeRM dictionary was encoded in XML in a consistent way, but it was not entirely well-formed, and it was in a different format from mine. So I also transformed it into the same format as mine, so that the two can be merged, or at least so that TreeTagger output can be derived from it.
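
The reverse transformation mentioned above, from the TEI master dictionary back to the tab-separated TreeTagger format, can be sketched along the following lines. This is a minimal reconstruction based on the entry structure shown earlier, not the exact script used:

<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xpath-default-namespace="http://www.tei-c.org/ns/1.0"
    version="2.0">

  <!-- plain-text output: one tab-separated line per attested variant form -->
  <xsl:output method="text"/>

  <xsl:template match="/">
    <xsl:for-each select="//entry">
      <!-- the lemma and POS are taken from the form type="lemma" block -->
      <xsl:variable name="lemma" select="form[@type='lemma']/orth"/>
      <xsl:variable name="pos" select="form[@type='lemma']/gramGrp/pos"/>
      <xsl:for-each select="form[@type='inflected']/form[@type='variant']/orth">
        <!-- columns: original form, modernized form, POS tag, lemma -->
        <xsl:value-of select="concat(., '&#9;', @norm, '&#9;', $pos, '&#9;', $lemma, '&#10;')"/>
      </xsl:for-each>
    </xsl:for-each>
  </xsl:template>
</xsl:stylesheet>

Keeping the TEI document as the master and regenerating the flat format on demand means that corrections and additions only ever need to be made in one place.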

Next steps

There is still some work to do in dealing with some of the trickier problems of polysemy; merging lexicons and tagsets; and then the iterative process of re-tagging the corpus, analysing errors and improving the lexicon.

When the corpus is good enough to use, we’ll make it available via CQPweb from Oxford, and, we hope, on other platforms too, and we’ll make the lexicon available from the Oxford Text Archive.

Then we’ll start to work on the English, German and Italian texts, and to talk to others about sharing lexical resources, corpora, tools and expertise.