Toward TEI Lex-0 Publisher: A Workshop Announcement

When: December 16-17th, 2019
Where: DARIAH Coordination Office, Germaine-Tillion-Saal (7th Floor), Friedrichstr. 191, Berlin

Instructors: Magdalena Turska and Wolfgang Meier, eXist Solutions
Sponsor: Belgrade Center for Digital Humanities
Local Organizer: DARIAH WG “Lexical Resources”

Goal

The goal of the two-day workshop/hackathon is to:

  • introduce members of the DARIAH WG “Lexical Resources” and other interested parties to TEI Publisher (https://teipublisher.com/index.html), a highly customizable, open-source publication toolbox based on the TEI Processing Model;
  • kickstart the development of the TEI Lex-0 Publisher, a generic publication framework for dictionaries and other lexical data; and
  • build a pool of knowledge as a starting point for creating good documentation and training materials on TEI Lex-0 Publisher for DARIAH-Campus.

Background

The DARIAH WG “Lexical Resources” is the spiritus movens behind the community-based initiative to develop TEI Lex-0, a stricter subset of TEI, to be used specifically for encoding dictionaries, pooling lexical data together and performing lexical research across national and linguistic boundaries.

During our Lexical Data Masterclasses, we established that our community very much needs a generic TEI Lex-0 publication framework:

  1. in the educational context, when we teach the principles of TEI Lex-0 and best practices in encoding lexical data, we need an easy-to-use publication platform to show immediately the affordances of well-structured lexical data; and
  2. in the context of scholarly editing projects, we need, in the long term, a solution which will make it significantly easier for individual scholars and/or smaller, under-resourced institutions to publish high-quality editions of historical dictionaries and other types of lexical data, with functionalities which will include, among others, basic and advanced search, faceted browsing and geobrowsing.

Who can apply?

The workshop is aimed at developers and/or researchers with basic technical skills.

Familiarity with TEI, XML, XPath, HTML, CSS and JavaScript will be assumed.

How to participate

The workshop is free of charge, but the travel expenses need to be covered by the participants or their institutions.

The number of available places is very limited.

To apply, please send an email with a short statement of interest together with information on your academic and technical background to both ttasovac@humanistika.org and laurent.romary@inria.fr by the end of the day on Friday, December 6th.

Notification of acceptance: Monday, December 9th.

Even if you don’t end up attending, let us know if you would, in principle, be interested in the further development of TEI Lex-0 Publisher. Also feel free to contribute to our Features wishlist. (You need to be signed in with your GitHub account in order to change this document.)

Schedule

Monday, December 16th (10:00 – 18:00)

  • Introduction to the TEI Processing Model and the architecture of TEI Publisher
  • The specificity of lexical data: formalizing and prioritizing our user needs and translating them into feature requests
  • Hands-on work

Tuesday, December 17th (09:00 – 16:00)

  • Hands-on work

The more preparatory work on formalizing and prioritizing feature requests we do before the workshop, the better. To contribute to the TEI Lex-0 Publisher Features Wishlist, go to http://tacheles.humanistika.org/s/SyaZ6HR2B. To contribute to the document, you will need to sign in using your GitHub account.

A TEI XML Version of the Historical Thesaurus of English Database

My project during the Lexical Data Masterclass was to trial recreating a section of the Historical Thesaurus of English in TEI-compliant XML, to which I could potentially add further information not presently contained in the Thesaurus database. In particular, I’m interested in adding words which collocate with word senses throughout the lifetime of those senses. For those unfamiliar with lexical collocation, the theory is that a word’s meaning is (at least in part) constituted by the context in which it is used. This means that you form an idea of what a word means partly through the other words which are frequently used alongside it. This is not the whole story – if I say the word ‘penguin’ your brain will likely return to you a mental image of a penguin and perhaps its associated properties, not the list of words you’ve heard in sentences next to ‘penguin’! – nonetheless, collocates may well be part of how we build and understand meaning. Many linguists are busily engaged in exploring whether collocation can be used to map the ways in which concepts are associated with each other.

For my project, the collocates rapidly became a secondary consideration after the challenge of taking a relational database and turning it into TEI-compliant XML. For some, this may be an hour’s work. However, my knowledge of TEI XML and XSLT was limited, so expanding it was the main purpose for me of attending LexMC.

The current TEI guidelines are exceptional in providing guidance for marking up dictionary data. They have to be twisted to some extent to fit the style of data contained in a thesaurus. Categories in the Historical Thesaurus are divided into semantic fields ranging from high-level concepts such as ‘The world’ and ‘Society’ to intricate detail such as ‘Animal body, general parts, consisting of segments’. These are arranged into a hierarchy by their relationship to each other, with the most fine-grained detail contained in ‘subcategories’ nested inside full categories. The data in each semantic field may be distributed across multiple ‘parts of speech’ (POS, i.e. noun, adjective, verb, adverb, etc.). Within each POS division are the word senses which have expressed that meaning, each accompanied by the date-range for which the sense has been attested as active. Subcategories (as opposed to the full, mainline categories) also belong to specific POS divisions.

My provisional method for encoding this data in TEI XML was to treat each semantic field, with its associated parts of speech, as an entry. Each entry contains a <form> element with a ‘category’ type attribute, and this element contains the semantic field’s position in the hierarchy as defined by the category numbering system used in the Historical Thesaurus. Belonging to these fields are the POS divisions, each recorded using a <sense> element containing a <def> for the category heading and a <gramGrp> for the POS, followed by a number of <form type="lexeme"> entries for the word senses themselves. Each lexeme has an <orth> element giving its written form and then two <date> elements for the earliest and latest citation dates. This stretches the TEI dictionary guidelines considerably, treating the category as equivalent to a dictionary headword and the lexemes inside it as ‘forms’ of the concept which the category represents. There is no objective base to a concept beyond the sequence of category headings which lead to it (which is, itself, not objective – a fascinating subject, but let’s not get into it right now!); instead, HT semantic fields exist as positions in a hierarchy labelled with a positioning code, and so this code is perhaps the closest thing to a headword which exists for a category.
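To make this concrete, here is a minimal sketch of what such a category entry might look like (the category code, heading and lexeme are illustrative rather than actual Thesaurus data, and the @type values on <date> are my own labels):

<entry>
   <form type="category">
      <orth>01.02.03</orth>
   </form>
   <sense>
      <def>Animal body, general parts</def>
      <gramGrp><pos>noun</pos></gramGrp>
      <form type="lexeme">
         <orth>limb</orth>
         <date type="earliest">1000</date>
         <date type="latest">2000</date>
      </form>
   </sense>
</entry>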

[Figure: Extracting category numbering from ‘category’ tables to replicate the Historical Thesaurus category codes]
[Figure: Creating further structure for the Thesaurus categories and their entries]

This structure could then be populated automatically using sample data from the database of the Historical Thesaurus, selected via an SQL query and output as XML. The intermediate XML form presented database rows as tables where each cell was given as a <column> element with the column heading as a name attribute. Two files were used here – one containing category details and the other lexeme details. With generous assistance from Laurent Romary, it was possible to create an XSLT script which read the tables for categories and lexemes into variables. Important details could then be plucked from these variables to create entries in the TEI XML output document. I spent more time than I would like to admit experimenting with reproducing the category numbering system via XSLT selection of the appropriate columns.
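As an illustration of the approach (a sketch only: the row/column element names, column headings and file names are assumptions, since the actual intermediate export and stylesheet are not reproduced here), the XSLT might look like this:

<xsl:stylesheet version="2.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
   <!-- read the two intermediate database exports into variables -->
   <xsl:variable name="categories" select="document('categories.xml')//row"/>
   <xsl:variable name="lexemes" select="document('lexemes.xml')//row"/>

   <xsl:template match="/">
      <body>
         <xsl:for-each select="$categories">
            <entry>
               <!-- pluck the category code and heading from the named columns -->
               <form type="category">
                  <orth><xsl:value-of select="column[@name = 'catnum']"/></orth>
               </form>
               <sense>
                  <def><xsl:value-of select="column[@name = 'heading']"/></def>
                  <!-- pull in the lexemes whose category id matches this row -->
                  <xsl:for-each select="$lexemes[column[@name = 'catid'] = current()/column[@name = 'id']]">
                     <form type="lexeme">
                        <orth><xsl:value-of select="column[@name = 'word']"/></orth>
                     </form>
                  </xsl:for-each>
               </sense>
            </entry>
         </xsl:for-each>
      </body>
   </xsl:template>
</xsl:stylesheet>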

[Figure: An example of using the semantic search capabilities of corpus.byu.edu/eebo to obtain collocate data for lexemes]

To retrieve some preliminary collocation detail for the sample categories, I used the version of Early English Books Online (EEBO-TCP, phase I) which was annotated with semantic labels by the SAMUELS project. The data is currently available in a corpus interface on Mark Davies’ corpus site and will be deposited in raw text form with the Oxford Text Archive. This was another proof-of-concept trial rather than a detailed exploration of the data, but it was successful enough to give me hope that a more complete and carefully considered version is an achievable goal.

[Figure: A proof-of-concept trial at adding collocate details to lexemes in the XML version of the Historical Thesaurus data]

Overall, the result of these efforts was a serviceable foundation for a TEI XML version of the Historical Thesaurus. What most interested me about the process was the requirement to think about and experiment with the structure of the data. I am still not satisfied with my representation of the resulting data layers in XML, although discussion at the LexMC closing symposium gave me further possibilities to test. Further work would also need to engage more extensively with the Thesaurus system for representing dates of lexeme activity. There is considerable complexity to this information, and for the purposes of the project I employed very basic (and thus, slightly misleading) ‘earliest’ and ‘latest’ attestation dates as a placeholder for complete activity data. There would still be a fair amount of work ahead to do this, but the Lexical Data Master Class really stretched the way I think about the data, how it can be structured, and the alternatives for its presentation.

I would like to thank the organisers and participants of the workshop for their feedback, and especially Laurent Romary, Martin Wynne, and Emrah Özcan for saving me from drowning in XSLT issues.

Slavic Corpora Terminology Dictionary in TEI

The project

The corpus linguistics research group at the Institute of Western and Southern Slavic Studies (University of Warsaw) has recently started a project collecting Slavic corpus terminology with definitions, in order to be able to investigate this type of vocabulary.

The collected data will be stored in the form of a TEI-encoded dictionary. Work has already started on Polish, Czech, Bulgarian, Slovene and Slovak terms, and in the near future we will add at least Serbian, Croatian and Russian as well. English translations will also be provided, but only within the entries as equivalents for future comparative studies.

I came to the workshop with sample entries prepared, and during the week I reorganised and developed them a little, thanks to the classes and discussions with trainers and other workshop participants. Below you can see a sample entry.

Our goal at the beginning was to prepare dictionaries for each language in which entries would be linked across the languages, so that we could later refer from one language to another. However, it turned out that we would first need to prepare an ontology to which our data could be linked. Eventually we decided that it made no sense to start the research from an ontology, because it would limit our investigation. There will therefore be separate dictionaries, but every non-Polish entry will be accompanied by Polish and English equivalents.

A list of Polish and English terms is being prepared that will serve as a starting point for the dictionary editors, so that they can always check what they should search for in the languages covered. However, the research is not going to be limited strictly to those lists. When the work on the dictionaries is done, we will extract both the Polish and English equivalents and prepare glossaries.

Changes in the TEI file

During the workshop the TEI header was completed, as it had been quite basic before the masterclass and contained very little information. We supplied the file with licence information and a publisher, and corrected the <respStmt> information to give the names of editors and translators separately, as they may not be the same people. We also added xml:id attributes to the entries, which will be useful later when preparing cross-references. The <gramGrp> element was corrected as well: at the beginning it contained only part-of-speech information, and we decided to add gender values because they are important for Slavic languages. Finally, the <form> types had all mistakenly been labelled as lemmas; we now distinguish lemmas from phrases.

Sample entry

Our entries contain the following information (a sample entry is sketched below the list):

  • <form> – lemma or phrase
  • <gramGrp> – <pos> and <gen>
  • <sense> with multiple <def> elements (with counters) to provide multiple definitions with their translations
  • <cit type="translation"> and <cit type="original"> within the <def> element, because the definitions are going to be translated into Polish and each pair will share one common piece of bibliographic information
  • <cit type="example"> for usage examples
  • <xr type="syn"> for synonyms
  • <cit type="translation"> for Polish and English equivalents
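Putting these elements together, a complete entry might look roughly like this (a sketch with illustrative Slovene content; the xml:id convention, the pos/gen values and the definition text are placeholders):

<entry xml:id="sl-besedilni-korpus" xml:lang="sl">
   <form type="phrase">
      <orth>besedilni korpus</orth>
   </form>
   <gramGrp>
      <pos>noun</pos>
      <gen>m.</gen>
   </gramGrp>
   <sense>
      <def n="1">
         <cit type="original"><quote>…definition quoted from a source…</quote></cit>
         <cit type="translation"><quote xml:lang="pl">…Polish translation of the definition…</quote></cit>
      </def>
      <cit type="example"><quote>…usage example…</quote></cit>
      <xr type="syn"><ref>…synonym…</ref></xr>
      <cit type="translation">
         <quote xml:lang="pl">korpus tekstowy</quote>
         <quote xml:lang="en">text corpus</quote>
      </cit>
   </sense>
</entry>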

As the dictionary editors working on the project have little knowledge of TEI, and most of them work in XML Copy Editor rather than Oxygen XML Editor, they are going to use sample entries as forms and mainly fill them in. Such template files were therefore prepared for every language we deal with; they differ only in the parts where the xml:lang attribute is given. Only the Polish subdictionary, since its definitions will not be translated, differs slightly in its <sense> structure: there is no <cit type="translation"> within <def>, only <cit type="original">, and obviously there are no foreign-language <cit type="example"> elements either.

Introduction to CSS and XPath

During the workshop I also learned some CSS and XPath.

Below you can find a sample entry rendered with CSS. We will use CSS files while editing entries in order to check the work progress easily, especially because XML Copy Editor does not provide much help for TEI editing.

Since I also learned some XPath during the workshop, I can now run queries on my files to check our work and to provide the team with the information needed on the project’s progress.

For instance, the query //entry[.//gen="n."]//orth shows the headwords (orths) of the entries whose grammatical information includes the gender value "n." (neuter), while //form[@type="phrase"]//orth allows us to find the headwords that are phrases (not lemmas), returning e.g. besedilni korpus but not označevanje.

Further work

What we still have to discuss is what bibliographic information we are going to preserve within the entries, whether to present definitions in both original and translated form or only as translations (while preserving both in the TEI file), and how to present the data on a website.

Unfortunately, I did not learn any XSLT during the workshop, as I attended other wonderful and useful classes held at the same time (it would have been much better if we had been able to attend both). However, thanks to one of the participants, Boris, I was able after the workshop to prepare an XSLT stylesheet that sorts the entries in our dictionary into alphabetical order. I also decided to use the XSL stylesheet for TEI dictionaries prepared by Michal Boleslav Měchura to transform the XML file into a nice HTML file. Of course, we will have to adapt the stylesheet to the needs of the project, so there is still much learning and XSLT work ahead of us.
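I cannot reproduce those stylesheets here, but a minimal sketch of an alphabetical-sorting transformation (assuming the entries sit directly under <body> in the TEI namespace; the language code used for collation is illustrative) could look like this:

<xsl:stylesheet version="2.0"
   xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
   xmlns:tei="http://www.tei-c.org/ns/1.0">
   <!-- identity transform: copy everything unchanged by default -->
   <xsl:template match="@* | node()">
      <xsl:copy>
         <xsl:apply-templates select="@* | node()"/>
      </xsl:copy>
   </xsl:template>
   <!-- re-emit the entries sorted by their headword -->
   <xsl:template match="tei:body">
      <xsl:copy>
         <xsl:apply-templates select="tei:entry">
            <xsl:sort select="tei:form/tei:orth" lang="sl"/>
         </xsl:apply-templates>
      </xsl:copy>
   </xsl:template>
</xsl:stylesheet>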

Moreover, we discussed with another workshop participant, Nikolche, the possibility of cooperating on the project; his colleagues could contribute the Macedonian part of the dictionary.

I am happy that I could attend the DARIAH Lexical Data Masterclass in lovely Berlin, learn a lot, meet other people working on their fabulous projects, and exchange ideas. Thank you all!

TEI-encoding of classical Arabic grammatical sources

The project

The major grammatical treatises of classical Arabic, widely investigated by scholars, are unfortunately accessible online mostly in the form of digitized or scanned copies of modern printed editions; standards-compliant digital collections and semantic or linguistic annotations of their contents are not available. For this reason, I have started working on a project aiming at the creation of a digital collection of TEI-encoded classical Arabic texts, marking up the relevant information, such as the linguistic terminology, the grammatical sources employed by grammarians, and the references.

For the masterclass, my objective was to design a general schema I could apply to a number of texts, and to test it on the first of my sources, the Kitāb Sībawayhi (8th cent.).

Provisional TEI-schema (before the masterclass)

Before the masterclass, I had been working on a provisional TEI schema for my texts, designed as in this example:

<!-- General book structure: -->
<pb ed="Harun" n="I:12"/>
<div n="I:1" type="chapter_bab">
   <head>هذا باب علم الكلم من العربية</head><lb/>
   <!-- Specific information: -->
   <p>[…]
      <ref type="grammar" ana="accusative">النصب</ref>
      […]
      <ref type="source" ana="Quran" n="II:275">فمن جاءه موعظة من ربه</ref>
      […]</p>
</div>

The first tags refer to the basic elements of the book, namely the page numbers as per the edition, the number of the chapter (n="#Volume:#Chapter"), and the title of the chapter (<head>[…]</head>).

The second section refers to the elements to be highlighted in the text: grammatical terms and external sources. Both had been encoded using the tag <ref>, with additional information provided in its attributes. The attribute @type served to distinguish between grammar and sources, while the attribute @ana provided either the translation of the grammatical term or the name of the source. In the latter case, the additional attribute @n gave the number of the sūra and the verses quoted.

What happened in the workshop

– The tags

The very first day, with the help of the instructors, and of Laurent Romary in particular, I completely redesigned the schema, eventually opting for more fitting tags. I kept the general tags unchanged (those for chapters, page numbers, and titles), and used <term> and <seg> for grammatical terms and external sources respectively:

<term type="grammar" corresp="entries.xml#naSb">النصب</term>

<seg corresp="bibl.xml#Q_II_275">فمن جاءه موعظة من ربه</seg>

Both tags now carry a @corresp attribute, which points to external files by means of a given @xml:id.

In the case of the grammatical terms, the corresponding file “entries.xml” is structured as a dictionary:            

<div>
   <entry xml:id="naSb">
      <!-- <gramGrp><pos/></gramGrp> -->
      <form>
         <orth>نصب</orth>
      </form>
      <sense>accusative</sense>
   </entry>

   […]

</div>

The file “bibl.xml”, by contrast, contains a list of references:

<listBibl>
   <head>Reference from the Quran</head>
   <bibl xml:id="Q_II_275"><title>Quran</title><citedRange unit="sura:verse">II:275</citedRange></bibl>
   […]
</listBibl>
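One benefit of this design is that the cross-file links can be resolved mechanically. As a hedged sketch (assuming the documents are in the TEI namespace and that an HTML rendering is wanted; the span markup is illustrative), an XSLT template could pull each term’s gloss out of “entries.xml” via its @corresp value:

<xsl:template match="tei:term[@type = 'grammar']">
   <!-- @corresp holds e.g. 'entries.xml#naSb': split it into file and id -->
   <xsl:variable name="file" select="substring-before(@corresp, '#')"/>
   <xsl:variable name="id" select="substring-after(@corresp, '#')"/>
   <span class="term">
      <xsl:value-of select="."/>
      <xsl:text> (</xsl:text>
      <!-- look the entry up in the external file and print its sense -->
      <xsl:value-of select="document($file)//tei:entry[@xml:id = $id]/tei:sense"/>
      <xsl:text>)</xsl:text>
   </span>
</xsl:template>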

– The TEI Header

Another valuable addition to my work was the TEI header, whose importance I had never really considered, but which is a fundamental element when it comes to providing information on the setting and development of the project. I worked on drafting it during the masterclass, taking notes on how to represent the information I want to add:

<TEI xmlns="http://www.tei-c.org/ns/1.0">
  <teiHeader>
    <fileDesc>
      <titleStmt>
        <title>title of the project (/current document)</title>
        <funder>source of funding for the project</funder>
        <author>author of the project</author>
        <!-- <respStmt>
          <resp>Responsible for the project</resp>
          <name/>
        </respStmt>
        (Here goes the info on the person who works on it; the field is not necessary if the resp coincides with the author.) -->
      </titleStmt>
      <publicationStmt>
        <!-- <p>Publication information of the source</p> -->
        <distributor>body under which you publish the project</distributor>
        <authority>(release authority) the name of the person or agency responsible for making the work available, other than a publisher or distributor</authority>
      </publicationStmt>
      <sourceDesc>
        <p>Information about the source (i.e. a bibliographical reference if it is a book)</p>
      </sourceDesc>
    </fileDesc>
    <encodingDesc>
      <!-- <appInfo>
        <application ident="TEI_fromDOC" version="…"/>
      </appInfo>
      (Info on the software used for the conversion of texts: doc to TEI, for instance.) -->
      <tagsDecl>
        <namespace name="http://www.tei-c.org/ns/1.0">
          <tagUsage gi="NameOfTheTag">declaration of how tags have been used throughout the project (= specific purposes the tag has been used for)</tagUsage>
        </namespace>
      </tagsDecl>
    </encodingDesc>
    <profileDesc>
      <langUsage>
        <language ident="en" rendition="rtl/ltr">language</language>
        <!-- description of the languages appearing in the project -->
      </langUsage>
    </profileDesc>
    <revisionDesc>
      <change>= version, meaning the big stages/phases of the project</change>
    </revisionDesc>
  </teiHeader>
</TEI>

Next steps

So far, I have encoded the Kitāb Sībawayhi only partially, so the next immediate step is to complete the task and apply the same TEI-schema to the other textual resources I want to include in my project.

Also, I will have to get a little more familiar with XSLT, which I started working on during the masterclass in order to have a visual representation of the work I was doing, but without yet having a clear idea of what exactly I want to produce as a final output.

Besides that, there is still some work to do in deciding what other information would be important to encode in each text, and in improving the lexicon.

Special thanks

To the whole #LexMC18 team, both instructors and participants, and especially to Laurent and Toma!

BTB – WordNet: From LMF to TEI with XSLT

Why do we need this transformation?

The short answer is visibility, accessibility and reuse of our data. WordNet is a language resource used in various NLP tasks, and we want researchers to have access to the Bulgarian WordNet so they can run experiments and reproduce our results. In the LMF standard the data is not very human-readable, and it is divided into two sibling XML elements – LexicalEntry and Synset. The first holds the information about the lemma, the POS, and a few ID numbers. The second holds a confidence score, the POS (again), more IDs, and the Definition and Examples, with information about the confidence, language and source of the example sentences.

How we did it

First we had to come up with a scheme that could hold all the source information in TEI. We transferred as much as possible between equivalent XML tags (<tag>) and attributes (@), such as <LexicalEntry> to <entry> and @language to @xml:lang. Then we looked at what was left and how to represent it in TEI. The LMF source had more attributes than TEI allows us to use, so the decision was to transfer those attributes as tags; for example, @pos became <pos> and @ili became <idno type="ili">. Once the transfer scheme was done, we had to set up our XSLT stylesheet. This was pretty much standard, but we needed to use xsl:key to link every <LexicalEntry> to its corresponding <Synset> via the attributes @synset and @id, so that the transformation templates would apply correctly.
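The core of the stylesheet might be sketched as follows (attribute names follow the common WordNet-LMF serialization – @writtenForm, @partOfSpeech, @synset, @ili – and may differ slightly from our actual source):

<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
   <!-- index every Synset by its @id so each LexicalEntry can reach its senses -->
   <xsl:key name="synset-by-id" match="Synset" use="@id"/>

   <xsl:template match="LexicalEntry">
      <entry xml:lang="bg">
         <form type="lemma">
            <orth><xsl:value-of select="Lemma/@writtenForm"/></orth>
         </form>
         <!-- attributes promoted to elements, as described above -->
         <pos><xsl:value-of select="Lemma/@partOfSpeech"/></pos>
         <!-- follow the @synset attribute to the matching Synset -->
         <xsl:for-each select="key('synset-by-id', Sense/@synset)">
            <sense>
               <idno type="ili"><xsl:value-of select="@ili"/></idno>
               <def><xsl:value-of select="Definition"/></def>
            </sense>
         </xsl:for-each>
      </entry>
   </xsl:template>
</xsl:stylesheet>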

The last thing we did was the first thing one should do – set up the header. The reason is that we first wanted to have something to put the header on.

[Figure: the resulting TEI document]

What is next?

We will take our Bulgarian WordNet expansion, which is in the raw XML format that we as annotators use, and try to transform it directly to the TEI standard.

Ask, and it will be given to you:

Our BTB-WordNet and our XSLT stylesheet will be on GitHub.

If you want more of our resources, you can find us at: http://bultreebank.org

Side note

Visit Bulgaria; Come and see us at RANLP 2019: http://lml.bas.bg/ranlp2019/

Special thanks:

To: Laurent Romary and Toma Tasovac for the opportunity to be part of the masterclass

To: Axel Herold for the organization

To: Boris Lehečka for the help with the xsl:key

And to everybody else for the friendship

FROM LEGACY FORMATS AND DATABASES TO TEI: Converting the Academy of Sciences Portuguese Dictionary to TEI Lex-0

Ana de Castro Salgado
NOVA CLUNL
Lisbon, Portugal

1. Project
In Portugal, the last complete print edition of an academic Portuguese dictionary was published in 2001. At that time, the authors decided on a computational approach, developing a database using Microsoft Access. In 2015, the Academy wanted a new dictionary. With the goal of updating the dictionary and making it available through the web, a team was put together to develop a database and prepare a back-end to manage the revision, through a protocol between the Academy and a computer science team of the University of Minho, Portugal.
The only medium that survived these last 15 years was the PDF from which the printed version originated. The absence of a database led us to convert the print version into an XML document and, later on, to import the data into an XML database (eXist-db). The next step was to transform the obtained document into a valid XML file. Instead of trying to define our own XML format, we decided to target the Text Encoding Initiative, with a syntax adapted to our needs. Now, to create, edit, delete and validate entries, we are using Oxygen XML Editor (Simões et al., 2016).
I am coordinating the Portuguese Academy Dictionary and I am also a researcher at NOVA CLUNL (Lisbon, Portugal); my interest in following the Lexical Data Masterclass 2018 was to switch the Academy of Sciences Portuguese Dictionary to the TEI Lex-0 format, to deepen my knowledge of managing digital data as online resources, and to improve my competence with XML editors.

2. Workflow
During this week, I worked on the normalization of a pre-TEI export from the database in which the Academy of Sciences Portuguese Dictionary was encoded.
A lot of procedural aspects were identified. All of them were documented, since they are very important for proceeding with my project’s work.
«TEI Lex-0 should not be thought of as a replacement of the Dictionary Chapter in the TEI Guidelines or as the format that must be used for editing or managing individual resources, […] should be primarily seen as a format that existing TEI dictionaries can be univocally transformed to in order to be queried, visualised, or mined in an uniform way.» (cf. [Romary, 2015])
We worked in the Oxygen XML editor with a TEI Lex-0 schema.

a) The TEI Header
After attending the session («Introduction to encoding dictionaries with the TEI guidelines», by Laurent Romary), the first output was a full-fledged TEI header.

b) Typology of entries
The first step was to identify the structural components of the dictionary. I considered only one entry (the core of all lexicographic encoding) of each type.
The lexical units could be: a Portuguese/foreign single word (trimensário, workshop); a Portuguese/foreign compound word (decreto-lei, self-government); a morpheme (ab-, (-)carpo(-), -agem); a multiword expression (bilhete de identidade; pena capital); a phrase (fiat lux); an abbreviation (Ag, Cf., VIP). Each of these different types of entries was encoded in TEI Lex-0.
I also analysed some «special entries», such as part-of-speech homonyms (capital1, capital2, capital3, etc.); homographs (bola1 /ó/, bola2 /ô/); two variant forms (ouro, oiro); trademarks…
The goal was to model and create a standard format for each of the different entry types.

c) ACL Dictionary Schema (XML changes)
In the context of TEI Lex-0, the elements <entryFree>, <superEntry> and <hom> are not allowed. To encode the basic element of the dictionary (the entry), the microstructure that groups all the information, we use only <entry> (for compounds, phrases and homographs alike), with the attributes @xml:id and @xml:lang. The entries invariably start with a lemma, so we now use <form type="lemma"> rather than <term>.

A typical entry has a structure like this:
<entry>
   <form type="lemma">
      <orth>…</orth>
   </form>
   <gramGrp>
      <pos>…</pos>
      <gen>…</gen>
   </gramGrp>
   <sense>…</sense>
</entry>

d) Data worked on in the sessions
I reviewed the proper use of several elements (see the images below for some examples).

One of our goals was to know how we could encode multiword expressions, such as “poço de ciência” or “pena capital”, and now they are seen as real entries:

<!-- outer entry for "capital", abridged -->
<entry …>
   <form type="lemma">
      <orth>capital</orth>
      <lbl>:1</lbl>
      <pron>kɐpitˈał</pron>
   </form>
   <gramGrp>…</gramGrp>
   …
   <entry xml:id="DACL.PENA.CAPITAL" xml:lang="pt">
      <form>
         <orth>pena capital</orth>
      </form>
   </entry>
</entry>

Prepositions also deserved our attention. The microstructure of preposition entries is very specific, so creating a script did not prove very adequate. There are only 43 entries classified as prepositions, so the correction can be done manually.
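A simple XPath query is enough to pull these entries out for manual review (a sketch only: the pos value and element path assume the entry structure shown in section c) above):

//entry[gramGrp/pos = 'prep.']/form[@type = 'lemma']/orth

count(//entry[gramGrp/pos = 'prep.'])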

[Figure: the pre-TEI export]

[Figure: the same entry converted to TEI Lex-0 format]

Another issue was to record the forms that have been impacted by the Portuguese spelling reform, which concerns the Community of Portuguese Speaking Countries, and how it affects lexicographical outputs.

We would like to emphasize that our customization of the TEI Lex-0 dictionary module has proved to be well suited to this lexicographic work.

Some examples of encoding of lexicographic information in TEI-Lex0 can be found here: https://github.com/DARIAH-ERIC/lexicalresources/blob/master/Events/LexMC2018/Participants/Salgado,%20Ana/Test_ACL_TE_ILex0L.xml

3. Future work
  • Continue working on the adaptation.
  • Decide how to differentiate collocations from lexical co-occurrences in the encoding.
  • Deepen the encoding of etymological information.
  • Add more illustrative samples.
  • Test with the TEI Lex-0 schema updates.
  • Comment on what is missing.
  • Fix what is wrong.

Special thanks
The sessions were amazing and very useful, and the instructors (Laurent Romary, Toma Tasovac, Jack Bowers, Axel Herold) were tireless. Thank you also to Charly Moerth for his fabulous keynote on the Arabic dictionaries developed at the Austrian Centre for Digital Humanities of the Austrian Academy of Sciences. In the near future, I would like to attend the GROBID-Dictionaries workshop series with Mohamed Khemakhem; with so much work ahead in switching the Academy of Sciences Portuguese Dictionary to the TEI Lex-0 format, I had to make a choice. I hope there will be more workshops in the near future.
Last but not least, I am very grateful to my PhD supervisor, Rute Costa (NOVA CLUNL), for all her guidance and great inspiration.

References
DLPC = Dicionário da Língua Portuguesa Contemporânea, 2001, João Malaca Casteleiro (coord.), 2 vols. Lisboa: Academia das Ciências de Lisboa & Editorial Verbo.
TEI Consortium, eds. (2016). TEI P5: Guidelines for Electronic Text Encoding and Interchange. [Version 3.1.0]. [Last updated on 15th December 2016]. TEI Consortium. URL: http://www.tei-c.org/Guidelines/P5/.
Alberto SIMÕES, J. J. ALMEIDA, A. SALGADO, 2016. Building a Dictionary using XML Technology. In Marjan Mernik, José Paulo Leal e Hugo Gonçalo Oliveira, Eds., 5th Symposium on Languages, Applications and Technologies (SLATE’16), vol. 51 of OpenAccess Series in Informatics (OASIcs), pp. 14:1-14:8, Dagstuhl, Germany. Schloss Dagstuhl-Leibniz-Zentrum für Informatik. DOI: http://dx.doi.org/10.4230/OASIcs.SLATE.2016.14.
Jack BOWERS and Laurent ROMARY, «Deep Encoding of Etymological Information in TEI», Journal of the Text Encoding Initiative [Online], Issue 10 | 2016. URL: [http://journals.openedition.org/jtei/1643]
Piotr BAŃSKI, Jack BOWERS, Tomaz ERJAVEC, «TEI-Lex0 guidelines for the encoding of dictionary information on written and spoken forms», Electronic Lexicography in the 21st Century: Proceedings of ELex 2017 Conference, Sep 2017, Leiden, Netherlands. 〈hal-01757108〉


From Àbèsàbèsì to XPath: An Overview of the Lexical Data Masterclass 2018

Notes from the participants’ symposium, December 7, 2018

After a whole week of intense work, the participants of the DARIAH Lexical Data Masterclass presented their projects and results and discussed a variety of encoding and transformation issues which are summarized here. 

Specialized dictionaries

Claudia Bonsi – Encoding an Italian meta-dictionary

The project provided a good illustration of the tension between the typographic, editorial and lexical views of dictionaries encoded in TEI. Since the meta-dictionary is a dictionary about another dictionary, an editorial view preserving the original source seemed to be the best choice. Claudia Bonsi created a taxonomy of concepts to organize the content (<taxonomy>). The dictionary is encoded as a combination of actual entries (<entry>) and descriptive content (<div>), with embedded entries wherever the description is in turn conceived as a lexicographic description. Claudia Bonsi encoded a sample of the dictionary and created a first XSLT transform to produce an HTML visualisation. The next step may be creating a resource of all persons mentioned in the meta-dictionary.

Marija Zarkovic – Spanish Legal Terms Through Time: Digitization

Marija Zarkovic analysed legal documents to extract legal terms according to the first Spanish dictionary published by the Royal Academy. The actual edition is important to consider for both lexicographic and historical-legal reasons. Marija created a big corpus (<teiCorpus>) from the various resources used in the project, each represented as a single TEI document (<TEI>). XSLT was used to produce an HTML output in which the various entries for a given lemma can be seen in a row, drawn from the various sources and organized along a timeline, allowing an immediate comparison of actual lexicographic practices over time. The representation is a good basis for the future extension of the project towards additional resources.

Martin Wynne (Bodleian Library, Oxford) – Enhancing a lexicon of variant word forms in seventeenth-century French

The underlying corpus is a resource of seventeenth-century correspondence. The dictionary is intended to record variant spellings and to contribute to the tuning of a dedicated POS tagger for the corpus. The source data was a basic spreadsheet, which was transformed by means of an XSLT transform into a TEI representation that will allow richer content in the future (inflections, citations, etc.), something which would not have been possible with a spreadsheet. The encoding makes intensive use of variant forms (<form type="variant">) within inflected form groups (<form type="inflected">). The TEI representation is also converted back to the original format, because that is the format expected by TreeTagger. The masterclass was an opportunity to make contact with a team working on similar linguistic data at ATILF (in Nancy, contact: Gilles Souvay). The next step will be to put the LGeRN dictionary (from Nancy) into the same format to compare content and linguistic coverage.

Joanna Aleksandra Bilińska  – Slavic Corpora Terminology Dictionary

Joanna Aleksandra Bilińska has been working on a project involving corpus linguistics terminology in several Slavic languages, including Slovenian, with appropriate English equivalents. The dictionary combines several definitions to explore how the concepts are actually understood in various sources. The dictionary has been encoded in TEI with a lot of citations (<cit type="original">) together with translations (<cit type="translation">). When it is extended to the additional languages, separate TEI documents will be maintained, with sample entries that future editors will be able to use as templates. Joanna used XPath to test the actual content and to provide some quality-control mechanisms. Future work may consist in creating an independent bibliographic resource that could be referenced from the various dictionaries.

From PDF to TEI using GROBID-Dictionaries

Nikolche Mickoski (Lexicographic Centre at the Macedonian Academy of Sciences and Arts) – Using GROBID for an OCR-ized multilingual dictionary

Mohamed Khemakhem was the voice of Nikolche, who had to leave early, and reported on the week’s work on automatically extracting TEI dictionary entries from a multilingual dictionary in six languages (Macedonian together with Serbian, English, French, Russian and German) by means of the GROBID-Dictionaries software. Nikolche trained all the models of GROBID. The “Dictionary Segmentation” model reached a 100% score, and the two further models reached 98.85% (entry segmentation: form, sense, etc.) and 93.49% (subfield analysis, e.g. lemma, examples, translation equivalents). The annotation of training data was done in the Oxygen XML Editor’s Author Mode with a dedicated CSS stylesheet and RNG schema. The work was an opportunity to identify a few fixes that remain to be made to the annotation and training process. The final good results correspond to the manual annotation of 12 pages.

Emrah Özcan (Yildiz Technical University) – Retro-digitizing Turkish dictionaries using GROBID-dictionaries and XSLT

The project was based on a dictionary produced by the Turkish Language Institute which was only available in print format. One important issue explored was identifying a good OCR process that produces sufficiently reliable data from the original source, with the possibility that the original document may have to be re-OCRed. Emrah experienced some difficulties providing enough annotations for the fine-grained models.

Interestingly, the levels of the models in GROBID-Dictionaries reflect the actual levels of the TEI in Libraries recommendation.

Emrah also took another dictionary, available as an SQL database, and transformed it automatically into a full-blown TEI representation!

Biljana Lazić – Using GROBID-Dictionaries to encode the German-Serbian Mining Dictionary

Biljana Lazić presented how she used GROBID-Dictionaries for her bilingual legacy dictionary project. The entry-segmentation level was very good, but the next stage (field level) generated a lot of wrongly represented content. With 30 pages annotated, there were still a lot of issues, which means that GROBID has to be extended with additional features that would help it converge better.

Marija Gmitrović (Institute of Serbian Language) – Using GROBID-Dictionaries to encode the Dictionary of the Jablanica Dialect

The first lesson Marija Gmitrović learned about GROBID is that Windows 7 can be a pain :-}. The original Serbian dialect dictionary contained 322 pages of entries. The results were very good as a whole for the first level, but dropped at the entry-segmentation and field levels. After more annotations (14 pages in total), the results improved, and related entries were also well recognized.

The experiment was an opportunity to pin down the subtle difference between a sense and a related entry.

SSK update

Lionel Tadjou (Inria, team ALMAnaCH) – Current state of the lexical scenario in the SSK

Lionel Tadjou presented the main features of the Standardisation Survival Kit (SSK – http://ssk.huma-num.fr) and focused on the specific scenario that was developed together with Charly Moerth (our keynote speaker) and Charles Riondet from Inria, with the help of several participants in the masterclass; it is available at: https://ssk.parthenos-project.eu/ssk/#/scenarios/SSK_sc_dictionaryInTei/1

TEI based dictionary projects

Boris Lehečka (Czech Language Institute) – Electronic Dictionary of Old Czech: TEI Modeling and Transforming

Old Czech covers the period 1300–1500, and Boris Lehečka worked on a dictionary project, started in 2006, which is now being enriched with new entries, citations, etc. The source is encoded in MS Word and was transformed into a TEI representation, then presented as an HTML output. Boris wanted to keep to the editorial view of his original source out of respect for the original lexicographers. Following a precise study of the TEI Guidelines, Boris put together a TEI-based model that covers all the necessary features of the source and updated a first encoding attempt from 2015. The important lesson learned from the masterclass is to document one’s model by means of a TEI ODD specification, which may in turn lead to new features for the TEI Guidelines as a whole.

Nikola Zdravković – Deutsches Wörterbuch – Refurbished (Letter A)

Nikola Zdravković came to the masterclass to improve his XML skills, and the actual results went beyond his personal expectations! The work was carried out on the actual source data of the Deutsches Wörterbuch (with the personal and efficient support of Axel Herold from the BBAW). By means of XSLT transforms (and XPath), Nikola explored the actual content of the dictionary through a variety of presentational profiles and identified various lexicographic characteristics of it.

Language documentation projects

Jonas Lau – Advancements in the Àbèsàbèsì Dictionary

Jonas Lau worked jointly on the dictionary and the grammar of Àbèsàbèsì, both encoded in TEI. The dictionary data was obtained by converting the LIFT format into TEI and then searching and presenting the data by means of XQuery (on top of an eXist database). A specific difficulty was sorting the content according to its IPA representation. The trick was to use the German collation profile (!).

Simona Olivieri (Humboldt Research Fellow – FU Berlin) – TEI-encoding of linguistic terminology and Qurʾānic quotations in the Kitāb Sībawayhi

The project is about extracting grammatical terminology from a grammar of classical Arabic (1,500 pages so far) and linking it to citations (most of them from the Quran). Simona Olivieri created a cluster of TEI documents corresponding to the grammar, the lexical entries and the Quranic quotations. Simona discovered the strength of the TEI header for documenting the project precisely and keeping track of actual encoding choices, for instance. Finally, XSLT was used to provide a presentation of the grammar.

From legacy formats and databases to TEI

Fraser Dallachy – A TEI XML Version of the “Historical Thesaurus of English” Legacy Database

The Historical Thesaurus of English was available as a relational database with a specific model combining dictionary (semasiological) and thesaurus (onomasiological) characteristics. The workflow was based on an initial dump of the database, followed by a transformation of the corresponding flat tables into a TEI-conformant (dictionary) structure. The complexity of the source model forced Fraser to stretch the semantics of some tags, such as <form>.

Ana de Castro Salgado (FCSH, NOVA, CLUNL, Lisbon, Portugal / Academia das Ciências de Lisboa, Lisbon, Portugal) – Switching the Academy of Sciences Portuguese Dictionary to TEI Lex-0

Ana Salgado worked on the normalization of a pre-TEI export from the database within which the Academy of Sciences Portuguese Dictionary was encoded. Ana identified a lot of procedural aspects that she documented to proceed with this project. The output comes with a full-fledged TEI header and a precise typology of dictionary entries. One of the specific working areas was the representation of collocations which are now seen as real entries within the main ones. Another issue was to record the forms that have been impacted by the Portuguese spelling reform.

Zara Kancheva and Ivaylo Radev (IICT-BAS) –  BTB-WordNet. From LMF to TEI with XSLT

Zara and Ivaylo worked on transforming a legacy representation of the Bulgarian WordNet, expressed in the old LMF serialization proposal, to align it with the TEI Guidelines. Since the original model clearly reflected a semasiological view of the actual data, with forms pointing to one or more senses and a mapping onto WordNet synsets, the process was relatively straightforward and was carried out as an XSLT transform.

Acknowledgements

Last but not least, we should not forget all the wonderful people who made it possible to have such a successful event. Beyond my team-twin @digilex and myself, a whole group of trainers jumped in to bring their skills: Mohamed Khemakhem coordinated the GROBID-Dictionaries workshops, Jack Bowers brought his TEI skills, Lionel Tadjou and Charles Riondet went around compiling an SSK scenario, and last but not least, beyond his own TEI literacy and intimate knowledge of the Grimm dictionary, Axel was the one thanks to whom we managed to have the perfect setting (and catering) at the Berlin-Brandenburg Academy of Sciences.

A born-digital author lexicon for 17th c. French: Sévigné’s case

While preparing an edition of Madame de Sévigné’s correspondence encoded in TEI, we are currently facing two problems. First, while French medievalists have a long experience of establishing lexicons, specialists of 17th c. French literature traditionally do not provide such a study in their editions. Second, we are not aware of any born-digital author lexicon in TEI for (17th c.) French. We therefore have to tackle two problems at the same time, and create both a scientific methodology and a digital solution.

1. Project

We are currently preparing an online edition of Madame de Sévigné’s autograph Correspondance. This project, funded for three years by the FNS, is now reaching its end: most of the material has been collected, described in a catalogue, and is currently being transcribed in TEI. Its peculiarity is that it focuses only on the autograph letters, i.e. the rare documents written by the author herself that are still available, and not on the manuscript or printed copies which represent the majority of the corpus. By doing so, we hope to provide an in-depth study of Sévigné’s spelling practices, and thus situate her in the (orthographic) quarrel between the Ancients and the Moderns.

2. Program

For the lexicon, a basic program would be the following:

  1. Since it is an author lexicon based on an edition, we need to digitally link words and their definitions.
  2. We need to provide a definition.
  3. Since words are often inflected (gender, number, mood, tense, etc.), we need to reconstruct the correct lemma, and to distinguish reconstructed from attested lemmas.
  4. We need to record, and potentially comment on, interesting spellings.
  5. We need to “frame” Sévigné’s vocabulary both diachronically and synchronically.

3. Lexicographic sources

As we have just said, our main objective is to provide a definition for complex words or locutions, but we also want to “frame” Sévigné’s vocabulary both diachronically and synchronically. Diachronically, by looking in dictionaries of medieval and Renaissance French (is the word old or new?); synchronically, by looking in 17th c. French dictionaries (is the word colloquial or formal?).

Some of these dictionaries are available online, but in different formats: some are published in PDF/image format, while others are already available as online apps.

The FEW, with many other lexicographic sources, is described in the <sourceDesc> of the TEI header. For instance:

<bibl xml:id="FEW">
 <author>
  <forename>Walther</forename>
  <surname>von Wartburg</surname>
 </author>
 <title>Französisches Etymologisches Wörterbuch (FEW)</title>
 <extent>25 vols.</extent>
 <orgName>UMR ATILF (CNRS – Nancy Université)</orgName>
 <ref type="app" target="https://apps.atilf.fr/lecteurFEW"/>
</bibl>

4. Encoding

4.0 Linking to the definition

In order to present our methodology and the digital modelling of our lexicon, we will concentrate on a sample of letter no. 852 (Duchêne’s numbering). We have added the sign ° to mark the words that we will study in the present article:

mais iamais vn trait dorgueil na esté ſy mal placé ny ſy mal receu de tout le monde, ne me cités° pas, ſy lenuie vous prent den parler come les autres, vous-me dires auſſy come° la comporte° noſtre carcaſſone

We first need to link words to their future lexicon entries. To do so, we wrap interesting occurrences in <w>, and link them to the correct entry with @corresp (we follow the manuscript layout with <lb/>):

<lb/>mais iamais vn trait
<lb/>dorgueil na esté ſy mal
<lb/>placé ny ſy mal receu de
<lb/>tout le monde, ne me <w corresp="#citer">cités</w>
<lb/>pas, ſy lenuie vous prent
<lb/>den parler come les autres,
<lb/>vous me dires auſſy <w corresp="#come">come</w>
<lb/>la <w corresp="#comporter">comporte</w> noſtre carcaſſone

4.1 Lexicon entry

As we have said, a basic entry needs to provide a headword (traditionally the lemma), record the inflected form, and comment on the spelling. For come, which can be written alternatively with a double <m> (comme) or with one <m> (come), we propose the following encoding:

<entry xml:id="comme">
 <form type="lemma" status="reconstructed">
  <orth>comme</orth>
  <gramGrp>
   <pos norm="adv"/>
  </gramGrp>
 </form>
 <form type="variant" status="attested">
  <orth type="orig">come</orth>
  <orth type="reg">comme</orth>
  <note type="spelling">Réduction de la géminée -mm-.</note>
 </form>
</entry>

For cités, however, what needs to be commented on is the morpheme of the fifth person of the present imperative (Sévigné uses an -<s> rather than a -<z>). This information is not recorded under @type="variant" but under @type="inflected", with the relevant linguistic information.

<entry xml:id="citer">
 <form type="lemma" status="reconstructed">
  <orth>citer</orth>
  <gramGrp>
   <pos norm="v"/>
  </gramGrp>
 </form>
 <form type="inflected">
  <orth type="orig">cités</orth>
  <orth type="reg">citez</orth>
  <gramGrp>
   <per norm="5"/>
   <tns norm="present"/>
   <mood norm="imperative"/>
  </gramGrp>
  <note type="spelling">Emploi moderne de -s pour -z.</note>
 </form>
</entry>

Additional information about the spelling could be encoded with <usg>, using @type="geo" for regionally marked spellings:

<usg type="geo">bourguignon</usg>

or @type="soc" for socially marked spellings:

<usg type="soc">aristocratique</usg>

4.1.1 Note on the TEI

It is not valid to use @status with <form>. However, in an author lexicon recording only the words attested in the text, we need to differentiate reconstructed lemmas (traditionally presented between square brackets, e.g. [Comment]) from attested lemmas (e.g. Comment). We propose to use @status="reconstructed" and @status="attested" to distinguish the two.

4.2 Definition of the occurrence

We now need to provide a definition. We propose the following encoding:

<sense type="myDefinition">
 <def>
  <gloss> comment</gloss>
  <bibl corresp="#FEW" facs="https://apps.atilf.fr/lecteurFEW/lire/20/1542">
  <citedRange unit="entry">quōmŏdo</citedRange>
 </bibl>
 </def>
</sense>

Specific constructions, such as citer quelqu’un, can be encoded the following way:

<sense type="myDefinition">
 <form type="construction">
  <orth>citer qqn.</orth>
 </form>
 <gramGrp>
  <subc norm="tr"/>
 </gramGrp>
 <def>
  <gloss>Nommer celui de qui l'on tient une information.</gloss>
 </def>
</sense>

4.2.1 Note on the TEI

It is not valid to use @type with <sense>. However, as we will see below, we need to differentiate the different senses: the one we give (<sense type="myDefinition">) from the one in diachrony (encoded in <etym>) and the one in synchrony (<sense type="synchronic">).

4.3 Diachronic commentary

A specificity of our dictionary is that it proposes a diachronic commentary on the word, to identify innovations, archaisms, hapaxes or first occurrences. To do so, we take up L. Romary and J. Bowers’s proposals in their article “Deep Encoding of Etymological Information in TEI” [http://journals.openedition.org/jtei/1643].

<etym>
 <cit type="etymon">
  <oRef>cĭtare</oRef>
  <lang>la</lang>
 </cit>
 <date type="firstOccurrence">mfr</date>
 <note type="comm">Sens déjà connu en moyen français.</note>
 <listBibl>
  <bibl corresp="#TLFi" facs="http://www.cnrtl.fr/etymologie/citer">
   <citedRange unit="entry">citer</citedRange>
  </bibl>
  <bibl corresp="#DMF" facs="http://www.atilf.fr/dmf/definition/citer">
   <citedRange unit="entry">citer</citedRange>
  </bibl>
  <bibl corresp="#FEW" facs="https://apps.atilf.fr/lecteurFEW/lire/20/717">
   <citedRange unit="vol" n="II-1"/>
   <citedRange unit="p" n="716b"/>
   <citedRange unit="entry">cĭtare</citedRange>
   <citedRange unit="subentry" n="2.a"/>
  </bibl>
 </listBibl>
</etym>

For the language of the etymon, we use the ISO 639-3 standard [https://en.wikipedia.org/wiki/ISO_639-3]. However, for the date of the first occurrence, we use the standard abbreviations of lexicographers, divided into three main periods:

  • “afr” (ancien français) for old French (842-1400),
  • “mfr” (moyen français) for middle French (1400-1600),
  • “frm” (français moderne) for modern French (1600 and after).

4.4 Synchronic commentary

The last part of the lexicon consists of commentary about the sense in the 17th c., according to dictionaries produced at the end of that century. Each commentary is followed by bibliographic references to the articles of several dictionaries and the definitions they provide.

<sense type="synchronic">
 <note type="comm">Emploi récent (mfr.), toujours inconnu de Nicot. Rich1680 enregistre un emploi littéraire (~ un auteur), dont Ac1694 fait une locution verbale (~ son auteur) qui, par dérivation, possède le sens de « donner sa source ». Seul Fur1690 propose ce sens pour ~ (« nommer celui dont on tient qch »).</note>
 <def>
  Citer, ou adjourner, In ius vocare, Dicam scribere vel impingere, Diem dicere, Il vient de Citare, Usez des locutions de Adjourner et Adjournement.
  <bibl corresp="#nicot_1606" facs="http://gallica.bnf.fr/ark:/12148/bpt6k50808z/f129.image">
   <citedRange unit="p" n="125b"/>
   <citedRange unit="entry">citer</citedRange>
  </bibl>
 </def>
 <def>
  Alléguer, aporter quelques passages d'Auteurs, ou quelques Auteurs graves.
  <bibl corresp="#richelet_1680" facs="http://gallica.bnf.fr/ark:/12148/bpt6k509323/f244.image">
   <citedRange unit="p" n="140b"/>
   <citedRange unit="entry">citer</citedRange>
  </bibl>
 </def>
 <def>
  On dit aussi, citer son autheur, pour dire, Nommer celuy de qui on tient une nouvelle, ou quelque chose de semblable.
  <bibl corresp="#academie_1694" facs="http://gallica.bnf.fr/ark:/12148/bpt6k503971/f213.image">
   <citedRange unit="t" n="2"/>
   <citedRange unit="p" n="279b"/>
   <citedRange unit="entry">citer</citedRange>
  </bibl>
 </def>
 <def>
  signifie aussi, Alleguer un passage, une autorité, nommer celuy duquel on tient quelque chose.
  <bibl corresp="#furetiere_1690" facs="http://gallica.bnf.fr/ark:/12148/bpt6k50614b/f411.image">
   <citedRange unit="entry">citer</citedRange>
  </bibl>
 </def>
</sense>

5. Attached documentation

The TEI encoding of the lexicographic information for the three words, following the principles we have just presented, is available online [Cf. GitHub/e-ditiones]. An accompanying XSLT document provides a simple HTML output [GitHub/e-ditiones].

References

  • Jack BOWERS and Laurent ROMARY, « Deep Encoding of Etymological Information in TEI », Journal of the Text Encoding Initiative [Online], Issue 10 | 2016. URL: [http://journals.openedition.org/jtei/1643]
  • Piotr BAŃSKI, Jack BOWERS and Tomaž ERJAVEC, « TEI-Lex0 guidelines for the encoding of dictionary information on written and spoken forms », Electronic Lexicography in the 21st Century: Proceedings of eLex 2017 Conference, Sep 2017, Leiden, Netherlands. 〈hal-01757108〉
  • Mme de SÉVIGNÉ, Correspondance, Roger DUCHÊNE (ed.), Paris: Gallimard, 1972-1978.
  • Simon GABAY, « A born-digital author lexicon for 17th c. French: Sévigné’s case », e-ditiones, [https://editiones.hypotheses.org/975].

Special thanks

To the #LexMC team: Laurent Romary, Alexander Geyken, Toma Tasovac, Benoît Sagot, Piotr Bański and Mohamed Khemakhem.

Note

This post is also available on our blog.

Lexical resources for processing 18th-century French correspondence with NLP tools

Background

Electronic Enlightenment (EE) is a scholarly digital edition of letters and correspondence. The collection started with Voltaire and Rousseau, expanded to other Enlightenment thinkers and writers, and has since grown into other eras, languages and domains. EE now contains more than 80,000 letters, involving more than 15,000 people as writers or recipients.


Access to EE is via institutional subscription managed by Oxford University Press. While downloads of the full texts remain behind the paywall, the plan is to make online search and exploration of this text collection freely available. In order to search historical French texts effectively, users need to be able to find inflected forms and variant spellings. How can we make that search possible?

I have been developing lexical resources to enable, test, refine and improve the automatic lemmatization and wordclass tagging of (mostly) eighteenth-century French correspondence in the Electronic Enlightenment collection. As part of this project, I gratefully accepted a place on the Lexical Data Masterclass, and during the workshop I aimed to pursue a project to transform and enhance our lexical resources to make them more easily reusable.

There was also a secondary aim: to find out more about existing work in this area, and to bring together our lexical information with existing datasets, or with lexical information such as wordclasses and lemmas extracted from existing resources, to produce a re-usable, open-access lexical resource for historical French. The resulting lexicon could be a customized TEI document, to be shared with colleagues, used for tagging the texts, refined and improved through this process, and made available to other scholars.

I have been using natural language processing (NLP) tools to annotate the texts with the dictionary headword (also known as the lemma) and part of speech (or word class). Experiments with the widely used programme TreeTagger (trained on 20th- and 21st-century French) have confirmed that there is, not surprisingly, a need for enhanced lexical resources to deal with the vocabulary found in the Electronic Enlightenment collections. While we could probably reach our short-term goals by some manual tagging, re-training the tools, and hacking the lexicon, we would prefer to create, or contribute to the creation of, high-quality, standards-conformant and re-usable lexical resources.

Initial research has shown that many of the digital texts of French works from the 17th and 18th centuries are based on editions with modernized spellings; as a result, most software applications for working with French texts have not been adapted to the original spellings of these periods. Nor have we yet been able to find computer-usable lexicons with older variants of word forms.

Most freely available NLP software and lexical resources that we have discovered are, like TreeTagger, for modern (21st-century) French. The texts that we are working with exhibit the grammar, orthography and lexis of (mostly) learned writing of the eighteenth century, but also some variations from the standard language as found in published works. The characteristics of the texts include the following:

  • non-standard spelling, including the omission of accents (“tollerance”, “tolerance”)
  • citations in a number of languages, especially Latin and Greek
  • writing by non-native speakers (e.g. Hobbes)
  • spelling mistakes (“convervation”)

In some more extreme cases, there is phonetic writing. This seems to be typical of women writers who received only partial formal education, for example Catherine Dorothée de Saint-Pierre, who writes with non-standard orthography, capitalization, tokenization, and often with little or no punctuation, e.g. “Etant da cor je me rendis avecque mon nésessere che luy je presentay mon premier cartieé de pension”. These forms might prove too idiosyncratic to merit inclusion in the lexicon.

We have made a start on this project. Starting from the initial (imperfect) tagging with TreeTagger, I made a wordlist to deal with the most common erroneous forms in the output, such as forms of the verbs ‘être’ and ‘avoir’: some 472 words in all. TreeTagger was then re-run using this additional customized lexicon, with much improved results. The resulting corpus with POS tags and lemmas was then loaded into CQPweb, with good results for improved searching for the words which had been corrected. The example below shows hits for a search for all forms of the verb ‘pouvoir’.

The customized wordlist, or lexicon, has a simple form: a tab-separated file with the original word form as attested in the corpus, a modernized version of that word form, a POS tag and a lemma. This is the format required as input to TreeTagger.
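
A few illustrative lines, with tab-separated columns in the order just described (the forms are taken from the TEI fragment shown later in this post; the POS tags are those used throughout this project):

orig          modernized    POS    lemma
tems          temps         NOM    temps
tollerance    tolérance     NOM    tolérance
tolerans      tolérants     ADJ    tolérant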

There are two problems with this format. Firstly, it won’t be adequate for dealing with more complex forms of polysemy, or for adding additional information, such as senses, translations, or citations. Secondly, since we want this lexical data to be re-usable, multipurpose and shared, we need to have it in a standardized, well-documented format that others can easily understand, process and transform into other formats. Others have probably addressed this problem before, but their lexical resources are not being shared and cannot be re-used. We don’t want to make that mistake with our lexicon. In order to share data, expertise and know-how, we want to make a re-usable dictionary of variant forms in 16th-19th-century French in appropriate formats.

Therefore, I want to find out about using the TEI dictionary format as a better model for a sharable, sustainable and re-usable lexicon; a sketch of the kind of information such an entry could carry follows.
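
As a hypothetical illustration of what the flat format cannot express, the entry sketched below combines variant forms with a sense and a citation slot (element usage follows the TEI dictionaries module; the definition text and the citation placeholders are invented for illustration only):

<entry>
   <form type="lemma">
      <orth>tolérance</orth>
      <gramGrp>
         <pos>NOM</pos>
      </gramGrp>
   </form>
   <form type="inflected">
      <form type="variant">
         <orth norm="tolérance">tollerance</orth>
      </form>
   </form>
   <sense>
      <def>disposition à admettre chez les autres des opinions différentes des siennes</def>
      <cit type="example">
         <quote>…</quote>
         <bibl>…</bibl>
      </cit>
   </sense>
</entry>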

We would also be interested in pursuing a similar project for German, in order to process a large collection of the letters of Immanuel Kant, and for the large numbers of letters in English and Italian already in the EE collections.

What happened in the workshop

The Lexical Data Masterclass has proved to be extremely valuable in a number of ways. Firstly, I have been able to transform and enhance the wordlist, turning it into a TEI dictionary, thanks to having the time to spend on the project and to the help of Europe’s leading experts in the field. The results are shown below.

Just as valuable has been the help that the organisers, especially Laurent Romary, have given me in finding out about previous work and existing resources in this area. Laurent put me in touch with Gilles Souvay at ATILF in Nancy, who worked on the ANR/DFG Presto project http://presto.ens-lyon.fr/ and the European project Impact http://www.impact-project.eu/, and who created the LGeRM lexicon for 16th- and 17th-century French, an adaptation of the Morphalou dictionary extended with older words and word forms. Gilles has sent me a copy of the dictionary, and I’m working on adapting it to the same format as my dictionary, converting the POS tags to the TreeTagger tagset, and merging the resources. While extremely valuable for lemmatization, LGeRM doesn’t have POS tags for all inflected forms, so there is work to do to merge and enhance the dictionary!

A third area in which the workshop was valuable was in finding out about the work of the other participants and making new academic contacts, which will certainly prove extremely useful. Finally, and by no means least, the decision to locate the workshop at the Berlin-Brandenburg Akademie der Wissenschaften was an excellent choice: I have had the opportunity to talk to other researchers here, who were not directly involved in the workshop, about collaborating on aspects of my project, such as putting the EE corpus onto the BBAW platforms for use with tools like DStar and DiaCollo, and about possible collaborations with CLARIN partners to share and spread expertise in adapting NLP tools to work with historical varieties of languages.

The Results

Now, what you’ve all been waiting for.

I was able to use XSLT to transform my custom lexicon into the TEI dictionary format. A fragment of the result is shown below:

     <entry>
        <form type="lemma">
           <orth>temps</orth>
           <gramGrp>
              <pos>NOM</pos>
           </gramGrp>
        </form>
        <form type="inflected">
           <form type="variant">
              <orth norm="temps">tems</orth>
           </form>
        </form>
     </entry>
     <entry>
        <form type="lemma">
           <orth>tête</orth>
           <gramGrp>
              <pos>NOM</pos>
           </gramGrp>
        </form>
        <form type="inflected">
           <form type="variant">
              <orth norm="tête">tete</orth>
           </form>
        </form>
     </entry>
     <entry>
        <form type="lemma">
           <orth>théatre</orth>
           <gramGrp>
              <pos>NOM</pos>
           </gramGrp>
        </form>
        <form type="inflected">
           <form type="variant">
              <orth norm="théatre">theatre</orth>
           </form>
        </form>
     </entry>
     <entry>
        <form type="lemma">
           <orth>tolérable</orth>
           <gramGrp>
              <pos>ADJ</pos>
           </gramGrp>
        </form>
        <form type="inflected">
           <form type="variant">
              <orth norm="tolérable">tolerable</orth>
           </form>
        </form>
     </entry>
     <entry>
        <form type="lemma">
           <orth>tolérance</orth>
           <gramGrp>
              <pos>NOM</pos>
           </gramGrp>
        </form>
        <form type="inflected">
           <form type="variant">
              <orth norm="tolérance">tolerance</orth>
           </form>
        </form>
        <form type="inflected">
           <form type="variant">
              <orth norm="tolérance">tolerence</orth>
           </form>
        </form>
        <form type="inflected">
           <form type="variant">
              <orth norm="tolérance">tollerançe</orth>
           </form>
        </form>
        <form type="inflected">
           <form type="variant">
              <orth norm="tolérance">tollérance</orth>
           </form>
        </form>
        <form type="inflected">
           <form type="variant">
              <orth norm="tolérance">tollerence</orth>
           </form>
        </form>
     </entry>
     <entry>
        <form type="lemma">
           <orth>tolérant</orth>
           <gramGrp>
              <pos>ADJ</pos>
           </gramGrp>
        </form>
        <form type="inflected">
           <form type="variant">
              <orth norm="tolérants">tolerans</orth>
           </form>
        </form>
        <form type="inflected">
           <form type="variant">
              <orth norm="tolérants">tolérans</orth>
           </form>
        </form>
        <form type="inflected">
           <form type="variant">
              <orth norm="tolérants">tolerants</orth>
           </form>
        </form>
        <form type="inflected">
           <form type="variant">
              <orth norm="tolérant">tolerant</orth>
           </form>
        </form>
        <form type="inflected">
           <form type="variant">
              <orth norm="tolérants">tollérans</orth>
           </form>
        </form>
     </entry>

Before the XSLT was run, it was necessary to put the lexicon into a basic but well-formed XML format, which XSLT requires as input. Then, as well as developing a re-usable XSLT script for this transformation, I have also written an XSLT script for transforming the dictionary back into the TreeTagger format. This is not entirely circular: it means that additional lexical information, such as citations, can be added to the TEI version, which will be the master dictionary, and versions can be extracted in different formats for various purposes when necessary. Below is the XSLT used for the main conversion of the flat content to TEI:

<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xpath-default-namespace="http://www.tei-c.org/ns/1.0"
    version="2.0">
    
  <xsl:output method="xml" indent="yes"/>
  
  <xsl:template name="main" match="/TEI">
  <TEI xmlns="http://www.tei-c.org/ns/1.0">
    <teiHeader>
      <fileDesc>
        <titleStmt>
          <title>Dictionaire de formes de français du dix-septième siècle</title>
        </titleStmt>
        <publicationStmt>
          <p>Work in progress</p>
        </publicationStmt>
        <sourceDesc>
          <p>Born digital</p>
        </sourceDesc>
      </fileDesc>
    </teiHeader>
    <text>
      <body>
        
    <!-- group the flat input entries by lemma: one TEI entry per distinct lemma -->
    <xsl:for-each-group select="body/entry" group-by='lemma'>
     
      <entry>
        <form type="lemma">
        <orth><xsl:value-of select="current-grouping-key()"/></orth>
            
           <gramGrp>
              <pos><xsl:value-of select="pos"/></pos>
           </gramGrp>
        </form>
        <xsl:for-each select="current-group()">
          
        <form type="inflected">
          <form type="variant">
            <orth> <xsl:attribute name="norm"><xsl:value-of select="norm"/></xsl:attribute><xsl:value-of select="orig"/></orth>
          </form>
        </form>
        </xsl:for-each>
      </entry>
    </xsl:for-each-group>
      </body></text></TEI>
  </xsl:template>
</xsl:stylesheet>
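
For reference, the flat input that this stylesheet expects would look roughly as follows (reconstructed from the XPath expressions above, i.e. a TEI-namespaced root with body/entry rows carrying orig, norm, pos and lemma children; the actual intermediate file may have differed slightly in detail):

<TEI xmlns="http://www.tei-c.org/ns/1.0">
  <body>
    <entry>
      <orig>tems</orig>
      <norm>temps</norm>
      <pos>NOM</pos>
      <lemma>temps</lemma>
    </entry>
    <entry>
      <orig>tollerance</orig>
      <norm>tolérance</norm>
      <pos>NOM</pos>
      <lemma>tolérance</lemma>
    </entry>
  </body>
</TEI>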

The LGeRM dictionary was encoded in XML in a consistent way, but not entirely well-formed, and in a different format from mine. So I also transformed it into the same format as mine, so that the two can be merged, or at least so that TreeTagger output can be derived from it.
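
The reverse transformation mentioned above (extracting the TreeTagger format back out of the TEI master dictionary) is not reproduced here, but a minimal sketch, assuming the entry structure shown earlier and emitting one tab-separated line per variant form, might look like this:

<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xpath-default-namespace="http://www.tei-c.org/ns/1.0"
    version="2.0">

  <xsl:output method="text"/>

  <xsl:template match="/">
    <!-- one line per attested variant: original form, modernized form, POS, lemma -->
    <xsl:for-each select="//entry">
      <xsl:variable name="lemma" select="form[@type='lemma']/orth"/>
      <xsl:variable name="pos" select="form[@type='lemma']/gramGrp/pos"/>
      <xsl:for-each select="form[@type='inflected']/form[@type='variant']/orth">
        <xsl:value-of
          select="concat(., '&#9;', @norm, '&#9;', $pos, '&#9;', $lemma, '&#10;')"/>
      </xsl:for-each>
    </xsl:for-each>
  </xsl:template>
</xsl:stylesheet>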

Next steps

There is still some work to do in dealing with some of the trickier problems of polysemy; merging lexicons and tagsets; and then the iterative process of re-tagging the corpus, analysing errors and improving the lexicon.

When the corpus is good enough to use, we’ll make it available via CQPweb from Oxford, and, we hope, on other platforms too, and we’ll make the lexicon available from the Oxford Text Archive.

Then we’ll start to work on the English, German and Italian texts, and to talk to others about sharing lexical resources, corpora, tools and expertise.

The Lexical Data Masterclass is back!

Co-organized by DARIAH-EU, the Berlin Brandenburg Academy of Sciences (BBAW), Inria and the Belgrade Center for Digital Humanities, with the support of the French Ministry of Higher Education and Research (MESRI), CLARIN and the European Lexicographic Infrastructure (ELEXIS), the 2018 edition of the Lexical Data Masterclass will take place in Berlin at the BBAW from 3 to 7 December.

LexMC2018 will bring 20 advanced trainees together with experts to share experiences, methods and techniques for the creation, management and use of lexical data.

The masterclass will cover a number of topics ranging from general models for lexical content and TEI-based representation of lexical data to working efficiently with XML editors. The participants will have a chance to attend different sessions, consult with experts on their own dictionary projects and get to know and test TEI Lex-0, a newly proposed baseline encoding for lexicographic data.

The masterclass will also feature keynotes by Karlheinz Moerth, Director of the Austrian Centre for Digital Humanities at the Austrian Academy of Sciences, and Benoît Sagot, head of the ALMAnaCH research team at Inria.

Potential applicants should submit a short proposal presenting their background and interest in the field together with a description of a concrete project involving lexical data that they would like to pursue during the masterclass.

Participation is free of charge. Travel costs and accommodation will be covered for all participants up to a maximum of 600€.

Applications should be made via the Lexical Master Class website: https://lexmc18.sciencesconf.org

Application deadline: 26th October 2018
Notification of acceptance: 5th November 2018

Further inquiries can be made to: lexmc18@sciencesconf.org

For an overview of last year’s masterclass, see https://digilex.hypotheses.org/386.

The Lexical Data Masterclass – An Overview

From 4 to 8 December 2017, 21 participants met with 8 trainers and 2 keynote speakers to work jointly on improving their digital dictionary projects.

The meeting, co-organized by the Centre Marc Bloch, DARIAH-EU, the Berlin Brandenburg Academy of Sciences (BBAW), Inria (Paris, France) and the Belgrade Center for Digital Humanities (BCDH, Serbia), with the support of the German Ministry of Education and Research (BMBF), CLARIN, DARIAH-DE and the EU H2020 project Humanities at Scale (HaS), was construed as a master class, i.e. a series of training and working sessions where most of the knowledge transfer takes place through concrete work on the participants’ projects. We want to reflect here on what everyone has clearly considered a very successful meeting by providing an overview of the instructional sessions and of the actual projects brought by the participants, as reflected in the final symposium that took place on 8th December.

Encoding the Etymological Dictionary of the Serbian Language

We came to the Lexical Data Masterclass in Berlin with a clear idea in our minds – to explore the possibilities of retrodigitizing the Etymological Dictionary of the Serbian Language, which is a project both of us are working on, together with other colleagues at the Etymological department of the Institute for the Serbian Language of SASA. Little did we know it would turn out to be such a challenging task, yet extremely exciting at the same time.

We came to Berlin hoping to begin our work on the retrodigitization of the Etymological Dictionary of the Serbian Language by encoding several entries as test models. In this blog post, we will present one of them – безбатан. Even though it is a shorter entry, it has all the important structural elements: grammatical information, several attestations, a semantic definition, examples, complex etymological information, etc.


An XML Version of Turkish Dictionary

In order to anchor international or multinational lexicographic projects on existing Turkish dictionaries, we should have a common understanding of the way we make reference resources available, as is the case for the digital version of our Turkish dictionary project. Although work has been done on digitizing Turkish dictionaries, both old dictionaries and current ones, these few examples do not follow a standard way of encoding the source file. In order to overcome this obstacle, during the Lexical Data Masterclass in Berlin, on December 4-8, 2017, I worked on an XSLT transformation document to process an existing dictionary into an output conformant to the TEI standard. Since almost all current Turkish dictionaries give the same categories of lexical information in a very similar page layout, this XSLT could work on other digitized or OCRized Turkish dictionaries. Even for those which do not have a digital version, GROBID-based projects can easily transform OCRized PDFs into a digital file format, as in the work of Khemakhem et al. 2017.

Creating a prototype for lexicographic entries for spoken German

Since dictionaries are mostly based on written language data, creating a dictionary of spoken language requires new types of lexicographic descriptions and an elaborate microstructure. When analyzing spoken language material, a considerable part of the lexicographic work consists in analyzing the interactional contributions of one or more speakers, focusing on the lexicalized units used for organizing conversation as well as for expressing one’s attitude and reacting to other speakers’ turns. In the project Lexicon of Spoken German (LeGeDe: Lexikon des gesprochenen Deutsch), we are creating a prototype for a lexical resource with the aim of describing the common practices and preferences in spoken German by exploring lexicographic representations for interjections, multiword expressions, such as passt schon and mal gucken, and delexicalized verb forms. None of these have been extensively elaborated in the German lexicographic tradition.

Representing pragmatic information

One of the biggest challenges of describing spoken language data is determining the function of lexical units in an interactional setting. For instance, the expression oh Gott (en: oh my God) can have multiple functions in an interaction: it can express surprise, astonishment, horror, indignation, annoyance, pain, excitement, etc., and it can be used, for example, as a means of confirming or agreeing with the interlocutor’s position.

In an attempt to describe these functions in a TEI representation during the Lexical Data Masterclass in Berlin in December 2017, I defined a <usg> element with the type “commFunc” for every piece of information regarding word usage or communicative function that was described in my entry drafts. For the sake of example, I categorized talk organization, speaker alignment and speaker attitude as some of the possible subtypes of communicative functions. Given that the @subtype attribute is not yet allowed here in the TEI specification, I used the @value attribute to specify the subcategories of communicative functions.

<entry>
     <form type="lemma">
         <orth>Gott</orth>
     </form>
     <sense n="2">
         <gramGrp>
             <pos>NG</pos>
         </gramGrp>
         <usg type="commFunc" value="speakerAtt">Aufregung</usg>
         <usg type="commFunc" value="speakerAtt">Überraschung</usg>
         <usg type="commFunc" value="speakerAtt">Entsetzen</usg>
         <usg type="commFunc" value="speakerAtt">Erstaunen</usg>
         <usg type="commFunc" value="speakerAtt">Ungeduld</usg>
         <usg type="commFunc" value="speakerAlign">Kooperation</usg>
         <usg type="commFunc" value="talkOrg">Response</usg>
         <usg type="commFunc" value="talkOrg">Diskursmarker</usg>
     </sense>
</entry>

Since each <usg> element contains only one item, this structure would allow querying the dictionary according to onomasiological features. For future reference, the conventions in ISO 24617-2 Dialogue Acts (see Bunt et al., 2010) may be a good basis for further work on systematizing communicative functions.
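
For instance, an illustrative XPath query (assuming the encoding above) would retrieve all entries containing a sense that can express surprise:

//entry[.//usg[@type = 'commFunc'][. = 'Überraschung']]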

An immediate issue that arises in this type of representation is that of inheritance and cross-reference. On the one hand, one must consider representing the uses of multiword expressions that are related to a particular sense and do not inherit all of its functions. On the other hand, it is also inevitable to represent those which inherit all of the communicative functions defined for the parent node and which can have other functions as well. One way of resolving the issue of inheritance would be to specify whether the features in the dictionary are to be interpreted as cumulative, overwriting or local (Ide, Kilgarriff, Romary 2000).

Although the questions of inheritance and of the representation of communicative functions become apparent in an attempt at XML dictionary modelling, they must be discussed in the conceptual part of the lexicographic work. The same can be said of the descriptions of multiword expressions and the lexicographic descriptions of conversions (schauen, verb > schau, interjection). In TEI terms, the latter can be represented as related entries as well as entries in their own right, and deciding how to model them in the starting phase of dictionary creation is indispensable for the sustainable development of that dictionary.

Integrating frequency information

Working with corpora of spoken language that are annotated on multiple levels allows lexicographic descriptions to make use of metadata such as pronunciation, geographical information, gender, age, etc. Annotating and sorting the dictionary entries by these features could be the next development for born-digital dictionaries of spoken language.

Corpora of spoken German are still too small to allow a fine-grained description of particular lexical units (for instance, FOLK, the largest corpus of spoken German in interaction, contains fewer than 2 million tokens). However, since interjections are highly frequent in spoken language, a quantitative lexicographic description of their frequency distribution would be something to consider.

Apart from the quantitative description of metadata in dictionary entries, sorting the entries according to the most frequent senses is an issue that can be addressed with the TEI tag <f>, which has also been used for representing corpus frequencies in dictionaries (Mörth et al., 2015). Besides storing absolute frequencies or ranks based on corpus counts, the <f> tag can be used to represent the rank of the senses or uses within an entry.


<fs type="corpFreq">
    <f name="rank"><numeric value="1"/></f>
</fs>

<fs type="corpFreq">
    <f name="rank"><numeric value="2"/></f>
</fs>

Thanks to the extended POS tags developed for spoken German and integrated into the FOLK corpus (Westpfahl 2014), disambiguating between certain senses, as well as checking their corpus frequency, has been made possible. For instance, Gott is tagged as either a noun (NN) or an interjection (non-grammatical element: NG), and the frequency of Gott as an interjection in FOLK far surpasses its frequency as a noun. Hence, we can set Gott (NG) as the most frequent sense (rank=1), as sketched below. However, for many lexical units with the same part of speech but different senses, the ranking cannot be automated as easily, and it will depend to a great extent on the sample size considered in the lexicographic analysis.
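
One possible placement of such a frequency block, combining the Gott entry and the ranking discussed above (where exactly <fs> should sit within <sense> is my assumption, not a settled convention):

<entry>
     <form type="lemma">
         <orth>Gott</orth>
     </form>
     <sense n="2">
         <gramGrp>
             <pos>NG</pos>
         </gramGrp>
         <fs type="corpFreq">
             <f name="rank"><numeric value="1"/></f>
         </fs>
         <usg type="commFunc" value="speakerAtt">Überraschung</usg>
     </sense>
</entry>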

Simple XSLT experiment: building up a concordance dictionary from a corpus

 

During the Lexical Data Masterclass in Berlin, we worked together with Simonetta Battista and Ellert Johannsson on the Dictionary of Old Norse Prose. One of the little exercises we did was to use XSLT to generate lexical entries automatically from an existing annotated textual corpus of Old Norse.

The corpus

The starting point is a corpus encoded according to the TEI guidelines, annotated down to the level of the token. The core unit in the corpus is the sentence (available under /TEI/text/body/div/p):

<s><w lemma="ljúka" type="sfg3en">Lýkur</w><w lemma="hér" type="aa">hér</w><w lemma="saga" type="nveo">sögu</w><w lemma="grettir" type="nkee-s">Grettis</w><w lemma="ásmundarsonar" type="nkee-s">Ásmundarsonar</w><pc>,</pc><w lemma="vor" type="fekee">vors</w><w lemma="samlandi" type="nkfe">samlanda</w><pc>.</pc></s>

where each token (<w>) is associated with a reference lemma (the @lemma attribute).

The task at hand

The objective is, for each lemma occurring in the annotated corpus, to create a TEI lexical entry that groups together all the sentences from the corpus in which it occurs. Going one step further, we create a specific TEI document for each entry.

The XSLT stylesheet

Although quite simple, the stylesheet illustrates several XSLT techniques, which we describe in the following sections.

Architecture of the stylesheet and first variables

The stylesheet relies on a single template associated with the root element of our corpus:

<xsl:template match="/">…</xsl:template>

We first create the following variables (by means of the XSLT element <xsl:variable/>):

  • theRoot: stores the root element of the source document so that it can be used later in the loop on lemmas, which loses track of the context node;
<xsl:variable name="theRoot" select="."/>
  • folderName: the name of the directory where we will store all the dictionary entry documents;
  • allLemmas: contains the sequence of all the distinct lemmas encountered in the corpus, obtained by means of a concise XPath expression:
<xsl:variable name="allLemmas" select="distinct-values(descendant::w/@lemma)"/>

The XPath expression operates in two steps:

  • descendant::w/@lemma: extracts all @lemma attributes from all words which are descendants of the current node (i.e. the root of our document in the current template);
  • the powerful XPath function distinct-values(), which creates a sequence of all distinct values occurring in the argument sequence.

Creating each entry

The process of creating entries is based upon a main loop iterating over the different lemmas, within which we use <xsl:result-document/> to create an output document for each lemma.

<xsl:for-each select="$allLemmas">
   <xsl:sort/>
   <xsl:result-document href="{$folderName}/{.}.xml" method="xml">
   …
   </xsl:result-document>
</xsl:for-each>

We have left here a sorting instruction (<xsl:sort/>) as an illustration on how to use it, knowing that it is less useful when entries are created in separate files.

Note the use of the {} notation in attribute values to resolve specific XPath fragments which, put together, build up the name of the output file.

For each entry, we generate:

  • a <form> element whose <orth> child element contains the lemma
  • a series of <cit> elements for each detected sentence in the corpus (see below)
<entry>
   <form type="lemma">
      <orth>
         <xsl:value-of select="."/>
      </orth>
   </form>
   <xsl:for-each select="$theRoot/descendant::w[@lemma = current()]">
      <cit>
         <quote>
            <xsl:copy-of select="parent::s"/>
         </quote>
      </cit>
   </xsl:for-each>
</entry>

The XPath expression “$theRoot/descendant::w[@lemma = current()]” reads: find all the <w> elements descended from the root element (of the corpus) whose @lemma attribute equals the current context item (i.e. the text value of the lemma from the encompassing <xsl:for-each/>). Note that the example sentence is simply obtained by looking for the parent <s> element of the current <w>.

The full XSLT

Below you can find the full XSLT; notice how the header is built up by taking over some parts of the input corpus.

The full set of resources can be found in the DARIAH lexical working group GitHub space. (Note the default namespace declaration xmlns="http://www.tei-c.org/ns/1.0", which ensures that all generated elements are placed in the TEI namespace, and the @xpath-default-namespace attribute, which ensures that the XPath expressions match the TEI-namespaced input.)

<?xml version="1.0" encoding="UTF-8"?>

<xsl:stylesheet
   xmlns:xsl="http://www.w3.org/1999/XSL/Transform"    
   version="2.0" xmlns="http://www.tei-c.org/ns/1.0" 
   xpath-default-namespace="http://www.tei-c.org/ns/1.0">

<xsl:output method="xml" indent="yes"/>

<xsl:template match="/">
   <xsl:variable name="theRoot" select="."/>
   <xsl:variable name="folderName" select="'Entries'"/>
   <xsl:variable name="allLemmas"
      select="distinct-values(descendant::w/@lemma)"/>
   <xsl:for-each select="$allLemmas">
      <xsl:sort/>
      <xsl:result-document href="{$folderName}/{.}.xml" method="xml">
         <TEI>
            <teiHeader>
               <fileDesc>
                  <titleStmt>
                     <title>Concordance lexical entry for lemma: 
                        <xsl:value-of select="."/>
                     </title>
                  </titleStmt>
                  <publicationStmt>
                      <xsl:copy-of select="$theRoot/descendant::publicationStmt/distributor"/>
                      <xsl:copy-of select="$theRoot/descendant::publicationStmt/address"/>
                      <xsl:copy-of select="$theRoot/descendant::publicationStmt/availability"/>
                 </publicationStmt>
                 <sourceDesc>
                    <bibl>
                       <xsl:copy-of select="$theRoot/descendant::titleStmt/title"/>
                   </bibl>
                 </sourceDesc>
              </fileDesc>
            </teiHeader>
            <text>
               <body>
                  <entry>
                     <form type="lemma">
                        <orth>
                           <xsl:value-of select="."/>
                        </orth>
                     </form>
                     <xsl:for-each select="$theRoot/descendant::w[@lemma = current()]">
                        <cit>
                           <quote>
                              <xsl:copy-of select="parent::s"/>
                           </quote>
                        </cit>
                     </xsl:for-each>
                  </entry>
               </body>
            </text>
         </TEI>
      </xsl:result-document>
   </xsl:for-each>
</xsl:template>
</xsl:stylesheet>

 

 
