
GROBID Dictionaries: Experiments with the General Basque Dictionary (OEH) PDF

GROBID Dictionaries is a tool for structuring dictionaries, i.e. converting them from PDF to TEI XML, using a supervised machine learning approach (CRF models). Details are explained in Khemakhem et al. (2017), which is also the paper to cite in relation to GROBID Dictionaries.

At LexMC, I took part in the GROBID Dictionaries tutorial and workshop; Mohamed Khemakhem, the developer of GROBID Dictionaries, was our instructor. Here you can find the initial guidelines for the tutorial sessions.

After a tutorial on installing and running the tool from a Docker image prepared by Mohamed, we became familiar with the tool and with the annotation and training workflow, using test data samples taken from a random paper dictionary (in its PDF version).

We then repeated the steps using our own data, in my case sample pages from a CC-BY-NC-SA licensed PDF version of the General Basque Dictionary “Orotariko Euskal Hiztegia” (Mitxelena & Sarasola 1986), which runs to more than 16,000 pages and, due to its structure, has to be considered a hard case for structured representation in XML.

OEH dictionary on the shelf and as PDF

I created the XML files to be manually annotated and then used as training data with the “create training data” function in GROBID Dictionaries. The annotation was then carried out in oXygen’s author mode. The manually annotated XML files are placed in a folder where GROBID will find them and use them for training; the commands for that are executed in the terminal (see the guidelines).

In this screenshot, we see training data to be annotated in oXygen, and the terminal after running the command that created that XML from the original PDF, together with the other files GROBID needs.

The files created by GROBID Dictionaries are (1) the raw text file from the pdf-to-text conversion, (2) the features file, and (3) the XML file. The vertical features file contains, for each token of the raw text, a set of feature values derived from the presentational markup present in the PDF.
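As an illustration of what such a vertical file can look like, here is a minimal Python sketch for inspecting it. It assumes a whitespace-separated format with the token in the first column followed by feature values, which mirrors common CRF input conventions; the exact column layout in GROBID Dictionaries may differ, and the file name is a placeholder.

```python
# Minimal sketch: inspect a vertical features file as used for CRF training.
# Assumption: one token per line, whitespace-separated, token first, then
# feature values (font, size, bold/italic flags, etc.); blank lines separate blocks.
from collections import Counter

def read_features(path):
    """Yield (token, features) pairs from a vertical features file."""
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            line = line.rstrip("\n")
            if not line.strip():        # blank line = block boundary
                continue
            columns = line.split()
            yield columns[0], columns[1:]

if __name__ == "__main__":
    tokens = list(read_features("sample.features"))  # hypothetical file name
    print(f"{len(tokens)} tokens")
    width = len(tokens[0][1]) if tokens else 0
    # How many distinct values does each feature column take?
    for i in range(width):
        values = Counter(feats[i] for _, feats in tokens if len(feats) > i)
        print(f"feature column {i}: {len(values)} distinct values")
```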

Screenshot: Features File and PDF

The cascaded parsing performed by GROBID follows this path:

  1. Dictionary segmentation: Isolates the dictionary page content (body) from headers, footers and irrelevant material present in the pdf-to-text output (the so-called DictScrap).
  2. Dictionary body segmentation: Marks up the entries inside the body.
  3. Lexical entry segmentation: Marks up the blocks for form, sense and related entries (re) inside each entry.
  4. Further inner segmentation of the form and the sense blocks.
  5. […] (see Annotation Guidelines).
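To make the nesting that this cascade produces more concrete, the sketch below parses a small mock-up of TEI-like output and walks body → entry → form/sense. The element names follow the TEI vocabulary used in the annotation guidelines, but the mock-up itself is invented for illustration and is not actual GROBID output.

```python
# Sketch of the nesting produced by the cascade: body > entry > form / sense.
# The XML below is a hand-made mock-up in TEI-like vocabulary, not real GROBID output.
import xml.etree.ElementTree as ET

MOCK_TEI = """
<body>
  <entry>
    <form type="lemma"><orth>etxe</orth></form>
    <sense>house, home</sense>
    <sense>household</sense>
  </entry>
  <entry>
    <form type="lemma"><orth>ur</orth></form>
    <sense>water</sense>
  </entry>
</body>
"""

body = ET.fromstring(MOCK_TEI)
for entry in body.findall("entry"):
    lemma = entry.findtext("form/orth")
    senses = [s.text for s in entry.findall("sense")]
    print(lemma, "->", senses)
```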

Cascaded segmentation in GROBID

Screenshot: Annotated segments of entries (form and sense blocks), next to the original PDF

For the OEH dictionary data, I trained GROBID on manually annotated XML files for the three models “Dictionary Segmentation”, “Dictionary Body Segmentation”, and “Lexical Entry Segmentation”. After every step, I re-trained GROBID with the annotated training data.

GROBID Dictionaries also comes with a RESTful web service for user-friendly execution of the annotation pipeline on the basis of the training performed beforehand; it is started from the Docker image’s shell prompt and is then available at localhost:8080.
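As an illustration of how such a service can also be called programmatically, here is a minimal Python sketch that posts a PDF sample with the requests library. The endpoint path and the form field name are assumptions modelled on GROBID-style web services, and the file names are placeholders; the demo page of the running service lists the actual routes.

```python
# Sketch: send a PDF sample to the GROBID Dictionaries web service.
# Assumption: the service runs on localhost:8080 and exposes a GROBID-style
# multipart endpoint; check the service's demo page for the exact route name.
import requests

URL = "http://localhost:8080/api/processDictionarySegmentation"  # hypothetical route

with open("oeh_sample.pdf", "rb") as pdf:                          # hypothetical file
    response = requests.post(URL, files={"input": pdf}, timeout=300)

response.raise_for_status()
with open("oeh_sample.segmented.tei.xml", "w", encoding="utf-8") as out:
    out.write(response.text)
print("TEI output written,", len(response.text), "characters")
```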

We found some issues, which could partly be solved:

  • In one of the versions of the OEH PDF, the pages contained no headers and no footers, so nothing could be marked up as such; this produced an error (the expected headers were missing). Mohamed solved this by fixing the code in that respect.
  • The size of OEH entries ranges from three words on a single line to several pages. At the moment, GROBID will not accept entries that contain more than two page breaks.
  • Some special characters are not recognized by the pdf-to-text script and consequently do not appear in the features file either. This is due to non-Unicode encodings of some special characters in certain PDFs, and it should be solved, because these characters are used as separators inside entries (e.g. at the beginning of each sense description). In the case of OEH, the inner segmentation of entries did not yield very good results, in spite of about 100 entries having been annotated. We think that having the separating symbol (a diamond) recognized could help significantly: in our case, sense blocks are very often preceded by a diamond symbol (see screenshot below), and square bullets and stars appear with similar functions (a quick check for this is sketched after this list).
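As a quick sanity check for the last point, one can test whether the separator glyphs survive the pdf-to-text step at all. This is a minimal sketch assuming the raw text export is UTF-8; the specific code points (black diamond, black square, black star) and the file name stand in for whatever the OEH PDF actually uses and are assumptions here.

```python
# Sketch: check whether entry-internal separator glyphs survive pdf-to-text.
# Assumption: the raw text export is UTF-8; the code points below (diamond,
# black square, star) stand in for the actual separators used in the OEH PDF.
SEPARATORS = {
    "\u25C6": "black diamond",
    "\u25A0": "black square",
    "\u2605": "black star",
}

with open("oeh_sample.txt", encoding="utf-8", errors="replace") as fh:  # hypothetical file
    text = fh.read()

for char, name in SEPARATORS.items():
    count = text.count(char)
    status = "found" if count else "MISSING (possibly lost in conversion)"
    print(f"{name} ({char!r}): {count} occurrence(s) - {status}")
```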

All in all, we now know how GROBID Dictionaries works. We have clearly seen that the precision of the output improves as more annotated training material is provided to the tool. In my case, I did not find a parsing error in the Dictionary Body Segmentation step until I had looked at a large number of examples. After going back, annotating more material and re-training the tool, the parsing error did not recur.

Screenshot: Noisy segmentation of entries (left), and PDF original (right)

We will now parse the whole dictionary (Dictionary Segmentation and Dictionary Body Segmentation), then isolate the headword and infra-headword strings and compare this list to an OEH lemma list produced beforehand by other means, in order to evaluate this step. Where necessary, we will annotate more pages (examples that yield noisy results) and re-evaluate, in an iterative process, until we reach full precision; a sketch of such a list comparison follows below.
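A minimal sketch of that comparison, assuming both the extracted headwords and the reference lemma list are available as plain-text files with one form per line (the file names are placeholders):

```python
# Sketch: compare headwords extracted by GROBID with a reference OEH lemma list.
# Assumption: both lists are plain-text files, one form per line; names are placeholders.
def load_list(path):
    with open(path, encoding="utf-8") as fh:
        return {line.strip().lower() for line in fh if line.strip()}

extracted = load_list("grobid_headwords.txt")
reference = load_list("oeh_lemma_list.txt")

true_positives = extracted & reference
precision = len(true_positives) / len(extracted) if extracted else 0.0
recall = len(true_positives) / len(reference) if reference else 0.0

print(f"precision: {precision:.3f}  recall: {recall:.3f}")
print("spurious headwords:", sorted(extracted - reference)[:20])
print("missed lemmas:", sorted(reference - extracted)[:20])
```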

Then we will move on to the next step and annotate, re-train and evaluate until we get a noise-free result. As the dictionary PDF I want to structure has a very rich microstructure, evaluating GROBID’s performance at every step will take some time.

In the workshop, we also became familiar with the development workflow of GROBID Dictionaries, and we will certainly cause the developer more than one headache by filing tickets on GitHub.

Thanks for this very interesting workshop, and for your patience, Mohamed!

From a Legacy Dictionary to New Lexica: An Alternative Time-Machine to Discover Neologisms

The history of Greek lexicography has many examples of exceptional and quite “crazy” pioneering researchers, amateur lexicographers and linguist authors who travelled all over the country to collect data for their lexica and dictionaries. Nikos Kazantzakis travelled through most areas of Greece to collect “beautiful” words for his works, especially for his epic saga, Odyssey, conceived as a sequel to Homer’s Odyssey. He tells the following story.

Kazantzakis was taking pictures of rare flowers and trees in the countryside of Crete. On the outskirts of a village he was stunned by a beautiful flower. He asked the children who were playing nearby what the name of the flower was. They did not know the name; however, they informed him that only one person, a very old lady of the village, would know the name of the flower. Kazantzakis went to the house of that lady to ask her about the name. Unfortunately, the neighbors informed him that she had just passed away. Kazantzakis became sad and then said: “Our language lost a glorious member, since a word just died with that lady…”. Continue reading From a Legacy Dictionary to New Lexica: An Alternative Time-Machine to Discover Neologisms

Digitised and Born-Digital in One Application: Dutch Historical Dictionaries Online

Dutch historical language has been described in four separate comprehensive dictionaries: the Woordenboek der Nederlandsche Taal (WNT, Dictionary of the Dutch Language, 1500-1976), the Middelnederlandsch Woordenboek (MNW, Dictionary of Middle Dutch, ca. 1250-1550), the Vroegmiddelnederlands Woordenboek (VMNW, Dictionary of Early Middle Dutch, 1200-1300), and the Oudnederlands Woordenboek (ONW, Dictionary of Old Dutch, ca. 500-1200). Both the WNT and the MNW were paper dictionaries that were digitised by keying and made available on CD-ROM; the ONW and the VMNW are born-digital dictionaries. To offer a comprehensive view of the Dutch language, it was decided to put the four dictionaries online in one portal. Making it possible for users to query one or more dictionaries simultaneously was a logical step to take, because these dictionaries complement each other. The challenge was not only to give the user optimal access to the dictionary information, but also to do so without compromising the uniqueness of each individual dictionary. Continue reading Digitised and Born-Digital in One Application: Dutch Historical Dictionaries Online

Digitising 150 Years of the Swiss German Dictionary

The scholars of the Swiss German Dictionary (Schweizerisches Idiotikon) have collected more than 15,000 pages of highly concentrated information over the last 150 years. When we began retro-digitising the dictionary a few years ago, we were unsure if we were up to the task of dealing with such a massive amount of data. Of course, it was not a hardware problem – the 100 million characters will easily fit into any storage device sold nowadays. The real challenge is how to translate a 19th-century information structure into one which today’s computers can handle.

One of the major concerns of dictionary makers before the electronic age was how to save space. Dictionary makers achieved this by using abbreviations, using typography (styles and special characters) to mark different parts of an article and using context to convey information. This specifically means that certain elements can either be omitted depending on the context or bear a different meaning.  Continue reading Digitising 150 Years of the Swiss German Dictionary

A New Life for an Old Dictionary: Notes on Digitizing the Dictionary of Russia

I was a PhD student just finishing my thesis when my doctoral studies advisor, Professor Viktor Kabakchi, a grey-haired Russian scholar with many years of university teaching experience, invited me to take part in updating his brainchild  – The Dictionary of Russia. The first print edition of this dictionary about Russian cultural terms in English, which was published more than 10 years ago, required a thorough editorial revision. My excitement to meet new challenges as well as new perspectives in life knew no bounds.

However, at the time I didn’t have the slightest idea where to start in order to digitize a print dictionary and produce its electronic version. To my great joy, the eLex 2013 conference in Tallinn, organized by the Institute of Estonian Language and the Trojina Institute of Applied Slovene Studies, shed some light on my understanding of what e-lexicography is. But that was a mere introduction to the field.

Soon afterwards, I was lucky to start discussing the digitization of The Dictionary of Russia with K Dictionaries from Tel Aviv, who agreed to run the project and provide all the necessary technical support and guidance. I wrote this post to share with you some personal notes on a retro-digitization project undertaken by someone who is not a computer geek. So please do not expect to find too much ‘technical stuff’ in it, as my idea was to give an overview of the main milestones and to show how far we have come by now. Continue reading A New Life for an Old Dictionary: Notes on Digitizing the Dictionary of Russia

How Can I OCR My Dictionary?

One way to digitise a dictionary is using Optical Character Recognition, or OCR. But is OCR feasible at all for my dictionary? And if so, which OCR program should I use, a trainable or an omnifont one? And how about the workflow: should I train the OCR engine or not? And, finally, what should be the output format of my OCR? For those wanting to embark on the OCR adventure, here is a very brief introduction.

To OCR or not to OCR

Some texts are totally un-ocr-able:

Continue reading How Can I OCR My Dictionary?

Legacy Dictionaries Reloaded: Why Should We Bother? 

The closest I’ve ever come to glimpsing hell was several years ago, reading an article in the New York Times, entitled “Justices Turning More Frequently to Dictionary, and Not Just for Big Words.” The article cited the example of a certain Chief Justice John G. Roberts Jr. who had apparently parsed the meaning of a federal law by consulting the usual legal precedents (X vs Y) —  and no less than five dictionaries. One of the words he looked up was “of”. He discovered, lo and behold, that its meaning had something to do with belonging or possession.

Dictionaries lie at the core of the human ability to conceptualize, systematize and convey meaning. But they are hardly positivistic, objective repositories of knowledge or truth about language, let alone life.

Continue reading Legacy Dictionaries Reloaded: Why Should We Bother? 

From Books to Bytes: Turning Paper Dictionaries into Digital Format

When working on the Deutsches Wörterbuch, Jacob Grimm felt depressed by his life as a lexicographer. In the preface to the first volume, published in 1854, Jacob wrote:

“As if for days fine and tight flakes were falling down from the sky and soon the whole area is covered with vast snow I am quasi snowed in under the mass of words pressing towards me from all corners and crevices. Sometimes I want to get up and get rid of everything.”

Dorothea Grimm, his brother’s wife, observed Jacob and Wilhelm anxiously. They have to be liberated, otherwise they will get mouldy, she thought, watching them work on the dictionary from the early morning until late at night.

Continue reading From Books to Bytes: Turning Paper Dictionaries into Digital Format