
GROBID Dictionaries: Experiments with the General Basque Dictionary (OEH) PDF

GROBID Dictionaries is a tool for structuring dictionaries (converting them from PDF to TEI XML) using a supervised machine learning approach (CRF models). Details are given in Khemakhem et al. (2017), which is also the paper to cite in relation to GROBID Dictionaries.

At LexMC, I took part in the GROBID Dictionaries tutorial and workshop; Mohamed Khemakhem, the developer of GROBID Dictionaries, was our instructor. You can find the initial guidelines for the tutorial sessions here.

After a tutorial on installing and running the tool from a Docker image prepared by Mohamed, we familiarized ourselves with the tool and with the annotation and training workflow, using test data samples taken from a random paper dictionary (in its PDF version).

We then repeated the steps using our own data; in my case, sample pages from a CC-BY-NC-SA-licensed PDF version of the General Basque Dictionary “Orotariko Euskal Hiztegia” (Mitxelena & Sarasola 1986), which runs to more than 16,000 pages and, due to its structure, has to be considered a hard case for structured representation in XML.

OEH dictionary on the shelf and as PDF

I created the XML files to be manually annotated and then used as training data with the “create training data” function in GROBID Dictionaries. The annotation was then carried out using oXygen’s author mode. The manually annotated XML files are put into a folder where GROBID will find them and use them for training; the commands for that are executed in the terminal (see guidelines).
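
Before running the training commands, it can save a round trip to check that every manually annotated file is still well-formed XML after editing. Here is a minimal Python sketch; the corpus folder path is an assumption and depends on the local setup, not something prescribed by GROBID Dictionaries.

```python
# Sanity check: verify that each annotated training file parses as XML,
# so that training does not fail midway on a broken file.
from pathlib import Path
from xml.etree import ElementTree

TRAINING_DIR = Path("training/corpus")  # hypothetical folder; adjust to your setup

for xml_file in sorted(TRAINING_DIR.glob("*.xml")):
    try:
        ElementTree.parse(xml_file)
        print(f"OK      {xml_file.name}")
    except ElementTree.ParseError as err:
        print(f"BROKEN  {xml_file.name}: {err}")
```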

On this screenshot, we see the training data to be annotated in oXygen, and the terminal after running the command that created that XML from the PDF original, together with the other files GROBID needs.

The files created by GROBID Dictionaries are (1) the raw text file from the pdf-to-text conversion, (2) the features file, and (3) the XML file. For each token of the raw text, the vertical features file contains a set of values derived from the presentational markup present in the PDF.
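
To get a feeling for the features file, a few lines of Python suffice to read it token by token. The layout assumed here (whitespace-separated columns, token first, one feature value per remaining column) is an illustration only; the actual column set depends on the model, and the file name is a placeholder.

```python
# Minimal sketch for inspecting a vertical features file.
def read_features(path):
    """Yield (token, feature_values) pairs, skipping blank separator lines."""
    with open(path, encoding="utf-8") as handle:
        for line in handle:
            columns = line.split()
            if not columns:          # blank lines separate blocks
                continue
            yield columns[0], columns[1:]

for token, features in read_features("sample.training.features"):  # placeholder name
    print(token, features[:5])       # show the first few feature values per token
```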

Screenshot: Features File and PDF

The cascaded parsing performed by GROBID follows this path (a sketch of the resulting nesting follows the list):

  1. Dictionary segmentation: Isolates the dictionary page content (body) from headers, footers, and other irrelevant material present in the pdf2text output (so-called DictScrap).
  2. Dictionary body segmentation: Marks up the entries inside the body.
  3. Lexical entry segmentation: Marks up the blocks for form, sense and related entries (re) inside each entry.
  4. Further inner segmentation of the form and the sense blocks.
  5. […] (see Annotation Guidelines).
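
To illustrate what the cascade produces, the following sketch walks a toy TEI fragment shaped like the nesting described above (body > entry > form/sense). The fragment and the headword are made up for illustration; this is not actual tool output.

```python
# Walk a toy TEI-like fragment: entries inside the body, form and sense
# blocks inside each entry.
from xml.etree import ElementTree

TOY_TEI = """
<body>
  <entry>
    <form type="lemma"><orth>abade</orth></form>
    <sense>priest; abbot</sense>
  </entry>
</body>
"""

body = ElementTree.fromstring(TOY_TEI)
for entry in body.iter("entry"):
    orth = entry.find("./form/orth")
    senses = entry.findall("./sense")
    print(orth.text, "->", len(senses), "sense block(s)")
```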

Cascaded segmentation in GROBID

Screenshot: Annotated segments of entries (form and sense blocks), next to the original PDF

For the OEH dictionary data, I trained GROBID on manually annotated XML files for the three models “Dictionary Segmentation”, “Dictionary Body Segmentation”, and “Lexical Entry Segmentation”. After every step, I re-trained GROBID with the annotated training data.

GROBID Dictionaries comes with a RESTful web service for user-friendly execution of the annotation models trained beforehand; the service is started from the Docker image’s shell prompt and is then available at localhost:8080.
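
For batch processing, the service can also be called programmatically. The sketch below assumes a hypothetical endpoint name and form field; check the service’s own web page at localhost:8080 for the actual values. It requires the third-party `requests` package.

```python
# Hedged sketch: POST a PDF to the running web service and print the
# beginning of the returned TEI XML.
import requests

SERVICE_URL = "http://localhost:8080/api/processDictionary"  # hypothetical endpoint

with open("oeh_sample_pages.pdf", "rb") as pdf:              # placeholder file name
    response = requests.post(SERVICE_URL, files={"input": pdf})

response.raise_for_status()
print(response.text[:500])
```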

We have found some issues that partly could be solved:

  • In one of the versions of the OEH PDF, the pages contained no headers and no footers, so nothing could be marked up as such; this produced an error (the code expected headers to be present), which Mohamed has since solved by fixing the code in that respect.
  • The size of OEH entries ranges from three words on a single line to several pages. At the moment, GROBID does not accept entries that span more than two page breaks.
  • Some special characters are not recognized by the pdf-to-text script, so they do not appear in the features file either. This is due to non-Unicode encodings of some special characters in some PDFs, and it should be solved, because these characters are used as separators inside entries (e.g. at the beginning of each sense description). In the case of OEH, the inner segmentation of entries did not yield very good results, in spite of our having annotated about 100 entries. We think that recognition of the separating character (a diamond) could help significantly: sense blocks are very often preceded by a diamond symbol (see screenshot below), and square bullets and stars appear with similar functions (a workaround sketch follows this list).
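
As a possible workaround for the separator issue, a post-processing pass over the raw text could map mis-encoded glyphs back to real Unicode characters before training, so that the separators survive into the features file. The private-use-area codepoints below are made up for illustration; inspect your own raw text to find the actual values.

```python
# Map assumed private-use-area codepoints back to the intended separators.
SEPARATOR_MAP = {
    "\uf0e0": "\u25c7",  # hypothetical PUA codepoint -> WHITE DIAMOND
    "\uf0a7": "\u25aa",  # hypothetical PUA codepoint -> BLACK SMALL SQUARE
}

def normalize_separators(text: str) -> str:
    """Replace known non-Unicode separator glyphs with real Unicode ones."""
    for bad, good in SEPARATOR_MAP.items():
        text = text.replace(bad, good)
    return text

print(normalize_separators("abade. \uf0e0 1. priest."))
```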

All in all, we now know how GROBID Dictionaries works. We have clearly seen that the precision of the output improves as more annotated training material is provided to the tool. In my case, I did not spot a parsing error in the Dictionary Body Segmentation step until I had seen a large number of examples. After going back and annotating more material, I re-trained the tool, and the parsing error did not recur.

Screenshot: Noisy segmentation of entries (left), and PDF original (right)

We will now parse the whole dictionary (Dictionary Segmentation and Dictionary Body Segmentation), then isolate the headword and infra-headword strings and compare this list to another OEH lemma list produced beforehand by other means, in order to evaluate this step. We will then annotate more pages where needed (examples that yield noisy results) and re-evaluate, in an iterative process, until we reach full precision.
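
The comparison itself is straightforward set arithmetic. A sketch with placeholder file names:

```python
# Compare headwords extracted by GROBID with a reference lemma list
# produced beforehand by other means.
def load_lemmas(path):
    with open(path, encoding="utf-8") as handle:
        return {line.strip() for line in handle if line.strip()}

extracted = load_lemmas("grobid_headwords.txt")   # placeholder: from the parsed TEI
reference = load_lemmas("oeh_lemma_list.txt")     # placeholder: pre-existing list

missing = reference - extracted    # lemmas GROBID failed to isolate
spurious = extracted - reference   # strings wrongly marked as headwords

print(f"matched:  {len(extracted & reference)}")
print(f"missing:  {len(missing)}")
print(f"spurious: {len(spurious)}")
```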

Then we will move on to the next step and annotate, re-train and evaluate until we get a noise-free result. As the dictionary PDF I want to structure has a very rich microstructure, the evaluation of GROBID’s performance at every step will take some time.

In the workshop, we also got familiar with the development workflow of GROBID Dictionaries, and will certainly cause the developer more than one headache by filing tickets on GitHub.

Thanks for this very interesting workshop, and for your patience, Mohamed!


OpenEdition suggests that you cite this post as follows:
David Lindemann (December 7, 2017). GROBID Dictionaries: Experiments with the General Basque Dictionary (OEH) PDF. DigiLex. Retrieved December 7, 2024 from https://doi.org/10.58079/nmnr


2 thoughts on “GROBID Dictionaries: Experiments with the General Basque Dictionary (OEH) PDF”

  1. Excellent post! What do you suggest as a resource or tutorial for someone beginning to learn this awesome tool? I will need it for my PhD project to parse millions of bibliographic references in CVs.

    1. Dear Francois,
      for a tutorial, I suggest you contact the developers directly. For bibliographic reference sections you need GROBID itself; GROBID Dictionaries is based on it. GROBID is, afaik, currently the best bibliography parser. It segments the whole scientific paper into sections according to TEI XML, among them the bibliography: a listBibl element whose single bibliography entries are biblStruct elements.
      Both projects’ GitHub repositories contain documentation:
      https://github.com/MedKhem/grobid-dictionaries
      https://github.com/kermitt2/grobid
