I was a PhD student just finishing my thesis when my doctoral advisor, Professor Viktor Kabakchi, a grey-haired Russian scholar with many years of university teaching experience, invited me to take part in updating his brainchild – The Dictionary of Russia. The first print edition of this dictionary of Russian cultural terms in English, published more than ten years ago, required a thorough editorial revision. My excitement at the new challenges and prospects ahead knew no bounds.
However, at the time I didn’t have the slightest idea where to start in order to digitize a print dictionary and produce an electronic version. To my great joy, the eLex 2013 conference in Tallinn, organized by the Institute of Estonian Language and the Trojina Institute of Applied Slovene Studies, shed some light on my understanding of what e-lexicography is. But that was a mere introduction to the field.
Soon afterwards, I was lucky to start discussing the digitization of The Dictionary of Russia with K Dictionaries of Tel Aviv, who agreed to run the project and provide all the necessary technical support and guidance. I wrote this post to share some personal notes on a retro-digitization project undertaken by someone who is not a computer geek. So please do not expect to find too much ‘technical stuff’ in it; my aim is to give an overview of the main milestones and show where we stand now.
Taming of a ‘strange beast’
Citing Toma in his recent blog post, ‘a dictionary is a strange, polyfunctional beast’. When it comes to retro-digitization, we may take this metaphor even further: the whole process can be compared to taming a wild horse – capricious, unpredictable and not easy to handle. As the Dictionary of Russia (DR) required two things – generating an electronic dataset and conducting an editorial revision of its entries – it made sense to start by setting out its editorial policy and determining the dictionary’s macro- and microstructure. (To tell the truth, the print edition was rather chaotic in its structure.)
To produce a digital copy of the print edition, its pages (576 in total) were scanned, and the resulting images were converted into text by means of OCR software. This turned out to be quite a long process, taking up much of my time, due to the poor quality of the paper. In addition, in many cases the Cyrillic letters were not properly recognized by the software. It is also worth mentioning that after several unsuccessful attempts to produce a decent copy, I finally decided to cut the book apart so that the text near the binding could be recognized more reliably. Even these measures did not ensure high-quality output, however, and manual keyboarding (or “double keying”) had to be performed as well.
After capturing the data, it had to be proofread and validated. To save time, however, the validation was done alongside the editorial revision of the dictionary entries, which is described below.
Baptism of fire
Our next step was to convert the data into a standardized structured format: XML. Data modelling in XML can be a horribly complicated headache for anyone without much technical experience under their belt. Luckily, I started working in close collaboration with Natalia, a software developer from KD’s Technology Team. Together we developed the dictionary’s DTD from scratch. She wrote it herself, but I was in the driver’s seat, indicating which elements and attributes to include as well as their hierarchy in the document. That was truly a baptism of fire for me, as I had to learn DTD syntax in order to understand what we were doing with the data and to oversee the overall structure of the dictionary. In particular, we spent quite a lot of time checking all the elements before I could get down to the real work with the dataset.
To give you an idea of what our DTD looks like, here is a short overview of its main elements. Each entry contains two main components: a Headword Block and a Sense Group.
The Headword Block includes a Headword Container, which consists of the headword itself, its alternative scripting (Russian words often have several Romanized spellings in English), a subject field and a register. The Headword Block may also have a Variant Form Block, which gives the headword’s original form in Cyrillic and its letter-by-letter transliteration into English.
The Sense Group consists of a Definition Container (holding one or more definitions) and a Citation Container (where citations from various sources illustrate the headword’s usage).
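To make this structure more concrete, here is a minimal DTD sketch of an entry as just described. The element names are my own shorthand for this post – the actual DR2 DTD is more elaborate and uses different identifiers:

```xml
<!-- Minimal sketch of the entry structure described above;
     element names are illustrative, not the actual DR2 identifiers. -->
<!ELEMENT entry (headwordBlock, senseGroup+)>

<!ELEMENT headwordBlock (headwordContainer, variantFormBlock?)>
<!ELEMENT headwordContainer (headword, altScripting*, subjectField?, register?)>
<!ELEMENT headword (#PCDATA)>
<!ELEMENT altScripting (#PCDATA)>    <!-- other Romanized spellings -->
<!ELEMENT subjectField (#PCDATA)>
<!ELEMENT register (#PCDATA)>
<!ELEMENT variantFormBlock (cyrillicForm, transliteration)>
<!ELEMENT cyrillicForm (#PCDATA)>    <!-- original form in Cyrillic -->
<!ELEMENT transliteration (#PCDATA)>

<!ELEMENT senseGroup (definitionContainer, citationContainer?)>
<!ELEMENT definitionContainer (definition+)>
<!ELEMENT definition (#PCDATA)>
<!ELEMENT citationContainer (citation+)>
<!ELEMENT citation (#PCDATA)>
<!ATTLIST citation source CDATA #IMPLIED>
```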
In addition, I also had to prepare a list of DR2 headwords. This was done manually, in the form of an Excel table with the following columns: the headword, its part of speech, the headword’s variants (if any), the source word in Cyrillic with its Romanized transliteration, and cross-references to related headwords (if any). Much of the data was based on the print edition; however, approximately 500 new headwords were added to the updated headword list. Based on this data, Natalia built the XML files from A to Z, which, together with the DTD, served as the cornerstone for further data compilation in an XML editor.
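In terms of the sketch above, a hypothetical headword-list row – say, matryoshka, noun, матрёшка – would yield an entry skeleton roughly like this, waiting to be filled in:

```xml
<!-- Entry skeleton generated from one headword-list row
     (hypothetical example, using the sketch DTD above) -->
<entry>
  <headwordBlock>
    <headwordContainer>
      <headword>matryoshka</headword>
      <altScripting>matreshka</altScripting>
      <subjectField/>
      <register/>
    </headwordContainer>
    <variantFormBlock>
      <cyrillicForm>матрёшка</cyrillicForm>
      <transliteration>matryoshka</transliteration>
    </variantFormBlock>
  </headwordBlock>
  <senseGroup>
    <definitionContainer>
      <definition/>
    </definitionContainer>
  </senseGroup>
</entry>
```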
The rest of the work is being done in XMLMind, a strictly validating, near-WYSIWYG DocBook and XML editor. (Disclosure: the DTD used in the DR2 project is not a TEI DTD. Now I know that certain types of documents, including dictionaries, are better marked up in TEI than in DocBook.) To put in a good word for the XMLMind XML Editor: it is really easy to use and novice-friendly. The tool is highly extensible, so it can be used to create documents conforming to your own customized schemas; at K Dictionaries, it was configured specifically for the needs of the DR2 project. Basically, my procedure has been to paste the captured data into the generated XML files, using the tool’s functionality to fill in the relevant containers. At the same time, I have also been updating the dictionary data (adding new citations and metadata, formatting the definitions where necessary, etc.). For newly added headwords, I have been compiling the entries directly in the software.
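After this editorial stage, the sense group of the hypothetical entry above might look something like this (the definition and citation are invented for illustration, not taken from DR2):

```xml
<senseGroup>
  <definitionContainer>
    <definition>a set of brightly painted hollow wooden dolls of
      decreasing size, nested one inside another</definition>
  </definitionContainer>
  <citationContainer>
    <!-- invented citation, for illustration only -->
    <citation source="[illustrative]">Souvenir stalls were piled
      high with matryoshkas of every size.</citation>
  </citationContainer>
</senseGroup>
```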
Before the curtain falls
The project is still in progress and there is still much updating and proofreading work to be done, but I believe we are one step closer to its final stage. Maybe our digitization plan was not as brilliant as it could have been. Maybe we have taken a rather difficult path (with many of the failures left outside the scope of this post), but I hope that, on the whole, we will succeed in breathing new life into an old dictionary. Now I know for sure that digitization is not as easy as falling off a log, but if you follow the guidelines and standards it can really make your workflow (and your life!) much easier. And I do hope that through the common efforts of the ENeL Working Group 2, which set up this wonderful blog, we will be able to eliminate a lot of stress from the lives of those involved in retro-digitization.