One could assume that one of the first steps in a project involving the encoding of information with markup, or the creation of XML files in general, is to create a schema governing the encoding procedures. But I would say that this is not necessarily true, especially when the subject matter is not entirely clear right from the start, in terms of a data model and the features describing it.
So you might begin to encode a text or to arrange metadata in XML, accumulating many files. When you finally want to fix your data model, how do you know what is inside your collection? Go through all the files again to check? And even if you had a schema from the beginning, how often did you actually use a certain XML element or attribute? Are there barely used ones that you could leave out? Did you use different ones for the same kind of information? What if the collection at hand originates from somewhere else and you want to familiarise yourself with it? Have you encoded certain phenomena throughout the collection or just in some documents? One could think of more questions of this kind.
In our research group, Python has become the programming lingua franca which we all use or are beginning to use, so that is what I chose for a program which analyses the usage of elements and attributes in a collection of XML files. In this post I would like to show what the program can be used for and document some of its features.
If you want to have a look and try it out on your own, it is available on GitHub as part of the group’s “toolbox”:
<https://github.com/cligs/toolbox/blob/master/extract/elements_used.py>
If you find a bug or have suggestions on how to improve the script, you can create an issue there. The program has been tested on Linux with Python 3.4, and besides modules from the standard library, it uses the following ones:
- lxml
- matplotlib
- numpy
Let’s begin with something to look at:
The plot shows in which files and how often the TEI element `said` occurs in the text collection Novelas Latinoamericanas. At the moment, the collection consists of about 120 files, so it is easy to see that direct speech has been marked up in just a few of them.
Even though we have a TEI schema for the text collections, the above plot shows that when the encoding is done manually and there is no workflow for error reports, there may be slips like `saidd` instead of `said`, so here the visualizations help to detect errors.
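For readers curious what such a per-file count involves, here is a minimal sketch of the underlying idea; this is not the script itself, and the collection path and element name are just examples:

```python
from pathlib import Path

import matplotlib.pyplot as plt
from lxml import etree

NS = {"ns": "http://www.tei-c.org/ns/1.0"}
collection = Path("/path/to/collection")  # adjust to your collection

# count how often <said> occurs inside the body of each file
counts = {}
for xml_file in sorted(collection.glob("*.xml")):
    tree = etree.parse(str(xml_file))
    counts[xml_file.stem] = int(tree.xpath("count(//ns:body//ns:said)", namespaces=NS))

# one bar per file; plt.yscale("log") would give the logarithmic
# y axis that the script's log option offers
plt.bar(range(len(counts)), list(counts.values()))
plt.xticks(range(len(counts)), list(counts.keys()), rotation=90, fontsize=6)
plt.ylabel("occurrences of said")
plt.tight_layout()
plt.show()
```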
In addition to plots for the usage of single elements or attributes in a bunch of files, you can create an overview for a single file, showing which elements and attributes are used there and how often:
This can give an insight into how deeply individual documents are encoded (many different elements and attributes, or just a few?), especially when compared to other documents. It can also give a glimpse of the structure of a text. In the above example, novel number 71 does not just contain division and paragraph elements, but also quotes, groups of verse lines and floating texts.
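Conceptually, such a per-file overview boils down to counting tag and attribute names. A sketch, with a made-up file name:

```python
from collections import Counter

from lxml import etree

NS = {"ns": "http://www.tei-c.org/ns/1.0"}

# count element and attribute names in the body of a single file
tree = etree.parse("nh0071.xml")  # hypothetical file name
counts = Counter()
for elem in tree.xpath("//ns:body//*", namespaces=NS):
    # strip the namespace URI from the tag name, e.g. "{...}p" -> "p"
    counts[etree.QName(elem).localname] += 1
    for attr in elem.attrib:
        counts["@" + etree.QName(attr).localname] += 1  # e.g. "@rend"

print(counts.most_common(10))
```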
Finally, the “elements used” module allows you to create an overview of the usage of all elements and all attributes in the entire collection:
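Conceptually, this collection-wide overview is just the sum of the per-file counts; a self-contained sketch, again with made-up paths:

```python
from collections import Counter
from pathlib import Path

from lxml import etree

NS = {"ns": "http://www.tei-c.org/ns/1.0"}

# aggregate element and attribute counts over the whole collection
collection_counts = Counter()
for xml_file in sorted(Path("/path/to/collection").glob("*.xml")):
    tree = etree.parse(str(xml_file))
    for elem in tree.xpath("//ns:body//*", namespaces=NS):
        collection_counts[etree.QName(elem).localname] += 1
        for attr in elem.attrib:
            collection_counts["@" + etree.QName(attr).localname] += 1

print(collection_counts.most_common())
```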
For those who think “but I’m not interested in the visual stuff”, “my own plots would look much nicer” or “I could imagine doing other things with those element and attribute counts”: the script also exports the data in JSON and CSV format.
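The exact file names and layout of those dumps are best checked in the script itself; producing such dumps from a dictionary of counts takes only a few lines in any case. A sketch, not necessarily the script’s format:

```python
import csv
import json

# counts as produced by an analysis like the ones sketched above;
# the numbers here are made up
counts = {"p": 25584, "said": 1617, "@rend": 412}

with open("elements_used.json", "w", encoding="utf-8") as jsonfile:
    json.dump(counts, jsonfile, indent=2)

with open("elements_used.csv", "w", encoding="utf-8", newline="") as csvfile:
    writer = csv.writer(csvfile)
    writer.writerow(["name", "count"])
    writer.writerows(counts.items())
```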
To finish, I would like to add some information about how the script can be called and some additional options that it supports. You can either import it as a Python module and call the main function with some arguments, or call it from the command line, passing the arguments there; an example of each follows below.
The following arguments are supported*:
*All arguments except log should be strings. The JSON and CSV dumps are made every time you run the script.
| argument name | description |
| --- | --- |
| collection path | The first of the two mandatory arguments: the path to the collection of XML files in your file system. |
| collection name | The second mandatory argument is simply a name for the collection that will be displayed in the plots. |
| mode | Optional. Two values are possible: “single” and “all”. The default is single mode, in which just one plot is created: either the general overview, a plot showing the element and attribute usage for one file, or a plot for one specific element’s or attribute’s usage in all the files. In “all” mode, all possible visualizations are created. Depending on how large your collection is, this might be a lot, and maybe you are just interested in a particular file, element or attribute. |
| name | Also optional. If you leave it empty, in single mode you will get the overall visualization (“which elements and attributes are used in the whole collection of XML files and how often?”). If you pass a file name, you will get the plot for that file; with an element name you get the overview plot for that element, and with an attribute name the one for that attribute. Attribute names should start with @ to be recognized, and file names should end in .xml. |
| out | With this optional argument you can indicate the path to a directory where the output files should be stored. Otherwise, the current working directory is used. |
| namespace | By default, it is assumed that your collection is in the TEI namespace (http://www.tei-c.org/ns/1.0). If you want to use another namespace, you can indicate it here. If you do not want to use any namespace at all, pass an empty string. |
| xpath | This optional argument allows you to pass an XPath expression which determines what elements and attributes are considered in the usage analysis. The default is “//ns:body//*” with the namespace ns=“http://www.tei-c.org/ns/1.0”, i.e. all the elements occurring inside the TEI body element. I assumed that you might not be that interested in the usage of elements in the TEI header, but if you are, you can change the XPath expression accordingly. If you are not using TEI at all, you can change both the namespace and the XPath expression. Please always use the “ns” prefix in the path expression if you use a namespace (see the sketch after this table). Unfortunately, the lxml module only supports XPath 1.0. |
| log | Optional. If set to True, the y axis is scaled logarithmically instead of linearly. This can make sense if you are interested in the smaller numbers, e.g. if there are thousands of paragraph elements but just a few elements of other types which you want to have a closer look at. |
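To illustrate the namespace handling mentioned for the xpath argument: in lxml, the prefix used in the expression has to be mapped to the namespace URI when the expression is evaluated, roughly like this (the file name is made up):

```python
from lxml import etree

tree = etree.parse("nh0071.xml")  # hypothetical file name

# the "ns" prefix used in the expression must be mapped to the namespace URI;
# an unbound prefix makes lxml raise an XPathEvalError
body_elements = tree.xpath(
    "//ns:body//*", namespaces={"ns": "http://www.tei-c.org/ns/1.0"}
)
print(len(body_elements))
```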
An example call from the command line looks like this:
```
python elements_used.py "/home/ulrike/Dokumente/Git/novelaslatinoamericanas/master" "Novelas Latinoamericanas" --mode="all" --out="/home/ulrike/Schreibtisch"
```
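Imported as a module, the equivalent call might look as follows; the exact keyword names are an assumption on my part, mirroring the command line arguments, and should be checked against the signature of the main function in the script:

```python
import elements_used

# hypothetical keyword names; check main() in the script itself
elements_used.main(
    "/home/ulrike/Dokumente/Git/novelaslatinoamericanas/master",
    "Novelas Latinoamericanas",
    mode="all",
    out="/home/ulrike/Schreibtisch",
)
```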
So be aware of what you are encoding! 🙂
See also the Data Dictionary Generator, designed to quickly generate encoding documentation for a TEI file with oXygen.