All posts by José Calvo Tello

José Calvo Tello studied Spanish Philology and learned programming and markup languages. He has worked on linguistic, editorial, and corpus-building projects (Clásicos Hispánicos, Textbox). For his PhD at the University of Würzburg, he is currently analyzing the subgenres of the novel of the Spanish Silver Age. To do so, he applies quantitative methods such as machine learning and stylometry with lexical features, evaluating the results against metadata.

Participants at the DH17 Conference by Country and Continent

In recent years I have published a couple of posts about the participants at DH conferences: HDH 2015 and DHd 2016. It was about time to do the same for the international DH conference. So let's go directly to the visualization, and I will explain the details later:

Authors at the DH17 Conference

So, what is in these bars? Each author at the conference (regardless of how many proposals or roles they were involved in) has been counted once, using the HTML view of ConfTool; the data has been grouped by the country of the author's current position (this information was cleaned semi-automatically), and the results are plotted as bars, with the continent defining the color of each bar. Some details: if a conference paper had 7 co-authors, each of them is counted once, so countries with a stronger tradition of multiple co-authorship are likely to be over-represented. On the other hand, the very active people who are part of several papers and panels count only once. I think the two criteria balance each other out in the end. A minimal sketch of this counting step follows below.
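For anyone who wants to reproduce the idea, here is a minimal sketch of the counting step in Python with Pandas. The toy data and column names are made up for illustration; the real cleaning was semi-automatic, as described above.

```python
import pandas as pd

# Toy stand-in for the cleaned ConfTool data: one row per author role
authors = pd.DataFrame({
    "author":  ["A. Smith", "A. Smith", "B. Müller", "C. García"],
    "country": ["USA",      "USA",      "Germany",   "Mexico"],
})

# Count each author once, regardless of how many proposals they are on,
# and group the result by the country of their current position
counts = authors.drop_duplicates("author")["country"].value_counts()
print(counts)
```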

Using the data, the results show different groups of countries:

  1. The leading country is the USA (not a surprise)
  2. After that, we see a group of three countries with many more researchers than the rest: Germany (123), Canada (92) and the UK (73). I didn't expect to find Germany in second position
  3. The next country has less than half as many researchers: France with 36, followed very closely by Switzerland, the Netherlands and Japan
  4. The fourth group is made up of countries with more than ten researchers: Ireland, Taiwan, Russia, Austria, Poland, Mexico and Spain, followed very closely (but with fewer than 10 people) by Belgium
  5. After that we can consider the rest of the countries as the long tail, with between 6 and 1 researchers (and only a single country at that first value: Denmark!)
  6. Can we think for a moment about the whole regions of the world that are just not represented at all in these bars?

There are other remarkable aspects: Italy had only 3 authors (!). China had only 1, while Taiwan had 15. There is not a single person from the Arabic world; actually, not a single person from the whole region between Morocco and Pakistan. Not a single soul from Central America, the Caribbean or the Andes.

Please don't take this as a criticism of the conference. I am trying to understand our community better and am simply verbalizing some surprises. And remember that these country references are not the country where the author was born, but where they are currently working. For example, in these bars I am counted as an author from Germany, although my only passport was printed by the Reino de España.

Now, we can group the information by continent and see how they are represented:

Authors at the DH17 Conference by Continent

A word of warning about how I divided the Americas: there is no satisfying way to do it. If we group the USA and Canada together to show better how Latin America is represented, then we can't use the concept of North America, since Mexico is also part of North America. So I decided to group all American countries together. In any case, there were only 5: USA (313), Canada (92), Mexico (11), Brazil (2) and Argentina (2). So Latin America would have a bar twice as large as Africa's.

There are two countries split between Europe and Asia: Turkey (6) and Russia (12). In these cases I decided to follow the rule "put the doubtful cases in the smaller category so they don't get lost in the larger one".

Even if we only sum the USA and Canada (313 + 92 = 405) and build the biggest possible version of Europe, including Russia and Turkey (385 + 12 + 6 = 403), the North Americans are still the largest group, by literally a couple of people. What is clear is that the two largest groups of authors at the DH conference are essentially people working in Europe and people working in Canada and the USA. This is not a surprise, although I didn't expect the number of Europeans to be almost as large as the number of North Americans, even when the conference is on their side of the Atlantic.

Let's see what happens next year at DH2018 in Mexico! Will there be more authors working in different countries of Latin America? From other parts of the world? The deadline will probably be in a few months, so: stop procrastinating with posts about DH participants and let's get to work on the next proposal!

Using Stylo in Python

Why would you do that?

For a couple of years now I have been using stylometric methods to analyse texts. I learned to use the great stylometric tool Stylo (written in R) at the European Summer School of Digital Humanities in Leipzig from two of its developers: Maciej Eder and Jan Rybicki.

Some months later I started my PhD as a member of the junior research group "Computational Literary Genre Stylistics" (CLiGS), led by Christof Schöch, at the Computerphilologie professorship (held by Prof. Jannidis) at the University of Würzburg, Germany. I was told that I had to learn Python, because that was the programming mother tongue of the department. And so I did. Since then, many of my projects have been a mix of very basic R scripts that call Stylo and other, more sophisticated scripts in Python that handle the preprocessing and the evaluation.

I am not the only person in this R-Python situation; in fact, in the last few years at least two tools for stylometry have been written in Python: Pystyl and PyDelta. So why do I keep working with Stylo if I know Python better? For several reasons:

  • Stylo is very well documented (installation, preparation of the corpus, general use…)
  • It has a mailing list where you can get answers and help
  • It has been tested by hundreds of researchers
  • The developers teach workshops about the tool
  • And they use the feedback from these workshops to improve Stylo (I have seen Maciej speed-coding changes to Stylo during class, uploading them to CRAN, and asking people to update Stylo)
  • Because my PhD tutors recommended that I do so

My stylometric tests are becoming more and more complex, so it is starting to be a pain to jump back and forth between two sets of scripts all the time. I knew that one can use other programming languages from within Python, so I thought it was worth a try to see whether it was possible to use R and Stylo from Python.

This blog post and its sibling notebook (which you can download as a Git repository together with the corpus and the output data) present my first findings. I would be really happy to receive opinions and feedback.

rpy2

The module we are going to use is rpy2 (https://rpy2.readthedocs.io/en/version_2.8.x/), which allows you to work with R from Python. Since it is quite possible that this module is not yet on your computer, you have to install it, for example using pip3 (more info in its documentation: https://rpy2.readthedocs.io/en/version_2.8.x/overview.html#installation):

  • sudo pip3 install rpy2

That was not difficult, but making it work was. After some time I realised that the problem was the version of R on my computer. Although the documentation of rpy2 says that an R version of 3.0 should be fine, it was not. Updating R on Ubuntu was trickier than expected, so I uninstalled and reinstalled R and Stylo, making sure that the R version was higher than 3.0. I am currently working with 3.3.

So, enough talking. If you have already installed rpy2, let's import it:
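A minimal sketch of the import; the alias R is my own convention so that the rest of the notebook reads naturally, it is not part of rpy2:

```python
import rpy2.robjects as robjects

# robjects.r is the gateway to the embedded R session
R = robjects.r
```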

I am not going to explain how exactly rpy2 works (because that is not the point of this notebook, and because I couldn't). Let's just say that whenever we see anything starting with R., it is an R object that we can call from Python. Example:
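For instance, R's built-in constant pi can be reached as an attribute (a small sketch; any object in R's global environment works the same way):

```python
# Evaluates the name "pi" in R and returns an R vector of length one
print(R.pi)        # [1] 3.141593
print(type(R.pi))  # <class 'rpy2.robjects.vectors.FloatVector'>
```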

We can convert these objects to Python objects:
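For a vector of length one, simple indexing already gives us a plain Python float (a sketch):

```python
pi_as_float = R.pi[0]
print(pi_as_float)        # 3.141592653589793
print(type(pi_as_float))  # <class 'float'>
```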

Stylo in Python

In the same way we can call Stylo in Python:
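Assuming the stylo package is already installed in R, we first load it; after that, stylo() is reachable like any other R function (a sketch):

```python
# Load the stylo package into the embedded R session
R.library("stylo")

# R.stylo is now a callable R function
print(type(R.stylo))
```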

It may give us a warning message (RRuntimeWarning): I think the problem lies in the kind of output that Stylo prints on the R command line while running, which cannot be passed on in the same way in Python. Does anyone know how to fix that?

In the repository of this notebook you can find a subfolder with a Spanish corpus from the CLiGS Textbox (https://github.com/cligs/textbox), prepared for stylometric tests. So I will define the path as the current folder and call Stylo without the graphical user interface (if I needed the GUI, we would just work in R!):
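A sketch of that call; I assume the notebook was started in the folder that contains the corpus subfolder:

```python
import os

# Make R's working directory match the folder of this notebook
R.setwd(os.getcwd())

# Run Stylo without the graphical user interface
R.stylo(gui=False)
```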

It is cool to see the output of Stylo in a Jupyter notebook running Python, right?

When it has finished, a pop-up window from R appears with the classic dendrogram that we all know.

Passing arguments

Now, how can I define the arguments for Stylo? As explained in Stylo's documentation, the arguments for the minimum and maximum MFW are called mfw.min and mfw.max. Let's try that:
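This is what the naive attempt would look like (kept as a comment here, because Python rejects it before anything even reaches R):

```python
# Invalid Python: dots are not allowed in keyword-argument names.
# The line below would raise a SyntaxError:
#
# R.stylo(gui=False, mfw.min=100, mfw.max=1000)
```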

Python complains: it does not expect a dot in a variable name. The grammars of R and Python are not compatible here. For these cases, the documentation of rpy2 (http://rpy.sourceforge.net/rpy2/doc-2.2/html/robjects_functions.html) recommends passing the arguments as a Python dictionary in which the keys are strings with the names of the arguments in Stylo. Example with a couple of arguments:
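A sketch of the workaround with two arguments:

```python
# The keys are the argument names exactly as Stylo expects them in R
stylo_args = {"mfw.min": 100, "mfw.max": 1000}

# ** unpacking passes them to the R function as named arguments
R.stylo(gui=False, **stylo_args)
```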

Or we can pass arguments for the kind of analysis, the output we want, the size of the n-grams…:
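A sketch with a few more arguments; the names come from Stylo's documentation, and the values are only examples. I also save the output in a variable, which we will inspect further below:

```python
stylo_args = {
    "analysis.type": "CA",           # cluster analysis
    "mfw.min": 100,                  # minimum number of most frequent words
    "mfw.max": 1000,                 # maximum number of most frequent words
    "ngram.size": 1,                 # use single words as features
    "write.png.file": True,          # save the plot as a PNG file
    "save.distance.tables": True,    # write the distance table to disk
    "save.analyzed.features": True,  # write the list of features used
}
I_love_this_stuff = R.stylo(gui=False, **stylo_args)
```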

Now we have in our folder all the files we asked for: PNG, distance table, features used… Nice!

But what if I want to work further with this data in Python?

Using the data from Stylo in Python

In the cell above I called stylo() and saved its output in a variable called I_love_this_stuff (following the documentation of Stylo 😉):
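Let's have a first look at what came back (a sketch):

```python
print(type(I_love_this_stuff))  # <class 'rpy2.robjects.vectors.ListVector'>
print(len(I_love_this_stuff))   # 9
print(list(I_love_this_stuff.names))
```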

As we can see, this variable is a ListVector of length 9. Each of its items contains a different part of the information from the analysis. Let's print the first 100 characters of each item:
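A quick loop over the items, printing only the beginning of each one:

```python
# Each item of the ListVector holds one part of Stylo's results
for name, item in zip(I_love_this_stuff.names, I_love_this_stuff):
    print(name, ":", str(item)[:100])
```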

The first item actually contains the distance matrix:

As we can see, this object is a matrix in R. Working in Python, we would be happier with a Pandas DataFrame. To get there, we first convert the matrix to a NumPy array, then use this array to load the data into a DataFrame, passing along the names of the rows and the columns:
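A sketch of that conversion; I iterate over the R matrix to get its values (R stores matrices column by column, hence the Fortran order in the reshape):

```python
import numpy as np
import pandas as pd

# The first item of the result holds the distance table (an R matrix)
dist_table = I_love_this_stuff[0]

rows = list(dist_table.rownames)
cols = list(dist_table.colnames)

# R matrices are stored column-major, so reshape in Fortran order
values = np.array(list(dist_table)).reshape((len(rows), len(cols)), order="F")

df = pd.DataFrame(values, index=rows, columns=cols)
print(df.head())
```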

There you have the beautiful Delta matrix of your corpus as a Pandas DataFrame, using Stylo but working only with Python scripts. Yay!

Feedback, please!

This is just a first try. Many things could be done differently, I have probably overlooked things, and maybe there are better ways to deal with this Python-R problem… So please let me know your thoughts (email, Twitter, comments on the blog post…). Thanks in advance, and thanks to Christof for his feedback on this notebook!

What is the statistically typical Spanish modernist novel like?

I am currently preparing an article about stylometry and genre in which I correlate clusters with metadata. One of the current results is that texts with non-typical values in their metadata are better distinguished than the rest: non-realistic texts, texts in which the action takes place in other times or on other continents… It seems that the well-known structuralist categories of marked and unmarked could help organize the texts and genres. In order to get boolean values (like "yes" or "no") I looked for the central tendencies of the texts: what is the typical ending of a novel of this period? What is the typical gender or social level of its protagonist? How long are the novels of this period, typically? Another way to put it: if I take a random novel from Spain and this period, what will I probably find?

For this purpose I am using the metadata of the Corpus de novelas de la Edad de Plata, a first release of which you can find on our GitHub account. The current state of the whole corpus contains around 250 novels from 1880 to 1939. I am not claiming that this corpus is statistically representative of the literature of this period (although I am skeptical that the concept of representativeness, as used in statistics, is of much use in the humanities). Anyhow, this is a way to obtain very specific information about literature, or at least about this corpus.

For this purpose I have written a short script with the Python module Pandas. You can find it in our Toolbox on GitHub (annotate > tendencies_metadata.py). For the categorical values I have taken the mode, and for the numerical values I have calculated the median (which is never worse than the mean, as far as I know).
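The core of the script is very short; here is a minimal sketch (the file name, separator and table layout are made up for illustration; the real script is in the Toolbox):

```python
import pandas as pd

# Hypothetical metadata table: one row per novel, one column per category
metadata = pd.read_csv("metadata.csv", sep="\t", index_col=0)

for column in metadata.columns:
    if metadata[column].dtype == object:
        # Categorical value: take the mode (the most frequent value)
        print(column, "->", metadata[column].mode()[0])
    else:
        # Numerical value: take the median
        print(column, "->", metadata[column].median())
```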

So, the big question: what can I expect from a random novel of this period? Let's start with things we can be very confident about: it was written by a male author, the action takes place in contemporary times, in Europe, and it is realistic. 90% of the corpus agrees with that. But there are good odds for other aspects as well: it takes place in Spain, its protagonist is a young man of medium social level (neither starving nor rich), it has a sad ending, the text is written in the third person, the history of literature doesn't consider the text to represent the author's life in any form, and (congratulations!) it is already in the public domain. All these aspects hold for more than 50% of the corpus.

From the numerical values we can learn many other things: it was probably published in the decade of the 1900s, more specifically around 1905. We have already said that its action takes place in contemporary times, but we can add that it lasts around a year. The text is about 65,000 words long (around 250 pages) and presumably contains around 1,500 paragraphs, of which around 40% contain dialogue. And, believe it or not, it contains only four verses. We already know that the author was quite probably a man, but we could even guess that he lived 64 years, being born around 1866 and dying around 1930. We may even presume that he changed his way of writing around 1890, so the random book comes from his second period. And finally we may also think that the author was quite important, because manuals of the history of literature have actually dedicated a whole chapter to him.

And there are other aspects that are not present in the majority of the corpus, but that nevertheless represent the most common value. Not only do the texts take place in Spain: around a third of the action of the novels takes place in Madrid. We can also guess that the author wrote it in the late period of Modernism (with a broad concept of it that includes the Generación del 98), and this author probably also wrote collections of short stories. Actually, there is a 15% chance that the author was Pío Baroja, since he was the most prolific author of this period (and also in the corpus). And, although it has only a 2% chance, the most common name for the protagonist of the text is Xavier de Bradomín.

Many of you will argue that it is impossible to read a novel written by Pío Baroja with a protagonist called Xavier de Bradomín: this name belongs to a fictional character of Valle-Inclán's. And it is true; all this information doesn't apply to the texts taken together, and some parts strongly contradict others: how could lords possibly have a medium social level? This script only seeks the central tendency of each category independently. There are many ways to get a sharper and more representative picture of the literature of this period: better and more data (much of this information shows the bias of my corpus), using more than just the mode or the median, taking into consideration correlations between categories, etc.

But other aspects (realistic, contemporary, Europe, Spain, male author and protagonist…) are ideas very present in the history of literature. With this playful post (I have really enjoyed exploring and writing about this!) I am only suggesting this way of scrutinising texts: treating metadata like this provides statistical values that can summarize, nuance or reinforce different ideas about literature.

Gender, places and academic level at the DHd2016

Some weeks ago we published some visualizations of the data about the attendees at the DHd 2016, which we took from the program of the conference. The organizers of the conference liked what we did, and while speaking with them, I pointed out that I had the feeling that significantly fewer women were at the conference compared to the Spanish DH conference 2015 (where the gender distribution of the speakers was more or less 50/50). So the organizers gave us more data about the participants, anonymized of course, and for that we are very thankful.

So, let's start with the basic gender question: was I right in assuming that there were more men than women?

[Figure: proportion of female and male participants]

Verba Alpina: open data + elegant solutions

In the context of the Forum Junge Romanistik, there were workshops with a focus on digital tools in literary and linguistic studies, organized by CLiGS together with the FJR and the AG Digitale Romanistik. In one of the workshops, Thomas Krefeld and Stephan Lücke presented the project Verba Alpina:

This project studies dialectal data from the very multinational area of the Alps. Classical dialectological projects took a national approach that hides the linguistic processes that cross borders. The project impressed me for three reasons: the two very simple and elegant solutions they are applying, and how open both the data and the tool are.

DHd 2016: countries, cities and institutions of the speakers

The CLiGS group had the opportunity to be at the German digital humanities conference in Leipzig (DHd 2016). As we did with the Spanish DH conference last year, we decided to take the data from the program to look in detail at some general information about the people presenting at the conference.

The data used in this post all come from the ConfTool of the conference. That website also includes the information about the pre-conference workshops and the EADH Day. It is important to be clear that this represents how visible countries, cities and institutions are in the program, not all the participants: we only take the data from the people who presented something (conference paper, poster, session…), and if someone took on several roles during the conference, their information is repeated accordingly.

I took the HTML and cleaned it with scripts as best I could; the tricky part was entries in which several people and institutions appear together.

The relationship between person and institution is not one to one. I checked the results of some of the most complicated cases and the scripts did a good job, but I wouldn't put my hand in the fire for this data 😉 If there are errors and you want to have a go at cleaning the data in a better way, let us know in a comment! For the visualisation I have used the very user-friendly and intuitive tool RAW.

Let's start with the countries: in which country do the people in the program work? Results:

Well, it is not a huge surprise that Germany comes first (428). But I didn't expect the difference between Austria (37) and Switzerland (13). It is interesting to see how well represented Italy and the Netherlands are, especially compared with other European countries such as France, the United Kingdom, Spain or Poland…

Let's go a step deeper into the data. First, a word of explanation: the participants of some universities are more consistent than others when naming their institution. While Universität Paderborn didn't have a single variant, some universities had many, for example: Universität Göttingen, Georg-August-Universität Göttingen, GA Universität Göttingen, Uni Göttingen… So I curated the data as best I could and looked up the locations of the many institutions I didn't know:

Berlin, Leipzig, Göttingen, Würzburg, Wien, Darmstadt, Stuttgart… From there we can go a step deeper and look at the different institutions in each city, because while some cities like Berlin, Wien or Göttingen contain a great number of institutions working in the digital humanities, other cities like Frankfurt or Würzburg are represented by a single institution.

Grouped by institution, the data looks like this:

After the University of Leipzig, which hosted the conference, the best-represented institutions in the program are the universities of Würzburg, Darmstadt, HU Berlin, Stuttgart, the BBAW, the ÖAW, the NSUB Göttingen, Köln…

Surprises?

Visualizing information about HDH2015 & EADH Day

As I have already explained, we were at the HDH2015 & EADH Day at the UNED in Madrid last week. Besides a classic summary, while at the conference I had the idea (inspired by Scott Weingart and his posts about DH conferences) of trying to condense the data about the participants and visualize it, in order to better understand some tendencies of the conference.

So I grabbed the program of the conference and extracted the information (gender, institution and place) about the speakers. I took the information only from the program, so if something was missing, I didn't search for it elsewhere (as you will understand). I have also merged the information about the HDH and the EADH Day, since there was a continuity between the two programs. I haven't compared people's names to determine whether they spoke once or several times; so if someone spoke several times, they are counted as several people.

I have probably made some mistakes in doing this; maybe you are reading this, you work somewhere, you spoke at the conference, and your place is not in the visualizations. For that I am sorry; write us a comment and I will try to amend the information.

Let’s start with gender, a topic that I have already mentioned and that was also discussed at the conference.

Of the 229 people who spoke, 126 were women and 103 were men. What I mentioned in my last post about women in leading positions in the DH field also holds true for the speaker data.

Now, let's look at the distribution of participants by the university where they work:

The chart does not show all the universities, only those that sent at least 3 speakers (so that the names remain readable). The university with the most participants was the UNED, which is no surprise, since they hosted the conference. But I have to say that the next universities would not have been my first guesses. The great majority are universities from Spain, but we also see other European and American centres (among them Würzburg!).

Now let's sort the speakers by the cities where they work. And this is an important distinction: these are not the cities the speakers come from, but the cities where they work. I am an example of that: I studied in Madrid, but I now work in Würzburg, Germany.

Again, Madrid in first position is no surprise, but it is surprising that Las Palmas de Gran Canaria comes before Barcelona, for example. It is also interesting that the 5th position is held by Paris and that we find three German cities: Hamburg, Cologne and Würzburg.

Of course, if we visualize cities, we should also use maps! So I have used the DARIAH Geo-Browser, and the results are:

[Maps of the speakers' locations]

Let's look at Spain and Europe a little more closely:

[Close-up maps of Spain and Europe]

As we can see, the biggest circles are of course in Spain, some circles are scattered across the Americas, and there are also a lot of circles in Western Europe.

So, what happens if we organize this information by country?

As with the UNED and Madrid, it is not surprising that Spain comes first. I would have expected the United States, the United Kingdom or France as the second country, but it is actually Germany, an interesting surprise that underlines the tradition of strong relationships between Germany and Spanish-language culture.

It was also interesting to see that a good number of the people working abroad are actually Spaniards who moved to other countries. The HDH2015 & EADH Day were great opportunities to get to know each other and to get in touch.

HDH 2015 and EADH Day: some impressions

Last week we had the opportunity to attend the 2nd international conference on digital humanities in Spain and the first European Association for Digital Humanities (EADH) Day. Both took place at the UNED in Madrid, and we were able to present our CLiGS project as a poster and to make a short contribution to the EADH Day about HTML and TEI.

We had the chance to meet colleagues from Spain and abroad who are working with DH methodologies and technologies. I would like to give a short overview of some tendencies I noticed:

  • There is a considerable number of digitization projects going on right now, and TEI has become the standard language for them in Spain as well. Some interesting examples of this activity are the edition of La dama boba by Lope de Vega (abstract here, web here) or the Soledades of Góngora.
  • Another great editing project comes from the University of Graz (Austria), where hundreds of texts from the 18th century, the so-called Espectadores, have been edited and published on the web as TEI (abstract here, web here). An incredible source of information!


  • I still sense some reluctance in several projects to publish the actual TEI on the web; the HTML or plain-text version is uploaded readily, but the publication of the TEI version is often postponed. And some projects are looking for ways of working with TEI without seeing angle brackets or any code at all.
  • The conference was very useful for getting a general idea of how different projects develop their software (databases, websites, storage systems…) and their workflows, and for hearing about the conditions and academic problems they are confronted with.
  • In Spain, the DH field has a stronger relationship with teaching and pedagogy in general than in other countries.
  • One very strong tendency is the use of metadata from catalogues, linked data and semantic web technologies. Probably the most ambitious project of this kind was presented by Asunción Gómez Pérez: the datos.BNE.es portal.


  • On the other hand, there were not many presentations about linguistics and NLP. One of the exceptions were the researchers from Las Palmas de Gran Canaria, who were present and have developed an interesting set of web services called ParamText.
  • Probably related to this is the small number of presentations about the results of applying new technologies to digital data. David Wrisley (applying stylometry to several hundred texts in order to better understand the relationships between them), Frank Fischer (searching for dates in thousands of European novels in order to find out when the action of novels tends to take place; between May and August, by the way) and Borja Navarro (using different techniques to group sonnets of the Siglo de Oro) did present this kind of analysis.
  • I think the DH field in the Spanish-speaking world is doing a good job against gender discrimination. One female president stepped down and another female president took over. Another great example was the panel in which four women presented their work on creating groups and networks around DH in Spain and Latin America.
  • I was very impressed by the group at the UNED, LINHD, led by Elena González-Blanco, both for their work as organizers and for all the activities (seminars, publications, collaborations with other groups, and so on) they are carrying out.