Using Stylo in Python

Why would you do that?

For a couple of years now I have been using stylometric methods to analyse texts. I learned to use the great stylometric tool Stylo (written in R) at the European Summer School of Digital Humanities in Leipzig from two of its developers: Maciej Eder and Jan Rybicki.

Some months later I started my PhD as a member of the junior research group “Computational Literary Genre Stylistics” (CLiGS), led by Christof Schöch, at the Computerphilologie Professorship (held by Prof. Jannidis) at the University of Würzburg, Germany. I was told that I had to learn Python because that was the programming mother tongue of the department. And I did so. Since then, many of my projects have been a mix of very basic R scripts that call Stylo and other, more sophisticated scripts in Python that do the preprocessing and the evaluation.

I am not the only person in this R-Python situation; in fact, in recent years at least two stylometry tools have been written in Python: Pystyl and PyDelta. So why do I keep working with Stylo now that I know more Python? For several reasons:

  • Stylo is very well documented (installation, preparation of the corpus, general use…)
  • It has a mailing group where you get answers and help
  • It has been tested by hundreds of researchers
  • The developers teach about the tool
  • And they use the feedback from these workshops to improve Stylo (I have seen Maciej speed-coding some changes in Stylo during class, uploading them to CRAN, and asking people to update Stylo)
  • Because my PhD supervisors recommended that I do so

My stylometric tests are becoming more and more complex, so it is starting to be a pain to jump all the time between two groups of scripts. I knew that one can use other programming languages inside Python, so I thought it was worth trying to see whether it was possible to use R and Stylo from Python.

This blog post and its sibling Notebook (which you can download as a Git repository with the corpus and the output data) present the first findings. I would be really happy to receive opinions and feedback.

rpy2

The module that we are going to use is rpy2 (https://rpy2.readthedocs.io/en/version_2.8.x/), which allows you to work with R in Python. Since it is very possible that this module is not on your computer, you have to install it, for example using pip3 (more info in its documentation: https://rpy2.readthedocs.io/en/version_2.8.x/overview.html#installation):

  • sudo pip3 install rpy2

That was not difficult, but making it work was. After some time I realised that the problem was the version of R on my computer. Although the documentation of rpy2 says that a 3.0 version of R should be fine, it was not. Updating R on Ubuntu was trickier than expected, so I uninstalled and reinstalled R and Stylo, making sure that R’s version was higher than 3.0. I am currently working with 3.3.

So, enough talking, if you have already installed rpy2, let’s import it:
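A minimal version of that cell, assuming we bind the embedded R session to the name R (which matches the R. prefix used in the examples below), could look like this:

    import rpy2.robjects as robjects

    # handle to the embedded R session
    R = robjects.r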

I am not going to explain how exactly rpy2 works (because it is not the point of this notebook and because I couldn’t). Let’s just say that whenever we see anything starting with R., it will be an R object that we can call from Python. Example:
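A simple sketch: we can ask the R session for the built-in constant pi, and what comes back is still an R vector, not a Python float:

    pi_r = R.pi      # an R FloatVector of length 1
    print(pi_r)      # printed the way R would print it: [1] 3.141593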

We can convert these objects to Python objects:
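For a vector like this one, a minimal way is to index into it or cast it to a list:

    pi_py = pi_r[0]      # a plain Python float
    print(pi_py + 1)     # 4.141592653589793
    print(list(pi_r))    # [3.141592653589793]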

Stylo in Python

In the same way we can call Stylo in Python:
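A sketch of how this could look with rpy2’s importr, which is the equivalent of library(stylo) on the R side:

    from rpy2.robjects.packages import importr

    stylo = importr('stylo')   # like library(stylo) in R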

It may give us a warning message, RRuntimeWarning: I think the problem is that the kind of feedback Stylo prints on the R command line while running cannot be delivered in the same way in Python. Does anyone know how to fix that?
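In the meantime, if you just want cleaner output, one way that should work with rpy2 2.8 (where these messages are raised as RRuntimeWarning) is to silence that warning category; note that this only hides the messages rather than fixing the underlying issue:

    import warnings
    from rpy2.rinterface import RRuntimeWarning

    # suppress the console chatter that R/Stylo produces while running
    warnings.filterwarnings('ignore', category=RRuntimeWarning)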

In the repository of this Notebook you can find a subfolder with a Spanish corpus from the CLiGS Textbox (https://github.com/cligs/textbox), prepared for stylometric tests. So I will define the path as the current folder and call Stylo without the graphical user interface (if I needed the GUI, I would just work in R!).
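In outline, and assuming the texts sit in a subfolder called corpus under the current directory (the layout Stylo expects by default), the call could look like this:

    base = importr('base')
    base.setwd('.')                  # current folder as working directory
    result = stylo.stylo(gui=False)  # run Stylo without the GUI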

It is cool to see the output of Stylo in a Jupyter Notebook running on Python, right?

When it is finished, a pop-up window from R will appear with the classic dendrogram that we all know:

Passing arguments

Now, how can I define the arguments for Stylo? Because, as explained in Stylo’s documentation, the arguments for the minimum and maximum MFW are called mfw.min and mfw.max. Let’s try that:
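The naive attempt would look like this, and it fails before anything even reaches R:

    # SyntaxError: Python does not allow a dot in a keyword name
    stylo.stylo(gui=False, mfw.min=100, mfw.max=100)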

Python complains: it doesn’t expect a dot in a keyword name. The grammars of R and Python are not compatible here. For these cases the documentation of rpy2 (http://rpy.sourceforge.net/rpy2/doc-2.2/html/robjects_functions.html) recommends passing the arguments as a Python dictionary in which the keys are strings with the names of the arguments in Stylo. An example with a couple of arguments:
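Something like this (Python happily unpacks dictionary keys that contain dots, since they are just strings):

    args = {'gui': False, 'mfw.min': 100, 'mfw.max': 100}
    stylo.stylo(**args)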

Or we can pass arguments for the kind of analysis, the output that we want, the size of the n-grams…:
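A fuller sketch; the argument names follow Stylo’s documentation, but check them against your Stylo version:

    args = {
        'gui': False,
        'analysis.type': 'CA',            # cluster analysis
        'mfw.min': 300, 'mfw.max': 300,   # 300 most frequent words
        'analyzed.features': 'w',         # words (not characters)
        'ngram.size': 1,
        'distance.measure': 'delta',      # classic Burrows' Delta
        'write.png.file': True,
        'save.distance.tables': True,
    }
    I_love_this_stuff = stylo.stylo(**args)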

Now we have in our folder all the files that we asked for: PNG, distance table, features used… Nice!

But what if I want to work further with this data in Python?

Using the data from Stylo in Python

In the cell above I have called stylo() and saved its output in a variable called I_love_this_stuff (following the documentation of stylo 😉 ):
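Let’s have a look at what kind of object we got back:

    print(type(I_love_this_stuff))   # <class 'rpy2.robjects.vectors.ListVector'>
    print(len(I_love_this_stuff))    # 9 in this run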

As we can see, this variable is a ListVector of length 9. Each of these items contains different information from the analysis I have done. Let’s print the first 100 characters of each item:
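For example with a simple loop over the list:

    for i in range(len(I_love_this_stuff)):
        print(i, str(I_love_this_stuff[i])[:100])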

The first item actually contains the distance matrix:
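We can grab it by its position in the list:

    distances = I_love_this_stuff[0]   # the R matrix with the Delta distances
    print(distances)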

As we can see, this object is a matrix in R. Working in Python, we would be happier with a Pandas DataFrame. To get there, we first convert the matrix to a NumPy array, then load this array into a DataFrame, and finally pass the names of the rows and the columns.
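A sketch of that conversion; since an R matrix is a column-major vector with a dim attribute, we rebuild the shape explicitly and take the labels from base R:

    import numpy as np
    import pandas as pd

    dist_r = I_love_this_stuff[0]           # the R distance matrix
    labels = list(R['rownames'](dist_r))    # text labels, via base R
    n = len(labels)

    # R stores matrices column by column, hence order='F'
    values = np.array(list(dist_r)).reshape((n, n), order='F')
    df = pd.DataFrame(values, index=labels, columns=labels)
    print(df.iloc[:5, :5])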

There you have the beautiful Delta matrix of your corpus as a Pandas DataFrame, using Stylo but working only with Python scripts. Yay!

Feedback, please!

This is just a first attempt. Many things could be done differently, I have probably overlooked things, and maybe there are better ways to deal with this Python-R problem… So please let me know your thoughts (email, Twitter, comments on the blog post…). Thanks in advance, and thanks to Christof for his feedback on this Notebook!

What is the statistically typical Spanish Modernist novel like?

I am currently preparing an article about stylometry and genre in which I correlate clusters with metadata. One of the preliminary results is that texts with atypical values in their metadata are distinguished better than the rest: non-realistic texts, texts in which the action takes place in other times or on other continents… It seems that the well-known structuralist categories of marked and unmarked could help organize the texts and genres. In order to get Boolean values (like “yes“ or “no“) I looked for the central tendencies of the texts: what is the typical ending of a novel of this period? What is the typical gender or social level of its protagonist? How long are the novels of this period typically? Another way to see this information is: if I take a random novel from Spain and this period, what will I probably find?

For this purpose I am using the metadata of the Corpus de novelas de la Edad de Plata, of which you can find a first release on our GitHub account. The current state of the whole corpus contains around 250 novels from 1880 to 1939. I am not claiming that this corpus is statistically representative of the literature of this period (although I am skeptical that the concept of representativeness, as used in statistics, is of much use for the humanities). Anyhow, this is a way of obtaining very specific information about literature, or at least about this corpus.

For this purpose I have written a short script with the Python module Pandas. You can find it in our Toolbox on GitHub (annotate > tendencies_metadata.py). For the categorical values I have looked for the mode, and for the numerical values I have calculated the median (which is never worse than the mean, as far as I know).
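The core of the idea fits in a few lines of Pandas. This is a minimal sketch, not the actual script linked above; the file name and separator are assumptions:

    import pandas as pd

    metadata = pd.read_csv('metadata.csv', sep='\t')   # hypothetical path

    categorical = metadata.select_dtypes(include=['object'])
    numerical = metadata.select_dtypes(include=['number'])

    print(categorical.mode().iloc[0])   # most frequent value per category
    print(numerical.median())           # median of each numerical column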

So, the big question: what can I expect from a random novel of this period? Let’s start with the things we can be very confident about: it was written by a male author, the action takes place in contemporary times, in Europe, and it is realistic. 90% of the corpus agrees with that. But there are good odds on other aspects too: it takes place in Spain, its protagonist is a young man of medium social level (neither starving nor rich), the ending is sad, the text is written in the third person, literary historians do not consider the text to represent the author’s life in any form, and (congratulations!) it is already in the public domain. All these aspects are true for more than 50% of the corpus.

From the numerical values we can learn many other things: it was probably published in the 1900s, more specifically in 1905. We have already said that its action takes place in contemporary times, but we can add that it lasts around a year. The text is about 65,000 words long (around 250 pages) and presumably contains around 1,500 paragraphs, of which around 40% contain dialogue. And, believe it or not, it has only four lines of verse. We already know that the author was quite probably a man, but we can even guess that he lived 64 years, since he was born around 1866 and died around 1930. We can even presume that he changed his way of writing around 1890, so the random book comes from his second period. And finally we may also conclude that the author was quite important, because histories of literature have actually dedicated a whole chapter to him.

And there are other aspects that are not shared by the majority of the corpus, but that still represent the most common value. Not only does the text take place in Spain: around a third of the novels’ action takes place in Madrid. We can also guess that the author wrote it in the late period of Modernism (with a broad concept of it that includes the Generación del 98), and this author probably also wrote collections of short stories. Actually, there is a 15% chance that the author was Pío Baroja, since he was the most prolific author of this period (and is also in the corpus). And, although the chances are only 2%, the most common name for the protagonist is Xavier de Bradomín.

Many of you will argue that it is impossible to read a novel written by Pío Baroja with a protagonist called Xavier de Bradomín: this name belongs to a fictional character of Valle-Inclán’s. And it is true, all this information does not apply to the texts taken together; some parts strongly contradict others: how could lords possibly have a medium social level? This script only seeks the central tendency of each category independently. There are many ways to get a sharper and more representative picture of the literature of this period: better and more data (much of this information shows the bias of my corpus), using more than just the mode or the median, taking correlations between categories into consideration, etc.

But other aspects (realistic, contemporary, Europe, Spain, male author and protagonist…) are ideas very present in the history of literature. With this playful post (I have really enjoyed discovering and writing about it!) I am only suggesting this way of scrutinising texts: treating metadata like this provides statistical values that can summarize, nuance or reinforce different ideas about literature.

Gender, places and academic level at the DHd2016

Some weeks ago we published some visualizations of the data about the speakers at the DHd 2016 that we took from the conference program. The conference organizers liked what we did, and while speaking with them, I pointed out that I had the feeling that significantly fewer women were at the conference compared to the Spanish DH Conference 2015 (where the gender distribution of the speakers was more or less 50/50). So the organizers gave us more data about the participants, anonymized of course, and for that we are very thankful.

So, let’s start with the basic gender question. Was I right in supposing that there were more men than women?

[Figure: female-male proportion]

CLiGS at the Day of DH 2016

April 8th was the Day of DH! – the occasion on which the worldwide DH community takes a snapshot of its members’ activities during the day.

Where were the members of the CLiGS group? What did they do on that day?

As you can see on the map, we spent the day in different places: José Calvo and Daniel Schlör were in Würzburg, Ulrike Henny in Cologne and Christof Schöch was blogging from Kraków.

[Figure: Snippet of a geomap created with the Google Chart API]

To get to know what we actually did on that day, check out the blog posts on the Day of DH 2016 website:

Our overall impression was that people were very busy on that day (188 members representing the international DH community, some non-active members, some bustling good-morning posts). We hope that interest in the Day of DH event will continue and hopefully grow in the future!

Verba Alpina: open data + elegant solutions

In the context of the Junge Forum Romanistik, there were workshops with a focus on digital tools in literary and linguistic studies organized by CLiGS together with the FJR and the AG Digitale Romanistik. In one of the workshops, Thomas Krefeld and Stephan Lücke presented the project Verba Alpina:

[Screenshot of the Verba Alpina interface]
This project studies dialectal data from the very multinational area of the Alps. Classical dialectological projects took a national approach that hides the linguistic processes that cross borders. It impressed me for three reasons: two very simple and elegant solutions that they are applying in the project, and how open the data and the tool are.

DHd 2016: countries, cities and institutions of the speakers

The CLiGS group had the opportunity to be at the German Digital Humanities Conference in Leipzig (DHd 2016). As we did with last year’s Spanish DH Conference, we decided to take the data from the program to look in detail at some general information about the people presenting at the conference.

The data used in this post all come from the ConfTool of the conference. That website also contains the information about the pre-conference workshops and the EADH Day. It is important to be clear that this represents how visible countries, cities and institutions are in the program, not all the participants: we are only taking the data from the people who presented something (conference paper, poster, session…), and if someone took on several roles during the conference, their information is counted repeatedly.

I took the HTML and cleaned it with scripts as best I could; the tricky part was this kind of thing:

As we can see, the relationship between person and institution is not one to one. I checked the results of some of the most complicated cases and the scripts did a good job, but I wouldn’t put my hand in the fire for this data 😉 If there are errors and you want to have a go at cleaning the data in a better way, let us know with a comment! For the visualisation I have used the very user-friendly and intuitive tool RAW.

Let’s start with the countries: in which country do the people in the program work? Results:

Well, it is no huge surprise that Germany is the first country (428). The difference between Austria (37) and Switzerland (13), however, I did not expect. It is interesting to see how well represented Italy and the Netherlands are, especially if we compare them with other European countries like France, the United Kingdom, Spain, Poland…

Let’s go a step deeper into the data. And now a word of explanation: apparently the participants from some universities are more consistent in naming their institution than others: while Universität Paderborn didn’t have any variants, some universities had a lot of them, for example: Universität Göttingen, Georg-August-Universität Göttingen, GA Universität Göttingen, Uni Göttingen… So I curated the data as best I could and searched for the locations of the many institutions I didn’t know:

Berlin, Leipzig, Göttingen, Würzburg, Wien, Darmstadt, Stuttgart… And from there we can go a step deeper and look at the different institutions in each city, because while some cities like Berlin, Wien or Göttingen contain a great number of institutions working in the Digital Humanities, other cities like Frankfurt or Würzburg are represented by a single institution.

So the data by institution looks like this:

After the University of Leipzig, the one hosting the conference, the best-represented institutions in the program are the universities of Würzburg, Darmstadt, HU-Berlin, Stuttgart, BBAW, ÖAW, NSUB-Göttingen, Köln…

Surprises?

How good are our texts, really? Quality assurance for literary texts from various sources

by Ulrike Henny and Christof Schöch

Some weeks ago, we made our „New Year’s release“ of text collections available. We publish the texts in the CLiGS group’s GitHub repository called „textbox“ and archive each release on Zenodo, where it gets a DOI. The texts are encoded in TEI with relatively detailed metadata. The collections are subsets of the texts we are using in our various research projects in computational genre stylistics and contain narrative texts from France, Spain and Latin America. The texts have been gathered from various sources, most notable among them Ebooks libres et gratuits and the Biblioteca Virtual Miguel de Cervantes.


Breaking News: The CLiGS textbox New Year release

We are very proud to announce our first public CLiGS text release on GitHub, just in time to be a New Year’s release.

The „textbox“ of our CLiGS-repository contains the following four collections of literary texts from Spain, France and Latin America, which are now online at your disposal:

The novels and novellas have been encoded according to the Guidelines of the Text Encoding Initiative. Metadata tables and short descriptions of each collection (readme.md) are available as well.

Do you want to experiment with some new tools on Spanish or French texts? Or are you simply curious about our TEI encoding? Then don’t hesitate to check it out on GitHub and Zenodo. Praise, suggestions for improvement and (good :-)) reviews are always welcome!

[Figure: One example of our TEI encoding, by José Calvo Tello]

Workshop „Advanced Methods in Stylometry“

The junior research group „Computational Literary Genre Stylistics“ (CLiGS) is organizing a hands-on workshop on „Advanced Methods in Stylometry“ which will take place at Würzburg University, Germany, on December 9-11. (All further information will be posted here; see bottom of this post for practical information.)

The workshop targets doctoral students in literary studies already familiar with computational text analysis and interested in using specific, advanced methods for their use-cases and research questions. The aims of the workshop are to help participants move beyond out-of-the-box functionality in stylo, either using advanced functionality in stylo or using specific Python packages. Participants are encouraged to bring their own datasets to the workshop.

The workshop will be taught by Maciej Eder (Pedagogical University, Kraków, Poland), Mike Kestemont (University of Antwerp, Belgium), and Jeremi Ochab (Jagiellonian University, Kraków, Poland), three experts in stylometry. It is being coordinated by Christof Schöch. The workshop will have three parts, addressing the following issues:

  • The first part of the workshop will focus on designing and implementing workflows in R, aimed at performing large-scale custom stylometric experiments. To this end, a few low-level functions of the package ‘stylo’, as well as a number of generic R functions and routines, will be introduced.
  • The second part will offer an introduction to the popular Machine Learning toolkit for Python: sklearn (http://scikit-learn.org/). The workshop will focus on sklearn’s powerful suite of text processing algorithms. Using relevant examples from stylometry, it will be demonstrated how sklearn equips users with an arsenal of easily available (un)supervised machine learning routines.
  • The third part will be an introduction to models of complex networks as well as to the most prominent results on empirical networks. It will cover the most relevant graph characteristics and will further expand to graph-based unsupervised clustering techniques, so-called community detection algorithms.

The workshop requires familiarity with the fundamental assumptions of computational text analysis, including stylometry, as well as solid competencies in using R and Python. If you are interested in joining us for the workshop, please send an application to christof.schoech@uni-wuerzburg.de by November 20, 2015, specifying why you would like to participate and how you have achieved your current level of competency in stylometry.

The workshop will start on Wednesday, December 9 at 9:30 am and end on Friday, December 11 at 1:00pm. Participation is free except for a small contribution for drinks and snacks during the breaks. The working language of the workshop will be English, but text collections used may be in the language of your choice. Participants are expected to bring their own laptop computers with the latest version of R (with stylo) as well as Python (version 3, with numpy, pandas, sklearn) installed.

The workshop is organized by the CLiGS group with funding from the German Federal Ministry for Research and Education (BMBF).

Practical information:

  • The workshop will take place at Würzburg University, Campus Hubland, Philosophisches Institut, Building 8, room 8.E.18. The pointer on this map points there.
  • The closest bus stop is „Philosophisches Institut“. From Würzburg Hauptbahnhof, buses 14, 114 and 214 take you there.

What did we actually encode? Analysing XML collections with Python

One could assume that one of the first steps in a project involving the encoding of information with markup or the creation of XML files is to create a schema governing encoding procedures. But I’d say that this is not necessarily true, especially when the subject matter is not entirely clear right from the start in terms of a data model and the features describing it.

Visualizing information about HDH2015 & EADH Day

As I have already explained, we were at the HDH2015 & EADH Day at the UNED in Madrid last week. Besides a classic summary, while I was at the conference I had the idea (inspired by Scott Weingart and his posts about DH conferences) of trying to condense the data about the participants and visualize it in order to better understand some tendencies of the conference.

So I grabbed the program of the conference and extracted the information (gender, institution and place) about the speakers. I took the information only from this program, so if something was missing, I didn’t search for it elsewhere (as you can understand). I have also put together the information about the HDH and the EADH Day, since the programs were continuous. I haven’t compared people’s names to determine whether they spoke once or several times; so if someone spoke several times, they are counted as different people.

Doing this I have probably made some mistakes; maybe you are reading this, you work somewhere, you did speak at the conference, but your place is not in the visualizations. For that I am sorry; write us a comment and I will try to amend the information.

Let’s start with gender, a topic that I have already mentioned and that was also discussed at the conference.

[Figure: gender distribution of speakers]
Of the 229 people who talked, 126 were women and 103 were men. What I mentioned in my last post about women in leading positions in the DH field also holds true for the speaker data.

Now, let’s see the distribution of participants by the university where they work:

[Figure: speakers by university]
The chart does not show all the universities, only those that sent at least 3 speakers (so that the names remain readable in the chart). The university with the most participants was the UNED, which is no surprise since they hosted the conference. But I have to say that the next universities would not have been my first guesses. The great majority are universities from Spain, but we also see other European and American centres (among them Würzburg!).

Now let’s look at the speakers sorted by the cities where they work. And this is an important distinction: these are not the cities the speakers come from, but where they work. I am an example of that: I studied in Madrid, but now I am working in Würzburg, Germany.

[Figure: speakers by city]
Again, Madrid in first position is no surprise, but it is surprising that Las Palmas de Gran Canaria comes before Barcelona, for example. It is also interesting that the 5th position is held by Paris and that we find three German cities: Hamburg, Cologne and Würzburg.

Of course, if we visualize cities, we should also use maps! So I have used the DARIAH Geo-Browser, and these are the results:

[Figures: DARIAH Geo-Browser world maps of speaker locations]

Let’s see Spain and Europe a little bit closer:

[Figures: close-ups of Spain and Europe]

As we can see, the biggest circles are of course in Spain, some circles are distributed across the Americas, and there are also a lot of circles in Western Europe.

So, what happens if we organize this information by country?

[Figure: speakers by country]
As with the UNED and Madrid, it is not surprising that Spain comes first. I would have expected the United States, the United Kingdom or France as the second country, but it is actually Germany, an interesting surprise that underlines the tradition of strong relationships between Germany and culture in Spanish.

It was also interesting to see that a large number of the people working abroad are actually Spaniards who moved to other countries in the past. The HDH2015 & EADH Day were great opportunities to get to know each other and get in touch.

HDH 2015 and eadh Day: some impressions

Last week we had the opportunity to attend the 2nd international conference on Digital Humanities in Spain and the first European Association for Digital Humanities Day. Both took place at the UNED in Madrid, and we were able to present our CLiGS project as a poster and make a short contribution to the EADH Day about HTML and TEI.

We had the chance to meet colleagues from Spain and abroad who are working with DH methodologies and technologies. I would like to give a short overview of some tendencies I noticed:

  • There is a significant number of digitization projects going on right now, and TEI has become the standard language for them in Spain as well. Some interesting examples of this activity are the edition of La dama boba by Lope de Vega (abstract here, web here) or Las soledades of Góngora.
  • Another great editing project comes from the University of Graz (Austria), where they have edited and published on the web, as TEI, hundreds of texts from the 18th century, the so-called Espectadores (abstract here, web here). An incredible source of information!


  • I still sense doubts in some projects about publishing the actual TEI on the web; the HTML or plain text is uploaded readily, but the publication of the TEI version is often postponed. And some projects are looking for ways of working with TEI without seeing angle brackets or any code.
  • The conference was very useful for getting a general idea of how different projects are developing their software (databases, websites, storage systems…) and their workflows, and also for hearing about the conditions and academic problems they are confronted with.
  • In Spain, the DH field has a stronger relationship with teaching and pedagogy in general than in other countries.
  • One very strong tendency is using metadata from catalogues, linked data and semantic web technologies. Probably the most ambitious project, presented by Asunción Gómez Pérez, was about the datos.BNE.es portal.


  • On the other hand, there weren’t many presentations about linguistics and NLP. One of the exceptions were the researchers from Las Palmas de Gran Canaria, who have developed interesting web services called ParamText.
  • Probably related to this is the small number of presentations about the results of applying new technologies to the digital data. David Wrisley (applying stylometry to some hundreds of texts in order to better understand the relationships between them), Frank Fischer (searching for dates in thousands of European novels in order to find out when the action of novels tends to take place; by the way, between May and August) and Borja Navarro (using different techniques for grouping sonnets of the Siglo de Oro) did present this kind of analysis.
  • I think the Spanish-language DH field is doing a good job against gender discrimination. One female president stepped down and another female president took over. Another great example of this was the panel where four women presented their work on creating groups and networks around DH in Spain and Latin America.
  • I was very impressed by the group at the UNED, LiNHD, led by Elena González Blanco, both for their work as organizers and for all the activities (seminars, publications, work with other groups, and so on) they are carrying out.

 

Open Peer Review: Survey article on the possibilities and benefits of TEI for text editing and text analysis

The new Romance studies journal Romanische Studien is published not only online and in open access (license: Creative Commons – Attribution); it has also taken the laudable initiative of encouraging authors to put their contributions online before the official publication, so that colleagues can already take note of a contribution and comment on it in advance: the editors note that this way “productive processes of exchange as well as early and extended engagement with the published work are fostered” (see the corresponding note). Mind you, this happens in addition to classical peer review, which also takes place (beforehand or in parallel).

For my contribution, entitled „Ein digitales Textformat für die Literaturwissenschaften. Die Richtlinien der Text Encoding Initiative und ihr Nutzen für Textedition und Textanalyse“ (A digital text format for literary studies: the Guidelines of the Text Encoding Initiative and their benefits for text editing and text analysis), this peer review is now complete, and today I would like to put the procedure to the test and open the current state of the text up for public discussion. The abstract follows below, the current version of the contribution is available for download here as a PDF, and the comment function is activated!

Abstract: The steadily advancing digitization of literary texts of the most diverse languages, periods and genres repeatedly confronts literary studies with the question of how to help shape this development and use it to their advantage. Yet digital is not simply digital; rather, there is a multitude of very different digital forms of representing text. Only a few of these forms of representation actually meet the requirements of literary scholarship, among them the one that follows the Guidelines of the Text Encoding Initiative. This contribution first compares several currently common digital forms of representing text. A form of representation following the Guidelines of the Text Encoding Initiative proves particularly suitable for research in literary studies. The contribution therefore goes on to describe its benefits for literary scholarship, both in the area of scholarly text editing and in the area of the analysis and interpretation of texts. Only if literary studies at large recognize the value of open, expressive, flexible, standardized and sustainably usable formats for research can they advocate their adoption with the necessary vigor and, through the growing availability of texts in such formats, profit from them in their own research and teaching.

The junior research group at the Digital Humanities Conference 2015 in Sydney!

With a contribution on “Topic Modeling French Crime Fiction”, the junior research group is represented at the Digital Humanities Conference in Sydney, Australia (June 30 to July 4, 2015). The conference is the most important event of the year in the digital humanities.

Christof Schöch will present a short paper on “Topic Modeling French Crime Fiction”. The introduction of the extended abstract reads: “This study applies topic modeling to a collection of French crime fiction novels in order to discover topic-related patterns. The results show both expected and unexpected patterns related to authors, subgenres, and time period. Topic modeling proves highly useful for investigating the history of French crime fiction.” The full abstract can be found in the conference’s “Book of Abstracts”.

Besides the topic modeling contribution, the Chair of Computerphilologie is also represented at DH2015 with the following contribution from the field of stylometry: Fotis Jannidis, Steffen Pielström, Christof Schöch, Thorsten Vitt: “Improving Burrows’ Delta – An empirical evaluation of text distance measures”.

CLIGS

The worldwide digitization of cultural heritage has by now reached a scale, in the area of full-text digitization, that both enables and demands new methodological approaches to questions in literary studies, from which new research questions arise in turn. The overarching goal of the junior research group “Computergestützte literarische Gattungsstilistik” (Computational Literary Genre Stylistics) is to establish a methodological convergence between the latest procedures for the quantitative analysis of literary texts on the one hand, and fundamental questions of literary scholarship from the areas of genre theory and stylistics on the other. The project starts from established literary research questions of fundamental importance, but approaches them from the outset with a view to computational methods. The aim is to answer these questions on a new basis and with a fresh perspective, through a combination of extensive text data, innovative analysis methods and hermeneutic sensitivity to context. This will be undertaken on the basis of several extensive text collections from the areas of French drama of the classical period and the Enlightenment as well as the French and Spanish novel of the 19th century. The group is based in Romance literary studies and will help early-career researchers in Romance studies acquire relevant competencies in the latest computer-based methods.