One of my modules this semester is looking at “computer-assisted approaches to text analysis”. Or at least, that is what the module overview says. In reality, we are looking at research questions in the humanities and how digital methods for text analysis and corpus linguistics can help us answer them.
From the first two seminars, the idea that has stuck with me is that most humanities research does not go beyond forming a hypothesis. If we take a scientific approach (as digital humanities borrows a lot from computer science, this isn’t difficult), we can read a history or English research paper as an exploratory discussion of the topic. However, the conclusion often does not go beyond what has been discovered through the course of that discussion.
Why is textual analytics different? The main reason, and the one I am interested in, is that we can look at a larger sample of data by using computers. We can compare different elements of multiple texts, allowing us to understand them quantitatively as well as qualitatively.
For example, my chosen thesis topic is on how the presentation of Sir Gawain changes in Arthurian literature and film. I could just use a close reading approach, but this would be largely reliant on my own interpretation of the texts and small samples of data. In other words, qualitative research. If I use digital methods, I can compare the frequencies of adjectives related to Sir Gawain within each text. This will allow me to demonstrate how Sir Gawain’s identity has changed through time, making my research quantitative.
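A minimal sketch of what that comparison might look like in Python. The excerpts and the adjective list below are invented placeholders, and a real study would POS-tag a full corpus (with a library such as NLTK or spaCy) rather than match against a hand-picked word list; the point is only to show the shape of counting descriptive words near a character's name:

```python
from collections import Counter
import re

# Hypothetical miniature corpus: short invented excerpts standing in
# for full texts in a real study.
texts = {
    "Sir Gawain and the Green Knight (excerpt, invented)":
        "Gawain the good knight, courteous and noble, rode forth. "
        "Noble Gawain was faithful to his word.",
    "Modern retelling (excerpt, invented)":
        "Gawain, flawed and proud, hesitated. The proud knight "
        "Gawain doubted himself.",
}

# Hand-picked adjective list for illustration only; a POS tagger
# would identify adjectives automatically.
ADJECTIVES = {"good", "courteous", "noble", "faithful", "flawed", "proud"}

def adjective_counts(text, window=5):
    """Count listed adjectives within `window` words of 'Gawain'."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter()
    for i, word in enumerate(words):
        if word == "gawain":
            # Look at the words surrounding each mention of the name.
            for neighbour in words[max(0, i - window): i + window + 1]:
                if neighbour in ADJECTIVES:
                    counts[neighbour] += 1
    return counts

for title, text in texts.items():
    print(title, dict(adjective_counts(text)))
```

Comparing the resulting counts across texts (or time periods) turns an impression like “Gawain becomes a more flawed figure” into a measurable claim.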
Why add this quantitative element to humanities research? By adding an analytical aspect to research, humanities researchers can support their hypotheses with evidence instead of relying solely on their own intuition and interpretation.