Computational text analysis can be a powerful tool for exploring qualitative data. In this blog post, I'll walk you through the steps involved in reading a document into R in order to find and plot the most relevant words on each page.
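To give a rough sense of where the post ends up, here is a minimal sketch of that workflow using pdftools, dplyr, tidytext, and ggplot2. The file name `report.pdf` is a hypothetical stand-in, and the post itself walks through each step in more detail:

```r
# A minimal sketch, assuming a local file "report.pdf" (hypothetical) and the
# pdftools, dplyr, tidytext, and ggplot2 packages are installed.
library(pdftools)
library(dplyr)
library(tidytext)
library(ggplot2)

# Read the PDF: pdf_text() returns one character string per page
pages <- pdf_text("report.pdf")

# One row per word, keeping track of the page each word came from
words <- tibble(page = seq_along(pages), text = pages) %>%
  unnest_tokens(word, text) %>%
  count(page, word, sort = TRUE)

# tf-idf highlights the words most distinctive to each page; plot the top five
words %>%
  bind_tf_idf(word, page, n) %>%
  group_by(page) %>%
  slice_max(tf_idf, n = 5) %>%
  ungroup() %>%
  ggplot(aes(tf_idf, reorder_within(word, tf_idf, page))) +
  geom_col() +
  scale_y_reordered() +
  facet_wrap(~ page, scales = "free_y") +
  labs(x = "tf-idf", y = NULL)
```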
While analyzing text data can be a lot of fun, preprocessing text data is generally not. It can also be extremely difficult, especially when you're just getting into computational text analysis or the R programming language.
Recently I learned about an incredible initiative launched by a team of political scientists, computer scientists, and historians at my university called The Canadian Hansard Dataset. The dataset is a massive digital collection of English-language debates in the House of Commons from 1901 to today (all French speeches have been translated to English).
In my last blog post, we discussed how to read .pdf files into RStudio.
Using pdftools, we were able to read in .pdfs that were both machine-readable and not.
Doing quantitative text analysis often means working with documents in .pdf format, and these documents may or may not be machine readable. Assuming we are using RStudio, how do we read these files into our environment so that we can clean, process, and analyze them?
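As a rough sketch of the idea, pdftools can handle both cases: pdf_text() pulls the embedded text layer from machine-readable files, and pdf_ocr_text() falls back to OCR for scanned ones. The snippet below assumes the tesseract backend is installed and uses a hypothetical file name:

```r
# A minimal sketch, assuming the pdftools package (with the tesseract backend
# installed for OCR) and a hypothetical file "document.pdf".
library(pdftools)

# For machine-readable PDFs, pdf_text() extracts the embedded text layer
text <- pdf_text("document.pdf")

# If every page comes back empty, the file is likely a scanned image;
# pdf_ocr_text() renders each page and runs OCR on it instead
if (all(nchar(trimws(text)) == 0)) {
  text <- pdf_ocr_text("document.pdf")
}

# Either way, `text` is a character vector with one element per page
cat(text[1])
```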
Code and tutorial prepared for the Toronto Data Workshop session on July 30, 2020. You can download the corresponding slide deck for this workshop here.
Since launching the Policing the Pandemic Mapping Project with Alexander McClelland, a lot of people have asked us how we built the interactive map and database.
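The post covers the details, but as a loose sketch, an interactive point map of this kind can be built in R with the leaflet package. The `incidents` data frame below is a made-up stand-in, not our actual database:

```r
# A minimal sketch of an interactive point map with the leaflet package;
# the `incidents` data frame and its columns are hypothetical examples.
library(leaflet)

incidents <- data.frame(
  lon = c(-79.38, -73.57),
  lat = c(43.65, 45.50),
  description = c("Example incident in Toronto", "Example incident in Montreal")
)

# Base map tiles plus one clickable marker per row
leaflet(incidents) %>%
  addTiles() %>%
  addCircleMarkers(lng = ~lon, lat = ~lat, popup = ~description)
```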
I was recently listening to a Radiolab podcast on the history of the phrase "can neither confirm nor deny", formally known as the "Glomar Response". If you have not yet heard this episode, I highly recommend it.
It is a perennial problem in Canada that municipal, provincial, and federal government agencies disclose records under Access to Information (ATI)/Freedom of Information (FOI) law in non-machine-readable (image) formats by default.