komex: Crowd-sourced Text Analysis

Content 

The five-day in-person course teaches you how to design and conduct a crowd-sourced text analysis project—covering key concepts and ideas of crowd-sourced decision-making as well as quality criteria and pitfalls.

What Is This Course About?
Crowd-sourced text analysis allows for fast, affordable, valid, and reproducible online coding of very large numbers of statements by collecting repeated judgements by multiple (paid) non-expert coders. The five-day in-person course provides you with the skills needed to design and conduct a crowd-sourced text analysis project. The course covers key concepts and ideas of crowd-sourced decision-making as well as quality criteria and pitfalls. The course focuses on two applications (simple and complex categorization tasks and the identification of positive and negative sentiment) that can be generalized to many contexts and supports students in developing their own crowd-sourcing projects.
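The core idea of collecting repeated judgements and aggregating them can be sketched in a few lines. The following is a minimal, hypothetical Python illustration (the function name is ours, not the course's tooling) that reduces several non-expert codings of one statement to a single label by majority vote:

```python
from collections import Counter

def majority_vote(judgements):
    """Aggregate repeated codings of one statement into a single label.

    judgements: list of labels from different non-expert coders,
                e.g. ["positive", "positive", "negative"].
    Returns the most frequent label (ties broken by first occurrence).
    """
    counts = Counter(judgements)
    label, _ = counts.most_common(1)[0]
    return label

# Five coders judge the sentiment of one sentence.
codings = ["negative", "negative", "neutral", "negative", "positive"]
print(majority_vote(codings))  # -> negative
```

Real crowd-coding platforms typically weight coders by trust scores rather than counting every judgement equally; the course covers such refinements.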

Timetable – 4h / day

  •     09.00-10.30h
  •     10.45-12.15h
  •     13.30-14.30h


Learning Goals
After this course you will:

  • Be able to design, set up, implement, and evaluate your own online crowd-coding data project.
  • Be familiar with the core concepts underlying crowd-sourcing and the ‘wisdom of the crowd’ paradigm, as well as recent applications to political science and social science questions.
  • Be able to calculate and critically reflect on different agreement, reliability, and trust scores.
  • Be able to craft and/or find suitable gold questions needed before, during, and after the analysis.
  • Be aware of potential pitfalls and limitations of crowd-coding and the best ways to address them.
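As a taste of the agreement scores mentioned above, one of the simplest measures, pairwise percent agreement for a single item, can be sketched as follows (an illustrative example, not course material; note that percent agreement does not correct for chance, which is one reason reliability coefficients are treated in depth in the course):

```python
from itertools import combinations

def pairwise_agreement(judgements):
    """Share of coder pairs that assigned the same label to one item.

    judgements: list of labels, one per coder.
    Returns a value between 0.0 (no pair agrees) and 1.0 (all agree).
    """
    pairs = list(combinations(judgements, 2))
    if not pairs:  # fewer than two coders: agreement is trivially perfect
        return 1.0
    same = sum(1 for a, b in pairs if a == b)
    return same / len(pairs)

# Three coders, two of whom agree: 1 agreeing pair out of 3 pairs.
print(pairwise_agreement(["pos", "pos", "neg"]))  # -> 0.333...
```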


Who are Your Instructors?
Alexander Horn is Head of the Emmy Noether Research Group “Varieties of Egalitarianism” at the University of Konstanz (Cluster “The Politics of Inequality”), Germany. Previously, he served as Assistant Professor at Aarhus University and as John F. Kennedy Memorial Fellow at the Center for European Studies at Harvard. His methods expertise is in crowd-sourcing, crowd-coding, and text analysis. His publications on online crowd-coding, content validity, and the measurement of political text include “Peeping at the corpus – What is really going on behind the equality and welfare items of the Manifesto project?” and “Can the online crowd match real expert judgments? How task complexity and coder location affect the validity of crowd-coded data”.

Martin Haselmayer is a postdoctoral researcher in the Emmy Noether Research Group “Varieties of Egalitarianism”, specializing in text analysis. Previously, he was a pre- and postdoctoral researcher at the University of Vienna and the Austrian National Election Study (AUTNES). He has published on various aspects of crowd-sourced text analysis, e.g. “Sentiment analysis of political communication: Combining a dictionary approach with crowdcoding” and “More than bags of words: Sentiment analysis with word embeddings”, and crowd-sourced a factorial survey experiment, “Love is blind. Partisanship and perception of negative campaign messages in a multiparty system”.

Bildungszeit (can be claimed by employees in Baden-Württemberg) 
The requirements of the Baden-Württemberg Educational Leave Act (Bildungszeitgesetz Baden-Württemberg) are met
Fee 
460 EUR / Early bird 390 EUR / Please note: you will gain access to our learning management system (Moodle) only after your course fee has been paid
ECTS Credits 
4
Date 
06.03.2023 09:00 to 14:30
07.03.2023 09:00 to 14:30
08.03.2023 09:00 to 14:30
09.03.2023 09:00 to 14:30
10.03.2023 09:00 to 14:30
Requirements 
While the course does not require specific prior knowledge, basic familiarity with empirical social research is assumed (e.g., what reliability is, and how to work with spreadsheet programs such as Excel). Prior experience with R or Stata is helpful but not expected. Participants should bring their laptops, and it is recommended that they bring some additional funds (50 EUR) to run their own crowd-coding tasks. Power sockets will be provided.