Description
Content analysis is an important method in the social sciences and humanities, yet achieving a satisfactory level of intercoder agreement can be very difficult. In most studies, annotation consists of coding predefined items: coders only have to choose a category for each item. When the data form a continuum (e.g., text, audio, video), however, coders must also select the relevant parts of the continuum (units) before categorizing them. This is called unitizing.

Reaching good agreement is already difficult for the coding of predefined items, in particular when the variables are subjective (e.g., metaphor types, coherence relations, informal language, filmic narratives). It is even more difficult for unitizing, since additional disagreements are possible concerning the position and the presence of units. Correspondingly, methods to assess agreement for unitizing are much harder to develop than those for predefined items (such as the well-known Cohen's kappa), because two types of discrepancy (position and category) interfere.
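To make the predefined-items case concrete, here is a minimal sketch of Cohen's kappa for two coders categorizing the same items (the data and category names are hypothetical; libraries such as scikit-learn provide an equivalent `cohen_kappa_score`):

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders categorizing the same predefined items."""
    n = len(coder_a)
    # Observed agreement: fraction of items both coders labelled identically.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected (chance) agreement from each coder's marginal category frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical data: two coders deciding whether ten expressions are metaphorical.
a = ["literal", "metaphor", "metaphor", "literal", "metaphor",
     "literal", "literal", "metaphor", "literal", "metaphor"]
b = ["literal", "metaphor", "literal", "literal", "metaphor",
     "literal", "metaphor", "metaphor", "literal", "metaphor"]
print(f"kappa = {cohens_kappa(a, b):.2f}")  # 0.60: above chance, well below 1
```

Kappa corrects raw agreement for chance: 1 means perfect agreement, 0 means agreement no better than chance. Note that this presupposes that the items to be coded are already fixed, which is exactly what unitizing does not grant.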
The 3rd Intercoder Reliability (ICR) Workshop is about this complex phenomenon of unitizing. What does unitizing entail? How can agreement be assessed? What problems do researchers encounter? To what extent are these problems subject-dependent? What solutions are possible? These and similar questions will be addressed in this workshop.
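As a toy illustration of why unitizing complicates agreement assessment, the sketch below compares two coders' units over the same text continuum. All spans and category names are hypothetical, and the listing is purely descriptive; an actual coefficient for unitizing, such as Krippendorff's unitizing alpha, turns such overlaps and mismatches into a single chance-corrected value rather than merely enumerating them.

```python
def overlaps(u, v):
    """True if two (start, end, category) spans share at least one position."""
    return u[0] < v[1] and v[0] < u[1]

# Hypothetical units: character spans over the same text, with a category each.
coder_a = [(0, 12, "metaphor"), (30, 45, "coherence"), (60, 70, "informal")]
coder_b = [(0, 10, "metaphor"), (33, 45, "informal")]

for u in coder_a:
    matches = [v for v in coder_b if overlaps(u, v)]
    if not matches:
        # Presence disagreement: coder A saw a unit where coder B saw none.
        print(f"{u}: no counterpart -> presence disagreement")
    for v in matches:
        # Position disagreement: the spans overlap but their boundaries differ.
        kind = "category agreement" if u[2] == v[2] else "category disagreement"
        print(f"{u} vs {v}: overlapping positions, {kind}")

for v in coder_b:
    if not any(overlaps(u, v) for u in coder_a):
        print(f"{v}: no counterpart -> presence disagreement")
```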
Period | 6 Jul 2018
---|---
Event type | Conference
Conference number | 3
Location | Tilburg, Netherlands
Keywords
- Content analysis
- Workshop
- Intercoder reliability
- Discourse