4.4.2. Assessing interrater coding reliability and
internal consistency
Following the discussion in the previous section, the researchers
investigated issues related to interrater coding reliability
(ICR) and internal consistency. The ICR test proceeded as follows.
One independent coder (Coder 1) was asked to cluster the 64
previously identified nodes, while a second coder (Coder 2) was
familiarised with the researchers' own clustering and asked to
identify the five clusters in the data transcript. The ICR exercise
yielded high reliability, with around 90% agreement in the second
coding round, which satisfies the threshold set by Miles and
Huberman [34].
Similar levels of ICR were achieved by Druskat and Wheeler [16],
whose study is an especially important benchmark for this research
because it involved a comparable number of high-level nodes (four).
Studies with a greater number of high-level nodes do not reach that
level of ICR [17], and coding differences are instead resolved
through dialogue [26].
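The agreement figure reported above is simple percent agreement: the share of items both coders assigned to the same cluster. A minimal sketch of that calculation is given below; the coder labels are hypothetical and purely illustrative, not drawn from the study's data.

```python
def percent_agreement(coder_a, coder_b):
    """Fraction of items on which two coders assigned the same cluster label."""
    if len(coder_a) != len(coder_b):
        raise ValueError("coders must label the same set of items")
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

# Hypothetical example: two coders assigning 10 nodes to five clusters (A-E).
coder_1 = ["A", "B", "C", "A", "D", "E", "B", "C", "A", "D"]
coder_2 = ["A", "B", "C", "A", "D", "E", "B", "C", "B", "D"]
print(percent_agreement(coder_1, coder_2))  # 9 of 10 items agree -> 0.9
```

Note that percent agreement does not correct for chance agreement; chance-corrected statistics such as Cohen's kappa are stricter alternatives when the number of clusters is small.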