Problems with “percent correct” in conditional discrimination tasks
Literature on conditional discrimination tasks indicates that the interpretation of data depends on assumptions about what constitutes evidence of performance accuracy and change. According to one interpretation, performance after a procedural intervention (e.g., introduction of new stimuli in an identity matching-to-sample task) is compared to baseline performance before the intervention; if a decrease in performance is evident, then the conclusion is drawn that the intervention produced a deficit in performance. According to a different interpretation, performance after an intervention is compared not to baseline but to chance level; if performance is significantly different from chance level after the intervention, the conclusion is drawn that the intervention did not produce a deficit in performance. Evidence for the presence or absence of stimulus control or concepts is extracted from such data depending on the method of comparison. In many cases, the intervention may produce a decrease in accuracy from a baseline of 90–100% to the 60–80% range, which may be significantly different from baseline but also significantly different from the 50% chance level for two-choice tasks. Thus, different, if not opposite, conclusions might be drawn from the same set of data depending on the method of analysis (e.g., a change from a baseline of near 90% correct to 70% correct after the intervention either is or is not a performance deficit depending on the method of analysis). Interpretations of results from conditional discrimination tasks may profitably be clarified when data are presented more objectively as percent stimulus control rather than as percent correct.
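The numerical point in the abstract can be illustrated with one-tailed exact binomial tests. This is a hypothetical sketch, not the article's analysis: the trial count of 100 is an assumption made here for illustration. With 70 of 100 trials correct, the score is significantly below a 90% baseline yet significantly above the 50% chance level, so each comparison method supports a different conclusion from the same data.

```python
from math import comb


def binom_upper_tail(k: int, n: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p): evidence of above-chance responding."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))


def binom_lower_tail(k: int, n: int, p: float) -> float:
    """P(X <= k) for X ~ Binomial(n, p): evidence of a drop from baseline."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(0, k + 1))


# Hypothetical post-intervention performance: 70 correct out of 100 trials.
n_trials, n_correct = 100, 70

# Comparison 1: against the 50% chance level of a two-choice task.
p_vs_chance = binom_upper_tail(n_correct, n_trials, 0.5)

# Comparison 2: against a 90% baseline accuracy.
p_vs_baseline = binom_lower_tail(n_correct, n_trials, 0.9)

# Both tests are significant, so "no deficit" (above chance) and
# "deficit" (below baseline) are each statistically defensible.
print(f"vs. chance (50%):   p = {p_vs_chance:.2e}")
print(f"vs. baseline (90%): p = {p_vs_baseline:.2e}")
```

Both p-values fall well below conventional significance thresholds, which is exactly the ambiguity the article targets: "percent correct" alone does not say which comparison is the right one.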
Iversen. (2016). Problems with “percent correct” in conditional discrimination tasks. European Journal of Behavior Analysis, 17(1), 69–80. https://doi.org/10.1080/15021149.2016.1139368