Often the effectiveness of an educational intervention for a large lecture group is assessed by testing the cohort before and after the intervention and measuring any improvement in the aggregated data from pre- to post-testing. A limitation of this method is that not all students may attend the pre-test, the post-test or the lectures in which the intervention is administered, diluting the significance of the results. An alternative approach is for students to use a unique but anonymous research code that allows researchers to 'tag' each individual student and hence identify those students who participate in all intervention activities and tests ('complete responders'). This paper argues that tagged data can yield more statistically significant support for an intervention hypothesis than untagged data, even when the sample is small. In a recent study that tested the efficacy of interactive lecture demonstrations (ILDs) in improving students' conceptual understanding of an advanced topic in electronics (AC resonance), the 'complete responders' formed a relatively small subgroup (N=21) of the full group (N=86) of students who participated in some or all of the activities and tests ('all responders'). The learning gains of the 'complete responders' were statistically more significant than those of the 'all responders'. The reasons for this increased significance are discussed in this paper.
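To illustrate why tagged (matched) data can strengthen the statistical case even with a small sample, the sketch below contrasts a paired t-test on matched pre/post scores with an independent-samples t-test on unmatched cohort sittings. The scores, score distributions, attendance numbers other than N=21, and the specific choice of t-tests are illustrative assumptions for this sketch, not the study's data or analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic illustration only -- not the study's data.
# Tagged ('complete responder') subgroup: each student's post-test score
# can be matched to their own pre-test score via the research code.
n_tagged = 21
pre_tagged = rng.normal(45, 12, n_tagged)
post_tagged = pre_tagged + rng.normal(10, 8, n_tagged)  # individual gains

# Untagged ('all responders') cohort: pre- and post-test sittings with
# only partial overlap, so they must be analysed as two independent samples
# (attendance figures here are assumed for illustration).
pre_all = rng.normal(45, 12, 70)
post_all = rng.normal(55, 14, 60)

# Matched data permit a paired test, which removes between-student variance
# from the comparison of pre- and post-test performance.
t_paired, p_paired = stats.ttest_rel(post_tagged, pre_tagged)

# Unmatched aggregate data force an independent-samples test, where
# between-student variance inflates the standard error of the difference.
t_ind, p_ind = stats.ttest_ind(post_all, pre_all, equal_var=False)

print(f"paired (tagged, N={n_tagged}): t = {t_paired:.2f}, p = {p_paired:.2g}")
print(f"independent (untagged):        t = {t_ind:.2f}, p = {p_ind:.2g}")
```

Under these assumptions, the paired analysis typically returns a smaller p-value than the independent-samples analysis despite the much smaller N, because each student serves as their own control; this is the statistical mechanism the abstract's claim about 'complete responders' rests on.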