Susan Ebbels - Within-participant designs as a way into intervention research for busy practitioners?

Professionals working in health and education are increasingly required to read, interpret and create evidence regarding the effectiveness of interventions. This requires a good understanding of the strengths and weaknesses of different intervention study designs. Databases such as What Works and SpeechBite help those interested in interventions for speech, language and communication to distinguish between different levels of evidence.

Intervention studies are often divided into between- and within-participant designs. Between-participant designs (often called group comparisons) involve comparing two or more groups of participants where at least one group receives an intervention and at least one other group acts as a control. Within-participant designs do not require a control group as they compare performance within (a group of) participants, for example, by comparing progress with intervention to progress without intervention (either before it started, or in a different area).

The What Works database divides evidence for different interventions into three broad categories: strong (including at least one systematic review), moderate (randomised controlled trials and some other robust group comparison designs) and indicative (other evidence, including all within-participant designs, whether involving groups or single cases). The SpeechBite website has a more detailed rating scale (from 1 to 10), but this is only applied to studies involving a group comparison design. Within-participant designs, whether involving groups of children or single cases, are not yet rated.

Within-participant designs are the most frequent design on SpeechBite and are also the most feasible for practising professionals to carry out as part of their daily practice. However, different designs vary in their robustness and, in my view, it is important for professionals to be aware of the ways in which their robustness can be increased. This is because, with a few tweaks, much everyday practice could be adjusted so that its results contribute to the evidence base. It is therefore a pity that different types of within-participant designs are not distinguished from each other in either What Works or SpeechBite.

I recently wrote an article encouraging speech and language therapists to appraise and carry out intervention research (Ebbels, 2017). This takes readers through a range of designs commonly used in the field, from those with the least experimental control to those with the most, with a particular focus on how the more robust designs avoid some of the limitations of weaker ones. It covers several different within-participant designs which can be used with single cases or with groups, without requiring a control group. I discuss how the robustness of these designs can be increased and how not all within-participant designs are equal.

The key feature of intervention research is that studies need to provide experimental control, in order to increase confidence that any progress seen would not have happened without intervention. The main ways to achieve this with within-participant designs are to include (a simple illustration follows the list):

  1. a baseline period before intervention starts. This indicates the spontaneous rate of progress without intervention (maybe due to maturation or practice effects),
  2. control items or areas which are also tested, but not expected to improve due to the intervention, or
  3. both of the above.
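To make this concrete, here is a minimal sketch in Python, using entirely made-up percentage-correct scores, of how a baseline period and control items work together. Spontaneous progress (maturation or practice effects) should show up in the baseline slope and in the untreated control items, so a genuine intervention effect should appear as a clearly steeper slope on the treated items once therapy begins.

```python
# A minimal sketch (hypothetical scores, purely illustrative) of how a
# baseline period plus control items can separate an intervention effect
# from spontaneous progress in a within-participant design.

# Percentage correct at four assessment points: two baseline sessions
# before intervention, then two sessions after intervention begins.
sessions = ["baseline 1", "baseline 2", "post 1", "post 2"]
treated_items = [30, 32, 55, 70]   # items targeted in therapy
control_items = [28, 30, 31, 33]   # matched items, not treated

def rate_of_change(scores):
    """Average change in score per session (a simple slope estimate)."""
    steps = [b - a for a, b in zip(scores, scores[1:])]
    return sum(steps) / len(steps)

baseline_rate = treated_items[1] - treated_items[0]    # slope before intervention
intervention_rate = rate_of_change(treated_items[1:])  # slope once treatment starts
control_rate = rate_of_change(control_items)           # slope on untreated items

print(f"Treated items, baseline rate:     {baseline_rate:+.1f} per session")
print(f"Treated items, intervention rate: {intervention_rate:+.1f} per session")
print(f"Control items, overall rate:      {control_rate:+.1f} per session")

# If the intervention rate clearly exceeds both the baseline rate and the
# control-item rate, maturation or practice effects alone become an
# unlikely explanation for the gains.
```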

Adding multiple baselines provides further control. In a multiple-baseline design, intervention for different targets (or indeed participants) is staggered: one target is held in baseline while another is treated, and the held-back target is then treated later (a sketch of such a staggered schedule follows). Measuring what happens after intervention ceases is also important: is progress maintained or lost, or does it continue further? Of course, the strength of the experimental design is not the same as whether the results support an intervention. Results can be positive or negative, and the more robust designs allow you to draw stronger conclusions either way about whether an intervention was effective.
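Below is a similarly hypothetical sketch of a staggered schedule across three invented targets. The point of the stagger is that each target still in baseline acts as a concurrent control for the target currently being treated.

```python
# A minimal sketch of a staggered (multiple-baseline) schedule across
# three hypothetical therapy targets. Each target stays in baseline
# until its own start week, so targets still in baseline serve as
# concurrent controls for whichever target is being treated.

targets = {"target A": 3, "target B": 6, "target C": 9}  # intervention start week

for week in range(1, 13):
    phases = [
        f"{name}: {'intervention' if week >= start else 'baseline'}"
        for name, start in targets.items()
    ]
    print(f"week {week:>2} | " + " | ".join(phases))

# If each target improves only once its own intervention starts, and not
# while it is still in baseline, the case that the therapy itself caused
# the change is considerably stronger.
```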

If practitioners incorporate aspects of evidence-based practice into their daily work, this is likely to improve both the evidence base and practice. Whether interpreting the research studies of others, or designing their own, practitioners need a good understanding of research design. I hope I have provided a few pointers here and in my paper for those wishing to carry out smaller scale studies without the need for a control group and that practitioners will become increasingly involved in intervention research.

References: 

Ebbels, S. H. (2017). Intervention research: Appraising study designs, interpreting findings and creating research in clinical practice. International Journal of Speech-Language Pathology.
