Eva Skärstrand, Knut Sundell, Sven Andréasson, Response to the commentary of Segrot et al on the Swedish SFP trial, European Journal of Public Health, Volume 24, Issue 3, June 2014, Pages 355–356, https://doi.org/10.1093/eurpub/cku050
The commentary of Segrot et al on the Swedish Strengthening Families Program (SFP) trial is important. The spread of evidence-based practice has resulted in an increased interest in empirically supported interventions (ESIs) and a growing number of controlled trials of imported and culturally adapted interventions. Evidence from selected case examples of replication trials of family-based US Blueprints model and promising programs appears mixed.1 We are beginning to learn from these successes and failures that features of both ESIs and the research designs used to test them may contribute to outcomes, that is, whether transport from one cultural context to another is successful in terms of program implementation and observed outcomes.
At least four explanations are available for the contradictory results from studies of imported ESIs. The first has to do with methodological differences among the outcome trials. For instance, efficacy trials in which program developers supervise the provision of experimental services often produce larger effect sizes than effectiveness trials that take place in the context of routine services, where program developers are less involved.2
The second deals with ambiguities in the cultural adaptation process. When an ESI is imported to a new culture, program materials must often be translated and the content of program activities is often screened for cultural relevance. Typically, some type of adaptation or modification is needed. Unfortunately, there is no consensus about the criteria for determining when cultural adaptation is needed.3,4 One solution is to restrict adaptation to what Resnicow et al.5 referred to as ‘surface structure’ and stress fidelity to the so-called ‘deep structure’. However, few program developers define the deep structure and even fewer have tested whether core components are empirically related to outcomes.
A third potential explanation is that ESIs in failed replications have not been adequately implemented. Implementation is a multidimensional construct, consisting of ‘fidelity, dosage, quality, participants’ responsiveness, program differentiation, monitoring of control conditions, program reach and adaptation’.6 Fidelity involves adherence to the program curriculum, competence in using the intervention and differentiation from alternative services.6 Fidelity is a key variable among implementation markers, and it has been found to modify intervention benefits.7 If an imported ESI is implemented with poor fidelity, an otherwise well-conducted outcome study might falsely produce findings of no effect.
A fourth source of variation may lie in unobserved contextual influences that have the potential to moderate the effects of ESIs when imported to new settings. Such cultural differences are widely acknowledged, but their exact effects have rarely been the focus of systematic research in translating ESIs to new cultures.4,8
The next generation of imported ESI trials should incorporate research designs that allow for a differential examination of surface and deeper adaptations. Data from recent replications suggest that carefully controlled effectiveness research is warranted before an ESI is recommended for dissemination in a new cultural context. Also, much can be learned from domestic intervention adaptations for ethnic or racial subgroups, especially if risk processes differ within subgroups. An emerging challenge is the identification of aspects of adaptation that may be unique to specific contexts vs. those aspects of adaptation that may have universal application.
From attempts to replicate ESIs across and within countries, a number of models for cultural adaptation are beginning to emerge.4 Typically, these models prescribe a series of steps or decision-making guidelines for adapting, implementing and evaluating an intervention for a new context. For instance, the Planned Intervention Adaptation Protocol1 suggests comparing the original program with both a culturally adapted version and a control group.
The design of the Swedish SFP trial did not allow for a determination of whether intervention effectiveness was compromised by the adaptation or, alternatively, whether the adaptations were insufficient and more changes might have been necessary to yield benefits among Swedish families. Therefore, one can only speculate on possible reasons for the lack of significant effects.