Abstract

Background

The Multiphase Optimization Strategy (MOST) is a framework for systematically and efficiently developing a behavioral intervention, using a sequence of experiments to prepare and optimize the intervention.

Purpose

Using a 6-year MOST-based behavioral intervention development project as an example, we outline the results of the experiments at each step, and the resulting decision-making process, to illustrate the practical challenges present at each stage.

Methods

To develop a positive psychology (PP) based intervention to promote physical activity after an acute coronary syndrome (N = 255 across four phases), we utilized qualitative, proof-of-concept, factorial design, and randomized pilot experiments, with iterative modification of intervention content and delivery.

Results

Through this multiphase approach, we ultimately developed a 12-week, phone-delivered, combined PP-motivational interviewing intervention to promote physical activity. Across stages, we learned several important lessons: (a) participant and interventionist feedback is important, even in later optimization stages; (b) a thoughtful and systematic approach using all information sources is required when conflicting results in experiments make next steps unclear; and (c) new approaches emerging in the field over a multiyear project should be integrated into the development process.

Conclusions

A MOST-based behavioral intervention development program can be efficient and effective in developing optimized new interventions, though it may require complex and nuanced decision-making at each phase.

Introduction

Compared to the development of new biological treatments (e.g., drug development), developing behavioral interventions has numerous additional complexities and considerations [1, 2]. These considerations include questions regarding the specific elements of the intervention, the modality of delivery, the discipline and training of those delivering the intervention (or whether there are interventionists at all, as in the case of eHealth or mHealth interventions), and frequency and duration of contact with participants. Each of these factors can play a major role in the feasibility, acceptability, and impact of a behavioral intervention, and some of the most complicated determinations involve decisions about optimal combinations of these components within multicomponent behavioral interventions [3].

Researchers have approached the question of behavioral intervention development in different ways. Some approaches to developing and testing multicomponent behavioral interventions involve either taking a single best approximation of an intervention, based on prior literature or a theoretical model, or using a staged model of new intervention assessment that typically involves a single predetermined type of intervention [4]. These approaches do not plan for substantial optimization or modification of a program prior to a well-powered trial, nor do they typically provide insight into which components of the program may be most necessary or efficacious. Recently, more complex and iterative multiphase trial designs have been used for Type I translational research, such as in the National Institutes of Health (NIH) Obesity-Related Behavioral Intervention Trials (ORBIT) initiative [2] to develop obesity prevention and reduction strategies. These more systematic methods for developing behavioral interventions should allow for more rational generation of effective interventions and can maximize their public health impact.

One increasingly used approach to multicomponent intervention development is the Multiphase Optimization Strategy (MOST) [3]. This strategy, inspired by approaches that are standard in engineering, utilizes a range of experiments to efficiently create an optimized intervention before testing it in a high-stakes and expensive clinical trial. The MOST approach is comprised of Preparation, Optimization, and Evaluation Phases (see Fig. 1). The Preparation Phase may include literature review, qualitative research, and feasibility pilot trials. An initial version of the intervention created via the Preparation Phase is then refined via the Optimization Phase. Optimization can involve direct comparison of different intervention elements (e.g., different components or different intensities or durations of delivery), often using complete or fractional factorial design experiments. The information gathered in this phase is used to select the intervention components that make up the optimized intervention. This may be followed by a preliminary controlled trial of the optimized intervention to confirm feasibility and potential impact. This process ultimately results in a well-tested and refined intervention, ready for the Evaluation Phase, in which the intervention is tested against a relevant control condition in an efficacy trial.

Fig. 1.

Phases of the Multiphase Optimization Strategy (MOST) process as implemented in this project

A core feature of the MOST process is that each experiment informs the next step in development, providing information to assess intervention feasibility, inform refinement of the intervention, and, in some cases, gauge the relative benefits of different components. While the MOST model is logical, decision-making about next steps within the MOST process can be more complex than it may first appear. At times, unclear or conflicting results from a particular phase—or new innovations in the field—may necessitate nuanced decisions about intervention elements or the design of next-step experiments. Not all such questions can be anticipated at the outset of the MOST process, and flexible thinking with an eye toward the ultimate goal of developing a feasible and effective intervention in the specific population of interest is critical to using MOST appropriately.

Our team used the MOST approach to develop and test a positive psychology (PP) based behavioral intervention to promote physical activity and other health behaviors in patients suffering an acute coronary syndrome (ACS; myocardial infarction or unstable angina). Over the past 6 years, we completed multiple phases of this project, including qualitative, proof-of-concept, factorial design, and initial randomized trials to create an optimized PP-based intervention. In each phase, we encountered decisions about the optimization of the intervention based on our data, participant feedback, and new information in the field in a dynamic process.

We have previously published results from each phase of this process, but we have not detailed the process by which we used results at each step to make decisions about the intervention and next-step testing. We experienced multiple complicated decision points, and we also learned that each experiment provided additional rich and useful information critical to intervention development that we had not expected to be able to utilize. We felt that sharing the considerations and decisions required to utilize a multiphase framework could be valuable for behavioral scientists considering such an approach to develop an intervention. Accordingly, in this paper, we outline the design and results of each experiment conducted in the MOST-based process, then discuss the challenges faced and decisions made at each stage to move forward in the treatment development and optimization process.

Methods

Overview

The Positive Emotions after Acute Cardiac Events (PEACE) project was a multiphase behavioral intervention development project (Fig. 1) that utilized MOST to develop a PP-based intervention to promote physical activity among patients with a recent ACS [5]. We chose to utilize a PP approach given growing evidence that positive psychological constructs (e.g., positive affect and optimism) are prospectively and independently linked with superior cardiac health [6, 7]. In addition, PP interventions have shown promise in effectively improving well-being and, in some cases, health behaviors in patients with medical illness [8]. Utilizing the MOST framework, this project was organized into four distinct studies.

Preparation Phase (PEACE I and II)

The first stage (PEACE I) used qualitative data to understand positive cognitions and emotions experienced by ACS patients, the relationship between these constructs and physical activity, and patient preferences regarding a PP-based intervention [9]. Using this information, we crafted an initial PP-based intervention and tested it in post-ACS patients in a proof-of-concept trial (PEACE II) to assess feasibility/acceptability and obtain participant feedback [10]. We refer to PEACE II as proof of concept because it was the first test of a new concept/program that had just been developed based on qualitative feedback and review of theory. We see this in contrast to a pilot trial, which we conceptualized as an early test of a more well-developed intervention to more formally assess feasibility or efficacy (as in PEACE IV).

Optimization Phase (PEACE III)

Next, we performed a complete factorial optimization trial (PEACE III) to examine the effects of different potential intervention components (e.g., adding motivational interviewing and comparing different intervention durations) on health behaviors [11], allowing us to efficiently ask several questions about optimal intervention elements.
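To make the factorial logic concrete: a complete two-level factorial with three components yields 2^3 = 8 conditions, and each component's main effect is estimated by contrasting the cells in which it is switched on against those in which it is off. The sketch below is illustrative only, with hypothetical component names and simulated outcomes; it is not the trial's actual design or analysis code.

```python
from itertools import product
import random

random.seed(0)

# Hypothetical two-level components (names are illustrative, not the trial's exact factors)
components = ["motivational_interviewing", "longer_duration", "booster_sessions"]

# A complete 2x2x2 factorial design: 8 conditions
conditions = list(product([0, 1], repeat=len(components)))
assert len(conditions) == 8

def simulated_outcome(cell):
    """Toy data-generating model: component 0 adds 2.0 units to the outcome,
    component 1 adds 0.5, component 2 adds nothing, plus small noise."""
    base = 10.0
    effect = 2.0 * cell[0] + 0.5 * cell[1] + 0.0 * cell[2]
    return base + effect + random.gauss(0, 0.1)

cell_means = {cell: simulated_outcome(cell) for cell in conditions}

def main_effect(idx):
    """Main effect of component idx: mean of 'on' cells minus mean of 'off' cells."""
    on = [m for cell, m in cell_means.items() if cell[idx] == 1]
    off = [m for cell, m in cell_means.items() if cell[idx] == 0]
    return sum(on) / len(on) - sum(off) / len(off)

for i, name in enumerate(components):
    print(f"{name}: estimated main effect = {main_effect(i):.2f}")
```

Because every component appears in half the cells, each main effect is estimated using all participants, which is what makes the factorial approach efficient for asking several component questions at once.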

Preparation for Evaluation Phase (PEACE IV)

Finally, we tested the intervention, optimized in PEACE III, in a small randomized trial (N = 47) against an attention- and time-matched control condition [12]. This was our final step in working to create a carefully refined intervention that would be ready for testing in a well-powered formal efficacy trial in the Evaluation Phase.

Across all studies, except as noted, eligible patients were English-speaking adults admitted to a cardiology unit for an ACS diagnosed using consensus criteria and/or as per prior studies [13–15] who had suboptimal health behavior adherence, measured using items from the Medical Outcomes Study Specific Adherence Scale (MOS SAS) [16]. Patients were excluded if they demonstrated cognitive deficits [17], had a medical condition likely to lead to death within 6 months, or were unable to participate in physical activity.
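The inclusion/exclusion logic above can be summarized as a simple screen: a patient is eligible only if all inclusion criteria hold and no exclusion criterion applies. A minimal sketch follows; the field names are hypothetical, and the actual study screening relied on clinical review and MOS SAS items rather than a coded record like this.

```python
def eligible(patient: dict) -> bool:
    """Illustrative eligibility check mirroring the criteria described in the text.
    Returns True only if all inclusion criteria are met and no exclusion applies."""
    meets_inclusion = (
        patient["english_speaking"]
        and patient["adult"]
        and patient["admitted_with_acs"]        # ACS diagnosed per consensus criteria
        and patient["suboptimal_adherence"]     # per MOS SAS items
    )
    meets_exclusion = (
        patient["cognitive_deficits"]
        or patient["life_expectancy_under_6_months"]
        or not patient["able_to_do_physical_activity"]
    )
    return meets_inclusion and not meets_exclusion

candidate = {
    "english_speaking": True,
    "adult": True,
    "admitted_with_acs": True,
    "suboptimal_adherence": True,
    "cognitive_deficits": False,
    "life_expectancy_under_6_months": False,
    "able_to_do_physical_activity": True,
}
print(eligible(candidate))  # True
```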

Data and safety monitoring boards (DSMBs) were convened for the final two stages: the PEACE III factorial design trial and the PEACE IV randomized pilot trial. Given the complexity of assessing efficacy by group in PEACE III (which included eight conditions), the small number of participants in PEACE IV, the relatively low-risk nature of the behavioral interventions, and the trials’ focus on feasibility and acceptability, the DSMBs for both studies focused less on group differences in outcomes (and domains such as “study-stopping rules” related to superior efficacy) and more on accrual, retention, adverse events, and feasibility.

Preparation Phase: PEACE I and PEACE II

PEACE I: qualitative study

PEACE I was a qualitative study that enrolled 34 hospitalized ACS patients. In this case, participants with suboptimal pre-ACS adherence (n = 22) and those with adequate adherence (n = 12), as assessed by the MOS SAS, were included to obtain information from patients with a wide range of health behavior adherence. Participants underwent 50–60-min semi-structured qualitative interviews in the hospital and again 3 months post-admission. The interviews focused on: (a) exploring ACS patients’ experiences of positive psychological cognitions and emotions (e.g., positive affect) over the 3-month study period; (b) examining the links between these constructs and physical activity, diet, and medication adherence, as part of our model linking positive states, cardiac health behaviors, and cardiac outcomes; and (c) exploring participants’ ideas about the timing, duration, content, and delivery modality of a PP-based intervention to promote cardiac health behaviors after ACS. For this qualitative work, we continued interviews until reaching thematic saturation and used directed content analysis, with two independent coders analyzing transcribed interviews in NVivo; further detail is available in the main study results paper [9].
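When two independent coders assign categorical codes to the same transcript segments, their agreement is commonly summarized with Cohen's kappa, which corrects raw agreement for chance. The sketch below uses toy codes and data, not the study's actual coding scheme.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two raters assigning one categorical code per segment."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed proportion of segments on which the two coders agree
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected chance agreement from each coder's marginal code frequencies
    freq_a = Counter(coder_a)
    freq_b = Counter(coder_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Toy example: codes two raters applied to eight interview segments
a = ["optimism", "gratitude", "optimism", "affect", "affect", "gratitude", "optimism", "affect"]
b = ["optimism", "gratitude", "affect", "affect", "affect", "gratitude", "optimism", "optimism"]
print(f"kappa = {cohens_kappa(a, b):.2f}")
```

In practice, disagreements flagged this way are typically resolved by discussion before finalizing the codebook.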

Overall, optimism and positive affect were frequently reported both in the hospital and at 3 months. In addition, participants reported a reinforcing relationship between positive affect/optimism and physical activity such that physical activity led to greater optimism and positive affect, and these constructs in turn led to more activity. In contrast, gratitude, though commonly experienced, was infrequently linked with health behaviors. Regarding the proposed PP intervention, participants reported that they would prefer a core intervention lasting 8 weeks and completion of sessions by phone rather than in person or over the Internet, due to the combination of logistical advantages and personal connection afforded by a phone-based program. Regarding specific PP activity types, participants emphasized a desire to choose activities for at least some of the program rather than having all activities predetermined. They also expressed that activities focused on utilizing past success and personal strengths, along with those targeting gratitude, were of particular interest.

We also collected quantitative data on levels of positive and negative psychological constructs (e.g., optimism and depression) to better characterize the population and examine changes over time, though this was an exploratory aim given the small sample. These data did confirm that, among those who were nonadherent to health behaviors pre-ACS, greater optimism during the ACS hospitalization was associated with greater adherence improvements at 3 months [18].

Decisions and challenges regarding next steps (see Table 1)

This information generally supported the core theoretical framework of the intervention: optimism and positive affect appeared to be associated with key health behaviors, including physical activity. Based on the information from PEACE I, we created an intervention with a written treatment manual to outline intervention/activity rationale and to allow participants to complete and write about PP intervention activities (e.g., performing kind acts), with weekly phone sessions after discharge. We shortened the intervention from an originally conceived 12 weeks to 8 weeks based on participant feedback. In addition, after completing different predetermined activities for the first 6 weeks, in the final 2 weeks participants chose an activity from the prior weeks that they found to be a good fit and completed a modified version of that activity.

Table 1.

Challenges experienced during this Multiphase Optimization Strategy (MOST) based treatment development approach and our team’s attempts to address them

Challenge 1: What does the study team do if participants’ self-reported preferences in qualitative interviews conflict with the existing literature on the same topic?
Solution: Consider using the next portion of the Preparation Phase to test ideas related to this content area. This will allow for additional input and empirical data to be obtained in this domain.

Challenge 2: How should the team proceed if new, previously untested intervention components appear to be indicated based on participant feedback?
Solution: If time is available, utilize established methods of component development to develop a potential component and test it in the next phase.

Challenge 3: How and when should the team consider alternative study designs when they appear in the literature?
Solution: Integrate such designs into the preplanned process when possible, understanding that such integration may be imperfect due to the timing of this integration within the ongoing MOST process.

Challenge 4: How should the team proceed when empirical findings about intervention components differ from the experience of interventionists who delivered the program?
Solution: Try to reconcile the differences while considering the limitations of each type of information. May consider how components or designs can include both perspectives.

Challenge 5: How can observations about how participants customize the intervention to their circumstances be utilized to further develop the intervention?
Solution: Deliberately review (e.g., via exit interviews) how participants utilize the intervention. Tailor the program in an ongoing manner to the ways in which participants are making best use of it.

Challenge 6: How should the team make decisions in the context of mismatches in participant feedback and quantitative outcome data from the same trial?
Solution: Interpret both types of data with an understanding of the limitations of each. Consider how to include both perspectives, and again revisit data from prior phases and the broader literature.

Challenge 7: How does the study team select one intervention component over another when there is little apparent difference in efficacy between the components?
Solution: Consider compromise solutions, and consider the overall goals of the intervention, the intervention’s theoretical model, feasibility, and acceptability.

Regarding specific PP activities, we largely utilized content from prior PP intervention studies [19, 20]. PP interventions typically consist of PP “exercises” that use reflective, written, or behavioral tasks related to positive thoughts, feelings, and behaviors. To select the specific exercises (see Table 2 for exercises in the optimized intervention), we first identified those positive psychological constructs that have been most consistently linked to health behaviors (e.g., positive affect and optimism) [6]. Next, the PP intervention literature was reviewed to identify frequently used, validated, and effective PP intervention exercises that would target these specific constructs.

Table 2.

Summary of optimized positive psychology–motivational interviewing (PP–MI) intervention

Weeks 1–4. PP module: Gratitude-based activities. MI module: Setting goals for physical activity.

Week 1
PP exercise: Gratitude for positive events. Participants identify three positive events in the past week and reflect on the associated positive feelings.
MI topic: Moving for better health/activity tracking. Participants identify a physical activity goal, then discuss the pros and cons of changing their activity.

Week 2
PP exercise: Gratitude letter. Participants write a letter of gratitude thanking a person for their support or kindness.
MI topic: Setting a SMART activity goal. Participants learn about the SMART (specific, measurable, attainable, relevant, time-based) system for goal setting.

Week 3
PP exercise: Capitalizing on positive events. Participants identify a positive life event in real time and then magnify its effect by reflecting on or sharing it.
MI topic: Barriers and problem solving. Participants consider potential barriers to being more active and problem-solve them. They continue to refine their goals.

Week 4
PP exercise: Skills application week. Participants select a useful activity from the prior 3 weeks, then make a plan to utilize this skill regularly.
MI topic: Reviewing and reflecting on physical activity. Participants reflect on their physical activity in the past 3 weeks and their progress toward their overall goal.

Weeks 5–8. PP module: Strengths-based activities. MI module: Using resources to be active.

Week 5
PP exercise: Recalling past success. Participants recall and write about an event in which they experienced success and their contribution to it.
MI topic: Finding new routes. Participants explore their neighborhood and brainstorm new places to walk.

Week 6
PP exercise: Using personal strengths. Participants identify a “signature strength,” then find a new way to use it in the next 7 days.
MI topic: Using neighborhood and equipment resources. Participants consider neighborhood and equipment resources that could help them be more active.

Week 7
PP exercise: Using personal strengths (Part 2). Participants find a new way to use a second “signature strength” in the next 7 days.
MI topic: Using social resources. Participants identify social resources for activity and consider ways to engage in small amounts of activity each day.

Week 8
PP exercise: Skills application week. Participants select a useful activity from the prior 3 weeks, then make a plan to utilize this skill regularly.
MI topic: Reviewing and reflecting on activity. Participants reflect on their activity in the past 3 weeks and their progress toward their overall goal.

Weeks 9–12. PP module: Meaning-based activities. MI module: Finding new ways to be active.

Week 9
PP exercise: Enjoyable and meaningful activities. Participants complete enjoyable and meaningful activities.
MI topic: Managing “slips.” Participants learn about managing “slips” related to activity.

Week 10
PP exercise: The good life. Participants write in detail about a best possible future in 1 year, then make a plan to achieve it.
MI topic: Reducing sitting time. Participants assess the amount of time they spend sitting each day and discuss strategies for reducing their sitting time.

Week 11
PP exercise: Acts of kindness. Participants plan and then complete three acts of kindness toward others within a single day.
MI topic: Increasing strength. Participants learn about strength-training benefits and discuss ways to perform strength training.

Week 12
PP exercise: Skills application and preparing for the future. Participants select a useful activity from the prior 3 weeks, then make a plan to utilize this skill regularly.
MI topic: Reviewing progress and considering the future. Participants review their accomplishments and create a plan for physical activity for the near future.
WeekPP modulePP exerciseMI moduleMI topic
1Gratitude-based activitiesGratitude for positive events
Participants identify three positive events in the past week and reflect on the associated positive feelings.
Setting goals for physical activityMoving for better health/activity tracking
Participants identify a physical activity goal, then discuss the pros and cons of changing their activity.
2Gratitude letter
Participants write a letter of gratitude thanking a person for their support or kindness.
Setting a sequential multiple assignment randomized trial (SMART) activity goal
Participants learn about the SMART (specific, measurable, attainable, relevant, time-based) system for goal setting.
3Capitalizing on positive events
Participants identify a positive life event in real-time and then magnify its effect by reflecting on or sharing it.
Barriers and problem solving
Participants consider potential barriers to being more active and problem-solve them. They continue to refine their goals.
4Skills application week
Participants select a useful activity from the prior 3 weeks, then make a plan to utilize this skill regularly.
Reviewing and reflecting on physical activity
Participants reflect on their physical activity in the past 3 weeks and their progress toward their overall goal.
5Strengths-based activitiesRecalling past success
Participants recall and write about an event in which they experienced success and their contribution to it.
Using resources to be activeFinding new routes
Participants explore their neighborhood and brainstorm new places to walk.
6Using personal strengths
Participants identify a “signature strength,” then find a new way to use it the next 7 days.
Using neighborhood and equipment resources
Participants consider neighborhood and equipment resources that could help them be more active.
7Using personal strengths (Part 2)
Participants find a new way to use a second “signature strength” in the next 7 days.
Using social resources
Participants identify social resources for activity and consider ways to engage in small amounts of activity each day.
8Skills application week
Participants select a useful activity from the prior 3 weeks, then make a plan to utilize this skill regularly.
Reviewing and reflecting on activity
Participants reflect on their activity in the past 3 weeks and progress toward their overall goal.
9Meaning-based activitiesEnjoyable and meaningful activities
Participants complete enjoyable and meaningful activities.
Finding new ways to be activeManaging “slips”
Participants learn about managing “slips” related to activity.
10The good life
Participants write in detail about a best possible future in 1 year, then make a plan to achieve it.
Reducing sitting time
Participants assess the amount of time they spend sitting each day and discuss strategies for reducing their sitting time.
11Acts of kindness
Participants plan and then complete three acts of kindness toward others within a single day.
Increasing strength
Participants learn about strength-training benefits and discuss ways to perform strength training.
12Skills application and preparing for the future
Participants select a useful activity from the prior 3 weeks, then make a plan to utilize this skill regularly.
Reviewing progress and considering the future
Participants review their accomplishments and help them to create a plan for physical activity for the near future.
Table 2.

Summary of optimized positive psychology–motivational interviewing (PP–MI) intervention

Week 1
PP module: Gratitude-based activities
PP exercise: Gratitude for positive events. Participants identify three positive events in the past week and reflect on the associated positive feelings.
MI module: Setting goals for physical activity
MI topic: Moving for better health/activity tracking. Participants identify a physical activity goal, then discuss the pros and cons of changing their activity.

Week 2
PP exercise: Gratitude letter. Participants write a letter of gratitude thanking a person for their support or kindness.
MI topic: Setting a SMART activity goal. Participants learn about the SMART (specific, measurable, attainable, relevant, time-based) system for goal setting.

Week 3
PP exercise: Capitalizing on positive events. Participants identify a positive life event in real time and then magnify its effect by reflecting on or sharing it.
MI topic: Barriers and problem solving. Participants consider potential barriers to being more active and problem-solve them. They continue to refine their goals.

Week 4
PP exercise: Skills application week. Participants select a useful activity from the prior 3 weeks, then make a plan to utilize this skill regularly.
MI topic: Reviewing and reflecting on physical activity. Participants reflect on their physical activity in the past 3 weeks and their progress toward their overall goal.

Week 5
PP module: Strengths-based activities
PP exercise: Recalling past success. Participants recall and write about an event in which they experienced success and their contribution to it.
MI module: Using resources to be active
MI topic: Finding new routes. Participants explore their neighborhood and brainstorm new places to walk.

Week 6
PP exercise: Using personal strengths. Participants identify a “signature strength,” then find a new way to use it in the next 7 days.
MI topic: Using neighborhood and equipment resources. Participants consider neighborhood and equipment resources that could help them be more active.

Week 7
PP exercise: Using personal strengths (Part 2). Participants find a new way to use a second “signature strength” in the next 7 days.
MI topic: Using social resources. Participants identify social resources for activity and consider ways to engage in small amounts of activity each day.

Week 8
PP exercise: Skills application week. Participants select a useful activity from the prior 3 weeks, then make a plan to utilize this skill regularly.
MI topic: Reviewing and reflecting on activity. Participants reflect on their activity in the past 3 weeks and progress toward their overall goal.

Week 9
PP module: Meaning-based activities
PP exercise: Enjoyable and meaningful activities. Participants complete enjoyable and meaningful activities.
MI module: Finding new ways to be active
MI topic: Managing “slips”. Participants learn about managing “slips” related to activity.

Week 10
PP exercise: The good life. Participants write in detail about a best possible future in 1 year, then make a plan to achieve it.
MI topic: Reducing sitting time. Participants assess the amount of time they spend sitting each day and discuss strategies for reducing their sitting time.

Week 11
PP exercise: Acts of kindness. Participants plan and then complete three acts of kindness toward others within a single day.
MI topic: Increasing strength. Participants learn about strength-training benefits and discuss ways to perform strength training.

Week 12
PP exercise: Skills application and preparing for the future. Participants select a useful activity from the prior 3 weeks, then make a plan to utilize this skill regularly.
MI topic: Reviewing progress and considering the future. Participants review their accomplishments and create a plan for physical activity for the near future.

Challenge 1: what does the study team do if participants’ self-reported preferences in qualitative interviews conflict with the existing literature on the same topic?

Participants in Preparatory Phase projects (e.g., qualitative studies) may thoughtfully and consistently report one preference relevant to an intervention, while the literature may point to the opposite. This can be challenging: participants in the project may precisely represent the population to be studied and may have provided clear and compelling information, yet the literature may reflect a broader trend endorsed by a substantially larger number of persons than a single, smaller study. Deciding which information to weigh most heavily can depend on factors such as sample size (e.g., the number and breadth of participants from whom qualitative data were gathered) and the applicability of prior work (e.g., how well study populations in the prior literature match the current population of interest).

One such dilemma we faced related to the use of activities focused on gratitude. In the qualitative interviews, ACS participants reported that experiencing gratitude was not linked to subsequent participation in physical activity or other health behaviors. However, we chose to include gratitude activities for several reasons: prior successful PP interventions had utilized them [19, 21]; qualitative participants reported a desire to complete activities focused on cultivating gratitude; and a prior study in a cohort of medical psychiatry patients [22] found that two gratitude exercises (out of nine total PP exercises) led to the greatest improvement in optimism, a construct that was clearly linked with health behaviors in the qualitative interviews, consistent with a wide range of other studies [23].

Challenge 2: how does the team proceed if new, previously untested, intervention components appear to be indicated based on participant feedback?

Participants may have important, relevant, and novel ideas about intervention content or delivery based on their knowledge and direct experience. Such information can be a critical piece of the intervention development process. At the same time, there may be no existing or well-developed precedent for such intervention elements, leaving the team to decide whether to defer action on this information or to work to develop these elements as part of the next phase in the MOST process.

In this situation, during the qualitative interviews, participants reported interest in an activity focused on recalling and using skills from previous life successes. However, we could identify no existing PP activity focused on leveraging past success. Given that participants had clearly and consistently reported interest in such an activity, we chose to develop one, utilizing a structure and approach parallel to those of more established activities focused on optimism, altruism, gratitude, and using personal strengths. We pilot tested the activity within our team, edited it in multiple rounds, and ultimately included it in the PEACE II intervention, the next step in the Preparation Phase. This would allow us to obtain feedback from participants, who would rate the utility of individual exercises in that trial, to help us ultimately decide whether to include it in the final intervention.

Challenge 3: how and when should the team consider alternate study designs when they appear in the literature?

A well-developed a priori plan for sequenced experiments within a MOST framework can help to structure the process and lead to an efficient and focused program of intervention development. At the same time, alternative study design approaches (e.g., adaptive designs) can arise that may be worth considering if they will more efficiently or effectively inform intervention optimization, though this decision must be weighed against the time, effort, and resources associated with the change and the new design. A benefit of the MOST framework is that it is designed to be flexible, so additional design components may be tested at later phases.

In our case, a somewhat simpler consideration arose. Though we had originally conceived of the next phase (PEACE II) as a single-arm feasibility and acceptability trial of the initial PP intervention, shortly after beginning the trial, we became aware of a similar trial by Moskowitz et al. of a PP intervention in patients with HIV [21]. That trial similarly focused on acceptability and preliminary impact; it was a nonrandomized single-arm study but also used a subsequently recruited treatment as usual (TAU) comparison group to provide some initial comparison to the natural history of measured constructs (e.g., positive affect) in such a population. We decided to adjust our approach and adopt a similar design for PEACE II (especially given that the trial had already begun, making a randomized two-arm trial less viable) to maximize the information we gathered about the impact of the initial intervention.

PEACE II: proof-of-concept trial

In PEACE II, we enrolled post-ACS patients with suboptimal adherence, initially into the PP intervention group and subsequently into the TAU group. Intervention participants were enrolled in the hospital, completed weekly PP activities, and had weekly phone sessions with the interventionist for 8 weeks [10]. The primary study outcomes were feasibility and acceptability. Feasibility was assessed by rates of PP session completion, while acceptability was measured through weekly participant ratings of PP exercise ease and utility on 0–10 Likert scales. As a secondary aim, we examined changes from baseline to 8 weeks in positive affect (measured using the positive affect items from the Positive and Negative Affect Schedule [PANAS] [24]), optimism (measured via the Life Orientation Test-Revised [LOT-R] [25]), and depression and anxiety (measured via the Hospital Anxiety and Depression Scale [HADS] subscales [26]) between the intervention and the subsequently enrolled TAU group. Each week, PP participants were also asked open-ended questions about each exercise to allow us to obtain detailed exercise-specific feedback.

Overall, 23 intervention participants and 25 TAU participants were enrolled. Eighty-one percent of all possible exercises were completed (M = 6.5/8), including dropouts, and mean ratings of PP exercise ease and utility were 7.4 (standard deviation [SD] 2.1) and 8.1 (SD 1.6) out of 10, respectively. Not counting the final two choice weeks (which varied in the specific exercises selected by participants), the past success exercise was rated as easiest to complete, and the two gratitude exercises were rated as most useful. Compared to TAU, the intervention led to greater, medium effect size improvements in positive affect (PANAS effect size difference: d = 0.47; p = .097), anxiety (HADS-A: d = 0.57; p = .051), and depression (HADS-D: d = 0.67; p = .029) but not dispositional optimism. Qualitative participant feedback suggested that the exercises focused on personal strengths and acknowledging past successes fostered greater self-esteem, optimism, and feelings of efficacy and confidence.
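The between-group effect sizes above are Cohen's d values. As a quick reference, a minimal sketch of the pooled-standard-deviation calculation; the summary statistics below are made up for illustration and are not the PEACE II data:

```python
import math

def cohens_d(m1, s1, n1, m2, s2, n2):
    """Between-group Cohen's d using the pooled standard deviation."""
    pooled_sd = math.sqrt(((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2)
                          / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical summary statistics (not trial data): a 4-point between-group
# difference with SDs of 6 in groups of 23 and 25 participants.
print(round(cohens_d(14.0, 6.0, 23, 10.0, 6.0, 25), 2))  # prints 0.67
```

By this convention, d values near 0.2, 0.5, and 0.8 are commonly described as small, medium, and large, which is how the magnitudes above are characterized.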

Decisions and challenges regarding next steps

Based on these findings, it appeared that the intervention was feasible and well-accepted in this patient population and that it had effects on most proximal psychological outcomes (e.g., positive affect), though not on optimism when measured as a trait. We were then prepared to begin the optimization phase of the MOST process to determine which elements may be associated with the greatest impact on physical activity (and other health behaviors) post-ACS.

Challenge 4: how should the team proceed when study findings about intervention components differ from the experience of interventionists who delivered the program?

Quantitative outcome data from patients—whether related to acceptability of intervention components or change in patient-reported measures—play a major role in determinations about feasibility and efficacy of an intervention and its component parts. Such ratings provide a systematic and targeted assessment of components across all participants, and such data are highly weighted in decision-making. At the same time, other inputs may be highly valuable. For example, interventionists who work with many participants during behavioral intervention trials have an opportunity to observe trends across multiple participants and components and may have more comprehensive recall about components that were working well at the time and longitudinally. This can lead to a dilemma when an aspect of the program seems clearly to be burdensome or ineffective to those directly working with participants, but outcome data suggest otherwise.

This situation arose when we reviewed data on the individual PP activities from PEACE II in preparation for developing a core PP-based intervention in the next step. In PEACE II, participants selected the PP activities they wished to complete for the final 2 weeks of the program. These choice activities received relatively high participant ratings of both ease and utility. However, upon reviewing their systematically recorded observations within the study session records, interventionists recalled that many patients struggled to choose an activity, and some explicitly requested preselected activities for all weeks. In addition, the process of choosing an activity often consumed a substantial portion of the phone session, limiting time for other key components of the session.

Challenge 5: how can observations about how participants individually customize the intervention to their circumstances be utilized to further develop the intervention?

In addition, the PEACE II intervention had utilized few activities specifically focused on positive affect, despite the links identified in the literature between positive affect and health [27, 28] and our qualitative work in PEACE I linking positive affect with health behaviors. Furthermore, interventionists noted that some participants desired additional activities focused more on simple, hedonic happiness. To address this gap, we reviewed the literature and identified a recently published activity (“capitalizing on positive events”) focused on amplification of positive affect that had been used and tested in patients with medical illness [21]. In addition, numerous participants utilized humor in their daily lives to promote well-being, and some noted that an activity focused on using humor could be of use in the post-ACS period.

Given the above two linked challenges, we chose to replace the choice exercises for the PP-based intervention in the upcoming PEACE III factorial optimization trial with the capitalizing and humor exercises to target these specific constructs and remove the complexity associated with the choice exercise. At the same time, we remained mindful of maintaining choice and person-activity fit [20] throughout the process and looked to find additional ways to bring these aspects of participant experience and preference into the intervention (see below). We were then prepared to initiate PEACE III, a factorial design trial to compare different potential intervention components in the Optimization Phase.

Optimization Phase

PEACE III: factorial design optimization trial

The PEACE III study utilized a complete factorial design to make three concurrent comparisons of intervention components [5, 11]. The trial (a) compared the relative merits of performing PP activities alone versus performing PP in combination with motivational interviewing (MI), (b) assessed whether daily or weekly PP exercise completion by participants was superior, and (c) determined the utility of three additional biweekly “booster” sessions after the initial 8 week intervention. The results of these three simultaneous embedded comparisons, examining impact on health behaviors, would inform the selection of the intervention’s optimal components for the Evaluation Phase.

We chose the PP versus PP–MI comparison for several reasons. The PP-alone program would be straightforward and would focus directly on well-being-related constructs linked to greater activity. On the other hand, psychological interventions combined with specific behavioral interventions have had substantial effects on health behaviors [29], and, thus, a combined psychological–behavioral program could have a substantially larger effect than one focusing solely on well-being. At the same time, such an intervention could be burdensome for patients regarding required out-of-session activities, which could affect the intervention’s acceptability; thus, a careful empirical comparison was required. Daily and weekly completion of PP activities were compared because more frequent completion of exercises could lead more quickly to improved well-being and to stronger integration of related skills into everyday life. However, daily exercises could prove tedious or time consuming, and prior work in PP has suggested that intermittent completion of well-being-related tasks may have a greater effect than daily tasks [30]. Finally, booster sessions to consolidate knowledge and behavior have been used in health interventions [31, 32], but they have had mixed results in behavioral intervention studies [33]. Fidelity for this and subsequent intervention studies was assessed by ratings of randomly selected participant sessions by study intervention supervisors (C.C., E.P.) using structured rating scales; see the PEACE III methods paper for details [5]. To efficiently make these comparisons within a single trial, we utilized a 2 × 2 × 2 complete factorial design, with a total of eight conditions (e.g., PP alone, with weekly PP-activity completion and no boosters).
The primary study outcome was moderate to vigorous physical activity (MVPA) at 16 weeks as measured by accelerometer; the main secondary outcome was self-reported adherence to physical activity, diet, and medications as measured by the Medical Outcomes Study Specific Adherence Scale (MOS SAS). Importantly, this should not be considered an eight-arm randomized controlled trial (RCT); instead, the factorial design allows for the analysis of three main effects (content/frequency/booster), as well as the interactions across the conditions. Performing three separate experiments, each with N = 128, would have provided no additional power to detect main effects and would not have allowed estimation of the interaction effects [34]; we chose N = 128 such that the study was powered at 80% to detect medium effect size differences between groups for each of the three comparisons.
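The efficiency argument above can be made concrete with a small simulation of a 2 × 2 × 2 factorial analyzed by main effects rather than as eight separate arms. All numbers here are hypothetical (a built-in booster effect of 2 units, noise SD of 8), and effect coding is an assumption for illustration, not a description of the trial's actual analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Effect-coded (+1/-1) factors for the 2 x 2 x 2 factorial: content (PP vs.
# PP-MI), frequency (daily vs. weekly), booster (yes vs. no); 16 per cell.
levels = np.array([-1.0, 1.0])
content, freq, booster = np.meshgrid(levels, levels, levels, indexing="ij")
cells = np.stack([content.ravel(), freq.ravel(), booster.ravel()], axis=1)
X_cells = np.repeat(cells, 16, axis=0)  # 128 participants x 3 factors

# Hypothetical outcome: a booster main effect only (illustrative numbers).
y = 30.0 + 2.0 * X_cells[:, 2] + rng.normal(0.0, 8.0, size=128)

# Design matrix: intercept, three main effects, three two-way interactions,
# and one three-way interaction, all estimable from the same 128 people.
X = np.column_stack([
    np.ones(128),
    X_cells,
    X_cells[:, 0] * X_cells[:, 1],
    X_cells[:, 0] * X_cells[:, 2],
    X_cells[:, 1] * X_cells[:, 2],
    X_cells[:, 0] * X_cells[:, 1] * X_cells[:, 2],
])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Each main-effect contrast compares 64 vs. 64 participants: the full sample
# is reused for every factor, which is the efficiency rationale for MOST.
assert int((X_cells[:, 2] == 1.0).sum()) == 64
print("booster main effect estimate:", round(float(beta[3]), 2))
```

Because the balanced design is orthogonal, each coefficient is estimated from all 128 participants, which is why a single factorial trial matches the main-effect power of three separate two-arm experiments.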

Overall, a total of 128 hospitalized ACS patients with suboptimal health behavior adherence were enrolled. Of these, 26 (20%) were immediate dropouts (zero exercises completed) within 2 weeks post-ACS, often due to medical issues, though follow-up data were obtained from 87% of the remainder. As in PEACE II, participants (aside from immediate dropouts) completed sessions at high rates (81% of all sessions) and rated PP-activity ease and utility highly (M 7.9/10 [SD 2.2] in both cases). Overall, across all participants (all of whom received some version of a PP-based intervention), there were small-to-medium effect size (and statistically significant) improvements in psychological outcomes (PANAS, LOT-R, HADS-A, HADS-D; d = 0.25–0.60) and large improvements in self-reported adherence to health behaviors (MOS SAS; d = 1.27) over 16 weeks [35]; of note, these improvements were all greater in magnitude than the pre–post improvements seen in the TAU condition in PEACE II [10].

Regarding between-group comparisons, for the primary study outcome of MVPA, there was a small-to-medium effect of booster sessions on MVPA (p = .064; d = 0.43) and minimal effects of PP versus PP–MI content (p = .82; d = 0.05) and of frequency of PP exercises (weekly vs. daily; p = .95; d = 0.02). Regarding differences in overall adherence (MOS SAS) at 16 weeks, there was a small-to-medium effect of MI content on MOS SAS improvement (p = .044; d = 0.39), with effects modestly favoring weekly exercises (p = .48; d = 0.14) and booster sessions (p = .20; d = 0.25). When analyzing only the MOS SAS physical activity item, results likewise favored the MI (p = .19; d = 0.27), booster (p = .22; d = 0.25), and weekly (p = .84; d = 0.04) conditions, without reaching statistical significance. Two-way and three-way interactions did not reveal substantial additional findings.

Decisions and challenges regarding next steps

Overall, this study appeared to confirm the feasibility and acceptability of the intervention, though it raised a substantial concern related to immediate withdrawal/dropout from the study within the first 2 weeks postdischarge (with no completed sessions), in many cases due to recurrent symptoms or the need for readmission. The study also confirmed the overall beneficial effects of the intervention as measured by pre–post assessments, though a non-PP comparison group was not included in this optimization study.

Challenge 6: how should the study team make decisions in the context of mismatches between participant feedback and empirical quantitative data from a trial?

This situation is somewhat similar to the one noted in Challenge 4 related to mismatches between participant ratings and interventionist observations. Systematically collected quantitative data about outcomes take great priority in optimization experiments. Such data represent information across all participants, typically involving important outcomes collected using validated methods and, therefore, should play a substantial role in selecting optimal components. At the same time, qualitative feedback from participants provides vital information, helping to elucidate alternative hypotheses for outcomes or mechanistic pathways, and can provide context for findings. At times, such feedback may also be in stark contrast to findings from quantitative data, leading to a dilemma for the study team.

In this trial, booster sessions were associated with greater improvement in both MVPA and self-reported health behavior adherence on systematic quantitative assessments. This stood in some contrast to participant reports about the utility and acceptability of these sessions: participants (and interventionists) in many cases found the sessions to be too brief, repetitive (as they contained no new material), and spaced apart in a manner (every 2 weeks) that seemed to impede prior momentum.

Challenge 7: how does the study team select one intervention component over another when there is little apparent difference in efficacy between the components?

Though experiments may be designed to identify a component that is more efficacious, better accepted, or otherwise superior to a second component (or to the absence of that component), there may be minimal observed differences between the components, or one component may have advantages in one domain (e.g., efficacy), while the other has advantages in another (e.g., acceptability or burden).

In this study, when comparing PP alone to PP–MI, the findings were complex. Rates of session completion and acceptability ratings were similar between the groups, suggesting that the combined intervention was not more burdensome to participants. In terms of efficacy, there were also minimal differences between the groups on MVPA, though PP–MI led to substantially greater improvements in self-reported health behavior adherence. Likewise, regarding weekly versus daily completion of PP activities, there were only slight differences between the two groups, modestly favoring weekly exercises in terms of efficacy.

Both sets of challenges described above led to substantial discussions among all study team members regarding how to consider the booster session data and how to select optimal components in the PP–MI and weekly/daily comparisons. In the end, we chose to expand the intervention to 12 weeks given the superior outcomes associated with booster sessions. However, rather than the brief boosters used in PEACE III, we chose to extend the intervention with additional, novel sessions to leverage the finding that additional sessions and/or a longer intervention appeared to be superior, while not replicating the booster sessions that some participants had experienced as unsatisfying. After much deliberation and review of our theoretical model [36], we also chose to utilize PP–MI content given its clear impact on self-reported adherence to activity and other health behaviors, despite the minimal observed effect on MVPA compared to PP alone. Finally, given that we only wished to include a more intensive intervention if it showed sufficient benefit to justify the greater burden, we chose weekly PP-activity completion by participants given its combination of modestly greater impact and lower overall participant burden.

Alongside these changes, we also modified the structure of the intervention in two additional ways. First, to make the intervention more cohesive, we restructured the PP portion into three conceptual modules. We had recently utilized such an approach in phone-delivered PP-based interventions in other medical populations [37, 38] and found it to be well-accepted and to provide a clearer broad framework for the intervention. In this case, we created three modules: (a) gratitude/positive affect-based activities, (b) strengths-based activities, and (c) meaning/optimism-based activities. As opposed to the prior structure, which simply used a series of separate exercises performed weekly, this approach allowed interventionists to explain the rationale and the targeted well-being construct for the series of activities completed during each 4 week module (see Table 2).

Second, given consistent feedback from participants in PEACE III about the need to adapt and use the associated activities/skills in daily life for them to have a meaningful effect (rather than completing them as isolated, one-off activities), the optimized intervention to be tested in the next step focused more heavily on adapting weekly exercises into skills that could be used in daily life (e.g., after completing a gratitude letter, the interventionists and participants would work together to consider how gratitude could be more regularly expressed to others in daily life). In addition, each 4 week module ended with a week that specifically focused on using skills from one of that module’s prior activities in the participant’s regular daily activities. This approach also emphasized participant choice in adapting these activities to promote person-activity fit, consistent with the approach endorsed by PEACE I participants noted earlier.

Finally, in future studies of the intervention in post-ACS patients, we decided to continue to identify potential participants in the hospital but to randomize them at a 2 week study visit postdischarge, both to reduce the number of participants lost to follow-up for medical or logistical reasons and to allow us to apply our resources to those patients who successfully navigated the first 2 weeks post-ACS.

Moving Toward the Evaluation Phase

PEACE IV: initial randomized trial

Following the factorial design optimization trial, we aimed to ensure that the optimized intervention was well-accepted and potentially effective prior to Evaluation Phase testing. Accordingly, we examined the feasibility and preliminary impact of the optimized, 12 week PP–MI intervention in an initial RCT among ACS patients with suboptimal health behavior adherence. To provide a relevant, attention-matched comparator, an MI-based health behavior education intervention was selected as the control condition. Published intervention development and stage models [39] often recommend a TAU control condition at this stage (NIH Stage 1b [1]). However, we chose a more intensive control condition to better explore the intervention’s impact, given that the next step would be a large efficacy trial and that we had already examined the intervention’s effect compared to treatment as usual in prior work (e.g., PEACE II). We, therefore, wished to explore whether there was some signal of an effect compared to a more intensive control in this small study, understanding that the study was underpowered for formal analyses of these outcomes and that such information would not be used in power calculations for subsequent larger trials.

The primary study outcome of this small pilot trial was feasibility/acceptability (the projected minimum N was 40, which would result in sufficient power to meet specific a priori feasibility/acceptability aims), and key secondary outcomes focused on between-group differences in the proximal psychological (positive affect) and main health behavior (accelerometer-measured physical activity) outcomes. Metrics of feasibility and acceptability in this trial were the proportion of sessions successfully completed by participants (feasibility) and participants’ mean ratings of ease and utility (0–10) of the activities (acceptability). Participants in both conditions received treatment manuals, weekly interventionist sessions, a health education handbook, and pedometers to promote physical activity. We recruited participants in the hospital, but given the early dropout observed in PEACE III, did not randomize them until a study visit 2 weeks after discharge.

A total of 65 participants were enrolled in the hospital, and 47 were randomized (n = 24, PP–MI; n = 23, MI alone) and included in the intent-to-treat analysis. Details of recruitment, enrollment, retention, and follow-up are outlined elsewhere via CONSORT diagram [12]. Regarding feasibility and acceptability, PP–MI participants completed 84% of all possible sessions, and participants’ mean ratings (0–10 Likert scale) of PP–MI activity ease and utility were 8.3 (SD 2.3) and 8.0 (SD 2.3), respectively, for the PP intervention component and 8.1 (SD 2.4) and 8.2 (SD 2.2) for MI; these feasibility and acceptability ratings were the highest yet observed in this multiphase project, suggesting beneficial effects of the refinement process.

Regarding intervention efficacy, the PP–MI condition was associated with statistically significant, medium-to-large effect size improvements in PANAS (positive affect) score at both 12 and 24 weeks (12 weeks: estimated mean difference [EMD] 3.90 [SE 1.95], p = .045, d = 0.56; 24 weeks: EMD 7.34 [SE 2.16], p < .001, d = 1.12) compared to the MI-based control condition. For physical activity, compared to the MI-based control condition, the PP–MI intervention was associated with higher, though nonsignificant, MVPA at 12 weeks (EMD 9.46 min/day [SE 7.92], p = .23, d = 0.42) and with significantly greater, large effect size MVPA at 24 weeks (EMD 15.1 min/day [SE 6.8], p = .026, d = 0.81). PP–MI participants also took a significantly greater number of steps at 12 weeks (EMD 1842.1 steps [SE 849.8], p = .030, d = 0.76), with a modest reduction of between-group differences at 24 weeks (EMD 1617.0 steps [SE 1081.3], p = .14, d = 0.53) [12]. These results suggested that the optimized PP–MI intervention was likely ready for the Evaluation Phase, in which the PP–MI program could be tested against MI alone in a larger efficacy trial powered to detect between-group differences in physical activity.
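The published analyses used model-based estimates; as a rough illustration only (not the paper's actual method), an estimated mean difference and its standard error imply a Wald z statistic, from which a two-sided p-value follows via the standard normal distribution:

```python
from math import erf, sqrt

def wald_p_value(emd: float, se: float) -> float:
    """Two-sided p-value for an estimated mean difference (EMD) under a
    normal (Wald) approximation: z = EMD / SE."""
    z = abs(emd / se)
    # Standard normal CDF: Phi(z) = 0.5 * (1 + erf(z / sqrt(2)))
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))

# Illustration with the reported 12-week positive affect contrast
# (EMD 3.90, SE 1.95); the published model-based test need not
# match this approximation exactly.
print(wald_p_value(3.90, 1.95))
```

Here z = 3.90/1.95 = 2.0, giving a two-sided p of roughly .046, consistent with the reported p = .045 for that contrast.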

Discussion/Lessons Learned

Through this MOST treatment development project, we learned three important lessons:

Getting Participant—and Interventionist—Feedback is Crucial, Even at Later Stages

Though the initial stages of the multiphase process (e.g., qualitative research and proof-of-concept intervention trial) more explicitly focused on participant feedback, such feedback was critical throughout the development process. Indeed, with the largest number of post-ACS participants, the PEACE III factorial trial was perhaps the most useful in terms of feedback, as we were able to obtain participant perspectives about all aspects of the intervention in a large and varied group of post-ACS patients. This feedback, in turn, led to some of the biggest changes to the intervention (e.g., creation of modules and focus on skill building) and contributed substantially to decision-making about intervention components (e.g., creating a longer intervention of 12 weeks rather than booster sessions). Likewise, interventionists, who had deep knowledge of the theoretical framework of the intervention and direct experience with numerous participants, had valuable input at each stage and were able to make suggestions that were loyal to the concepts of the program yet provided practical solutions to barriers encountered during intervention testing.

Decisions are Sometimes More Complex Than Straightforward Interpretations of Data

When we began this process, we expected that our primary study results at each phase would rather clearly guide the study team to an optimized intervention to be tested in the subsequent phase. However, in each phase, the data indicated that the situation was more complex and nuanced than expected. For example, in PEACE I, participants did not link gratitude to health behaviors; in contrast, research suggested that gratitude activities were linked with substantial increases in optimism, which was clearly associated with health behaviors. This led to a dilemma about whether to include activities focused on gratitude. This complexity was also apparent in the three intervention component comparisons in PEACE III. For example, the combined PP–MI intervention was not associated with greater improvements in MVPA (the primary study outcome) yet was strongly associated with improvements in overall self-reported adherence. This led us to review feasibility/acceptability data (to assess whether PP–MI was more burdensome), feedback from participants and interventionists, and additional literature on combined psychological–behavioral interventions to make this difficult decision about whether to utilize a PP–MI or PP-alone approach. Likewise, conflicting data on booster sessions led us to include additional standard sessions rather than formal boosters, after considering all possible options, while still retaining the concepts of boosters (i.e., reinforcement of principles) in the “integration” week that concluded each 4 week module in the optimized intervention.

Information Comes Not Just From the Project’s Studies but From Advances in the Field

Finally, we initially expected the data from our projects to drive intervention development in a relatively linear and insular manner. However, over the course of the development period, numerous additional studies in the field led directly to changes in the intervention. Publications describing new PP-based activities led us to include some such activities (e.g., capitalizing on positive events) in the intervention. Likewise, our own team’s use of modules in PP-based interventions in other medical populations led us to see the benefits of this approach and apply it to this specific population. Such a situation underscored a key fact about this multiphase process: developing a behavioral intervention (even when expedited via an efficient treatment strategy) is a long process that can occur over the better part of a decade, especially when considering time from initial conception, before it is even tested in a large efficacy-focused randomized trial. Accordingly, the field will move forward in parallel with the team’s work in intervention development, and we found it vital to stay abreast of these changes.

MOST is a broad framework for optimization of all types of behavioral and biobehavioral interventions. There are a variety of optimization trial designs available to the investigator. Examples include the factorial experiment, which was used in the present study; the fractional factorial experiment; the sequential multiple assignment randomized trial (SMART); the microrandomized trial [40]; and the system identification experiment [41]. As is discussed at length in Collins [42], the choice of experimental design for an optimization trial depends on whether the intervention being optimized is fixed or adaptive and the specific scientific questions motivating the experiment.

Our specific example displays some of the ways in which the MOST approach can inform the development of a behavioral intervention. Utilizing careful preparation stages prior to testing allowed removal and modification of components that “should” have worked but did not resonate with participants. Furthermore, by using efficient designs in an optimization phase, we were able to test components that we could not otherwise have tested given constraints on time and funding; in our case, we would not have been able to test booster sessions if we had used a more traditional approach that moved quickly from conceptualization directly to a large efficacy trial.

It is also important to note that many of these concerns are not specific to the MOST approach. A recent intervention development paper using a theory-based but non-MOST approach [43] applied similar principles to those described here, including preparation (e.g., choosing a theory and identifying key concepts), creating and initially testing an intervention, and more fully conducting larger empirical trials of the intervention, and the authors identified many of the same challenges. Likewise, in our approach to developing the intervention, we did not utilize additional nontraditional trial designs, such as SMART designs, N-of-1 studies, or other novel designs, but such approaches could be used as part of the MOST framework or a non-MOST paradigm.

In conclusion, using the MOST approach, we were able to iteratively develop a PP-based behavioral intervention for post-ACS patients to promote physical activity. Though we followed a clear path to treatment development laid out through MOST, we found that decision-making about next steps was complex and nuanced, with multiple challenges for our study team to consider at each step. At the same time, we found that each experiment also provided information that was new, unexpected, and richer than we had anticipated. Indeed, the studies allowed us to have numerous insights into patient and interventionist experience and to gain information about subtle but important aspects of intervention content that ultimately played critical roles in optimizing the program. At this point, the optimized intervention has been repeatedly refined and tested in the population of interest, allowing our team to have confidence that an evaluative trial will be examining a well-developed intervention. We hope that this description of our experience, with discussion of our dilemmas and how we made decisions at each point, is useful for behavioral scientists who plan to use a MOST-based approach, and we highly recommend the use of MOST under similar circumstances.

Acknowledgments

Funding

This research project was supported by the National Heart, Lung, and Blood Institute through grant R01HL113272 (J.H.). Time for analysis and article preparation was also funded by the National Heart, Lung, and Blood Institute through grants K23HL123607 (C.C.) and K23HL135277 (R.M.).

Compliance with Ethical Standards

Authors’ Statement of Conflict of Interest and Adherence to Ethical Standards: The authors declare that they have no conflicts of interest.

Authors’ Contributions: J.H. and R.M. contributed to the original conception and design of the article, all authors provided substantial conceptual modifications and critical revision of the article for important intellectual content, and all authors approved of the final article.

Ethical Approval: The procedures and materials used in this study were approved by Partners Healthcare’s Institutional Review Board. This research was performed in accordance with the ethical standards of the 1964 Declaration of Helsinki and its later amendments.

Informed Consent: Written informed consent was obtained from all participants included in the studies.

References

1. Onken LS, Carroll KM, Shoham V. Reenvisioning clinical science. Clin Psychol Sci. 2014;2:22–34.

2. Czajkowski SM, Powell LH, Adler N, et al. From ideas to efficacy: The ORBIT model for developing behavioral treatments for chronic diseases. Health Psychol. 2015;34:971–982.

3. Collins LM, Murphy SA, Nair VN, Strecher VJ. A strategy for optimizing and evaluating behavioral interventions. Ann Behav Med. 2005;30:65–73.

4. Rounsaville BJ, Carroll KM, Onken LS. A stage model of behavioral therapies research: Getting started and moving on from stage I. Clin Psychol (New York). 2001;8:133–142.

5. Huffman JC, Albanese AM, Campbell KA, et al. The positive emotions after acute coronary events behavioral health intervention: Design, rationale, and preliminary feasibility of a factorial design study. Clin Trials. 2017;14:128–139.

6. Kubzansky LD, Huffman JC, Boehm JK. Positive psychological well-being and cardiovascular disease. J Am Coll Cardiol. 2018;72:1382–1396.

7. Huffman JC, Beale EE, Celano CM. Effects of optimism and gratitude on physical activity, biomarkers, and readmissions after an acute coronary syndrome. Circ Cardiovasc Qual Outcomes. 2016;9:55–63.

8. Peterson JC. A randomized controlled trial of positive-affect induction to promote physical activity after percutaneous coronary intervention. Arch Intern Med. 2012;172:329–336.

9. Huffman JC, Dubois CM, Mastromauro CA. Positive psychological states and health behaviors in acute coronary syndrome patients: A qualitative study. J Health Psychol. 2016;21:1026–1036.

10. Huffman JC, Millstein RA, Mastromauro CA, et al. A positive psychology intervention for patients with an acute coronary syndrome: Treatment development and proof-of-concept trial. J Happiness Stud. 2016;17:1985–2006.

11. Celano CM, Albanese AM, Millstein RA. Optimizing a positive psychology intervention to promote health behaviors following an acute coronary syndrome. Psychosom Med. 2018;80:526–534.

12. Huffman JC, Feig EH, Millstein RA. Usefulness of a positive psychology-motivational interviewing intervention to promote positive affect and physical activity after an acute coronary syndrome. Am J Cardiol. 2019;123:1906–1914.

13. Thygesen K, Alpert JS, Jaffe AS, et al. Third universal definition of myocardial infarction. J Am Coll Cardiol. 2012;60:1581–1598.

14. Ruo B, Rumsfeld JS, Hlatky MA. Depressive symptoms and health-related quality of life. JAMA. 2003;290:215–221.

15. Huffman JC, Mastromauro CA, Beach SR. Collaborative care for depression and anxiety disorders in patients with recent cardiac events. JAMA Intern Med. 2014;174:927–935.

16. DiMatteo MR, Hays RD, Sherbourne CD. Adherence to cancer regimens: Implications for treating the older patient. Oncology. 1992;6:50–57.

17. Callahan CM, Unverzagt FW, Hui SL. Six-item screener to identify cognitive impairment among potential subjects for clinical research. Med Care. 2002;40:771–781.

18. Huffman JC, Moore SV, Dubois CM. An exploratory mixed methods analysis of adherence predictors following acute coronary syndrome. Psychol Health Med. 2015;20:541–550.

19. Seligman ME, Steen TA, Park N, Peterson C. Positive psychology progress: Empirical validation of interventions. Am Psychol. 2005;60:410–421.

20. Lyubomirsky S, Sheldon KM, Schkade D. Pursuing happiness: The architecture of sustainable change. Rev Gen Psychol. 2005;9:111–131.

21. Moskowitz JT, Hult JR, Duncan LG, et al. A positive affect intervention for people experiencing health-related stress: Development and non-randomized pilot test. J Health Psychol. 2012;17:676–692.

22. Huffman JC, Dubois CM, Healy BC. Feasibility and utility of positive psychology exercises for suicidal inpatients. Gen Hosp Psychiatry. 2014;36:88–94.

23. Dubois CM, Lopez OV, Beale EE. Relationships between positive psychological constructs and health outcomes in patients with cardiovascular disease: A systematic review. Int J Cardiol. 2015;195:265–280.

24. Watson D, Clark LA, Tellegen A. Development and validation of brief measures of positive and negative affect: The PANAS scales. J Pers Soc Psychol. 1988;54:1063–1070.

25. Scheier MF, Carver CS, Bridges MW. Distinguishing optimism from neuroticism and trait anxiety, self-mastery, and self-esteem: A reevaluation of the Life Orientation Test. J Pers Soc Psychol. 1994;67:1063–1078.

26. Bjelland I, Dahl AA, Haug TT. The validity of the Hospital Anxiety and Depression Scale. J Psychosom Res. 2002;52:69–77.

27. Petrie KJ, Pressman SD, Pennebaker JW, et al. Which aspects of positive affect are related to mortality? Results from a general population longitudinal study. Ann Behav Med. 2018;52:571–581.

28. Chida Y, Steptoe A. Positive psychological well-being and mortality: A quantitative review of prospective observational studies. Psychosom Med. 2008;70:741–756.

29. Safren SA, Gonzalez JS, Wexler DJ, et al. A randomized controlled trial of cognitive behavioral therapy for adherence and depression (CBT-AD) in patients with uncontrolled type 2 diabetes. Diabetes Care. 2014;37:625–633.

30. Lyubomirsky S, Layous K. How do simple positive activities increase well-being? Curr Dir Psychol Sci. 2013;22:57–62.

31. Le HN, Perry DF, Stuart EA. Randomized controlled trial of a preventive intervention for perinatal depression in high-risk Latinas. J Consult Clin Psychol. 2011;79:135–141.

32. Mangels M, Schwarz S, Worringen U. Evaluation of a behavioral-medical inpatient rehabilitation treatment including booster sessions. Clin J Pain. 2009;25:356–364.

33. Ashby WA, Wilson GT. Behavior therapy for obesity: Booster sessions and long-term maintenance of weight loss. Behav Res Ther. 1977;15:451–463.

34. Collins LM, Dziak JJ, Kugler KC. Factorial experiments. Am J Prev Med. 2014;47:498–504.

35. Huffman JC, Feig EJ, Freedman M, et al. Usefulness of a positive psychology-motivational interviewing intervention to promote positive affect and physical activity after an acute coronary syndrome. Am J Cardiol. 2019;123:1906–1914.

36. Huffman JC, Dubois CM, Millstein RA. Positive psychological interventions for patients with type 2 diabetes: Rationale, theoretical model, and intervention development. J Diabetes Res. 2015;2015:1–18.

37. Celano CM, Gomez-Bernal F, Mastromauro CA, et al. A positive psychology intervention for patients with bipolar depression: A randomized pilot trial. J Ment Health. 2018;26:1–9.

38. Celano CM, Gianangelo TA, Millstein RA, et al. A positive psychology-motivational interviewing intervention for patients with type 2 diabetes: Proof-of-concept trial. Int J Psychiatry Med. 2019;54:97–114.

39. Gold SM, Enck P, Hasselmann H, et al. Control conditions for randomised trials of behavioural interventions in psychiatry: A decision framework. Lancet Psychiatry. 2017;4:725–732.

40. Klasnja P, Hekler EB, Shiffman S, et al. Microrandomized trials: An experimental design for developing just-in-time adaptive interventions. Health Psychol. 2015;34S:1220–1228.

41. Rivera DE, Hekler EB, Savage JS, Downs DS. Intensively adaptive interventions using control systems engineering: Two illustrative examples. In: Collins LM, Kugler KC, eds. Optimization of Behavioral, Biobehavioral, and Biomedical Interventions: Advanced Topics. New York, NY: Springer; 2018:21–35.

42. Collins LM. Optimization of Behavioral, Biobehavioral, and Biomedical Interventions: The Multiphase Optimization Strategy (MOST). New York, NY: Springer; 2018.

43. Masters KS, Ross KM, Hooker SA, Wooldridge JL. A psychometric approach to theory-based behavior change intervention development: Example from the Colorado Meaning-Activity Project. Ann Behav Med. 2018;52:463–473.
