Abstract

Objectives

To develop and disseminate a technical framework for administering the Research Participant Perception Survey (RPPS) and aggregating data across institutions using REDCap.

Materials and Methods

Six RPPS Steering Committee (RSC) member institutions met bi-weekly to achieve consensus on survey sampling techniques, data standards, participant and study descriptor variables, and dashboard design.

Results

RSC members implemented the infrastructure to send the RPPS to participants and shared data with the Empowering the Participant Voice Consortium Database. Two pilot sites used the tools generated by the RSC to implement the RPPS.

Discussion

The RSC created a REDCap project setup file, an external module visual analytics dashboard, an English/Spanish language file, and an implementation guide.

Conclusion

The technical setup materials created by the RSC were effective in aiding new sites in implementing the RPPS and could help future sites adopt the RPPS to better understand participant experiences and improve research recruitment and retention.

Lay Summary

Volunteering for a research study is a choice. Volunteers who have a good experience are more likely to join future studies. Yet, most researchers do not ask participants about their experiences. They fail to collect data key to improving research conduct. Surveys can be expensive and difficult to manage. Six research groups teamed up to solve this problem. They developed tools to streamline sending surveys and analyzing results. They shared tools online free of charge and sent surveys to 28 309 participants. They analyzed data in local and centralized databases. During the project, 2 more research groups adopted the tools and shared their data. Institutions use their data to drive improvements in the research process. The group is working to make the tools even easier to use for widespread adoption.

Background and significance

Recruitment and retention of diverse populations for clinical studies and trials remain challenges for researchers across clinical disciplines.1 An individual’s experience with the research enterprise can affect their willingness to enroll in future studies or recommend participation to members of their community.2 While patients’ experiences with clinical care are extensively evaluated and used to drive improvement through the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS),3 research participants’ experiences are rarely assessed.4 Moreover, there are no resources to compare local research participants’ experience with peer institutions or national benchmarks.

In 2008, the National Center for Advancing Translational Sciences (NCATS) funded an initiative led by Rockefeller University to develop and validate the Research Participant Perception Survey (RPPS). This validated survey contained 72 questions covering aspects of the research experience that participants deemed important.5,6 Survey themes include motivations to join, leave, or stay in research, recruitment, consent, communication with the study team, trust, respect, feeling of partnership, and understanding of risks and benefits.6,7 Follow-up efforts shortened the survey to 13- and 25-question versions.8 Rockefeller holds the copyright to the RPPS questions, which it has freely shared with organizations for quality improvement and research purposes. Despite the importance of understanding the participant experience, only 3 institutions had fully implemented the survey by 2020. RPPS developers hypothesized that 1 reason for the slow uptake of programs to assess participant perception was the technical hurdles involved in distributing the survey. Prior attempts relied on vendor systems with cost and privacy barriers.

REDCap is an electronic data capture system developed by Vanderbilt University Medical Center for clinical research data management. REDCap supports direct entry by study team members, survey functionality through a web browser, “conversational” text messaging, and mobile apps.9 It is available at no cost to academic, government, and non-profit organizations and is currently in use at over 7200 institutions in 156 countries. Every institution with a Clinical and Translational Science Award (CTSA) from NCATS uses REDCap and is a member of the REDCap Consortium.10

Objectives

In 2020, NCATS funded the Empowering the Participant Voice Initiative (EPV) to improve dissemination of the RPPS across academic institutions by developing a technical framework leveraging REDCap for collecting and sharing participant perception scores locally and across institutions.11 A central hypothesis in this work was that developing operational guidance that incorporated a ubiquitous technical platform would enable standardized adoption and provide an opportunity to share data useful for benchmarking. Therefore, the primary goal of EPV was to standardize supporting data, technology, and options for local approaches to selecting participants and administering surveys with REDCap.

Materials and methods

Consensus building

The RPPS Steering Committee (RSC) was formed and tasked with developing these standards and testing the framework. The committee consisted of the overall project PI and technical leader (R.K. and A.C., respectively), a clinical research leader, a project manager, and at least 1 technical staff member from Rockefeller University, University of Rochester, Johns Hopkins University, Duke University, Wake Forest University, and Vanderbilt University Medical Center. All RSC member sites distributed the RPPS except for Vanderbilt, which served as the technical lead and data coordinating center (DCC) for the EPV Consortium. The committee met bi-weekly between June 2020 and April 2024 to discuss survey sampling techniques, data standards, participant and study descriptor covariates, At-A-Glance Dashboard design, and test use cases. Between RSC meetings, a REDCap analyst, software developer, and project manager implemented technical changes. The Vanderbilt team also hosted a bi-weekly technical support meeting for sites to troubleshoot REDCap setup, external module, and data transfer questions.

The RSC-defined standard processes for obtaining data from research participants eligible to receive the RPPS included uploading data about those participants and their studies into REDCap, sending surveys to and receiving responses from participants, and sharing de-identified results with the DCC. Figure 1 outlines the process followed by each site. Project staff at each institution extracted study and research participant descriptors, including email addresses, from a clinical trials management system (CTMS) or electronic health record system (EHR) for participants eligible to receive the RPPS. They uploaded descriptors into a local REDCap project and sent invitations to research participants to complete the survey through email. Participants had 2-3 weeks to respond to the survey, with system-prompted reminders configured by the sites.

Figure 1. Data flow at EPV sites: study and participant descriptor .csv files are extracted from the clinical trials management system or electronic health record system, merged on protocol number, and uploaded to a local REDCap project; surveys are sent to participants and responses received in REDCap; data are then sent to the Consortium Database through the application programming interface. CTMS = clinical trials management system; EHR = electronic health record system; CSV = comma separated values; API = application programming interface; RPPS = research participant perception survey; EPV = empowering the participant voice; DCC = data coordinating center.
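To make the site-side flow in Figure 1 concrete, the sketch below (a minimal Python illustration, not part of the EPV deliverables) merges participant and study descriptor .csv extracts on the local protocol number and imports the result into a local REDCap project through the REDCap API. The file names, column names, URL, and token are hypothetical placeholders; actual field names are defined by the shared project setup file.

```python
import pandas as pd
import requests

REDCAP_API_URL = "https://redcap.example.edu/api/"   # placeholder REDCap API endpoint
REDCAP_API_TOKEN = "YOUR_PROJECT_TOKEN"              # project-level API token (placeholder)

# Participant descriptors exported from the CTMS/EHR (age, race, ethnicity, sex, gender, email, protocol)
participants = pd.read_csv("participant_descriptors.csv")
# Study descriptors (interventional vs. observational, randomization, MeSH terms, protocol)
studies = pd.read_csv("study_descriptors.csv")

# Merge on the local protocol number so each survey recipient carries its study context
records = participants.merge(studies, on="protocol", how="left")

# REDCap needs a unique record identifier for each imported row
records.insert(0, "record_id", list(range(1, len(records) + 1)))

# Import the merged descriptors into the local REDCap project via the API
response = requests.post(
    REDCAP_API_URL,
    data={
        "token": REDCAP_API_TOKEN,
        "content": "record",
        "action": "import",
        "format": "csv",
        "type": "flat",
        "overwriteBehavior": "normal",
        "data": records.to_csv(index=False),
    },
    timeout=60,
)
response.raise_for_status()
print(response.text)  # REDCap returns a count of imported records by default
```

Survey invitations and reminders would then be sent with REDCap's built-in survey distribution tools rather than from a script like this.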

Prior to distributing the survey, each site established a reciprocal data use agreement with the DCC to transmit de-identified participant descriptor, study descriptor, and survey data, and to view aggregated results. As part of the registration process for joining the EPV Consortium, sites provided a REDCap Application Programming Interface (API) key through which the DCC automatically extracted information into the Consortium Database daily. Sites had access to a central EPV Consortium Dashboard website, allowing for comparison of their scores and response rates with the blinded scores of other contributing sites. Table 1 summarizes the areas of data standardization that the RSC agreed on, as well as institution-specific deviations. We sought to collect key characteristics of survey recipients to enable subgroup analyses, while minimizing institutional burden for obtaining, cleaning, and uploading data.

Table 1.

Topic areas of consensus and variation in the data collection and distribution of the RPPS.

Topic | Consensus | Additions (A) and variations (V)
Participant descriptors uploaded from external database | Age, race, ethnicity, sex, gender, protocol (local variable) | (A) Surveyed in the past 6 months (y/n); (V) Omit study-level tracking
Study descriptors uploaded from external database | Interventional versus observational, randomization (y/n), MeSH terms for clinical domain of study | (V) Omit MeSH terms
Method for sampling | Census: distributed to all eligible participants in all (>90% of) studies | (V) Random sample of eligible participants in all studies; (V) Sampling targeted to specific studies or units
Timing of RPPS administration | End of study participation | (A) 0-2 months after consent; (A) Annually if still on-study
Method of sending RPPS | Email with unique participant survey link | (A) Patient portal message; (A) Text message with unique participant survey link; (A) Mailed paper survey with self-addressed stamped envelope
Secure data transfer | Nightly API sync by providing DCC with API key | (V) Secure file transfer of REDCap data download to DCC every 6 months
Data deidentification | All participant, researcher, and research unit identifiers and dates removed before API aggregation | None
Site identity obfuscation | Sites assigned a number known only to the site and DCC; sites’ total survey counts hidden on Consortium Dashboard to avoid reidentification | None
IRB approval | Quality improvement determination by IRB for local use and aggregation | (V) Exempt research determination by local IRB; (V) Secondary data use approval for aggregation
Data use agreements | Initiated by DCC with standard template | (V) Minor language modifications that do not affect data shared or data flow permitted
Multilingual management | EPV provided Spanish/English language file with built-in REDCap multilingual management features | (V) Spanish language version provided using multilingual external module (now deprecated)

Abbreviations: API, Application Programming Interface; DCC, Data Coordinating Center; EPV, Empowering the Participant Voice; IRB, Institutional Review Board; MeSH, Medical Subject Headings.
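As a rough illustration of the nightly aggregation step, the sketch below shows how a DCC-side script might pull records from each site's RPPS project through the REDCap API, drop identifier and date fields per the de-identification rule in Table 1, and tag each record with the blinded site number. The site registry, field names, and function names are hypothetical and are not taken from the EPV codebase.

```python
import requests

# Hypothetical registry of site API endpoints and keys shared with the DCC
SITES = [
    {"site_number": 1, "api_url": "https://redcap.site-a.edu/api/", "token": "SITE_A_TOKEN"},
    {"site_number": 2, "api_url": "https://redcap.site-b.edu/api/", "token": "SITE_B_TOKEN"},
]

# Fields that must not reach the Consortium Database (illustrative names)
IDENTIFIER_FIELDS = {
    "email", "participant_name", "researcher_name", "research_unit",
    "consent_date", "survey_sent_date",
}


def pull_site_records(site: dict) -> list[dict]:
    """Export all records from one site's RPPS project via the REDCap API."""
    resp = requests.post(
        site["api_url"],
        data={
            "token": site["token"],
            "content": "record",
            "action": "export",
            "format": "json",
            "type": "flat",
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()


def deidentify(record: dict, site_number: int) -> dict:
    """Strip identifier/date fields and tag the record with the blinded site number."""
    clean = {k: v for k, v in record.items() if k not in IDENTIFIER_FIELDS}
    clean["site_number"] = site_number  # number known only to the site and the DCC
    return clean


if __name__ == "__main__":
    consortium_rows = []
    for site in SITES:
        for rec in pull_site_records(site):
            consortium_rows.append(deidentify(rec, site["site_number"]))
    # consortium_rows would then be imported into the Consortium Database project
    print(f"Aggregated {len(consortium_rows)} de-identified records")
```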


Metadata were collected to track the specific implementation of each variable associated with each survey and to evaluate the effects of survey implementation decisions on survey outcomes.

Early versions of the At-A-Glance Dashboard were designed according to specifications developed by the steering committee to incorporate calculations of participant satisfaction scores. After the initial performance views were created, the RSC added response and completion rate tables to the At-A-Glance Dashboard through further discussion. Response rates for subgroups (eg, by race, age) are calculated based on all participants who were sent a survey; therefore, sites queried participant demographic data from the local CTMS or EHR to upload into REDCap. While top box scores were the primary outcome that the Dashboard sought to visualize, response rates were important for understanding whether the results were representative of the population being surveyed. The ability to filter response rates by demographic factors also provided actionable information on how the organization could better reach or advertise the survey to underrepresented populations.
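As a minimal sketch of the two calculations the Dashboard visualizes, the snippet below computes a top box score for one question and response rates by demographic subgroup. The column names and example data are hypothetical; actual field names come from the shared REDCap project setup file.

```python
import pandas as pd


def top_box_score(responses: pd.Series, top_values: set) -> float:
    """Percentage of non-missing answers that are the best possible response."""
    answered = responses.dropna()
    if answered.empty:
        return float("nan")
    return 100.0 * answered.isin(top_values).mean()


def response_rate_by_group(surveyed: pd.DataFrame, group_col: str) -> pd.Series:
    """Share of all surveyed participants in each subgroup who returned a response."""
    # Denominator: everyone sent a survey; numerator: anyone who responded.
    return 100.0 * surveyed.groupby(group_col)["responded"].mean()


# Hypothetical example data: one row per participant who was sent a survey
surveys = pd.DataFrame({
    "age_group": ["18-34", "18-34", "35-64", "35-64", "65+"],
    "responded": [True, False, True, True, False],
    # 5-point Likert item; "Definitely Yes" is the top box response
    "recommend": ["Definitely Yes", None, "Probably Yes", "Definitely Yes", None],
})

print(top_box_score(surveys["recommend"], {"Definitely Yes"}))   # ~66.7
print(response_rate_by_group(surveys, "age_group"))              # 50, 100, 0 by age group
```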

The RSC developed 4 products available for download to assist future institutions seeking to implement the RPPS: (1) a REDCap project setup .xml file, (2) a REDCap external module At-A-Glance Dashboard for plug-and-play analysis of RPPS data, deployable both locally and at the DCC, (3) a language file that allowed the RPPS to be presented in English and Spanish in conjunction with REDCap multilanguage functionality, and (4) an Implementation Guide that covered the technical and institutional considerations when designing a program to distribute the RPPS.12

Dissemination

We shared the EPV At-A-Glance Dashboard External Module on the REDCap External Module Repository (REMR). Once all RSC members had implemented this framework and the 4 dissemination products were published, we solicited additional institutions that had previously expressed interest in using the RPPS to pilot the framework and contribute their data to the Consortium Database. Columbia University and the University of Michigan obtained institutional commitment and agreed to participate in the pilot, using each of the 4 products to distribute the survey to research participants within their respective institutions. Links to the deliverable components of the EPV/RPPS infrastructure were made available to the public on the EPV project website.13

Results

Framework features

Figure 2 illustrates how the EPV At-A-Glance Dashboard functioned. The dashboard displayed the top box scores for the 14 participant perception questions in the RPPS (Figure 2A). A top box score is the percentage of individuals responding with the best possible response (ie, “Definitely Yes” on a 5-point Likert scale, or 9 or 10 on a scale from 1 to 10) out of all responses to that question and is the standard metric of satisfaction surveys.14 Dashboard users could filter the top box scores by site, age, education, ethnicity, gender, race, sex, level of demand of the study (simple: a few visits or simple tests or surveys; moderate: multiple visits or a short inpatient stay; intense: long or multiple inpatient stays), whether a disease was required to enroll in the study, the setting of the informed consent, and whether the study was interventional.11 Sites could customize filtering of local data by institutionally determined variables such as study, investigator, and department. Top box scores were displayed as a graph over time by month, quarter, and year of survey response to enable sites to track the effect of interventions to improve participant perception (Figure 2B). The At-A-Glance Dashboard displayed survey response and completion rates, which were filterable by demographic factors (Figure 2C). For sites interested in data for all response options rather than top box scores, the built-in REDCap “stats and charts” feature displayed the distribution of responses for each question (Figure 2D). Users at local sites could also create a custom dashboard for a subset of participants using complex criteria (ie, only participants in a study with a specific department or investigator, or with specific characteristics) in REDCap reports.

Figure 2. (A) Top box table by age, (B) top box graph over time by age, (C) response rate table by age, and (D) response distribution graph in the EPV At-A-Glance Dashboard. Higher top box scores are shaded green and lower scores orange. Response rates were categorized as any (>0% of questions answered), complete (80%-100%), partial (50%-79%), and breakoffs (1%-49%). Results shown here are for demonstration purposes and do not represent current actual EPV Consortium outcomes.
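A small sketch of the completion categories named in the Figure 2 caption, assuming completion is measured as the fraction of survey questions a participant answered; the thresholds come from the caption, while the function itself is illustrative.

```python
def completion_category(fraction_answered: float) -> str:
    """Map the share of questions answered to the dashboard's reporting category."""
    if fraction_answered >= 0.80:
        return "complete"      # 80%-100% of questions answered
    if fraction_answered >= 0.50:
        return "partial"       # 50%-79%
    if fraction_answered > 0.0:
        return "breakoff"      # 1%-49%; "any" response = everything above 0%
    return "no response"


assert completion_category(0.95) == "complete"
assert completion_category(0.60) == "partial"
assert completion_category(0.10) == "breakoff"
```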

Dissemination

RSC member sites began distributing the RPPS through the framework in September 2021. Among RSC and pilot site institutions, we sent 28 309 surveys and received 5281 responses as of May 17, 2024. After implementation at the RSC sites, the availability of the tools for adoption was publicized through presentations at professional conferences, webinars, and websites. Two newer member institutions, Columbia University and the University of Michigan, piloted the RPPS between October 2023 and January 2024. Because they did not receive funding from the EPV project, they identified internal resources to support implementation. As of May 17, 2024, these 2 pilot sites had sent 1470 surveys and received 611 responses. In addition to these 2 pilot sites and the RSC members, the EPV At-A-Glance Dashboard external module was downloaded by 6 other academic medical centers that were REDCap Consortium members but not affiliated with the EPV initiative. These unsolicited downloads suggested wider interest in implementing the RPPS among the REDCap community.

Discussion

Through the refinement and testing of setup, distribution, and data transfer for the RPPS, the EPV RSC produced a package of 4 products that streamlined new sites’ implementation of the RPPS. Two institutions were able to implement the RPPS using no outside funding, demonstrating the accessibility of EPV participation. The EPV framework on REDCap was planned to evolve over time to ensure it would be sufficiently flexible to accommodate changes in benchmarking needs, reporting requirements, and consensus modifications to questions. Over the 4-year course of the EPV project, we made 4 version changes to the .xml project file that were documented, shared, and implemented by sites. Initial implementation proved successful across the RSC and pilot sites, with results presented in another publication.15 Response rates were lower for younger participants, but the timing of survey administration and the sampling approach did not have a significant effect on response rates.15

Future efforts will continue to streamline and disseminate this framework to additional clinical research organizations to better understand participants’ perceptions of research. We will disseminate through the REDCap instrument library,16 EPV project website,13 CTSA’s Trial Innovation Network,17,18 and presentations at informatics, human protections, and translational research professional conferences.

The process for developing and implementing the RPPS could serve as a model for future initiatives that require multisite standardized data collection sourced from local systems and participants. Using REDCap enables collaborators to use common methods for applying data standards and data transfer. Through the process of consensus building, sites can download working versions of surveys from a DCC to test on their local REDCap instances and efficiently provide iterative feedback. Once data dictionaries are finalized, data transfer is seamless through the REDCap API and updates are easily managed through data dictionary versioning communicated by the DCC.

Conclusion

With engagement from multiple stakeholders in both technical and organizational roles, the EPV project succeeded in creating software and documentation that facilitate adoption of the RPPS and enable sites to generate and compare actionable data for improving participant experience. Understanding research participant perceptions will help improve experiences and satisfaction. These efforts are intended to accelerate participant recruitment in research and to enhance the quality of and support for broad and equitable clinical research.

Author contributions

Alex Cheng (Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Validation, Visualization), Eva Bascompte Moragas (Software, Resources), Ellis Thomas (Data curation, Project administration, Resources), Lindsay O’Neal (Project administration, Resources), Paul Harris (Conceptualization, Funding acquisition, Investigation, Methodology, Supervision), Ranee Chatterjee (Conceptualization, Funding acquisition, Investigation, Supervision), James Goodrich (Data curation, Formal analysis), Jamie Roberts (Project administration, Resources), Sameer Cheema (Project administration, Resources), Sierra Lindo (Project administration, Resources), Daniel E. Ford (Conceptualization, Funding acquisition, Investigation, Supervision), Liz Martinez (Project administration, Resources), Scott Carey (Software, Project administration, Resources), Ann Dozier (Conceptualization, Funding acquisition, Investigation, Supervision), Carrie Dykes (Conceptualization, Project administration, Funding acquisition, Investigation), Pavithra Panjala (Software, Data curation, Project administration, Resources), Lynne Wagenknecht (Conceptualization, Project administration, Funding acquisition, Supervision, Investigation), Joseph E. Andrews (Conceptualization, Project administration, Funding acquisition, Supervision, Investigation), Janet Shuping (Project administration, Resources), Derrick Burgin (Project administration, Resources), Nancy S. Green (Project administration, Supervision, Investigation), Siddiq Mohammed (Data curation, Project administration, Resources), Sana Khoury-Shakour (Project administration, Supervision, Investigation), Lisa Connally (Project administration, Supervision, Investigation), Cameron Coffran (Software, Data curation, Project administration, Resources), Adam Qureshi (Software, Data curation, Formal analysis, Resources), Natalie Schlesinger (Project administration, Supervision, Investigation), and Rhonda G. Kost (Conceptualization, Funding acquisition, Supervision, Investigation, Methodology, Validation).

Funding

This work was supported by a Collaborative Innovation Award from the National Center for Advancing Translational Sciences (NCATS) U01TR003206 to the Rockefeller University, and Clinical and Translational Science Awards UL1TR001866 (Rockefeller University), UL1TR002553 (Duke University), UL1TR003098 (Johns Hopkins University), UL1TR002001 (University of Rochester), UL1TR002243 (Vanderbilt University), UL1TR001420 (Wake Forest University Health Sciences), UL1TR001873 (Columbia University), and UM1TR004404 (University of Michigan). We also acknowledge the Recruitment Innovation Center NCATS U24TR004432 for their support. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

Conflicts of interest

None of the authors have any relevant competing interests to report.

Data availability

The data underlying implementation statistics will be shared on reasonable request to the corresponding author.

References

1. Wilkins CH, Edwards TL, Stroud M, et al. The recruitment innovation center: developing novel, person-centered strategies for clinical trial recruitment and retention. J Clin Transl Sci. 2021;5:e194.
2. Griffith DM, Jaeger EC, Bergner EM, et al. Determinants of trustworthiness to conduct medical research: findings from focus groups conducted with racially and ethnically diverse adults. J Gen Intern Med. 2020;35:2969-2975.
3. Zusman EE. HCAHPS replaces Press Ganey survey as quality measure for patient hospital experience. Neurosurgery. 2012;71:N21-N24.
4. Planner C, Bower P, Donnelly A, et al. Trials need participants but not their feedback? A scoping review of published papers on the measurement of participant experience of taking part in clinical trials. Trials. 2019;20:381.
5. Kost RG, Lee LM, Yessis J, et al.; Research Participant Perception Survey Focus Group Subcommittee. Assessing research participants’ perceptions of their clinical research experiences. Clin Transl Sci. 2011;4:403-413.
6. Yessis JL, Kost RG, Lee LM, et al. Development of a research participants’ perception survey to improve clinical research. Clin Transl Sci. 2012;5:452-460.
7. Kost RG, Lee LN, Yessis JL, et al. Research participant-centered outcomes at NIH-supported clinical research centers. Clin Transl Sci. 2014;7:430-440.
8. Kost RG, Rosa JCd. Impact of survey length and compensation on validity, reliability, and sample characteristics for ultrashort-, short-, and long-research participant perception surveys. J Clin Transl Sci. 2018;2:31-37.
9. Harris PA, Taylor R, Thielke R, et al. Research electronic data capture (REDCap) - a metadata-driven methodology and workflow process for providing translational research informatics support. J Biomed Inform. 2009;42:377-381.
10. Harris PA, Taylor R, Minor BL, et al.; REDCap Consortium. The REDCap consortium: building an international community of software platform partners. J Biomed Inform. 2019;95:103208.
11. Kost RG, Cheng A, Andrews J, et al. Empowering the participant voice (EPV): design and implementation of collaborative infrastructure to collect research participant experience feedback at scale. J Clin Transl Sci. 2024;8:e40.
12. Exploring EPV for your institution. Research. Accessed February 16, 2024. https://www.rockefeller.edu/research/epv/joining-epv/
13. Empowering the Participant Voice. Research. Accessed December 17, 2024. https://www.rockefeller.edu/research/epv/
14. HCAHPS Summary Analyses. Accessed February 16, 2024. https://hcahpsonline.org/en/summary-analyses/
15. Kost RG, Andrews J, Chatterjee R, et al. What research participants say about their research experiences in empowering the participant voice: outcomes and actionable data. J Clin Transl Sci. 2025:1-39.
16. Obeid JS, McGraw CA, Minor BL, et al. Procurement of shared data instruments for research electronic data capture (REDCap). J Biomed Inform. 2013;46:259-265.
17. Bernard GR, Harris PA, Pulley JM, et al. A collaborative, academic approach to optimizing the national clinical research infrastructure: the first year of the trial innovation network. J Clin Transl Sci. 2018;2:187-192.
18. Shah M, Culp M, Gersing K, et al. Early vision for the CTSA program trial innovation network: a perspective from the national center for advancing translational sciences. Clin Transl Sci. 2017;10:311-313.
