Abstract

The failure of animal studies to translate to effective clinical therapeutics has driven efforts to identify underlying causes and develop solutions that improve the reproducibility and translatability of preclinical research. Common issues revolve around study design, analysis, and reporting as well as standardization between preclinical and clinical endpoints. To address these needs, recent advancements in digital technology, including biomonitoring of digital biomarkers, development of software systems and database technologies, and application of artificial intelligence to preclinical datasets, can be used to increase the translational relevance of preclinical animal research. In this review, we describe how a number of innovative digital technologies are being applied to overcome recurring challenges in study design, execution, and data sharing, and to improve scientific outcome measures. Examples of how these technologies are applied to specific therapeutic areas are provided. Digital technologies can enhance the quality of preclinical research and encourage scientific collaboration, thus accelerating the development of novel therapeutics.

INTRODUCTION

The utility of animal experiments for predicting efficacy in clinical trials is under fire. In 2012, an estimated $56.4B (49% of all life science research) in the United States was spent on preclinical research.1 Unfortunately, over one-half of these funds were ultimately spent on studies that cannot be replicated. Recent analyses reported alarming irreproducibility rates ranging from 51% to 89%.2–6 When experiments cannot be replicated, it leads to a lack of confidence that the results will be translationally relevant.7–9 The magnitude of these economic costs, coupled with little benefit to drug development, has driven the scientific community to identify the underlying causes of failure and to develop solutions that improve the translational relevance of preclinical research.

Many comprehensive review articles exist detailing why results from preclinical studies fail to translate to successful therapies in the clinic. Regardless of the research area, commonly cited issues fall into 3 broad categories: (1) the choice of animal or disease model; (2) poor study design, analysis, and reporting; and (3) the lack of standardization between preclinical and clinical endpoints.7,9–11 This review will use these commonly cited issues as a starting place to explore how recent advances in digital technology are being used to improve the translational relevance of preclinical results to the clinic.

A number of mainstream and emerging technologies have been created for the digital capture, analysis, and sharing/reporting of data from preclinical studies. Software programs provide guidance on appropriate study design and analysis. Online databases enable researchers to organize and share data effectively. Innovative new digital technologies are specializing in biomonitoring of molecular and cellular,12 behavior and physiology,13–21 and imaging data,22–24 which can be further analyzed to identify biomarkers of disease using artificial intelligence.

Biomonitoring has many definitions, but in the context of this discussion, biomonitoring will be used in the broadest sense to refer to monitoring of any biomarker across time in the living animal. In 1998, the National Institutes of Health (NIH) Biomarkers Definitions Working Group defined “biomarker” as “a characteristic that is objectively measured and evaluated as an indicator of normal biological processes, pathogenic processes, or pharmacologic response to a therapeutic intervention.”25 Although biomarkers are often thought of as a molecular readout, the NIH definition allows for biomarkers to be captured automatically from devices that are implantable or located in the animal’s environment. In the clinic, the automatic stream of data from wearables and implantables is referred to as digital biomarkers.26 In our effort to bridge the species gap, we will refer to similar readouts in preclinical models as digital biomarkers.

This review is divided into 2 parts, specifically the use of biomonitoring and digital data technology for (1) study design, analysis, and reporting, and (2) improving scientific outcomes. Examples of how digital technologies are being implemented within specific therapeutic areas will be provided but are not meant to be exhaustive.

STUDY DESIGN, ANALYSIS, AND REPORTING

Poor translation of compound efficacy in animal studies to clinical trials is due, in part, to inadequate study methodology.27 Some methodological issues include study designs that are underpowered or without appropriate controls as well as inadvertent experimental bias during animal selection, allocation to groups, data collection, and analysis of study results.7,9,10 These issues can result in an over-estimation of the potential efficacy of therapeutics in preclinical animal studies, which then fail during clinical trials. As an example, a systematic review and meta-analysis of efficacy data for the compound FK506 in stroke models showed that as study quality increased (ie, authors reported the implementation of more rigorous design), the reported efficacy of FK506 decreased.28 Several principles of proper study design, execution, and data sharing are discussed below with respect to their importance for scientific rigor, challenges in proper implementation, and the use of digital technologies to overcome these challenges.

Study and Statistical Design

Challenge: Studies are often not designed or powered to answer the specific scientific question of interest29

Enhancing scientific rigor in preclinical studies maximizes the potential for clinical translation. Indeed, robust study designs and analyses yield more reliable and reproducible data. The framework for rigorous study design is dictated by a well-defined scientific hypothesis and primary measures.30,31 Funding sources now commonly require consideration of the following design components, highlighting their importance: rationale for selected model, appropriateness of controls, route and timing of experimental intervention, justification of sample size, and statistical methods used for analyses.32

Several parameters must be considered in model selection with an emphasis on translatability for clinical development. Such factors include control for genetic variability as well as anatomical, physiological, and biological relevance to humans.33 In addition, many models are subject to individual variation resulting from complex interactions between genetic and environmental influences. Therefore, controlling for animal characteristics (eg, strain, sex, weight, age), baseline performance (detailed below), testing environment (eg, temperature, humidity, lighting), and experimental intervention (eg, timing, frequency, duration, route) are central to reproducibility. Individual variation of animals within a model also underscores the impact of proper sample size selection. Underpowered experiments with inadequate animal numbers are incapable of generating meaningful data and often lead to inaccurate conclusions regarding therapeutic interventions.34
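The impact of sample size can be made concrete with a standard power calculation. The sketch below (in Python, with illustrative effect sizes and conventional error rates, not values drawn from any particular study) uses the common normal-approximation formula for a two-group comparison:

```python
import math
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate animals per group for a two-sample comparison
    (normal approximation: n = 2 * ((z_{1-a/2} + z_{power}) / d)^2)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)            # ~0.84 for 80% power
    n = 2 * ((z_alpha + z_beta) / effect_size) ** 2
    return math.ceil(n)

# Even a "large" standardized effect (Cohen's d = 0.8) requires ~25 animals
# per group, and a moderate effect (d = 0.5) ~63 -- more than many studies enroll.
print(n_per_group(0.8))  # 25
print(n_per_group(0.5))  # 63
```

The steep growth of the required sample size as the expected effect shrinks illustrates why studies planned without such a calculation are so often underpowered.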

Planning of statistical analyses before experimentation is an essential, although often overlooked, aspect of study design. Selecting a statistical approach after the collection of data can introduce unintentional biases into the interpretation of study findings. In addition, improper use and reporting of statistics can skew conclusions and misinform the reader. The prevalence of erroneous statistical analyses is evident in a sample of studies published in prestigious journals, in which a specific statistical error was identified in one-half of the studies.35 Common errors include improper selection of parametric or non-parametric tests, misuse of P values to indicate the magnitude of an effect or to accept the null hypothesis, and improper adjustments for multiple comparisons.29,36

Although broad design guidelines are available for investigators, adherence to these guidelines is subject to individual interpretation, and the guidelines are often consulted only after completion of experiments. Software programs have been created to facilitate study design and analysis to provide unbiased guidance for pre-experiment planning and subsequent communication.37 Such programs are often developed by interdisciplinary teams of specialists led by experts familiar with the generally poor reproducibility and clinical translatability of preclinical research. Researchers input experimental details to build a visual representation of the entire study. Algorithms in the software then analyze the features to produce bespoke feedback on areas of refinement, which may include suggestions on sample size, appropriateness of controls, confounding variables, and statistical methods.38 Software programs are not a substitute for statistical expertise or training in the scientific method but nonetheless provide tremendous utility for users with various levels of experience. The feedback provided does not serve as a restrictive design gateway but instead promotes critical design review with efforts toward increased scientific rigor.

Collecting Baseline Data and Randomization

Challenge: Lack of characterization and subsequent randomization of animals into treatment groups based on key outcome measures may skew data prior to study start and decrease the probability that significant group differences are meaningful.

Baseline Data

Baseline data refers to data collected prior to study start or the initiation of interventions. Collection of baseline data allows researchers to compare changes in outcome measures before and after intervention. Outcome measures fall into 3 broad categories: tissue biomarkers (eg, blood, urine, or other sample), behavior/physiology, or histopathology. Depending on the outcome measure, sufficient baseline data can be challenging or impossible to collect. In the case of blood-based biomarkers, a number of factors contribute to the frequent decision not to collect baseline measurements on animals: (1) the rodent’s small size often limits the ability to collect sufficient blood for both baseline and study data, especially in studies requiring frequent sampling; (2) the time required to collect and assay tissue for specialized biomarkers may be too lengthy to be effectively used to baseline the animal, especially if needed for subject selection and distribution into experimental groups (see randomization section below); and (3) the process of collecting blood or urine may alter the physiological or behavioral state of the animal and, as a consequence, confound scientific results.39,40

It is equally challenging to collect behavioral baselines using standard behavioral tasks (eg, tests of anxiety, motor function, cognition). Manual collection of behavioral data is labor intensive and time consuming. In addition, many behavioral tasks display poor test-retest reliability, with a different outcome recorded on subsequent trials,41 posing a problem for the collection of baseline data. As a result, researchers are tempted to abandon baseline readings of behavioral measures and instead may utilize a more easily accessible measure, such as body weight, for randomization and group distribution. However, body weight, for example, is not consistently predictive of behavioral activity.42,43 As a result, experimental groups may be unbalanced for the primary measure, which can lead to skewed findings and misinterpretation of the data.

Histopathology is another important outcome measure; however, its terminal nature makes it impossible to collect a baseline measurement. Therefore, scientists frequently choose to include several additional experimental groups at key time points to create a disease time course or monitor treatment efficacy over time. This strategy greatly increases the number of animals required on study and provides only indirect evidence as to the disease state of animals not collected at that time point.

A number of digital technologies available in the market can help overcome the challenges in acquisition of baseline data. For blood biomarkers, researchers have developed a blood collection technique called microsampling that allows analysis of biomarkers on small samples of blood (10–20 μL).24,44 Following this trend, automated blood collection systems now allow researchers to collect small samples from tethered research animals without human interaction and at any pre-set time of day or night.45 For behavioral analysis, the scoring of many standard tasks has also been automated. The apparatus may have built-in sensors or use specialized computer programs that will track and score behavior,13–15,17,19–21,46 making it easier to collect baseline data. In vivo imaging capabilities have significantly advanced, allowing researchers deeper insights into scientific questions that could previously only be explored using terminal histopathology methods.24,47 For example, researchers can monitor tumor location and size using bioluminescence or fluorescence techniques23,48 and potentially even monitor the uptake of drugs in real-time.49

The collection of baseline data has the potential to improve translational relevance of animal studies by allowing subjects to serve as their own comparator on study. Baseline values can naturally vary between animals. For example, an animal can have a physiologically relevant increase or decrease in blood chemistry value that would still fall within normal limits based on published datasets. Collective sharing of large baseline datasets within the scientific community has the potential to improve translational value of animal studies by allowing the better establishment of normal ranges for outcome measures that can then potentially be used as more relevant exclusion criteria. Researchers may decide to exclude as outliers those animals whose values fall outside the normal range, rather than relying on more arbitrary criteria (eg, excluding subjects whose values are more than 2 SDs from group averages).50,51 These strategies involving the use of baseline data ultimately reduce variability to enhance data quality.
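The difference between the two exclusion strategies can be illustrated with a short sketch (Python; the glucose values and reference range below are hypothetical, not from any published dataset). A cohort's own 2-SD rule can miss a physiologically abnormal value when that value inflates the group's spread, whereas a range established from large shared baseline datasets does not depend on the cohort at hand:

```python
from statistics import mean, stdev

def flag_outliers_sd(values, k=2.0):
    """Conventional approach: flag values more than k SDs from this cohort's mean."""
    m, s = mean(values), stdev(values)
    return [v for v in values if abs(v - m) > k * s]

def flag_outliers_range(values, low, high):
    """Range-based approach: flag values outside a normal range established
    from a large shared baseline dataset (bounds here are illustrative)."""
    return [v for v in values if not (low <= v <= high)]

# Hypothetical blood-glucose baselines (mg/dL) for one cohort of eight animals:
glucose = [110, 155, 118, 148, 125, 160, 130, 185]
print(flag_outliers_sd(glucose))              # [] -- 185 inflates the SD and escapes the 2-SD rule
print(flag_outliers_range(glucose, 90, 180))  # [185] -- caught by the shared reference range
```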

Randomization

Randomization refers to the process of assigning subjects (by chance) to an experimental group to create homogeneous or counter-balanced treatment groups. Randomization of animals into groups prevents potential bias (conscious or unconscious) that may ultimately result in misinterpretation of outcome data. For example, randomization prevents healthier animals from being preferentially assigned to the treatment group, which may later be misinterpreted as a positive therapeutic effect. Systematic reviews of animal studies found that, although randomization is being increasingly reported in the literature,52,53 the percentage of studies that randomize is still less than 50%.54,55 Two common methods employed for randomization are (1) simple or block randomization, and (2) stratified randomization.56,57

In simple or block randomization, animals are assigned to treatment groups randomly using a random number generator. For small studies with fewer than 100 animals, block randomization is used to ensure that the sample size is balanced between groups. Free digital tools are available online to assist researchers with this method.58 The absence of baseline outcome measures to compare among experimental groups for randomization may lead to skewed groups prior to study start and confound data interpretation.
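A minimal sketch of block randomization follows (Python; the animal identifiers and group labels are hypothetical). Each block contains an equal share of every group, so group sizes stay balanced however many blocks are enrolled:

```python
import random

def block_randomize(animal_ids, groups, block_size=None, seed=None):
    """Assign animals to treatment groups in shuffled blocks so that
    group sizes remain balanced throughout enrollment."""
    rng = random.Random(seed)
    if block_size is None:
        block_size = len(groups)
    assert block_size % len(groups) == 0, "block size must be a multiple of group count"
    reps = block_size // len(groups)
    assignment = {}
    for start in range(0, len(animal_ids), block_size):
        block = list(groups) * reps          # balanced block, eg [A, B, A, B]
        rng.shuffle(block)                   # random order within the block
        for animal, group in zip(animal_ids[start:start + block_size], block):
            assignment[animal] = group
    return assignment

# Hypothetical 12-animal study randomized in blocks of 4:
ids = [f"rat-{i:02d}" for i in range(12)]
assign = block_randomize(ids, ["vehicle", "treated"], block_size=4, seed=1)
counts = {g: list(assign.values()).count(g) for g in ["vehicle", "treated"]}
print(counts)  # {'vehicle': 6, 'treated': 6} -- balanced by construction
```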

Stratified randomization, in contrast, allows researchers to control and balance animals in groups based on key outcome measures.56,59 This method is more challenging than block randomization because it requires (1) collection of baseline data prior to randomization, and (2) availability of randomization software solutions, especially as the complexity of relevant baseline data increases, both with regard to the types of data that can be collected as well as the number of data points collected for each measure. To match the growing data collection capabilities, a free online software program was created to allow researchers to work with larger datasets.60
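One common way to implement stratified randomization on a single baseline measure is to rank animals by that measure and then randomize within consecutive strata; a minimal sketch (Python; the body-weight values are hypothetical, and real tools typically handle multiple baseline measures at once):

```python
import random

def stratified_randomize(baselines, groups, seed=None):
    """Stratified randomization on one baseline measure: rank animals by the
    measure, then randomly assign each consecutive stratum across the groups."""
    rng = random.Random(seed)
    ranked = sorted(baselines, key=baselines.get)   # animals ordered by baseline value
    assignment = {}
    for start in range(0, len(ranked), len(groups)):
        stratum = ranked[start:start + len(groups)]
        order = list(groups)
        rng.shuffle(order)                          # chance decides who gets which group
        for animal, group in zip(stratum, order):
            assignment[animal] = group
    return assignment

# Hypothetical baseline body weights (g) for eight mice:
weights = {"m1": 18.2, "m2": 24.9, "m3": 21.4, "m4": 19.8,
           "m5": 23.1, "m6": 20.5, "m7": 22.0, "m8": 25.6}
assign = stratified_randomize(weights, ["vehicle", "treated"], seed=7)
for grp in ["vehicle", "treated"]:
    grp_mean = sum(weights[a] for a, g in assign.items() if g == grp) / 4
    print(grp, round(grp_mean, 2))   # group means stay close, unlike simple randomization
```

Because every stratum contributes one animal to each group, the resulting groups are matched on the baseline measure while the assignment within each stratum remains random.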

Stratified randomization improves animal study translation by enhancing the quality of key outcome measures collected at study start that will ultimately influence the interpretation of results. These software tools are an important first step toward enabling researchers to use scientifically sound principles to randomize animals on study. In the future, more sophisticated randomization tools should be built into software programs that direct study conduct and collection of outcome measures.

Blinding

Challenge: Experimenter bias can inadvertently alter study results when technicians performing work with the animals are not blinded to experimental groups.

Similar to randomization, blinding of experimenters to treatment groups prevents potential bias that can influence study results during scoring of disease parameters or in simply handling the animals. These biases (conscious or unconscious) may influence study outcomes toward drugs showing more positive effects on disease. Blinding was not reported in 67.3% of sampled preclinical cardiovascular studies and did not improve over the 10-year evaluation period.54 In addition, analysis of animal models that contained results from both non-blinded and blinded studies showed that the odds ratio was exaggerated by 59% in non-blinded studies, relative to blinded studies.61 As much as possible, researchers are looking to reduce this bias through the introduction of digital biomonitoring technologies, which remove human-animal interaction altogether. Examples of how automated digital technologies can reduce experimenter bias will be elaborated on later.

Data Sharing

Challenge: Sharing of experimental details and raw data is restricted, making it challenging for the scientific community to identify translationally relevant animal models and outcome measures.

Restricted data sharing slows down the scientific community’s ability to identify the most translationally relevant animal models and outcome measures.7,10,62 Here we explore the methods used to share data, the nature of the shared data, and how digital strategies can revolutionize data sharing in the future.

Today, researchers most commonly share data through publications. The preferential acceptance and publication of novel research findings restricts the sharing of data from studies with negative results, studies that only replicate previous findings, or studies that fail to reproduce prior research.7,10,62,63 This bias in publication may impede identification of the most translationally relevant animal models. The utility of specific animal models and outcome measures can best be assessed by the scientific community when systematic review of all the data (both positive and negative) is available for evaluation.

Methodological flaws in animal studies are also thought to explain, in part, translational failure.7–11 To allow the scientific community to adequately evaluate published results, sharing of detailed methods as well as raw data in publications is important. The Animals in Research: Reporting In Vivo Experiments (ARRIVE) guidelines, a 20-item checklist describing the minimum information required, were published to facilitate sharing of sufficient methodological details.64 More details surrounding environmental conditions (eg, temperature and humidity), diet/water source, and other methodological details, in partnership with stricter enforcement of these guidelines, will allow for better comparison and replication of studies. In addition to detailed methods, many journals now encourage or require sharing of raw data through supplementary documents or deposition into a publicly available repository. Despite marked improvements, animal studies are still often published with inadequate reporting of experimental details and without access to raw data.

Some scientific communities are accelerating the data sharing process by capitalizing on online digital tools to develop searchable databases with data from both published and unpublished peer-reviewed studies. Preclinical studies of Alzheimer’s Disease have suffered more than most research areas from poor translation of preclinical efficacy models to the clinic. The National Institute on Aging and the NIH Library created the Alzheimer’s Disease Preclinical Efficacy Database, a free online searchable platform that curates study information on animal models, rigorous study design principles, outcome measures, and conclusions.65 According to its mission statement, the database was “designed to help identify the critical data, design elements and methodology missing from studies; making them susceptible to misinterpretation, less likely to be reproduced and reducing their translational value.”66

In the future, the explosion of Internet of Things (IoT) sensors and cloud computing has the potential to enhance the data collected and shared. The IoT, or connection of any laboratory device to the internet, can automatically capture and record study data as well as corresponding experimental conditions.67 For animal studies, data may include information from facility-monitoring devices (eg, comprehensive room temperature and humidity) as well as the study (eg, time the data were collected, personnel responsible, and outcome measurement). Placing the outcome measures in the broader context of how the data were collected is integral to understanding the effect of the environment on study results. Numerous open-access online repositories exist to deposit and share data, spanning different fields from genomics (National Center for Biotechnology Information) to phenomics.68 Although cloud-based databases will make the storage, retrieval, sharing, and analysis of complete data sets more accessible between studies within a laboratory or across laboratories, community standards for data sharing must be established. Ideally, these standards accommodate variations in data type and size across research interests. Data formatting standards should be designed around goals for database use, such as findings verification, additional analyses, and novel hypothesis generation. Other topics for consideration include security, data integrity, database maintenance and costs, as well as data ownership and usage. Work groups and consortiums, representing the scientific communities’ interests, are well positioned to establish standards and continue to update practical and ethical guidelines for cloud-based sharing of research data.

IMPROVING SCIENTIFIC OUTCOME MEASURES

Improving study methodology and data sharing ensures that the scientific community has access to unbiased data, which can be used to determine the animal models and outcome measures that best predict the clinical efficacy of drugs. However, even with the conduct of scientifically rigorous studies, animal studies may still poorly predict clinical results when the outcome measures used are not specific, sensitive, or translatable between animal models and patients.9,69 Here we review how innovative biomonitoring and digital technologies are being used to refine and develop new translatable outcome measures.

Replacing Subjective Outcome Measures With Automated Data Collection

Challenges: Lack of consistency between technicians or experimenter bias can inadvertently alter study results. Infrequent data collection during time periods that may not be relevant to understanding disease profile or treatment can limit data interpretation.

Humans introduce variability and potential bias when they measure, rate, or score an outcome measure. In preclinical studies, variability can arise from inherent biological differences, through introduction by the experimenter, or through the environment.8,70 Experimental bias can arise consciously or subconsciously during study design (see section on blinding) or throughout study execution.63 Variable or biased measurements are not restricted to subjective scoring or rating systems (eg, when an animal is assigned a number or attribute based on physical characteristics); they can also arise during the collection of measurements, for example, when tumor size in oncology models is measured with calipers.71 Experimental bias can also be subtle, such as when an experimenter handles diseased animals more gently. Both experimental variability and bias can decrease the quality and integrity of data gathered, leading to possible misinterpretations of the data and irreproducibility.

Biomonitoring, or the collection of digital biomarkers, has the potential to significantly reduce variability associated with experimenter bias. Numerous digital technologies are available that automate collection of study data. At a minimum, these technologies still rely on human-animal interaction to acquire the measurement but eliminate the variability and bias associated with subjective scoring or measurement. For example, subcutaneous tumor size can be measured using devices that automatically acquire and analyze 3D images of tumors.71–73 Another example is the assessment of gait using automated analysis of lateral and ventral videos as animals walk on a treadmill.74,75 Although these technologies are an improvement over manual measurement methods, they do not eliminate the potential effects of animal handling or the animal’s response to the procedure itself.

As digital technologies continue to evolve, many systems are focused on eliminating the human-animal interaction altogether. A number of systems have been developed that collect data from sensors implanted in the animal or placed around the home cage. For example, implantable telemetry devices can be used to measure heart rate, blood pressure, respiration, electrocardiogram, body temperature, electroencephalogram, and glucose in freely moving animals.76–78 Specialized caging systems measure a number of metabolic parameters, including activity, body weight, food and water consumption, as well as energy expenditure (eg, volume of oxygen (VO2), volume of carbon dioxide (VCO2), and respiratory quotient (RQ)) in the home cage environment.79,80 Other systems automatically extract data from video collected from animals in the home cage using computer vision algorithms. These monitoring systems can measure a multitude of behaviors such as walking, climbing, wheel running, scratching, grooming, eating, drinking, and breathing rate throughout the course of a study.13–17,19–21,46 A few specialized home cage systems even allow for the assessment of more complex behaviors, including social and cognitive behaviors.16,81

Biomonitoring of digital biomarkers also has the added advantage of collecting data at more biologically relevant time points. Data collection during the night, when rodents are naturally active, can be challenging using standard outcome measures. Therefore, data are most often collected from animals on study during the daytime (ie, when the lights are on and rodents are asleep), which may not be translationally relevant. For example, research shows that cardiovascular health, disease, and efficacy of treatment are fundamentally tied to time of day.82 Given that upwards of 43% of genes show circadian rhythms in transcription,83 it is possible that many disease processes are affected by time-of-day effects. To improve translational relevance of animal studies, digital biomonitoring strategies provide researchers with the data to tease apart the effects of time of day, without disturbance to the animal, and improve the design and conduct of studies to match clinical design.

Another advantage of continuous biomonitoring is the ability to collect more data points. There is natural variation in behavior and physiology across hours or days within a single animal. For example, heart rate and blood pressure can vary depending on a number of factors, including the time of day, amount of sleep, and current sleep state of the animal.84,85 By collecting multiple time points, often across several days, a more accurate and holistic assessment of the behavior or physiological state of an animal can be determined. Similarly, in the clinic, researchers are finding that the power of step count to predict disease state such as multiple sclerosis is increased when over 2 days of data are averaged.86
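The statistical benefit of averaging repeated measurements can be shown with a small simulation (Python; the heart-rate mean and day-to-day variability below are invented for illustration, not real telemetry values). Averaging n days of a noisy daily measure shrinks its spread roughly by a factor of the square root of n:

```python
import random
from statistics import stdev

rng = random.Random(0)
TRUE_HR, DAY_NOISE = 320.0, 20.0   # hypothetical rat heart rate (bpm) and day-to-day SD

def observed_mean(n_days):
    """Average of n_days noisy daily readings for one animal."""
    return sum(rng.gauss(TRUE_HR, DAY_NOISE) for _ in range(n_days)) / n_days

# Spread of the estimate across 500 simulated animals, 1-day vs 4-day averages:
one_day = [observed_mean(1) for _ in range(500)]
four_day = [observed_mean(4) for _ in range(500)]
print(round(stdev(one_day), 1))    # close to 20 bpm
print(round(stdev(four_day), 1))   # close to 10 bpm -- noise roughly halved by averaging 4 days
```

A halving of measurement noise translates directly into smaller required group sizes for the same statistical power, which is why continuous multi-day recordings are so valuable.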

The future of biomonitoring technologies is promising. New technologies are constantly being developed that automatically collect biomarker data. One area of active innovation is the collection of molecular biomarkers from freely moving animals. Although it is currently possible to collect a small number of metabolites, such as glucose, continuously in real-time, scientists are developing novel strategies to collect any drug, metabolite, or other molecular biomarker in real-time.12,87 The ability to perform such measurements and integrate them with other outcome measures (eg, behavior) will improve our understanding of the complex interplay between genetics, behavior/physiology, environment, and compound pharmacology in animal studies, further increasing their translational relevance to human data.

As more objective phenotypic data are collected across a range of behaviors or physiological parameters, scientists are exploring the possibility that machine learning and artificial intelligence can be used to identify behavioral signatures that better predict disease state or recovery following administration of a therapeutic compound. Proof-of-concept animal studies across multiple disease models, including Huntington’s Disease and rheumatoid arthritis, suggest the validity of this approach.88,89 This approach mirrors that being taken with digital biomarker data collected from patients in clinical trials.90–92 The potential creation of comparable digital signatures between animals and humans represents a potential opportunity to improve animal study translation.
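As a conceptual illustration only (not any published method), a behavioral signature can be as simple as a class centroid in a feature space of home-cage digital biomarkers; the sketch below classifies synthetic animals with a nearest-centroid rule, where the feature names, group labels, and numbers are all invented:

```python
import math
import random

def centroid(vectors):
    """Per-dimension mean of a list of equal-length feature vectors."""
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(len(vectors[0]))]

def nearest_centroid(x, centroids):
    """Assign x to the class whose training centroid is closest (Euclidean)."""
    return min(centroids, key=lambda label: math.dist(x, centroids[label]))

rng = random.Random(3)
def animal(mu):
    """One animal's nightly feature vector (distance moved, wheel turns,
    grooming time) with ~10% individual variation around the group profile."""
    return [rng.gauss(m, m * 0.1) for m in mu]

train = {"wild-type": [animal([120, 900, 300]) for _ in range(20)],
         "disease":   [animal([70, 400, 450]) for _ in range(20)]}
centroids = {label: centroid(vecs) for label, vecs in train.items()}

test_animal = animal([75, 420, 440])             # unseen animal with a disease-like profile
print(nearest_centroid(test_animal, centroids))  # "disease"
```

Real behavioral-signature work uses far richer feature sets and more powerful classifiers, but the principle is the same: a profile of automatically collected digital biomarkers, rather than any single readout, is compared against learned disease and control signatures.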

Inclusion of Functional Assessments and Basic Animal Health Measures

Challenge: Preclinical studies often use biomarkers or histology rather than the more common functional assessments used in the clinic.

Preclinical animal studies frequently use molecular biomarkers or histological endpoints to assess the efficacy of a compound. The use of these types of outcome measures has a long history of acceptance and understanding. The opposite is true in clinical trials, in which functional outcomes are most easily and commonly measured. Stroke research provides a prime example of the translational gap between preclinical and clinical assessments. In animal studies, stroke volumes are assessed histologically using chemical markers, whereas clinical trials measure functional loss and recovery in patients.93 Because ultimately the goal of a therapeutic is to improve disease symptoms and the quality of life in patients, many in the scientific community suggest that animal studies should strive to include similar functional measurements.94,95

Biomonitoring may provide outcome measures that better bridge the gap for assessing functional improvements. In stroke, advances in in vivo imaging technologies allow longitudinal monitoring of similar functional outcomes in both animal studies and clinical trials.96,97 For example, magnetic resonance imaging penumbra scans can identify and track the effects of therapeutic interventions on tissue that is still capable of recovery in both rodent studies and clinical trials.98 Advances in functional magnetic resonance imaging allow parallel investigation into functional recovery through monitoring of brain activity in both clinical and animal studies.97 Established behavioral tasks can be used to monitor neurological status and motor reflexes following stroke in animal studies99 and are a good first step toward measuring functional recovery. In the future, continuous behavioral characterization of stroke models in the home cage (no reports published to date) may be a great opportunity to sensitively monitor functional deficits and overall well-being that may be directly comparable with early clinical research with Fitbits in stroke patients.100

These are examples of how the stroke research community is using technological advances to develop functional biomarkers that are translatable between preclinical studies and the clinic. Stroke researchers are not alone; other research communities are using similar technology-driven approaches to identify better biomarkers of functional assessment.

Standardization of Measures Between Preclinical and Clinical Trials (Cross-Validation of Outcome Measures)

Challenge: Outcome measures used in preclinical studies may be mismatched with measures used in clinical trials.

The failure of preclinical efficacy to translate to the clinic may be due to a mismatch between outcome measures used in animal studies and the clinic.9 The benefits of using molecular translational biomarkers, or biomarkers that can be measured across different species, as outcome measures have been well-documented69,101 and are supported by guidance documents and programs developed by the NIH and Food and Drug Administration to identify and validate these biomarkers across diseases.102

Biomonitoring of digital biomarkers can provide additional and novel translational outcome measures. As with molecular biomarkers, relevant physiological or behavioral translational biomarkers respond similarly and are driven by comparable underlying pathways in the animal and the human. For example, in psychiatric disorders, prepulse inhibition, a neurological phenomenon in which a weaker prestimulus (prepulse) dampens the reaction to a subsequent stronger, reflex-eliciting stimulus (pulse), shows similar response characteristics in rodents and humans and predicts antipsychotic drug effects.103 In pain research, researchers are moving away from evoked pain responses and toward testing compound efficacy by measuring pain-induced, depression-like behavior (eg, reduced feeding, locomotor activity, burrowing, and wheel running) because these outcome measures are more translationally relevant and consistent with behavioral depression and verbal reports of pain in the clinic.104 Early indications suggest that using activity-based digital biomarkers collected from animals in the home cage to measure depression-like behavior could provide sensitive readouts for pain research.105
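Part of what makes prepulse inhibition translatable is that the measure itself is a simple, species-agnostic ratio of startle amplitudes. The sketch below is illustrative only; the function name and example amplitudes are our assumptions, not data or methods from any cited study:

```python
def percent_ppi(pulse_alone: float, prepulse_pulse: float) -> float:
    """Percent prepulse inhibition: the fractional reduction in startle
    amplitude when a weak prepulse precedes the startling pulse.
    The same calculation applies to rodent whole-body startle and
    human eyeblink electromyography."""
    if pulse_alone <= 0:
        raise ValueError("pulse-alone startle amplitude must be positive")
    return 100.0 * (pulse_alone - prepulse_pulse) / pulse_alone

# Hypothetical startle amplitudes (arbitrary units)
print(percent_ppi(pulse_alone=200.0, prepulse_pulse=50.0))  # 75.0
```

A drug that normalizes this percentage in a rodent model can thus be compared against the identically defined readout in patients.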

Ideally, once an appropriate animal model is identified, animal studies and clinical trials would be designed to ask the same questions using the same objective outcome measures. The use of digital biomarkers as objective translational outcome measures is an interesting future possibility. In the clinic, the investigation of digital biomarkers as novel outcome measures for clinical trials has exploded following the recent proliferation of connected wearables and implanted devices as well as the creation of cloud infrastructure for complex real-time computation and rapidly advancing machine learning algorithms. The identification of digital biomarker signatures in the clinic opens up opportunities to simultaneously develop translationally relevant signatures in animal models of disease, such as multiple sclerosis, Huntington’s disease, Alzheimer’s disease, and rheumatoid arthritis.

CONCLUSION

The preclinical scientific community is beginning to incorporate innovative new technologies to improve the predictive power of preclinical animal studies. Here we have highlighted several examples of how the implementation of 4 core digital technologies is playing a key role in improving scientific rigor and the quality of outcome measures, 2 of the most commonly cited issues limiting animal study translation today. The 4 core digital technologies and their roles in enhancing the clinical translation of animal studies include:

  1. Software systems: Free or subscription-based programs that provide researchers with guidance on principles of rigorous scientific study design, conduct, and analysis. Together with the emergence of the IoT, software programs can electronically record and track detailed study methods (eg, diet, water source, etc) and experimental conditions (eg, time stamps of procedures, precise temperature and humidity in animal rooms, personnel performing procedures, etc) from laboratory devices.

  2. Biomonitoring of digital biomarkers: Collecting outcome measures semi-automatically or automatically from devices that are implantable or located in the animal’s environment. The transition of data collection from analog to digital will reduce bias by eliminating subjective measurements and removing many confounding effects introduced by the human-animal interaction. Biomonitoring also increases the number of data points that can be collected over more biologically relevant time periods. In the future, innovative new technologies will continue to enable the development of more translational digital biomarkers, including those that measure functional outcomes.

  3. Database technologies: Large online or on-premises databases allow easy storage, organization, retrieval, and sharing of study data, including detailed study methods, experimental conditions, and outcome measures. Storage of study data across multiple organizations not only enables the identification of better models and outcome measures to enhance study translation but also creates new collaborative opportunities for hypothesis-driven research.

  4. Artificial intelligence (AI): A combination of digital biomarkers is used to create digital signatures that better predict disease state and efficacy of a compound. AI opens up opportunities for creating translationally relevant digital biomarkers that can be standardized between preclinical and clinical studies.
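The fourth item above, combining digital biomarkers into a predictive signature, can be sketched in a few lines. The example below is a deliberately minimal illustration, not any published model: it z-scores each of a hypothetical animal's home-cage biomarkers against control-group statistics and averages them into one composite score, where a real application would train machine learning models on far larger datasets:

```python
from statistics import mean, stdev

def composite_signature(animal: dict, controls: list) -> float:
    """Average z-score of an animal's digital biomarkers relative to a
    control group; values far from 0 indicate deviation from controls."""
    zs = []
    for biomarker, value in animal.items():
        ctrl = [c[biomarker] for c in controls]
        mu, sd = mean(ctrl), stdev(ctrl)
        zs.append((value - mu) / sd)
    return mean(zs)

# Hypothetical home-cage biomarkers (activity counts, distance traveled)
controls = [
    {"activity": 100, "distance": 50},
    {"activity": 110, "distance": 55},
    {"activity": 90,  "distance": 45},
]
diseased = {"activity": 60, "distance": 30}
print(composite_signature(diseased, controls))  # -4.0
```

The same composite, computed from wearable-derived biomarkers in patients, is what would allow a signature to be standardized between preclinical and clinical studies.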

The full implementation of digital technologies in preclinical research is arguably in its early stages. Although these technologies are slowly being introduced into research programs, the real potential to transform animal study translation will only be realized when all technologies are integrated together. For example, the ability of AI to uncover translationally relevant biological signatures of disease requires large datasets of digital outcome measures stored in well-organized databases. However, this ability will be limited if rigorous standards for study design are not upheld; in other words, flawed study data will yield only meaningless disease signatures.
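Keeping outcome measures bound to the experimental conditions under which they were collected is the precondition for any downstream AI analysis. Even a lightweight relational store can express this link; the schema, table names, and values below are purely illustrative assumptions:

```python
import sqlite3

# In-memory database for illustration; a real deployment would use a
# shared on-premises or cloud-hosted store. Schema and values are invented.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE studies (
    study_id INTEGER PRIMARY KEY,
    model TEXT, diet TEXT, room_temp_c REAL)""")
con.execute("""CREATE TABLE outcomes (
    study_id INTEGER REFERENCES studies(study_id),
    animal_id TEXT, biomarker TEXT, value REAL, recorded_at TEXT)""")

con.execute("INSERT INTO studies VALUES (1, 'MCAO stroke, rat', 'standard chow', 21.5)")
con.execute("INSERT INTO outcomes VALUES (1, 'R001', 'home_cage_activity', 412.0, '2021-05-11T03:00')")
con.execute("INSERT INTO outcomes VALUES (1, 'R001', 'home_cage_activity', 388.0, '2021-05-12T03:00')")

# Retrieve outcome measures joined to the conditions they were collected under
row = con.execute("""SELECT s.model, AVG(o.value)
                     FROM outcomes o JOIN studies s USING (study_id)
                     WHERE o.biomarker = 'home_cage_activity'""").fetchone()
print(row)  # ('MCAO stroke, rat', 400.0)
```

Because methods and environmental conditions live in the same store as the measurements, a signature flagged later by an AI model can always be traced back to how the data were generated.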

Deeper collaboration between researchers is required to fully implement digital technologies and to ensure the collection of datasets sufficient to identify the most appropriate models and translationally relevant outcome measures. Implementation of digital technologies may come at a cost; therefore, researchers and organizations should investigate ways to share the cost burden, similar to what is already done with equipment sharing in core facilities. The broad sharing of detailed datasets is often met with resistance from both academic and industry researchers for good reason: academic scientists rely on these datasets for publication and future grant funding, whereas the pharmaceutical industry may be concerned with providing valuable data to competitors.106 Ongoing discussions within research communities may identify ways of sharing detailed data precompetitively. For academics, this may involve sharing data; for industry, precompetitive data may include detailed data from model development studies and from all non-drug experimental groups.

Preclinical research communities will benefit from cross-functional efforts with data scientists and clinicians using digital technologies to solve similar problems. Clinical trials are also moving toward more automated data collection and analysis strategies to (1) quantify symptoms with greater sensitivity and objectivity, (2) assess symptoms more frequently, enabling greater accuracy, and (3) passively monitor patients at home, providing ecologically valid data against which to assess treatment efficacy.90 A key opportunity exists for the preclinical and clinical research communities to learn from one another.

In conclusion, successful animal study translation relies on continued technological innovation. These innovations improve the processes leading not only to study rigor and integrity but also to the identification of the most translationally relevant models and outcome measures.

ACKNOWLEDGEMENTS

Financial support. No funding sources were provided for this work.

Potential conflicts of interest. All authors: No reported conflicts.

References

1. American Association for the Advancement of Science. AAAS Report XXXVIII: Research and Development FY 2014. Washington, DC, USA: American Association for the Advancement of Science (AAAS); 2013.

2. Begley CG, Ellis LM. Drug development: raise standards for preclinical cancer research. Nature 2012;483(7391):531–533.

3. Glasziou PP, Chalmers I, Green S, et al. Intervention synthesis: a missing link between a systematic review and practical treatment(s). PLoS Med 2014;11(8):e1001690.

4. Hartshorne JK, Schachner A. Tracking replicability as a method of post-publication open evaluation. Front Comput Neurosci 2012;6:8.

5. Prinz F, Schlange T, Asadullah K. Believe it or not: how much can we rely on published data on potential drug targets? Nat Rev Drug Discov 2011;10(9):712.

6. Vasilevsky NA, Brush MH, Paddock H, et al. On the reproducibility of science: unique identification of research resources in the biomedical literature. PeerJ 2013;1:e148.

7. Freedman LP, Cockburn IM, Simcoe TS. The economics of reproducibility in preclinical research. PLoS Biol 2015;13(6):e1002165.

8. Kafkafi N, Agassi J, Chesler EJ, et al. Reproducibility and replicability of rodent phenotyping in preclinical studies. Neurosci Biobehav Rev 2018;87:218–232.

9. Kannt A, Wieland T. Managing risks in drug discovery: reproducibility of published findings. Naunyn Schmiedeberg's Arch Pharmacol 2016;389(4):353–360.

10. Ioannidis JPA, Kim BYS, Trounson A. How to design preclinical studies in nanomedicine and cell therapy to maximize the prospects of clinical translation. Nat Biomed Eng 2018;2(11):797–809.

11. van der Worp HB, Howells DW, Sena ES, et al. Can animal models of disease reliably inform human studies? PLoS Med 2010;7(3):e1000245.

12. Vieira PA, Shin CB, Arroyo-Curras N, et al. Ultra-high-precision, in-vivo pharmacokinetic measurements highlight the need for and a route toward more highly personalized medicine. Front Mol Biosci 2019;6:69.

13. Pernold K, Iannello F, Low BE, et al. Towards large scale automated cage monitoring - diurnal rhythm and impact of interventions on in-cage activity of C57BL/6J mice recorded 24/7 with a non-disrupting capacitive-based technique. PLoS One 2019;14(2):e0211063.

14. Brenneis C, Westhof A, Holschbach J, et al. Automated tracking of motion and body weight for objective monitoring of rats in colony housing. J Am Assoc Lab Anim Sci 2017;56(1):18–31.

15. Bains RS, Cater HL, Sillito RR, et al. Analysis of individual mouse activity in group housed animals of different inbred strains using a novel automated home cage analysis system. Front Behav Neurosci 2016;10:106.

16. Fischer M, Cabello V, Popp S, et al. Rsk2 knockout affects emotional behavior in the IntelliCage. Behav Genet 2017;47(4):434–448.

17. Jhuang H, Garrote E, Mutch J, et al. Automated home-cage behavioural phenotyping of mice. Nat Commun 2010;1:68.

18. Niemeyer JE. Telemetry for small animal physiology. Lab Anim (NY) 2016;45(7):255–257.

19. Redfern WS, Tse K, Grant C, et al. Automated recording of home cage activity and temperature of individual rats housed in social groups: The Rodent Big Brother project. PLoS One 2017;12(9):e0181068.

20. Richardson CA. Automated homecage behavioural analysis and the implementation of the three Rs in research involving mice. Altern Lab Anim 2012;40(5):P7–P9.

21. Alexandrov V, Brunner D, Hanania T, et al. High-throughput analysis of behavior for drug discovery. Eur J Pharmacol 2015;750:82–89.

22. Bocci G, Buffa F, Canu B, et al. A new biometric tool for three-dimensional subcutaneous tumor scanning in mice. In Vivo 2014;28(1):75–80.

23. Imamura T, Saitou T, Kawakami R. In vivo optical imaging of cancer cell function and tumor microenvironment. Cancer Sci 2018;109(4):912–918.

24. Lauber DT, Fulop A, Kovacs T, et al. State of the art in vivo imaging techniques for laboratory animals. Lab Anim 2017;51(5):465–478.

25. Biomarkers Definitions Working Group. Biomarkers and surrogate endpoints: preferred definitions and conceptual framework. Clin Pharmacol Ther 2001;69(3):89–95.

26. Cohen AB, Mathews SC. The digital outcome measure. Digit Biomark 2018;2(3):94–105.

27. Cook D, Brown D, Alexander R, et al. Lessons learned from the fate of AstraZeneca's drug pipeline: a five-dimensional framework. Nat Rev Drug Discov 2014;13(6):419–431.

28. Macleod MR, O'Collins T, Horky LL, et al. Systematic review and metaanalysis of the efficacy of FK506 in experimental stroke. J Cereb Blood Flow Metab 2005;25(6):713–721.

29. Aban IB, George B. Statistical considerations for preclinical studies. Exp Neurol 2015;270:82–87.

30. de Caestecker M, Humphreys BD, Liu KD, et al. Bridging translation by improving preclinical study design in AKI. J Am Soc Nephrol 2015;26(12):2905–2916.

31. Landis SC, Amara SG, Asadullah K, et al. A call for transparent reporting to optimize the predictive value of preclinical research. Nature 2012;490(7419):187–191.

32. Principles and Guidelines for Reporting Preclinical Research. National Institutes of Health. Updated December 12, 2017. https://www.nih.gov/research-training/rigor-reproducibility/principles-guidelines-reporting-preclinical-research. Accessed May 11, 2021.

33. Correa D, Bowles AC. Clinical translation of cartilage tissue engineering, from embryonic development to a promising long-term solution. In: Stoddart MJ, Craft AM, Pattappa G, Gardner OFW, eds. Developmental Biology and Musculoskeletal Tissue Engineering. London, UK: Academic Press; 2018, pp. 225–246.

34. Unger EF. All is not well in the world of translational research. J Am Coll Cardiol 2007;50(8):738–740.

35. Nieuwenhuis S, Forstmann BU, Wagenmakers EJ. Erroneous analyses of interactions in neuroscience: a problem of significance. Nat Neurosci 2011;14(9):1105–1107.

36. Goodman S. A dirty dozen: twelve P-value misconceptions. Semin Hematol 2008;45(3):135–140.

37. Cressey D. Web tool aims to reduce flaws in animal studies. Nature 2016;531(7592):128.

38. du Sert NP, Bamsey I, Bate ST, et al. The experimental design assistant. Nat Methods 2017;14(11):1024–1025.

39. Kurien BT, Everds NE, Scofield RH. Experimental animal urine collection: a review. Lab Anim 2004;38(4):333–361.

40. Teilmann AC, Kalliokoski O, Sorensen DB, et al. Manual versus automated blood sampling: impact of repeated blood sampling on stress parameters and behavior in male NMRI mice. Lab Anim 2014;48(4):278–291.

41. Andreatini R, Bacellar LF. Animal models: trait or state measure? The test-retest reliability of the elevated plus-maze and behavioral despair. Prog Neuro-Psychopharmacol Biol Psychiatry 2000;24(4):549–560.

42. Almundarij TI, Smyers ME, Spriggs A, et al. Physical activity, energy expenditure, and defense of body weight in melanocortin 4 receptor-deficient male rats. Sci Rep 2016;6:37435.

43. Thiessen DD. Mouse exploratory behavior and body weight. Psychol Rec 1961;11(3):299–304.

44. Dillen L, Loomans T, Van de Perre G, et al. Blood microsampling using capillaries for drug-exposure determination in early preclinical studies: a beneficial strategy to reduce blood sample volumes. Bioanalysis 2014;6(3):293–306.

45. Hopper LD. Automated microsampling technologies and enhancements in the 3Rs. ILAR J 2016;57(2):166–177.

46. Lim MA, Defensor EB, Mechanic JA, et al. Retrospective analysis of the effects of identification procedures and cage changing by using data from automated, continuous monitoring. J Am Assoc Lab Anim Sci 2019;58(2):126–141.

47. Blow N. In vivo molecular imaging: the inside job. Nat Methods 2009;6(6):465–469.

48. Zinn KR, Chaudhuri TR, Szafran AA, et al. Noninvasive bioluminescence imaging in small animals. ILAR J 2008;49(1):103–115.

49. Motamarry A, Negussie AH, Rossmann C, et al. Real-time fluorescence imaging for visualization and drug uptake prediction during drug delivery by thermosensitive liposomes. Int J Hyperth 2019;36(1):817–826.

50. Jones PR. A note on detecting statistical outliers in psychophysical data. Atten Percept Psychophys 2019;81(5):1189–1196.

51. Thiese MS, Arnold ZC, Walker SD. The misuse and abuse of statistics in biomedical research. Biochem Med (Zagreb) 2015;25(1):5–11.

52. Han S, Olonisakin TF, Pribis JP, et al. A checklist is associated with increased quality of reporting preclinical biomedical research: a systematic review. PLoS One 2017;12(9):e0183591.

53. Hoerauf JM, Moss AF, Fernandez-Bustamante A, et al. Study design rigor in animal-experimental research published in anesthesia journals. Anesth Analg 2018;126(1):217–222.

54. Ramirez FD, Motazedian P, Jung RG, et al. Methodological rigor in preclinical cardiovascular studies: targets to enhance reproducibility and promote research translation. Circ Res 2017;120(12):1916–1926.

55. Sena E, van der Worp HB, Howells D, et al. How can we improve the pre-clinical development of drugs for stroke? Trends Neurosci 2007;30(9):433–439.

56. Broglio K. Randomization in clinical trials: permuted blocks and stratification. JAMA 2018;319(21):2223–2224.

57. Suresh KP. An overview of randomization techniques: an unbiased assessment of outcome in clinical research. J Human Reprod Sci 2011;4(1):8–11.

58. Random number calculators. GraphPad. https://www.graphpad.com/quickcalcs/randMenu/. Accessed May 11, 2021.

59. Kang M, Ragan BG, Park JH. Issues in outcomes research: an overview of randomization techniques for clinical trials. J Athl Train 2008;43(2):215–221.

60. MANILA is an interactive tool for optimal design and analysis of preclinical in vivo studies. Biomedportal. https://biomedportal.utu.fi/utu-apps/Rvivo/. Accessed May 11, 2021.

61. Bello S, Krogsboll LT, Gruber J, et al. Lack of blinding of outcome assessors in animal model experiments implies risk of observer bias. J Clin Epidemiol 2014;67(9):973–983.

62. Milham MP, Craddock RC, Son JJ, et al. Assessment of the impact of shared brain imaging data on the scientific literature. Nat Commun 2018;9(1):2818.

63. Simundic AM. Bias in research. Biochem Med (Zagreb) 2013;23(1):12–15.

64. Kilkenny C, Browne W, Cuthill IC, et al. Animal research: reporting in vivo experiments: the ARRIVE guidelines. Br J Pharmacol 2010;160(7):1577–1579.

65. Alzheimer's disease preclinical efficacy database. Alzheimer's Disease Preclinical Efficacy Database. https://alzped.nia.nih.gov/. Accessed May 11, 2021.

66. Mission. Alzheimer's Disease Preclinical Efficacy Database. https://alzped.nia.nih.gov/mission. Accessed May 11, 2021.

67. Perkel JM. The internet of things comes to the lab. Nature 2017;542(7639):125–126.

68. Getting Started with IMPC Data. International Mouse Phenotyping Consortium. Access Data Release 14.0 Data. https://www.mousephenotype.org. Accessed May 11, 2021.

69. Sasseville VG, Mansfield KG, Brees DJ. Safety biomarkers in preclinical development: translational potential. Vet Pathol 2014;51(1):281–291.

70. Howard BR. Control of variability. ILAR J 2002;43(4):194–201.

71. Delgado-SanMartin J, Ehrhardt B, Paczkowski M, et al. An innovative non-invasive technique for subcutaneous tumour measurements. PLoS One 2019;14(10):e0216690.

72. Jensen MM, Jorgensen JT, Binderup T, et al. Tumor volume in subcutaneous mouse xenografts measured by microCT is more accurate and reproducible than determined by 18F-FDG-microPET or external caliper. BMC Med Imaging 2008;8:16.

73. Pflanzer R, Hofmann M, Shelke A, et al. Advanced 3D-sonographic imaging as a precise technique to evaluate tumor volume. Transl Oncol 2014;7(6):681–686.

74. Jacobs BY, Lakes EH, Reiter AJ, et al. The open source GAITOR Suite for rodent gait analysis. Sci Rep 2018;8(1):9797.

75. Xu Y, Tian NX, Bai QY, et al. Gait assessment of pain and analgesics: comparison of the DigiGait and CatWalk Gait Imaging Systems. Neurosci Bull 2019;35(3):401–418.

76. Lundt A, Wormuth C, Siwek ME, et al. EEG radiotelemetry in small laboratory rodents: a powerful state-of-the art approach in neuropsychiatric, neurodegenerative, and epilepsy research. Neural Plast 2016;2016:8213878.

77. Pedersen C, Porsgaard T, Thomsen M, et al. Sustained effect of glucagon on body weight and blood glucose: assessed by continuous glucose monitoring in diabetic rats. PLoS One 2018;13(3):e0194468.

78. Meyer CW, Ootsuka Y, Romanovsky AA. Body temperature measurements for metabolic phenotyping in mice. Front Physiol 2017;8:520.

79. Liu S, Kim TH, Franklin DA, et al. Protection against high-fat-diet-induced obesity in MDM2(C305F) mice due to reduced p53 activity and enhanced energy expenditure. Cell Rep 2017;18(4):1005–1018.

80. Owen BM, Ding X, Morgan DA, et al. FGF21 acts centrally to induce sympathetic nerve activity, energy expenditure, and weight loss. Cell Metab 2014;20(4):670–677.

81. Lorbach M, Kyriakou EI, Poppe R, van Dam EA, Noldus L, Veltkamp RC. Learning to recognize rat social behavior: novel dataset and cross-dataset application. J Neurosci Methods 2018;300:166–172.

82. Mistry P, Duong A, Kirshenbaum L, et al. Cardiac clocks and preclinical translation. Heart Fail Clin 2017;13(4):657–672.

83. Zhang R, Lahens NF, Ballance HI, et al. A circadian gene expression atlas in mammals: implications for biology and medicine. Proc Natl Acad Sci U S A 2014;111(45):16219–16224.

84. Sakata M, Sei H, Eguchi N, et al. Arterial pressure and heart rate increase during REM sleep in adenosine A2A-receptor knockout mice, but not in wild-type mice. Neuropsychopharmacology 2005;30(10):1856–1860.

85. Sheward WJ, Naylor E, Knowles-Barley S, et al. Circadian control of mouse heart rate and blood pressure by the suprachiasmatic nuclei: behavioral effects are more significant than direct outputs. PLoS One 2010;5(3):e9783.

86. DasMahapatra P, Chiauzzi E, Bhalerao R, Rhodes J. Free-living physical activity monitoring in adult US patients with multiple sclerosis using a consumer wearable device. Digit Biomark 2018;2:47–63.

87. Arroyo-Curras N, Somerson J, Vieira PA, et al. Real-time measurement of small molecules directly in awake, ambulatory animals. Proc Natl Acad Sci U S A 2017;114(4):645–650.

88. Alexandrov V, Brunner D, Menalled LB, et al. Large-scale phenome analysis defines a behavioral signature for Huntington's disease genotype in mice. Nat Biotechnol 2016;34(8):838–844.

89. Lim MA, Louie B, Ford D, et al. Development of the Digital Arthritis Index, a novel metric to measure disease parameters in a rat model of rheumatoid arthritis. Front Pharmacol 2017;8:818.

90. Coravos A, Khozin S, Mandl KD. Developing and adopting safe and effective digital biomarkers to improve patient outcomes. NPJ Digit Med 2019;2(1):14.

91. Lipsmeier F, Taylor KI, Kilchenmann T, et al. Evaluation of smartphone-based testing to generate exploratory outcome measures in a phase 1 Parkinson's disease clinical trial. Mov Disord 2018;33(8):1287–1297.

92. Zhan A, Mohan S, Tarolli C, et al. Using smartphones and machine learning to quantify Parkinson disease severity: The Mobile Parkinson Disease Score. JAMA Neurol 2018;75(7):876–880.

93. Jickling GC, Sharp FR. Improving the translation of animal ischemic stroke studies to humans. Metab Brain Dis 2015;30(2):461–467.

94. Freret T, Schumann-Bard P, Boulouard M, et al. On the importance of long-term functional assessment after stroke to improve translation from bench to bedside. Exp Transl Stroke Med 2011;3:6.

95. Green AR. Why do neuroprotective drugs that are so promising in animals fail in the clinic? An industry perspective. Clin Exp Pharmacol Physiol 2002;29(11):1030–1034.

96. Macrae IM, Allan SM. Stroke: the past, present and future. Brain Neurosci Adv 2018;2:2398212818810689.

97. Mandeville ET, Ayata C, Zheng Y, et al. Translational MR neuroimaging of stroke and recovery. Transl Stroke Res 2017;8(1):22–32.

98. Muir KW, Macrae IM. Neuroimaging as a selection tool and endpoint in clinical and pre-clinical trials. Transl Stroke Res 2016;7(5):368–377.

99. Schaar KL, Brenneman MM, Savitz SI. Functional assessments in the rodent stroke model. Exp Transl Stroke Med 2010;2(1):13.

100. Hui J, Heyden R, Bao T, et al. Validity of the Fitbit One for measuring activity in community-dwelling stroke survivors. Physiother Can 2018;70(1):81–89.

101. Dolgos H, Trusheim M, Gross D, et al. Translational medicine guide transforms drug development processes: the recent Merck experience. Drug Discov Today 2016;21(3):517–526.

102. FDA-NIH Biomarker Working Group. BEST (Biomarkers, EndpointS, and other Tools) Resource. Silver Spring, MD: Food and Drug Administration (FDA); Bethesda, MD: National Institutes of Health (NIH); 2016.

103. Swerdlow NR, Weber M, Qu Y, et al. Realistic expectations of prepulse inhibition in translational models for schizophrenia research. Psychopharmacology 2008;199(3):331–388.

104. Negus SS. Core outcome measures in preclinical assessment of candidate analgesics. Pharmacol Rev 2019;71(2):225–266.

105. Peng Q, Mechanic J, Shoieb A, et al. Circulating microRNA and automated motion analysis as novel methods of assessing chemotherapy-induced peripheral neuropathy in mice. PLoS One 2019;14(1):e0210995.

106. Popkin G. Data sharing and how it can benefit your scientific career. Nature 2019;569(7756):445–447.

This article is published and distributed under the terms of the Oxford University Press, Standard Journals Publication Model (https://dbpia.nl.go.kr/journals/pages/open_access/funder_policies/chorus/standard_publication_model)