Abstract

Objective

Assess the feasibility and concurrent validity of a modified Uniform Data Set version 3 (UDSv3) cognitive battery for remote administration in individuals with normal cognition (NC), mild cognitive impairment (MCI), and early dementia.

Method

Participants (N = 93; age: 72.8 [8.9] years; education: 15.6 [2.5] years; 72% female; 84% White) were enrolled from the Wake Forest ADRC. Portions of the UDSv3 cognitive battery, plus the Rey Auditory Verbal Learning Test, were completed by telephone or video within ~6 months of the participant's in-person visit. Adaptations for phone administration were made (e.g., the Oral Trail Making Test for the Trail Making Test [TMT] and the Blind Montreal Cognitive Assessment for the standard MoCA). Participants reported on the pleasantness of, difficulty of, and preference for each modality. Staff provided validity ratings for the assessments. Participants' remote data were adjudicated by cognitive experts blinded to the in-person diagnosis (NC [N = 44], MCI [N = 35], Dementia [N = 11], or other [N = 3]).

Results

Remote assessments were rated as pleasant as in-person assessments by 74% of participants and as equally difficult by 75%. Staff validity ratings were good (video = 92%; phone = 87.5%). Concordance between remote and in-person scores was generally moderate to good (r = .3–.8; p < .05), except for TMT-A/OTMT-A (r = .3; p > .05). Agreement between remote and in-person adjudicated cognitive status was good (k = .61–.64).

Conclusions

We found preliminary evidence that older adults, including those with cognitive impairment, can be assessed remotely using a modified UDSv3 research battery. Adjudication of cognitive status that relies on remotely collected data is comparable to classifications using in-person assessments.

INTRODUCTION

Face-to-face administration is the “gold standard” for both research and clinical cognitive assessments, and as such, most neuropsychological measures are intended to be administered in person. However, conditions often arise when research participants and patients cannot meet face to face for assessments, for example, because they must travel great distances, have limited mobility or poor eyesight, or lack transportation. Further, there is ample evidence that underserved populations, including rural populations and racial/ethnic minorities, cite travel-related burdens due to study/clinic locations as a barrier to care and research participation (Buzza et al., 2011; Williams et al., 2010). Additionally, these populations have been disproportionately affected by the COVID-19 pandemic, further widening gaps in access to care and clinical research participation (Henning-Smith, 2020; Ruprecht et al., 2021). Alternatives to face-to-face assessments may provide an opportunity to alleviate the testing burden caused by barriers such as the COVID-19 pandemic and longer-standing social inequities.

Prior studies have demonstrated that both telephone- and video-administered cognitive assessments can be used with older adults (Alegret et al., 2021; Castanho et al., 2017; Crooks et al., 2005; Grosch et al., 2011; Hunter et al., 2021; Miller & Barr, 2017; Wilson et al., 2010). Studies have documented that video and telephone cognitive assessments also have suitable levels of acceptability and are perceived positively by older adults, even those with cognitive impairment or dementia (Castanho et al., 2016; Caze et al., 2020; Cullum et al., 2014; Lacritz et al., 2020; Loh et al., 2007). One study found 98% satisfaction ratings with a clinical tele-neuropsychology appointment for patients with mild cognitive impairment (MCI) or early-stage AD (Parikh et al., 2013). A recent meta-analysis found that across 12 studies with nearly 500 participants, there was high concordance between video-administered and face-to-face assessments (Brearly et al., 2017). A slight difference was noted for timed tests, with virtual scores falling about one tenth of a standard deviation lower than face-to-face scores. Several measures, including the Mini-Mental State Examination (MMSE), Hopkins Verbal Learning Test–Revised (HVLT-R), Digit Span test, Oral Trail Making Test (OTMT), and Letter and Category Fluency, have also been shown to have intraclass correlations (ICCs) between video and in-person administrations averaging ~0.75–0.85 (Wadsworth et al., 2018). Visually dependent tasks, like the clock drawing task, tended to have lower ICCs than verbally mediated tasks, but differences in scores across modalities were not statistically significant (Brearly et al., 2017). Importantly, several studies have shown that assessments administered by video could discriminate cognitively impaired from unimpaired older adults and were well tolerated by both groups (Alegret et al., 2021; Barton et al., 2011; Loh et al., 2007; Wadsworth et al., 2018). A few studies have also used video-based assessment techniques successfully with minority and underserved populations, including rural Native American and rural Latino adults tested in Spanish or Portuguese (Castanho et al., 2016; Vahia et al., 2015; Wadsworth et al., 2016).

The COVID-19 pandemic created an urgency to put in place alternatives to in-person assessment, including telehealth and tele-neuropsychology. However, clinicians and investigators alike require validated assessment tools to conduct comprehensive cognitive exams remotely. Recent studies document the use of remote assessments in patients with epilepsy-related cognitive concerns (Tailby et al., 2020), pediatric neuropsychology patients (Lichtenstein et al., 2022; Salinas et al., 2020; Sherwood & MacDonald, 2020), Parkinson’s disease patients (York et al., 2021), and older adults. A review of remote administration of neuropsychological tests with older adults found strong evidence of its validity compared to face-to-face assessments (Marra et al., 2020). That review also reported no differences between face-to-face and remote testing in mild-to-moderate AD but did report differences in more severe disease. While encouraging, a recent survey of neuropsychologists identified a need for improved norms, additional domain coverage, improved access to technology, and further validation studies (Rochette et al., 2021).

The Uniform Data Set Neuropsychological Battery, Version 3 (UDSv3) was developed by the Neuropsychology Work Group of the NIH-NIA Clinical Task Force in 2015 for use by a network of Alzheimer’s disease research centers (ADRCs). These measures, described in detail in Weintraub and colleagues (2018), were chosen to assess a broad range of cognitive domains, including global cognitive status, attention/working memory, verbal episodic memory, and language. To our knowledge, remote administration of the UDSv3 cognitive battery has not been systematically evaluated and reported. Given its broad use within the ADRC network and potential use beyond, the current study describes the development of a UDSv3 cognitive battery modified for telephone and video administration and provides preliminary data on its concordance with in-person cognitive testing and adjudicated cognitive outcomes. We also report user ratings of remote assessment compared with in-person assessment.

METHODS

Participants

We assessed the feasibility of remote administration of an adapted UDSv3 battery in a sample of 93 existing, English-speaking participants in the Wake Forest ADRC Clinical Core cohort who completed or were due to complete their annual cognitive assessment within 6 months of enrollment into the present study. We enrolled participants who, at their most recent in-person visit, were adjudicated to have MCI (amnestic single- and multi-domain and non-amnestic single- and multi-domain presentations included; N = 35), Dementia (N = 11), or normal cognition (NC; N = 44). Three participants were classified as having “other cognitive status.” The race, gender, and educational background of these participants mirrored that of the larger ADRC Clinical Core. All participants gave informed consent to participate in these ADRC-related study procedures.

Cognitive Testing and Adaptation of UDSv3

All participants completed cognitive testing from their own homes via a HIPAA-compliant Zoom meeting, the link for which was provided by email to the participant or a trusted contact. Participants could use their own device if they had home Wi-Fi and a device capable of video interfacing with both audio and visual capabilities, such as a laptop computer, desktop computer, or tablet. Smartphones were not permitted because of their small screen size. Individuals who did not have an acceptable at-home device and Wi-Fi, who were unsure about their Wi-Fi access, or who preferred not to use their own devices were given the option of using a study-provided tablet (GrandPad®). GrandPad® screens are ~5 in. × 9 in. and were the smallest devices used in this study. GrandPads® are designed specifically for use by older adults, with few buttons and icons, and come with a built-in cellular network and Zoom functionality; they do not require a Wi-Fi connection. Accessing the link for the Zoom meeting on this device required only three steps (turning the device on and clicking two buttons), which were explained to participants via telephone as well as via written and pictorial instructions shipped to participants with the device.

Cognitive tests were organized into a Core Battery and an Expanded Battery (Table 1). The “Core Battery” included tests from the UDSv3 measuring global cognitive function, memory, language, executive function, and mood that did not include visual stimuli and thus could be administered by phone; tests requiring visual stimuli (i.e., the Benson Complex Figure, portions of the MoCA, the Trail Making Test [TMT], and the Multilingual Naming Test) were excluded. An adapted version of the MoCA (Blind/Telephone MoCA; Nasreddine et al., 2005; Nasreddine, 2019; Wittich et al., 2010) that excludes visual items was added. The other tests within the Core Battery did not require modifications for remote administration and comprise the minimum cognitive data required for completion of a remote UDSv3 visit within an ADRC. To allow for a more comprehensive assessment of cognitive functioning, the Expanded Battery included all of the Core measures plus several additional tests of executive function (OTMT; Ricker et al., 1996; Ricker & Axelrod, 1994) and naming (Verbal Naming Test; Wynn et al., 2020; Yochim et al., 2015). To strengthen the assessment of episodic verbal memory recall, the Rey Auditory Verbal Learning Test (RAVLT; Lezak, 1997) was also added as a verbal learning task (Table 1). The RAVLT is not a standard part of the UDSv3 but is administered to all Wake Forest ADRC participants as part of their annual cognitive evaluations.

Table 1

Assessment batteries

| Assessment measure | Cognitive domain | Original UDSv3 measure | Core battery | Expanded battery |
| --- | --- | --- | --- | --- |
| MoCA^a | Global cognition | X | X | X |
| Craft Story—Immediate and Delay Recall | Memory | X | X | X |
| Benson Figure—Immediate & Delay Recall | Memory | X | | |
| RAVLT | Memory | | | X |
| Verbal Fluency | Language/Executive | X | X | X |
| Category Fluency | Language | X | X | X |
| Verbal Naming Test^b | Language | | | X |
| MINT | Language | X | | |
| Trail Making Test | Executive | X | | |
| Oral Trail Making Test^b | Executive | | | X |
| Number Span | Executive | X | X | X |
| GDS-15 | Mood | X | X | X |

Notes: MoCA = Montreal Cognitive Assessment v.8.1, RAVLT = Rey Auditory Verbal Learning Test, MINT = Multilingual Naming Test, GDS-15 = Geriatric Depression Scale 15.

^a Modified for remote administration.

^b Additional/new measure.

Cognitive raters were staff psychometrists within the Wake Forest ADRC experienced in administration of the UDSv3. Additional training was provided to orient the raters to non-UDSv3 assessments (e.g., Verbal Naming Test and OTMT) as well as issues unique to remote administration of neuropsychological assessment.

Adjudication Procedures

After remote testing was completed, participants’ cognitive status (NC, MCI, or Dementia) was adjudicated using the same procedures used for face-to-face administrations. Adjudications were performed by a panel of neurologists and neuropsychologists (BS, SC, BW, & JB) who were blinded to the adjudicated status from each participant’s most recent testing in the other modality. Adjudicators met regularly as a group to review cases and reach a consensus decision for each participant.

Participant and Staff Ratings of Remote Assessment

Likert-scale questions were developed to measure participants’ ratings of acceptability and pleasantness of each remote modality compared to their face-to-face administration and their preference for remote versus in-person sessions. Participants answered three questions on 1–10-point Likert scales regarding the simplicity of the visit (how challenging was it to use the phone/video technology for this visit?), their likelihood of selecting a remote visit over an in-person visit (how likely are you to choose a phone/video visit over coming into the clinic again in the future?), and their own perception of how validly the session measured their functioning (how confident are you that today’s test results are a valid sample of your memory and thinking?). Additionally, they reported how the difficulty and pleasantness of remote testing compared to in-person testing. Participants also provided information about the auditory quality, fatigue, and any technical problems. We further queried participants about what they liked and disliked about the process with an open-ended question. Additionally, cognitive testers provided ratings of perceived validity of the assessment by indicating whether they thought the assessment was valid, questionably valid, or invalid. If testing sessions were rated as questionably valid or invalid, raters were asked to provide the reason for compromised validity (hearing issues, distractions, interruptions, etc.).

Statistical Analysis

Participants (Table 2) completed either the Core Battery or the Expanded Battery, administered by video or telephone. Data analyses were conducted in SAS and R. Data were checked for normality using scatterplots and deemed acceptable for subsequent analyses. Frequency counts were reported to assess participants’ ratings of their subjective experience and the perceived validity of testing. Statistical analyses included paired-samples t-tests, as well as Pearson’s correlations, to assess the relationship between scores obtained in person and remotely. Kappa coefficients (k) and raw percent agreement were computed between adjudication diagnoses obtained in person, which served as the index diagnosis, and the remote adjudication, which was the comparator (Gisev et al., 2013).
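To make these analyses concrete, the sketch below shows how each reported statistic could be computed in base R (the analyses were run in SAS and R, but this is an illustrative reconstruction, not the study's actual code; the variable names `inperson`, `remote`, `dx_inperson`, and `dx_remote` are hypothetical). It pairs each participant's in-person and remote scores on one test and cross-tabulates the two adjudicated diagnoses.

```r
# Illustrative sketch of the reported analyses (not the study's actual code).
# inperson, remote: numeric scores on one test, paired by participant.
# dx_inperson, dx_remote: adjudicated diagnoses as factors with identical levels.
concordance_sketch <- function(inperson, remote, dx_inperson, dx_remote) {
  # Pearson correlation between in-person and remote scores
  r <- cor.test(inperson, remote, method = "pearson")

  # Paired-samples t-test on the differences (in person - remote)
  tt <- t.test(inperson, remote, paired = TRUE)

  # Cross-tabulation of adjudicated diagnoses (rows = in person, cols = remote)
  tab <- table(dx_inperson, dx_remote)
  n   <- sum(tab)
  po  <- sum(diag(tab)) / n                      # raw percent agreement
  pe  <- sum(rowSums(tab) * colSums(tab)) / n^2  # agreement expected by chance
  kap <- (po - pe) / (1 - pe)                    # Cohen's kappa

  list(r = unname(r$estimate), r_p = r$p.value,
       t = unname(tt$statistic), t_p = tt$p.value,
       agreement = po, kappa = kap)
}
```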

Table 2

Demographic characteristics (N = 93)

| Characteristic | Overall | Video | Phone |
| --- | --- | --- | --- |
| Age (years) | 72.8 (8.9) | 69.4 (7.7) | 75.1 (8.9) |
| Education (years) | 15.6 (2.5) | 16.5 (2.4) | 15.0 (2.4) |
| Race | | | |
|  White | 78 (83.9%) | 30 (79.0%) | 48 (87.3%) |
|  Black | 15 (16.1%) | 8 (21.1%) | 7 (12.7%) |
| Gender | | | |
|  Male | 26 (28.0%) | 13 (34.2%) | 13 (23.6%) |
|  Female | 67 (72.0%) | 25 (65.8%) | 42 (76.4%) |
| Days between assessments | 151.0 (110.0, 288.0) | 155.0 (112.0, 295.0) | 145.0 (102.0, 288.0) |
| In-person diagnosis | | | |
|  No impairment | 44 (47.3%) | 21 (55.3%) | 23 (41.8%) |
|  MCI | 35 (37.6%) | 11 (29.0%) | 24 (43.6%) |
|  Dementia | 11 (11.8%) | 6 (15.8%) | 5 (9.1%) |
|  Other | 3 (3.2%) | 0 (0%) | 3 (5.5%) |
| Past MoCA score | 23.8 (3.8) | 24.9 (4.1) | 23.1 (3.5) |

Notes: Values are mean (SD), median (IQR), or N (%).

RESULTS

Sample Characteristics

Ninety-three individuals (mean age: 72.8 years; education: 15.6 years; 72% female; 84% White) enrolled in our ADRC were included. Their most recent index-adjudicated cognitive status was NC (N = 44), MCI (N = 35), mild Dementia (N = 11), or other (N = 3).

Feasibility and Preference of Administration Modality

Seventy-four of 93 participants completed user perception questionnaires about their experience with remote assessment. Across both batteries (Core and Expanded) and both administration modes (telephone and video), ~75% of participants rated the remote cognitive assessment as no different in difficulty from in-person testing, 16% rated the remote assessment as less difficult, and only 8% rated it as more difficult. All but one of the reports of greater difficulty came from telephone assessments with the Core Battery. Regarding pleasantness, 74% of participants felt the remote assessment was as pleasant as the in-person assessment, 20% felt it was more pleasant, and only 6% felt it was less pleasant. Of the four individuals who rated the experience as less pleasant, three received the telephone battery. When asked to rate how likely they were to choose remote assessment over an in-person visit (10 = “extremely likely,” 1 = “not likely at all”), the median ratings were 9.5 and 8 for video and telephone, respectively. The perceived validity of testing results was high for both participants and examiners. Participants gave median validity ratings of 9 and 8 for video and telephone, respectively. Examiners rated 92% of the remotely collected video data and 87.5% of the telephone data as valid (Table 3); 8% and 12%, respectively, were judged “questionably valid.” Questionably valid ratings occurred for four MCI participants, two NC participants, and two early Dementia participants. Three were deemed questionably valid because of suspicion that the participant used aids during testing (e.g., paper and pencil); the other reasons were not specific to remote assessment (e.g., examiner administration error). Qualitative comments showed that participants liked the convenience of remote assessment but disliked occasional hearing difficulties and distractions in their homes, and some missed in-person contact with staff.

Table 3

Interviewer’s perception of validity of participants’ responses

| | | Very valid | Questionably valid | Invalid |
| --- | --- | --- | --- | --- |
| Mode | Video (n = 37) | 34 (92%) | 3 (8%) | 0 (0%) |
| | Telephone (n = 40) | 35 (88%) | 5 (12%) | 0 (0%) |
| Battery | Expanded (n = 39) | 33 (85%) | 6 (15%) | 0 (0%) |
| | Core (n = 38) | 36 (95%) | 2 (5%) | 0 (0%) |

Concordance Between Test Scores

Video administration showed a somewhat stronger and more consistent pattern of correlations with face-to-face administration than did telephone administration. This was also true for the substituted tests (the Blind MoCA, VNT, and Oral TMT-Part B), which all correlated as well or better with their in-person counterparts when administered by video than by telephone. The pattern was most pronounced for RAVLT Trial 5, for which video administration (r = .85; p < .05) correlated noticeably better with in-person administration than telephone administration did (r = .39; p < .05). The Oral TMT-Part A correlated weakly with TMT-A whether administered by telephone (r = .28; p > .05) or video (r = .30; p > .05), likely due to the dissimilarity of TMT-A and OTMT-A. Overall, correlations between face-to-face and remote assessment were mostly moderate to high (r = .3–.8) (Table 4). Paired-samples t-tests (Table 5; in person − telephone; in person − video) revealed no significant differences (p > .05), regardless of administration modality, on tests of phonemic fluency, semantic fluency, number span forward and backward, and Craft Story immediate recall (paraphrase and verbatim). Participants tested by phone tended to recall one to two more words (of 15) on the RAVLT short- and long-delay recall variables (p < .05); for video testing, this pattern held only for long-delay recall. No significant differences were noted on Craft Story delayed recall (verbatim or paraphrase) for phone testing; on video testing, participants tended to earn one to two more points (out of 25 or 44 possible points, respectively) than on in-person testing (p < .05).

Table 4

Pearson correlations between in-person and remote test administrations

Core/Expanded battery tests:

| Test | Measure | Telephone (N = 55): r | p | Video (N = 38): r | p |
| --- | --- | --- | --- | --- | --- |
| RAVLT | Trial 5 (Learning trial) | 0.39 | .03 | 0.85 | .00 |
| | Trial 6 (Short delay recall) | 0.46 | .01 | 0.78 | .00 |
| | Delayed Recall | 0.68 | .00 | 0.72 | .00 |
| Craft Story | Immediate Recall Paraphrase | 0.74 | .00 | 0.69 | .00 |
| | Immediate Recall Verbatim | 0.78 | .00 | 0.64 | .00 |
| | Delay Recall Paraphrase | 0.86 | .00 | 0.87 | .00 |
| | Delay Recall Verbatim | 0.88 | .00 | 0.84 | .00 |
| Number Span | Forward Sum | 0.63 | .00 | 0.52 | .00 |
| | Forward Span | 0.53 | .00 | 0.39 | .01 |
| | Backward Sum | 0.43 | .00 | 0.66 | .00 |
| | Backward Span | 0.35 | .00 | 0.59 | .00 |
| Verbal Fluency | F & L words | 0.59 | .00 | 0.83 | .00 |
| Category Fluency | Animals | 0.80 | .00 | 0.70 | .00 |
| | Vegetables | 0.71 | .00 | 0.75 | .00 |

Substituted tests:

| Original | Substitute | Telephone: r | p | Video: r | p |
| --- | --- | --- | --- | --- | --- |
| MoCA | Blind MoCA | 0.65 | .00 | 0.79 | .00 |
| Trails A | Oral Trails A | 0.28 | .11 | 0.30 | .17 |
| Trails B | Oral Trails B | 0.69 | .00 | 0.51 | .02 |
| MINT | VNT | 0.59 | .01 | 0.78 | .00 |

Notes: RAVLT = Rey Auditory Verbal Learning Test; MoCA = Montreal Cognitive Assessment v.8.1; MINT = Multilingual Naming Test; VNT = Verbal Naming Test.

Table 5

Comparison of mean scores obtained in person vs. telephone/video

| Test | Measure | In person − telephone, M (SD) | t | p | In person − video, M (SD) | t | p |
| --- | --- | --- | --- | --- | --- | --- | --- |
| RAVLT | Trial 5 (Learning trial) | −1.27 (2.86) | −2.55 | .02 | −0.41 (1.94) | −0.99 | .33 |
| | Trial 6 (Short delay recall) | −1.58 (3.26) | −2.78 | .01 | −0.05 (3.17) | −0.07 | .95 |
| | Delayed Recall | −1.64 (2.93) | −3.20 | .00 | −2.14 (3.57) | −2.75 | .01 |
| Craft Story | Immediate Recall Paraphrase | −0.82 (3.36) | −1.81 | .08 | −0.66 (3.77) | −1.08 | .29 |
| | Immediate Recall Verbatim | −0.89 (4.86) | −1.36 | .18 | −0.68 (6.54) | −0.65 | .52 |
| | Delay Recall Paraphrase | −0.38 (3.04) | −0.93 | .36 | −1.08 (3.10) | −2.15 | .04 |
| | Delay Recall Verbatim | −0.33 (3.96) | −0.61 | .54 | −1.87 (4.83) | −2.38 | .02 |
| Number Span | Forward Sum | −0.11 (2.05) | −0.39 | .69 | 0.13 (1.89) | 0.43 | .67 |
| | Forward Span | −0.05 (1.28) | −0.32 | .75 | 0.18 (1.14) | 1.00 | .32 |
| | Backward Sum | −0.71 (2.77) | −1.90 | .06 | −0.18 (1.75) | −0.65 | .52 |
| | Backward Span | −0.35 (1.66) | −1.55 | .13 | −0.13 (1.07) | −0.76 | .45 |
| Verbal Fluency | F & L words | 1.40 (7.73) | 1.34 | .18 | 0.37 (4.83) | 0.47 | .64 |
| Category Fluency | Animals | 0.76 (3.30) | 1.72 | .09 | −0.05 (4.40) | −0.07 | .94 |
| | Vegetables | 0.18 (3.10) | 0.44 | .67 | −0.16 (3.00) | −0.32 | .75 |

Notes: Telephone N = 55; video N = 38. RAVLT = Rey Auditory Verbal Learning Test.

Concordance of Cognitive Status

Overall, there was substantial agreement between adjudications conducted with in-person and remote data (79% agreement, k = .64), with slightly better agreement between video and in-person data (82%, k = .68) than between phone and in-person data (76%, k = .61). Agreement was better for NC participants (86%) than for participants with cognitive impairment (MCI: 77%; Dementia: 73%), though the difference between MCI and Dementia adjudication agreement was slight. When adjudication was based on remote data rather than on data obtained in person (the index adjudication), three Dementia cases were classified as MCI (27%); seven MCI cases were classified as normal (20%) and one as Dementia (3%); five NC cases were classified as MCI (11%) and one as other (2%); and of the three “other” cases, two were classified as MCI (67%) and one as normal (33%) (Table 6).

Table 6

Frequency of adjudicated outcomes for in-person and remote data

| In person | Remote: Dementia | MCI | NC | Other | Total |
| --- | --- | --- | --- | --- | --- |
| Dementia, n | 8 | 3 | 0 | 0 | 11 |
|  Row % | 72.73 | 27.27 | 0.00 | 0.00 | |
|  Col % | 88.89 | 8.11 | 0.00 | 0.00 | |
| MCI, n | 1 | 27 | 7 | 0 | 35 |
|  Row % | 2.86 | 77.14 | 20.00 | 0.00 | |
|  Col % | 11.11 | 72.97 | 15.22 | 0.00 | |
| NC, n | 0 | 5 | 38 | 1 | 44 |
|  Row % | 0.00 | 11.36 | 86.36 | 2.27 | |
|  Col % | 0.00 | 13.51 | 82.61 | 100.00 | |
| Other, n | 0 | 2 | 1 | 0 | 3 |
|  Row % | 0.00 | 66.67 | 33.33 | 0.00 | |
|  Col % | 0.00 | 5.41 | 2.17 | 0.00 | |
| Total, n | 9 | 37 | 46 | 1 | 93 |

Notes: MCI = mild cognitive impairment; NC = normal cognition.
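As a consistency check, the headline agreement statistics follow directly from the counts in Table 6; the short base-R computation below (an illustrative recomputation, not the study’s code) reproduces the 79% raw agreement and k = .64 reported above.

```r
# Confusion matrix from Table 6 (rows = in person, cols = remote),
# category order: Dementia, MCI, NC, Other.
tab <- matrix(c(8,  3,  0, 0,
                1, 27,  7, 0,
                0,  5, 38, 1,
                0,  2,  1, 0), nrow = 4, byrow = TRUE)
n  <- sum(tab)                                # 93 participants
po <- sum(diag(tab)) / n                      # 73/93 = 0.785 -> 79% agreement
pe <- sum(rowSums(tab) * colSums(tab)) / n^2  # 0.396 expected by chance
(po - pe) / (1 - pe)                          # 0.644 -> matches reported k = .64
```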

DISCUSSION

This study assessed the utility of a UDSv3 research cognitive battery modified for remote administration by telephone or video in older adults with and without cognitive impairment within an ADRC.

We found generally moderate-to-high correlations between the same tests administered in person and remotely, though there was some variability by test and modality. We also found generally good concordance between adjudicated diagnostic classifications based on in-person and on remote assessments. Participants rated remote assessment as equally difficult, pleasant, and valid relative to in-person assessment. However, it is worth noting that individuals with MCI and Dementia may have difficulty remembering their in-person testing well enough to compare it accurately with their remote testing experience.

There was strong correspondence between remotely administered and in-person tests, though these relationships were slightly stronger for video than for telephone administration, and some differences (one to two points) were observed on memory recall variables, most notably for verbal list learning administered over the phone. The slightly better performance on remotely administered verbal list learning tests may have multiple explanations; for instance, the order of test modality was not randomized in this study, so it may reflect practice effects. Alternatively, because participants tested by phone are unobserved, they may not strictly adhere to standard test protocols and instructions that require testing to be done without the use of paper and pencil. That these score differences on phone administration were observed on the list learning test, which is easily aided by paper and pencil, lends some support to the latter hypothesis. The Trail Making Tests, particularly TMT-A and OTMT-A, also correlated less well, which is consistent with past literature (Kaemmerer & Riordan, 2016) and unsurprising given the notable discrepancy between the oral and written versions of this task: the OTMT-A task of counting aloud from 1 to 25 is highly overlearned and requires none of the visuomotor functioning demanded by the written task. Though TMT-B and OTMT-B appear similar in that both require set shifting, the cognitive abilities involved likely vary with how those shifts are cognitively and/or motorically executed, the purely mental shifts of the oral version presumably taxing working memory more heavily. Lastly, adjudication agreement was high between in-person and remote data, particularly for individuals without cognitive impairment.

These findings replicate existing work demonstrating that remotely administered research cognitive assessments are generally well tolerated by older adults and can discriminate between those with and without cognitive impairment (Alegret et al., 2021; Barton et al., 2011; Loh et al., 2007; Wadsworth et al., 2018). They extend that work by demonstrating that the widely used UDSv3 cognitive test battery translates adequately to remote administration and yields largely concordant adjudication outcomes.

These results have several important implications. Comparability of remote assessment to in-person assessment can help centers expand recruitment to traditionally under-studied populations; this is especially relevant to centers that focus on the oldest old, rural, and underserved populations and participants for whom transportation to research centers is difficult. Inconvenience and limited time for travel have been documented as barriers to clinical research participation, particularly for underserved groups and those in rural communities (Friedman et al., 2015; Williams et al., 2010), and remote assessment has the potential to partially alleviate these barriers. In fact, this potential has already been demonstrated for veterans participating in telehealth programs implemented by the VA care network. For example, veterans participating in a sleep telemedicine program have reported increased care access and improvements in sleep quality (Nicosia et al., 2021). Additionally, the majority of veterans surveyed about healthcare visits via VA-distributed tablets have reported video appointments to be preferable to, or the same as, in-person visits (Slightam et al., 2020). There is a pressing need for more accessible and representative research in the Alzheimer’s disease field (Babulal et al., 2019), which could be partially addressed with remote assessments.

One concern regarding remote assessments with older adults is limited technological literacy. Though there are generational differences in technology use and competency, older adults are increasingly using technology (Anderson & Perrin, 2017), and there is mounting evidence that they are quite capable and willing to use technological devices for research studies (Joddrell & Astell, 2016; Nicosia et al., 2022). Providing participants with user-friendly, pre-programmed tablet devices can reduce technical difficulties and extend access to those who do not have video-interfacing equipment or Wi-Fi available via their own devices. Devices such as the GrandPad® have been created specifically for use by older adults, with adaptations that mitigate hearing and vision difficulty (e.g., easy-to-adjust volume and oversized text and buttons). These devices can be pre-loaded with cellular network access to allow for use by older adults without home internet. The use of video devices also allows for closer observation of participant behaviors during the testing session. A follow-up randomized study comparing video-administered to in-person assessment with the UDSv3 in a large sample of ADRC participants with no impairment, MCI, and mild Dementia is currently underway with the support of NIH funding.

Limitations, Challenges, and Future Directions

There are several limitations of the present study. As noted, some participant “satisfaction” ratings rely on their memory of in-person testing, which may be imprecise, especially in those with MCI or mild Dementia. The small sample size and its limited diversity constrain generalization. Though Black participants were represented, their representation fell short of census benchmarks, and no other racial/ethnic minority groups were represented, in part a reflection of the demographic makeup of the larger Wake Forest ADRC Clinical Cohort. Sampling from a more diverse population would strengthen the generalizability of the results and include representation of populations for which this burden is often greatest (rural, underserved, and underrepresented groups of individuals; Henning-Smith, 2020; Ruprecht et al., 2021). Larger studies of remote neuropsychological assessment in more diverse cohorts of older adults could allow for the development of both normative data and crosswalks to improve interpretability at the group level and increase the utility of these assessments in the adjudication of cognitive status.

Though many of the remotely administered assessments in this study correlated well with their in-person counterparts, one clear exception was TMT-A. This is consistent with other research demonstrating weak cross-modal correlations and poor group discrimination for OTMT-A, suggesting that the oral and written versions of this task are fundamentally different from one another and likely measure different cognitive constructs (Bastug et al., 2013; Jaywant et al., 2018). If remote assessments that do not clearly correspond to well-validated in-person assessments are to continue to be administered, it will be important to assess what cognitive abilities they do capture and to develop norms or test “crosswalks” that allow these data to be interpreted moving forward.

An additional challenge with remote neuropsychological testing in general is the possibility of non-adherence to standardized test protocols, namely not following standard test instructions (e.g., using external aids such as computers, clocks, or watches; transcribing words on tests meant to be completed orally), which may affect tasks such as list-learning paradigms. Modifying testing scripts to emphasize the importance of not using aids may help. Video administration has an advantage over phone administration because it allows the rater to better monitor the participant’s behavior during testing. Video administration also gives raters more insight into household distractions and the participant’s general test environment.

A lack of standardization in the test–retest window and imbalance in the remote assignment groups may have influenced the results. On average, ~4 months elapsed between participants’ two assessment sessions, with a maximum interval of ~9 months. This long interval allows for changes in cognitive performance due to factors other than testing modality, including disease progression, which potentially influenced test scores and the adjudication process. Further, practice effects on the second exam may also have affected scores, as the order of administration was not randomized. In the future, studies comparing remote neuropsychological assessments to in-person assessments should ideally standardize the test–retest window and randomize participants from each cognitive stratum to receive remote testing either before or after in-person testing. It will also be important to examine longitudinal data to assess how comparable the methods are in detecting cognitive change over time.

CONCLUSION

This study offers preliminary evidence that remote cognitive assessments are feasible for older adults, including individuals with normal cognition and those with milder forms of cognitive impairment, though there are important nuances to consider, such as the selection of tests and modality. Though the COVID-19 pandemic was the initial impetus for increased interest in remote assessment, these findings have additional implications for reaching underrepresented and underserved communities for which in-person testing burden is often highest.

FUNDING

This work was supported by the National Institutes of Health (P30 AG049638 and AG049638-05S1).

CONFLICT OF INTEREST

None declared.

REFERENCES

Alegret, M., Espinosa, A., Ortega, G., Pérez-Cordón, A., Sanabria, Á., Hernández, I., et al. (2021). From face-to-face to home-to-home: Validity of a teleneuropsychological battery. Journal of Alzheimer’s Disease, 81(4), 1541–1553. https://doi.org/10.3233/JAD-201389

Anderson, M., & Perrin, A. (2017, May 17). Technology use among seniors. Pew Research Center. https://www.pewresearch.org/internet/2017/05/17/technology-use-among-seniors/

Babulal, G. M., Quiroz, Y. T., Albensi, B. C., Arenaza-Urquijo, E., Astell, A. J., Babiloni, C., et al. (2019). Perspectives on ethnic and racial disparities in Alzheimer’s disease and related dementias: Update and areas of immediate need. Alzheimer’s & Dementia: The Journal of the Alzheimer’s Association, 15(2), 292–312. https://doi.org/10.1016/j.jalz.2018.09.009

Barton, C., Morris, R., Rothlind, J., & Yaffe, K. (2011). Video-telemedicine in a memory disorders clinic: Evaluation and management of rural elders with cognitive impairment. Telemedicine and e-Health, 17(10), 789–793. https://doi.org/10.1089/tmj.2011.0083

Bastug, G., Ozel-Kizil, E. T., Sakarya, A., Altintas, O., Kirici, S., & Altunoz, U. (2013). Oral Trail Making Task as a discriminative tool for different levels of cognitive impairment and normal aging. Archives of Clinical Neuropsychology, 28(5), 411–417. https://doi.org/10.1093/arclin/act035

Brearly, T. W., Shura, R. D., Martindale, S. L., Lazowski, R. A., Luxton, D. D., Shenal, B. V., et al. (2017). Neuropsychological test administration by videoconference: A systematic review and meta-analysis. Neuropsychology Review, 27(2), 174–186. https://doi.org/10.1007/s11065-017-9349-1

Buzza, C., Ono, S. S., Turvey, C., Wittrock, S., Noble, M., Reddy, G., et al. (2011). Distance is relative: Unpacking a principal barrier in rural healthcare. Journal of General Internal Medicine, 26(Suppl 2), 648–654. https://doi.org/10.1007/s11606-011-1762-1

Castanho, T. C., Amorim, L., Moreira, P. S., Mariz, J., Palha, J. A., Sousa, N., et al. (2016). Assessing cognitive function in older adults using a videoconference approach. eBioMedicine, 11, 278–284. https://doi.org/10.1016/j.ebiom.2016.08.001

Castanho, T. C., Sousa, N., & Santos, N. C. (2017). When new technology is an answer for old problems: The use of videoconferencing in cognitive aging assessment. Journal of Alzheimer’s Disease Reports, 1(1), 15–21. https://doi.org/10.3233/ADR-170007

Caze, T. I., Dorsman, K. A., Carlew, A. R., Diaz, A., & Bailey, K. C. (2020). Can you hear me now? Telephone-based teleneuropsychology improves utilization rates in underserved populations. Archives of Clinical Neuropsychology, 35(8), 1234–1239. https://doi.org/10.1093/arclin/acaa098

Crooks, V. C., Clark, L., Petitti, D. B., Chui, H., & Chiu, V. (2005). Validation of multi-stage telephone-based identification of cognitive impairment and dementia. BMC Neurology, 5(1), 8. https://doi.org/10.1186/1471-2377-5-8

Cullum, M., Hynan, L. S., Grosch, M., Parikh, M., & Weiner, M. F. (2014). Teleneuropsychology: Evidence for video teleconference-based neuropsychological assessment. Journal of the International Neuropsychological Society, 20(10), 1028–1033. https://doi.org/10.1017/S1355617714000873

Friedman, D. B., Foster, C., Bergeron, C. D., Tanner, A., & Kim, S.-H. (2015). A qualitative study of recruitment barriers, motivators, and community-based strategies for increasing clinical trials participation among rural and urban populations. American Journal of Health Promotion, 29(5), 332–338. https://doi.org/10.4278/ajhp.130514-QUAL-247

Gisev, N., Bell, J. S., & Chen, T. F. (2013). Interrater agreement and interrater reliability: Key concepts, approaches, and applications. Research in Social & Administrative Pharmacy, 9(3), 330–338. https://doi.org/10.1016/j.sapharm.2012.04.004

Grosch, M. C., Gottlieb, M. C., & Cullum, C. M. (2011). Initial practice recommendations for teleneuropsychology. The Clinical Neuropsychologist, 25(7), 1119–1133. https://doi.org/10.1080/13854046.2011.609840

Henning-Smith, C. (2020). The unique impact of COVID-19 on older adults in rural areas. Journal of Aging & Social Policy, 32(4–5), 396–402. https://doi.org/10.1080/08959420.2020.1770036

Hunter, M. B., Jenkins, N., Dolan, C., Pullen, H., Ritchie, C., & Muniz-Terrera, G. (2021). Reliability of telephone and videoconference methods of cognitive assessment in older adults with and without dementia. Journal of Alzheimer’s Disease, 81(4), 1625–1647. https://doi.org/10.3233/JAD-210088

Jaywant, A., Barredo, J., Ahern, D. C., & Resnik, L. (2018). Neuropsychological assessment without upper limb involvement: A systematic review of oral versions of the Trail Making Test and Symbol-Digit Modalities Test. Neuropsychological Rehabilitation, 28(7), 1055–1077. https://doi.org/10.1080/09602011.2016.1240699

Joddrell, P., & Astell, A. J. (2016). Studies involving people with dementia and touchscreen technology: A literature review. JMIR Rehabilitation and Assistive Technologies, 3(2), e10. https://doi.org/10.2196/rehab.5788

Kaemmerer, T., & Riordan, P. (2016). Oral adaptation of the Trail Making Test: A practical review. Applied Neuropsychology: Adult, 23(5), 384–389. https://doi.org/10.1080/23279095.2016.1178645

Lacritz, L. H., Carlew, A. R., Livingstone, J., Bailey, K. C., Parker, A., & Diaz, A. (2020). Patient satisfaction with telephone neuropsychological assessment. Archives of Clinical Neuropsychology, 35(8), 1240–1248. https://doi.org/10.1093/arclin/acaa097

Lezak, M. D. (1997). Neuropsychological assessment (3rd ed.). New York: Oxford University Press.

Lichtenstein, J. D., Amato, J. T., Holding, E. Z., Grodner, K. D., Pollock, E. N., Marschall, K. P., et al. (2022). How we work now: Preliminary review of a pediatric neuropsychology hybrid model in the era of COVID-19 and beyond. Archives of Clinical Neuropsychology, 37(1), 40–49. https://doi.org/10.1093/arclin/acab041

Loh, P.-K., Donaldson, M., Flicker, L., Maher, S., & Goldswain, P. (2007). Development of a telemedicine protocol for the diagnosis of Alzheimer’s disease. Journal of Telemedicine and Telecare, 13(2), 90–94. https://doi.org/10.1258/135763307780096159

Marra, D. E., Hamlet, K. M., Bauer, R. M., & Bowers, D. (2020). Validity of teleneuropsychology for older adults in response to COVID-19: A systematic and critical review. The Clinical Neuropsychologist, 34(7–8), 1411–1452. https://doi.org/10.1080/13854046.2020.1769192

Miller, J. B., & Barr, W. B. (2017). The technology crisis in neuropsychology. Archives of Clinical Neuropsychology, 32(5), 541–554. https://doi.org/10.1093/arclin/acx050

Nasreddine, Z. S., Phillips, N. A., Bédirian, V., Charbonneau, S., Whitehead, V., Collin, I., et al. (2005). The Montreal Cognitive Assessment, MoCA: A brief screening tool for mild cognitive impairment. Journal of the American Geriatrics Society, 53(4), 695–699.

Nasreddine, Z. (2019). MoCA Montreal Cognitive Assessment. MoCA Cognition. https://www.mocatest.org/. Accessed April 28, 2020.

Nicosia, F. M., Kaul, B., Totten, A. M., Silvestrini, M. C., Williams, K., Whooley, M. A., et al. (2021). Leveraging telehealth to improve access to care: A qualitative evaluation of veterans’ experience with the VA TeleSleep program. BMC Health Services Research, 21(1), 77. https://doi.org/10.1186/s12913-021-06080-5

Nicosia, J., Aschenbrenner, A. J., Adams, S. L., Tahan, M., Stout, S. H., Wilks, H., et al. (2022). Bridging the technological divide: Stigmas and challenges with technology in digital brain health studies of older adults. Frontiers in Digital Health, 4, 880055. https://doi.org/10.3389/fdgth.2022.880055

Parikh, M., Grosch, M. C., Graham, L. L., Hynan, L. S., Weiner, M., Shore, J. H., et al. (2013). Consumer acceptability of brief videoconference-based neuropsychological assessment in older individuals with and without cognitive impairment. The Clinical Neuropsychologist, 27(5), 808–817. https://doi.org/10.1080/13854046.2013.791723

Ricker, J. H., & Axelrod, B. N. (1994). Analysis of an oral paradigm for the Trail Making Test. Assessment, 1(1), 47–51. https://doi.org/10.1177/1073191194001001007

Ricker, J. H., Axelrod, B. N., & Houtler, B. D. (1996). Clinical validation of the oral Trail Making Test. Neuropsychiatry, Neuropsychology, & Behavioral Neurology, 9(1), 50–53.

Rochette, A. D., Rahman-Filipiak, A., Spencer, R. J., Marshall, D., & Stelmokas, J. E. (2021). Teleneuropsychology practice survey during COVID-19 within the United States. Applied Neuropsychology: Adult, 29(6), 1312–1322. https://doi.org/10.1080/23279095.2021.1872576

Ruprecht, M. M., Wang, X., Johnson, A. K., Xu, J., Felt, D., Ihenacho, S., et al. (2021). Evidence of social and structural COVID-19 disparities by sexual orientation, gender identity, and race/ethnicity in an urban environment. Journal of Urban Health, 98(1), 27–40. https://doi.org/10.1007/s11524-020-00497-9

Salinas, C. M., Bordes Edgar, V., Berrios Siervo, G., & Bender, H. A. (2020). Transforming pediatric neuropsychology through video-based teleneuropsychology: An innovative private practice model pre-COVID-19. Archives of Clinical Neuropsychology, 35(8), 1189–1195. https://doi.org/10.1093/arclin/acaa101

Sherwood, A. R., & MacDonald, B. (2020). A teleneuropsychology consultation service model for children with neurodevelopmental and acquired disorders residing in rural state regions. Archives of Clinical Neuropsychology, 35(8), 1196–1203. https://doi.org/10.1093/arclin/acaa099

Slightam, C., Gregory, A. J., Hu, J., Jacobs, J., Gurmessa, T., Kimerling, R., et al. (2020). Patient perceptions of video visits using Veterans Affairs telehealth tablets: Survey study. Journal of Medical Internet Research, 22(4), e15682. https://doi.org/10.2196/15682

Tailby, C., Collins, A. J., Vaughan, D. N., Abbott, D. F., O’Shea, M., Helmstaedter, C., et al. (2020). Teleneuropsychology in the time of COVID-19: The experience of the Australian Epilepsy Project. Seizure, 83, 89–97. https://doi.org/10.1016/j.seizure.2020.10.005

Vahia, I. V., Ng, B., Camacho, A., Cardenas, V., Cherner, M., Depp, C. A., et al. (2015). Telepsychiatry for neurocognitive testing in older rural Latino adults. The American Journal of Geriatric Psychiatry, 23(7), 666–670. https://doi.org/10.1016/j.jagp.2014.08.006

Wadsworth, H. E., Dhima, K., Womack, K. B., Hart, J., Weiner, M. F., Hynan, L. S., et al. (2018). Validity of teleneuropsychological assessment in older patients with cognitive disorders. Archives of Clinical Neuropsychology, 33(8), 1040–1045. https://doi.org/10.1093/arclin/acx140

Wadsworth, H. E., Galusha-Glasscock, J. M., Womack, K. B., Quiceno, M., Weiner, M. F., Hynan, L. S., et al. (2016). Remote neuropsychological assessment in rural American Indians with and without cognitive impairment. Archives of Clinical Neuropsychology, 31(5), 420–425. https://doi.org/10.1093/arclin/acw030

Weintraub, S., Besser, L., Dodge, H. H., Teylan, M., Ferris, S., Goldstein, F. C., et al. (2018). Version 3 of the Alzheimer Disease Centers’ neuropsychological test battery in the Uniform Data Set (UDS). Alzheimer Disease & Associated Disorders, 32(1), 10–17. https://doi.org/10.1097/WAD.0000000000000223

Williams, M. M., Scharff, D. P., Mathews, K. J., Hoffsuemmer, J. S., Jackson, P., Morris, J. C., et al. (2010). Barriers and facilitators of African American participation in Alzheimer disease biomarker research. Alzheimer Disease and Associated Disorders, 24(Suppl 1), S24–S29. https://doi.org/10.1097/WAD.0b013e3181f14a14

Wilson, R. S., Leurgans, S. E., Foroud, T. M., Sweet, R. A., Graff-Radford, N., Mayeux, R., et al. (2010). Telephone assessment of cognitive function in the late-onset Alzheimer’s disease family study. Archives of Neurology, 67(7), 855–861. https://doi.org/10.1001/archneurol.2010.129

Wittich, W., Phillips, N., Nasreddine, Z. S., & Chertkow, H. (2010). Sensitivity and specificity of the Montreal Cognitive Assessment modified for individuals who are visually impaired. Journal of Visual Impairment & Blindness, 104(6), 360–368. https://doi.org/10.1177/0145482X1010400606

Wynn, M. J., Sha, A. Z., Lamb, K., Carpenter, B. D., & Yochim, B. P. (2020). Performance on the Verbal Naming Test among healthy, community-dwelling older adults. The Clinical Neuropsychologist, 34(5), 956–968. https://doi.org/10.1080/13854046.2019.1683232

Yochim, B. P., Beaudreau, S. A., Kaci Fairchild, J., Yutsis, M. V., Raymond, N., Friedman, L., et al. (2015). Verbal Naming Test for use with older adults: Development and initial validation. Journal of the International Neuropsychological Society, 21(3), 239–248. https://doi.org/10.1017/S1355617715000120

York, M. K., Farace, E., Pollak, L., Floden, D., Lin, G., Wyman-Chick, K., et al. (2021). The global pandemic has permanently changed the state of practice for pre-DBS neuropsychological evaluations. Parkinsonism & Related Disorders, 86, 135–138. https://doi.org/10.1016/j.parkreldis.2021.04.029
