O. Tan, J.Z. Wu, K.K. Yeo, S.Y.A. Lee, J.S. Hon, P. Chiang, Y.H. Lau, R. Moh, J.M. Fam, Building a predictive model for warfarin dosing via machine learning, European Heart Journal, Volume 41, Issue Supplement_2, November 2020, ehaa946.3491, https://doi.org/10.1093/ehjci/ehaa946.3491
Abstract
Warfarin titration via International Normalised Ratio (INR) monitoring can be challenging for patients on long-term anticoagulation owing to the drug's multiple interactions, long half-life and variable individual response, yet it is critically important given warfarin's narrow therapeutic index. Machine learning may have a role in learning and replicating existing warfarin prescribing practices, and could potentially be incorporated into an automated warfarin titration model.
We aim to explore the feasibility of using machine learning to develop a model that can learn and predict actual warfarin titration practices.
A retrospective dataset of 4,247 patients with 48,895 INR data points was obtained from our institutional database. Patients with fewer than 5 recorded visits, or with invalid or missing values, were excluded. Variables studied included age, warfarin indication, warfarin dose, target INR range, actual INR values, time between titrations and time in therapeutic range (TTR, as defined by the Rosendaal formula).
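The abstract does not spell out the TTR computation; below is a minimal Python sketch of the Rosendaal linear-interpolation approach it references, assuming a daily interpolated INR between consecutive measurements and a single target range (the visit dates and INR values shown are illustrative, not study data).

```python
from datetime import date

def time_in_therapeutic_range(dates, inrs, low, high):
    """TTR by Rosendaal linear interpolation: estimate a daily INR
    between consecutive measurements, then return the percentage of
    days on which the interpolated INR falls within [low, high]."""
    days_total = days_in_range = 0
    for i in range(len(dates) - 1):
        span = (dates[i + 1] - dates[i]).days
        if span <= 0:
            continue  # skip same-day or out-of-order entries
        for day in range(span):
            # Linearly interpolated INR on each day of the interval
            inr = inrs[i] + (inrs[i + 1] - inrs[i]) * day / span
            days_in_range += low <= inr <= high
        days_total += span
    return 100.0 * days_in_range / days_total if days_total else 0.0

# Illustrative visits only: target INR range 2.0-3.0
dates = [date(2020, 1, 1), date(2020, 1, 15), date(2020, 2, 1)]
inrs = [1.8, 2.5, 3.4]
print(f"TTR = {time_in_therapeutic_range(dates, inrs, 2.0, 3.0):.1f}%")
```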
The machine learning model was developed on an unbiased training set (1,805 patients), refined on a handpicked, balanced validation set (400 patients), and then evaluated on two balanced test sets of 100 patients each. The test sets were handpicked according to TTR (“in vs out of range”) and stability of INR results (“low vs high fluctuation”) (Table 1). Given the time-series nature of the data, a Recurrent Neural Network (RNN) was chosen to learn warfarin prescription practices. Long short-term memory (LSTM) cells were employed to address the time gaps between warfarin titration visits, which could otherwise result in vanishing gradients.
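The framework, feature set and hyperparameters are not stated in the abstract; the following Keras sketch shows one plausible shape of such a model, where N_FEATURES, the layer width of 64 and the zero-padding scheme are all assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from tensorflow.keras import layers, models

# Hypothetical shapes: each patient is a sequence of visits, each visit
# a feature vector (e.g. age, indication encoding, current dose, INR,
# target range, days since last visit). None allows variable lengths.
N_FEATURES = 12  # assumed, not stated in the abstract

model = models.Sequential([
    layers.Input(shape=(None, N_FEATURES)),
    layers.Masking(mask_value=0.0),          # ignore zero-padded visits
    layers.LSTM(64, return_sequences=True),  # LSTM cells mitigate vanishing gradients
    layers.TimeDistributed(layers.Dense(1)), # predicted weekly dose at each visit
])
model.compile(optimizer="adam", loss="mse")

# Toy batch: 8 patients, up to 20 visits each, zero-padded to length 20
x = np.random.rand(8, 20, N_FEATURES).astype("float32")
y = np.random.rand(8, 20, 1).astype("float32")
model.fit(x, y, epochs=1, verbose=0)
```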
A total of 2,163 patients with 42,622 data points were studied (mean age 65±11.7 years, 54.7% male). The mean TTR was 65.4%. The total warfarin dose per week predicted by the RNN was compared with the actual total warfarin dose per week prescribed for each patient in the test sets. The coefficients of determination for the RNN on the “in vs out of range” and “low vs high fluctuation” test sets were 0.941 and 0.962, respectively (Figure 1).
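For concreteness, this is how the reported comparison could be computed, assuming the coefficient of determination in its usual regression sense (one predicted and one actual weekly dose per patient); the dose values below are made up for illustration and are not the study's data.

```python
import numpy as np

def r_squared(actual, predicted):
    """Coefficient of determination: 1 - SS_res / SS_tot, comparing
    predicted against actual total weekly warfarin doses."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    ss_res = np.sum((actual - predicted) ** 2)
    ss_tot = np.sum((actual - actual.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Hypothetical weekly doses in mg, for illustration only
actual = [14.0, 21.0, 35.0, 28.0, 17.5]
predicted = [13.5, 22.0, 33.0, 29.0, 18.0]
print(f"R^2 = {r_squared(actual, predicted):.3f}")
```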
This proof-of-concept study demonstrated that an RNN-based machine learning model was able to learn and predict warfarin dosage changes with reasonable accuracy. The findings merit further evaluation of the potential use of machine learning in an automated warfarin titration model.
Table 1. Description of the test sets

Test set 1 (n=100) | 1A (“in range”): 50 patients with TTR >80% | 1B (“out of range”): 50 patients with TTR <20%
Test set 2 (n=100) | 2A (“low fluctuation”): 50 patients with INR fluctuation <1.5 standard deviations | 2B (“high fluctuation”): 50 patients with INR fluctuation >1.5 standard deviations
Type of funding source: None