Table 2. Pathways for new use cases.

Onboarding new use cases

  Intake and onboarding: Users submit a project intake request to be onboarded to HDAP.

  Data access
    Currently available notes: Onboarded users have immediate access to the centralized and cleaned notes that we currently process (TIU, pathology, radiology) (from step 1).
    New types of notes: HDAP already has daily updates of all major free-text notes in VA; however, new types of notes can also be uploaded, if external, or pulled into our daily ETL processes, if they already exist elsewhere in our systems.

  Data annotation: A supported data annotation environment (including eHOST) is available for use across projects, with set-up and access supported by our internal HDAP data science team. Scripts and workflows are also available for document prepping, content loading, and schema conversion, to send data to and from eHOST schemas (see the conversion sketch after this table). For the SDoH use case, a public website is used to support annotation and provides guidelines and concept definitions. This is the current link: http://ec2-18-206-230-88.compute-1.amazonaws.com/wordpress/?page_id=556

  Model training: We provide code and workflows for model training and fine-tuning from data annotation outputs (see the fine-tuning sketch after this table), as well as a range of open-source models that have been brought into HDAP.

Productionizing new use cases

  Current downstream locations (HDAP and CDW): When text processing algorithms are ready for productionization, they can be included in the current daily model-running workflows and ETLs, with support from our HDAP data science team.

  New downstream operational locations: When necessary, additional ETLs to new locations can be established by the HDAP data science team.
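The schema-conversion scripts referenced in the Data annotation row are internal to HDAP and are not reproduced here. As an illustration only, the sketch below shows one way annotated spans could be flattened out of eHOST's Knowtator-style XML export (.knowtator.xml) into simple records for downstream use; the element and attribute names are assumptions about that export format, and AnnotatedSpan and load_ehost_annotations are hypothetical names.

```python
# Hypothetical sketch: flatten an eHOST (Knowtator-style) XML export into
# simple span records. The element/attribute names are assumptions about the
# export format and may need adjusting to a given project's schema.
import xml.etree.ElementTree as ET
from dataclasses import dataclass
from typing import List


@dataclass
class AnnotatedSpan:
    document_id: str  # source note the annotation came from
    label: str        # concept class assigned by the annotator
    start: int        # character offset where the span begins
    end: int          # character offset where the span ends
    text: str         # the annotated text itself


def load_ehost_annotations(xml_path: str) -> List[AnnotatedSpan]:
    """Parse one .knowtator.xml file into a list of AnnotatedSpan records."""
    root = ET.parse(xml_path).getroot()
    document_id = root.get("textSource", xml_path)

    # classMention elements map a mention id to its concept label.
    label_by_mention = {
        cm.get("id"): cm.find("mentionClass").get("id")
        for cm in root.findall("classMention")
        if cm.find("mentionClass") is not None
    }

    spans = []
    for ann in root.findall("annotation"):
        mention, span = ann.find("mention"), ann.find("span")
        if mention is None or span is None:
            continue
        spanned_text = ann.find("spannedText")
        spans.append(AnnotatedSpan(
            document_id=document_id,
            label=label_by_mention.get(mention.get("id"), "UNKNOWN"),
            start=int(span.get("start")),
            end=int(span.get("end")),
            text=spanned_text.text if spanned_text is not None else "",
        ))
    return spans
```

Records in this shape can then be converted to whatever schema a particular annotation or training tool expects, which is the role the document-prepping and schema-conversion scripts play in the workflow above.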
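Likewise, the model-training code and workflows are internal to HDAP; the minimal sketch below only illustrates the general fine-tuning pattern once annotation outputs have been converted to labeled text snippets, using the open-source Hugging Face transformers and datasets libraries. The model name (bert-base-uncased), the SDoH-style label set, and the two example records are placeholders, not HDAP's actual configuration.

```python
# Hypothetical sketch: fine-tune an open-source transformer on labeled snippets
# derived from annotation output. Model, labels, and records are placeholders.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

LABELS = ["NO_HOUSING_INSTABILITY", "HOUSING_INSTABILITY"]  # placeholder labels

# Labeled snippets as produced by an annotation-conversion step (placeholder data).
records = [
    {"text": "Patient reports stable housing with family.", "label": 0},
    {"text": "Currently staying in a shelter, seeking housing assistance.", "label": 1},
]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(LABELS)
)

def tokenize(batch):
    # Pad to a fixed length so the default collator can batch the examples.
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=128)

train_dataset = Dataset.from_list(records).map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetune-output",
                           num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=train_dataset,
)
trainer.train()
```

In practice the labeled records would come from the annotation outputs described above, and the base model would be one of the open-source models already brought into HDAP.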