| Onboarding new use cases | | |
|---|---|---|
| Intake and onboarding | | Users submit a project intake request to be onboarded to HDAP. |
| Data access | Currently available notes | Onboarded users have immediate access to the centralized and cleaned notes that we currently process (TIU, pathology, radiology) (from step 1). |
| | New types of notes | HDAP already receives daily updates of all major free-text notes in VA; however, new types of notes can also be uploaded, if external, or pulled into our daily ETL processes, if they already exist elsewhere in our systems. |
| Data annotation | | A supported data annotation environment (including eHOST) is available for use across projects, with set-up and access supported by our internal HDAP data science team. Scripts and workflows are also available for document preparation, content loading, and schema conversion, to send data to and from eHOST schemas. For the SDoH use case, a public website supports annotation by providing guidelines and concept definitions. This is the current link: http://ec2-18-206-230-88.compute-1.amazonaws.com/wordpress/?page_id=556 |
| Model training | | We provide code and workflows for model training and fine-tuning from data annotation outputs, as well as a range of open-source models that have been brought into HDAP. |
| Productionizing new use cases | Current downstream locations (HDAP and CDW) | When text processing algorithms are ready for productionization, they can be included in the current daily model-running workflows and ETLs, with support from our HDAP data science team. |
| | New downstream operational locations | When necessary, additional ETLs to new locations can be established by the HDAP data science team. |
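The schema-conversion step above can be sketched in code. eHOST typically exports annotations as Knowtator-style XML; the snippet below flattens such an export into a list of labeled spans suitable for downstream training. The element and attribute names (`annotation`, `classMention`, `mentionClass`, `span`) follow the common Knowtator layout but are assumptions here, so verify them against an actual eHOST export before reuse.

```python
# Hedged sketch: flatten an eHOST/Knowtator-style XML export into
# labeled (text, start, end, label) records. Element names are assumed.
import xml.etree.ElementTree as ET

SAMPLE = """<annotations>
  <annotation>
    <mention id="m1"/>
    <span start="10" end="17"/>
    <spannedText>housing</spannedText>
  </annotation>
  <classMention id="m1">
    <mentionClass id="SDoH_Housing">housing</mentionClass>
  </classMention>
</annotations>"""

def parse_knowtator(xml_text):
    root = ET.fromstring(xml_text)
    # Map each mention id to its concept class label.
    labels = {cm.get("id"): cm.find("mentionClass").get("id")
              for cm in root.findall("classMention")}
    records = []
    for ann in root.findall("annotation"):
        mention_id = ann.find("mention").get("id")
        span = ann.find("span")
        records.append({
            "text": ann.findtext("spannedText"),
            "start": int(span.get("start")),
            "end": int(span.get("end")),
            "label": labels.get(mention_id),
        })
    return records

print(parse_knowtator(SAMPLE))
```

A converter in the opposite direction (project data into eHOST schemas) would emit the same XML structure from these records.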
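To illustrate the model-training step, annotation outputs like the labeled spans above are commonly converted to token-level BIO tags before fine-tuning a token-classification model. The sketch below uses naive whitespace tokenization for clarity; a real pipeline would use the chosen model's own tokenizer, and the `Housing` label is a hypothetical example concept, not a concept from the actual SDoH schema.

```python
# Hedged sketch: convert character-span annotations into token-level
# BIO tags, a common input format for fine-tuning NER-style models.

def spans_to_bio(text, spans):
    """spans: list of (start, end, label) character offsets into text."""
    tokens, tags = [], []
    pos = 0
    for token in text.split():
        start = text.index(token, pos)  # locate token's char offset
        end = start + len(token)
        pos = end
        tag = "O"
        for s, e, label in spans:
            if start >= s and end <= e:
                # First token of the span gets B-, the rest get I-.
                tag = ("B-" if start == s else "I-") + label
                break
        tokens.append(token)
        tags.append(tag)
    return tokens, tags

tokens, tags = spans_to_bio(
    "patient reports unstable housing situation",
    [(16, 32, "Housing")],  # span covering "unstable housing"
)
print(list(zip(tokens, tags)))
```

The resulting token/tag pairs can then feed any open-source sequence-labeling model brought into the platform.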