Figure 6
The procedures of ICG feature selection and the performance of the ICGe model for the prediction of ICB response. (A) Ranking algorithms for ICG feature selection in 33 TCGA CTs. ICG features consistently associated with the nine IP scores in more than 10 CTs were selected as candidate features. Further screening with the hill climbing algorithm was then performed to build a prediction model containing five features: the GSVA scores of activated ICGs, Pair 13 (CD209, BTN2A1), and Pair 3 (ICOS, ICOSLG), and the expression fractions of HAVCR2 relative to TIC-ICGs and TIGIT relative to IC-ICGs. (B) The performance of the ICGe model on validation datasets. Legends show the basic information for each dataset; n/m indicates the number of responders/non-responders in each dataset. (C) Comparison of ICGe with other published methods using the mean AUC, defined as the average AUC over 5-fold cross-validation with 100 repetitions. In each fold, 60% of samples were randomly selected as the training set to train the model, and the remaining samples were used as the test set to assess the AUC; ICGe itself was not further trained.
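The evaluation protocol described for panel (C) amounts to repeated random 60/40 train/test splits scored by AUC. The sketch below is a minimal illustration of that protocol, not the authors' code: it assumes a numeric feature matrix X, binary response labels y, and comparator methods exposed through a scikit-learn-style fit/predict_proba interface. The function name mean_auc, the build_model factory, and the logistic-regression placeholder are all hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def mean_auc(X, y, build_model, n_repeats=100, n_folds=5,
             train_frac=0.6, seed=0):
    """Mean AUC over repeated random 60/40 splits, as in panel (C).

    build_model must return a fresh estimator with fit/predict_proba;
    the exact classifiers behind the published comparators are not
    specified in the caption, so this is an assumption.
    """
    rng = np.random.default_rng(seed)
    n = len(y)
    aucs = []
    for _ in range(n_repeats):
        for _ in range(n_folds):
            # Randomly pick 60% of samples for training, rest for testing.
            train = rng.choice(n, size=int(train_frac * n), replace=False)
            test = np.setdiff1d(np.arange(n), train)
            model = build_model().fit(X[train], y[train])
            scores = model.predict_proba(X[test])[:, 1]
            aucs.append(roc_auc_score(y[test], scores))
    return float(np.mean(aucs))

# Example with a placeholder comparator:
# mean_auc(X, y, lambda: LogisticRegression(max_iter=1000))
```

On this reading, each comparator method is refit on the training portion of every split, whereas the fixed five-feature ICGe model is only scored on the test portion, consistent with the caption's note that ICGe was not further trained.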
