Model | Precision ↑ | Coverage ↑ | Accuracy ↑ | Absolute true ↑ | Absolute false ↓
---|---|---|---|---|---
iMFP-LG | **0.797** | **0.803** | **0.796** | **0.788** | **0.078**
w/o adᵃ | 0.777 | 0.785 | 0.776 | 0.767 | 0.082
w/o GATᵇ | 0.785 | 0.791 | 0.784 | 0.776 | 0.080
w/o pretrainᶜ | 0.754 | 0.769 | 0.752 | 0.733 | 0.095
Note: The highest values are highlighted in bold. ↑ indicates that a larger value is better on this metric; ↓ indicates that a smaller value is better. ᵃw/o ad is a variant in which adversarial training is not used during the training process; ᵇw/o GAT is a variant without the GAT; ᶜw/o pretrain is a variant in which the protein language model is randomly initialized instead of using pre-trained weights. GAT, graph attention network; MFBP, multi-functional bioactive peptide.
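For reference, the five columns above are example-based multi-label metrics. Below is a minimal sketch of how such metrics are commonly computed in MFBP prediction work; the exact definitions used by the authors are not reproduced here, so these formulas (and the helper name `multilabel_metrics`) are assumptions for illustration only.

```python
import numpy as np

def multilabel_metrics(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """Example-based multi-label metrics (assumed standard definitions).

    y_true, y_pred: binary matrices of shape (n_samples, n_labels).
    """
    y_true = y_true.astype(bool)
    y_pred = y_pred.astype(bool)
    inter = (y_true & y_pred).sum(axis=1)   # |Y_i ∩ Ŷ_i| per sample
    union = (y_true | y_pred).sum(axis=1)   # |Y_i ∪ Ŷ_i| per sample
    n_labels = y_true.shape[1]

    # Guard against division by zero for samples with empty label sets.
    pred_sz = np.maximum(y_pred.sum(axis=1), 1)
    true_sz = np.maximum(y_true.sum(axis=1), 1)

    precision = np.mean(inter / pred_sz)                       # higher is better
    coverage = np.mean(inter / true_sz)                        # higher is better
    accuracy = np.mean(inter / np.maximum(union, 1))           # higher is better
    absolute_true = np.mean((y_true == y_pred).all(axis=1))    # exact-match ratio
    absolute_false = np.mean((union - inter) / n_labels)       # lower is better

    return {
        "Precision": precision,
        "Coverage": coverage,
        "Accuracy": accuracy,
        "Absolute true": absolute_true,
        "Absolute false": absolute_false,
    }
```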