Figure 1
Overview of the proposed scMMN method. (a) The raw scRNA-seq data are preprocessed in the data preprocessing module. In the multi-view construction module, shallow views are constructed using two different graph learning methods, and deep features are constructed using several GLFs. The learned views are then fed into the representation learning module, which first learns latent representations through GCNs and then guides model training through feature and structural contrastive learning. (b) The structural consistency contrastive learning module computes the contrastive loss between the self-loop adjacency matrix $\hat{A}^{i}$ and the learned adjacency matrix $S_{ij}^{(a,b)}$. (c) The feature fusion module employs an MLP to fuse the multi-view data and learn a common representation for the final clustering.
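The structural consistency contrastive step described in panel (b) can be illustrated with a minimal sketch. This is not the authors' implementation: the InfoNCE-style form, the function name, and the temperature parameter `tau` are all assumptions; it only shows the general idea of treating each cell's self-loop pairing across two views as the positive pair and all other pairs as negatives.

```python
import numpy as np

def structural_contrastive_loss(Z_a, Z_b, tau=0.5):
    """Illustrative cross-view contrastive loss (assumed InfoNCE form).

    Z_a, Z_b: (n_cells, d) latent representations of the same cells
    from two views. For cell i, the matching cell i in the other view
    (the self-loop entry of the adjacency matrix) is the positive;
    every other cell is a negative.
    """
    # Row-normalize so the dot product is cosine similarity.
    Za = Z_a / np.linalg.norm(Z_a, axis=1, keepdims=True)
    Zb = Z_b / np.linalg.norm(Z_b, axis=1, keepdims=True)
    # Learned cross-view similarity matrix, exponentiated with temperature.
    S = np.exp(Za @ Zb.T / tau)
    # Diagonal entries correspond to the self-loop positives.
    pos = np.diag(S)
    # Average negative log-ratio of positive to all pairs per cell.
    return float(np.mean(-np.log(pos / S.sum(axis=1))))
```

In practice such a loss would be computed on GCN outputs inside the training loop and combined with the feature contrastive term before backpropagation.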

