Multiscale Subgraph Adversarial Contrastive Learning

Journal: IEEE Transactions on Neural Networks and Learning Systems
Abstract

Graph contrastive learning (GCL), a typical self-supervised learning paradigm, achieves promising performance without labels and has gradually attracted much attention. Graph-level methods aim to learn a representation of each graph by contrasting two augmented views. Previous studies usually apply contrastive learning to pull the embeddings of augmented views from the same anchor graph (positive pairs) close together and to push apart the embeddings of augmented views from different anchor graphs (negative pairs). However, graph structure is often complex and multiscale, which raises a fundamental question: does this assumption still hold after graph augmentation? Through experimental analysis, we find that the semantics of two augmented graphs from the same anchor graph may be inconsistent, and that whether two augmented graphs form a positive or a negative pair is highly correlated with the multiscale structure of the graph. Based on this observation, we propose a multiscale subgraph contrastive learning method, named MSSGCL, which characterizes fine-grained semantic information. Specifically, we generate global and local views at different scales via subgraph sampling and construct multiple contrastive relationships according to their semantic associations, providing richer self-supervised signals. To further improve the generalization performance of the model, we propose an extended model called MSSGCL++. It adopts an asymmetric structure to avoid pushing apart semantically similar negative samples. We further introduce adversarial training to perturb the augmented views and thus construct a harder self-supervised training task. Finally, training is formulated as a min-max saddle-point problem, and the "free" adversarial training strategy is used to speed up optimization. Extensive experiments and parameter analyses on 16 real-world graph classification datasets confirm the effectiveness of the proposed approach. Compared with state-of-the-art (SOTA) methods, our method achieves improvements of 2% and 1.6% in the unsupervised and transfer learning settings, respectively.
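The abstract names two reusable ingredients: an InfoNCE-style contrast between a global and a local view of each anchor graph, and a "free" adversarial loop in which each backward pass updates both the encoder and an input perturbation. The PyTorch sketch below illustrates only these two mechanics on placeholder tensors; the MLP encoder, the epsilon budget, the replay count m, and the use of pooled feature vectors in place of real subgraph views are all illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def info_nce(z1, z2, tau=0.2):
    # InfoNCE loss: positive pairs sit on the diagonal of the similarity matrix.
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau
    labels = torch.arange(z1.size(0))
    return F.cross_entropy(logits, labels)

# Hypothetical stand-ins: an MLP encoder, plus pooled per-graph feature vectors
# playing the role of the global-view and local-view representations.
encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
x_global = torch.randn(8, 32)
x_local = torch.randn(8, 32)

# "Free"-style min-max loop: the same backward pass drives a descent step on the
# encoder weights and an ascent step on the perturbation of the local view.
delta = torch.zeros_like(x_local, requires_grad=True)
eps, m = 0.05, 3
for _ in range(m):
    loss = info_nce(encoder(x_global), encoder(x_local + delta))
    opt.zero_grad()
    loss.backward()
    opt.step()                           # min step: update the encoder
    with torch.no_grad():                # max step: strengthen the perturbation
        delta += eps * delta.grad.sign()
        delta.clamp_(-eps, eps)
        delta.grad.zero_()
```

Replaying the same minibatch m times is what makes the adversarial step nearly "free": the perturbation gradient is a byproduct of the weight-update backward pass rather than the result of an extra inner attack loop.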

Authors
Yanbei Liu, Yu Zhao, Zhitao Xiao, Lei Geng, Xiao Wang, Yanwei Pang, Jerry Chun-Wei Lin