Multi-Channel Disentangled Graph Neural Networks with Different Types of Self-constraints

Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Published:
Abstract

Graph Neural Networks (GNNs) are popular semi-supervised graph representation learning methods whose performance strongly relies on the quality and quantity of labeled nodes. Given the scarcity of labeled nodes in many real applications, many multi-channel GNNs have been developed to extract self-supervised information by leveraging consistency and complementarity among augmented graphs from different channels. However, these methods often struggle to balance conflicting self-supervised constraints, enhancing certain types of information at the expense of others. To tackle this problem, we propose a Multi-channel Disentangled Graph Neural Network (MD-GraphNet), which effectively separates self-supervised constraints by learning disentangled representations. Specifically, our model enforces consistency constraints on shared representations, graph reconstruction constraints on complementary (or private) representations, and alignment constraints on fused representations. This design prevents different types of self-supervised signals from being confused or lost. Experimental results on benchmark datasets demonstrate the effectiveness of MD-GraphNet for semi-supervised node classification.
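The three constraint types described above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the representation matrices, the inner-product decoder for reconstruction, and the averaging scheme for the fused representation are all hypothetical placeholders chosen only to show how consistency, reconstruction, and alignment terms could combine into one objective.

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, dim = 6, 4

# Hypothetical shared/private representations from two channels
shared_a = rng.normal(size=(n_nodes, dim))
shared_b = rng.normal(size=(n_nodes, dim))
private_a = rng.normal(size=(n_nodes, dim))

# Hypothetical symmetric adjacency matrix of the input graph
adj = (rng.random((n_nodes, n_nodes)) < 0.3).astype(float)
adj = np.maximum(adj, adj.T)

def consistency_loss(h1, h2):
    # Pull the shared representations of the two channels together
    return np.mean((h1 - h2) ** 2)

def reconstruction_loss(h, adj):
    # Inner-product decoder: sigmoid(h h^T) should match the adjacency,
    # so private representations retain channel-specific graph structure
    probs = 1.0 / (1.0 + np.exp(-(h @ h.T)))
    eps = 1e-9
    return -np.mean(adj * np.log(probs + eps)
                    + (1 - adj) * np.log(1 - probs + eps))

def alignment_loss(fused, target):
    # Align the fused representation with a channel representation
    return np.mean((fused - target) ** 2)

# A simple fusion (mean of shared representations) for illustration
fused = 0.5 * (shared_a + shared_b)

total = (consistency_loss(shared_a, shared_b)
         + reconstruction_loss(private_a, adj)
         + alignment_loss(fused, shared_a))
print(float(total))
```

In a real model each term would be weighted and the representations would come from trainable GNN encoders; the sketch only shows that the three constraints act on disjoint representation groups, which is how the paper avoids conflicting self-supervised signals.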

Authors
Zhuomin Liang, Liang Bai, Xian Yang, Jiye Liang