Self-Supervised Learning With Adaptive Distillation for Hyperspectral Image Classification
Hyperspectral image (HSI) classification is an important topic in remote sensing, with a wide range of applications in geoscience. Recently, deep learning-based methods have been widely applied to HSI classification. However, owing to the scarcity of labeled samples in HSIs, the potential of deep learning-based methods has not been fully exploited. To address this problem, a self-supervised learning (SSL) method with adaptive distillation is proposed to train a deep neural network with abundant unlabeled samples. The proposed method consists of two modules: adaptive knowledge distillation with spatial–spectral similarity, and 3-D transformation of HSI cubes. The adaptive knowledge distillation module trains the network with self-supervised information, namely adaptive soft labels generated by spatial–spectral similarity measurement, and proceeds in three steps. First, the similarity between each unlabeled sample and every object class is computed from the spatial–spectral joint distance (SSJD) between the unlabeled sample and the labeled samples. Second, an adaptive soft label is generated for each unlabeled sample to measure the probability that it belongs to each object class. Third, a progressive convolutional network (PCN) is trained by minimizing the cross-entropy between the adaptive soft labels and the class probabilities produced by the forward pass of the PCN. The 3-D transformation module rotates each HSI cube in both the spectral and spatial domains to fully exploit the labeled samples. Experiments on three public HSI data sets demonstrate that the proposed method outperforms existing state-of-the-art methods.
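The soft-label generation described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the exact SSJD formula is not given here, so the combination of spectral and spatial distances (with a hypothetical weight `beta`), the per-class minimum-distance aggregation, and the softmax temperature `tau` are all assumptions made for the sketch.

```python
import numpy as np

def ssjd(x_u, pos_u, x_l, pos_l, beta=0.1):
    """Spatial-spectral joint distance between an unlabeled and a labeled sample.

    Hypothetical form: Euclidean spectral distance plus a weighted Euclidean
    distance between pixel coordinates (the paper's exact SSJD may differ).
    """
    spectral = np.linalg.norm(x_u - x_l)
    spatial = np.linalg.norm(pos_u - pos_l)
    return spectral + beta * spatial

def adaptive_soft_label(x_u, pos_u, labeled_x, labeled_pos, labeled_y,
                        n_classes, tau=1.0):
    """Adaptive soft label for one unlabeled sample.

    For each class, take the minimum SSJD to that class's labeled samples,
    then turn negative distances into a probability vector with a softmax
    (temperature `tau` is an assumed hyperparameter).
    """
    d_class = np.full(n_classes, np.inf)
    for x_l, p_l, y in zip(labeled_x, labeled_pos, labeled_y):
        d_class[y] = min(d_class[y], ssjd(x_u, pos_u, x_l, p_l))
    logits = -d_class / tau
    exp = np.exp(logits - logits.max())  # stable softmax
    return exp / exp.sum()
```

The resulting vector can then serve as the distillation target in the cross-entropy loss between the soft labels and the PCN's predicted class probabilities.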