
CUG Team Publishes Paper in Remote Sensing

Nov 25, 2019  

The paper “Multimodal and Multi-Model Deep Fusion for Fine Classification of Regional Complex Landscape Areas Using ZiYuan-3 Imagery” was published in Remote Sensing on November 19, 2019. It is an achievement of Associate Professor CHEN Weitao’s team from the School of Computer Science. The first author is Associate Professor LI Xianju, and CHEN Weitao is the corresponding author.

Land cover classification (LCC) of complex landscapes is attractive to the remote sensing community but poses great challenges. In complex open-pit mining and agricultural development landscapes (CMALs), landscape-specific characteristics limit the accuracy of LCC, and the combination of traditional feature engineering and machine learning algorithms (MLAs) is not sufficient. Deep belief network (DBN) methods have achieved success in some remote sensing applications because of their excellent unsupervised feature-learning ability, but their usability for LCC of complex landscapes and for integrating multimodal inputs has not been investigated.

In this work, a novel multimodal and multi-model deep fusion strategy based on DBN was developed and tested for fine LCC (FLCC) of CMALs in a 109.4 km2 area of Wuhan City, China. First, low-level, multimodal spectral–spatial and topographic features derived from ZiYuan-3 imagery were extracted and fused. The fused features were then input into a DBN for deep feature learning, and the learned deep features were fed to random forest and support vector machine (SVM) algorithms for classification. Comparative experiments were conducted using the deep features with a softmax classifier and the low-level features with MLAs. Five groups of training, validation, and test sets, with some spatial auto-correlation, were used, and a spatially independent test set and generalized McNemar tests were also employed to assess accuracy.

The fused DBN-SVM model achieved overall accuracies (OAs) of 94.74% ± 0.35% in FLCC and 81.14% in LCC, significantly outperforming almost all other models; only three of the twenty land cover types had accuracies below 90%. In general, the developed model can contribute to FLCC and LCC in CMALs, and more deep learning-based models should be investigated in the future for FLCC and LCC of complex landscapes.
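For readers curious how such a two-stage pipeline looks in practice, below is a minimal sketch in Python with scikit-learn: stacked restricted Boltzmann machines (the building blocks of a DBN) stand in for the unsupervised feature-learning stage, followed by an SVM classifier. The data, layer sizes, and hyperparameters are illustrative placeholders and do not reproduce the paper's actual implementation or ZiYuan-3 features.

```python
# Illustrative sketch only: stacked BernoulliRBM layers (DBN building blocks)
# learn features without supervision; an SVM then classifies the learned
# features. All inputs and settings below are placeholder assumptions.
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

# Placeholder inputs: 500 samples of 30 fused low-level features, 20 land cover classes
rng = np.random.default_rng(0)
X = rng.random((500, 30))
y = rng.integers(0, 20, size=500)

model = Pipeline([
    ("scale", MinMaxScaler()),  # RBMs expect inputs scaled to [0, 1]
    ("rbm1", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=20, random_state=0)),
    ("svm", SVC(kernel="rbf", C=10.0, gamma="scale")),  # classify on the learned features
])

model.fit(X, y)
print("training accuracy:", model.score(X, y))
```

In the same hedged spirit, swapping the final SVC for sklearn.ensemble.RandomForestClassifier would mirror the DBN-RF variant compared in the paper.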

Figure 1. ZiYuan-3 fused imagery and location of the study area and field samples.

Figure 2. Schematic diagram of a restricted Boltzmann machine.

Figure 3. The structure of a deep belief network. RBM: restricted Boltzmann machine.


Link: https://www.mdpi.com/2072-4292/11/22/2716

