News


Split Depth-wise Separable Graph Convolution Network for Road Extraction in Complex Environment from High-resolution Remote Sensing Imagery

Dec 7, 2021  


Authors: Gaodian Zhou, Weitao Chen*, Qianshan Gui, Xianju Li, Lizhe Wang

Source: IEEE Transactions on Geoscience and Remote Sensing

DOI: 10.1109/TGRS.2021.3128033

Available online: 15 November 2021

Link: https://ieeexplore.ieee.org/abstract/document/9614130

Link to the code: https://github.com/tist0bsc/SGCN

Link to the dataset (Google Drive): https://drive.google.com/drive/folders/1ySCIbjRgzZVLiKjcE-AWxfqHmSWJy0ZF?usp=sharing


Abstract:

Road information from high-resolution remote sensing images is widely used in various fields, and deep-learning-based methods have demonstrated strong road-extraction performance. However, for detecting roads sealed with tarmac or covered by trees in high-resolution remote sensing images, several challenges still limit extraction accuracy: 1) large intra-class differences between roads, and unclear inter-class differences between urban objects, especially roads and buildings; 2) roads occluded by trees, shadows, and buildings are difficult to extract; and 3) a lack of high-precision remote-sensing road datasets. To increase the accuracy of road extraction from high-resolution remote-sensing images, we propose a split depth-wise (DW) separable graph convolutional network (SGCN). First, we split the DW-separable convolution to obtain channel and spatial features, enhancing the expression ability of road features. Next, we present a graph convolutional network to capture global contextual road information in the channel and spatial features. The Sobel gradient operator is used to construct an adjacency matrix of the feature graph. A total of thirteen deep-learning networks were compared with the proposed SGCN on the Massachusetts Roads Dataset, and nine on our self-constructed mountain road dataset. Our model achieved a mean Intersection over Union (IoU) of 81.65% with an F1-score of 78.99% on the Massachusetts Roads Dataset, and a mean IoU of 62.45% with an F1-score of 45.06% on our proposed dataset. The visualization results show that the SGCN performs better in extracting occluded and tiny roads and can effectively extract roads from high-resolution remote-sensing images.
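To illustrate the idea of building a feature-graph adjacency matrix from Sobel gradients, as the abstract describes, the sketch below uses NumPy to compute a Sobel gradient magnitude over a 2-D feature map and then connect pixel nodes whose gradients are similar. This is a minimal, hypothetical sketch: the Gaussian similarity kernel, the `sigma` parameter, and the dense pixel-level graph are our illustrative assumptions, not the SGCN's actual construction (see the linked code for the authors' implementation).

```python
import numpy as np

# 3x3 Sobel kernels for horizontal and vertical gradients
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def sobel_gradient(feat):
    """Gradient magnitude of a 2-D feature map via 3x3 Sobel kernels."""
    h, w = feat.shape
    padded = np.pad(feat, 1, mode="edge")
    gx = np.zeros_like(feat)
    gy = np.zeros_like(feat)
    for i in range(h):
        for j in range(w):
            win = padded[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(win * SOBEL_X)
            gy[i, j] = np.sum(win * SOBEL_Y)
    return np.hypot(gx, gy)

def gradient_adjacency(feat, sigma=1.0):
    """Dense adjacency over flattened pixels: nodes with similar Sobel
    gradient magnitudes get strong edges (Gaussian similarity kernel).
    Returns the symmetrically normalized matrix D^{-1/2} A D^{-1/2},
    the standard form fed to a graph convolution layer."""
    g = sobel_gradient(feat).ravel()
    diff = g[:, None] - g[None, :]
    A = np.exp(-(diff ** 2) / (2 * sigma ** 2))
    d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
    return A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

# Toy 4x4 feature map -> 16-node graph
feat = np.arange(16, dtype=float).reshape(4, 4)
A_hat = gradient_adjacency(feat)
print(A_hat.shape)  # (16, 16)
```

In a full model, `A_hat` would be combined with node features (e.g. the channel and spatial features from the split DW-separable convolution) inside a graph convolution layer to propagate global road context.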

