
BR2Net: Defocus Blur Detection via a Bi-directional Channel Attention Residual Refining Network

Apr 26, 2020  

Title: BR2Net: Defocus Blur Detection via a Bi-directional Channel Attention Residual Refining Network

Authors: Chang Tang, Xinwang Liu, Shan An, Pichao Wang

Source: IEEE Transactions on Multimedia

Published: April 2020

DOI: 10.1109/TMM.2020.2985541

Link: https://ieeexplore.ieee.org/document/9057632


Abstract:

Due to its remarkable potential applications, defocus blur detection, which aims to separate blurred regions from an image, has attracted much attention. Although many methods have made significant progress, several challenges still hinder performance, e.g., confusing background clutter, sensitivity to scale, and loss of boundary detail in the defocus blur regions. To address these issues, in this paper we propose a deep convolutional neural network (CNN) for defocus blur detection via a Bidirectional Residual Refining network (BR2Net). Specifically, a residual learning and refining module (RLRM) is designed to correct prediction errors in the intermediate defocus blur map. We then develop a two-branch bidirectional residual feature refining network by embedding multiple RLRMs to recurrently combine and refine the residual features. One branch refines the residual features from the shallow layers to the deep layers, and the other refines them from the deep layers to the shallow layers. In this manner, both low-level spatial details and high-level semantic information are encoded step by step in two directions to suppress background clutter and enhance the detected region details. The outputs of the two branches are fused to generate the final result. In addition, observing that different feature channels differ in their discriminative power for detecting blurred regions, we add a channel attention module to each feature extraction layer to select more discriminative features for residual learning. To promote further research on defocus blur detection, we create a new dataset with various challenging images and manually annotate their corresponding pixel-wise ground truths. The proposed network is validated on two commonly used defocus blur detection datasets and our newly collected dataset by comparing it with 10 other state-of-the-art methods. Extensive experiments with ablation studies demonstrate that BR2Net consistently and significantly outperforms the competing methods in terms of both efficiency and accuracy.
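
To make the two components named in the abstract more concrete, the following is a minimal sketch, not the authors' released code: it illustrates an SE-style channel attention gate and a single residual refining step that corrects an intermediate blur map, of the kind the abstract describes. The module names, channel sizes, network depth, and the use of PyTorch are assumptions for illustration only.

import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Reweight feature channels so more discriminative ones are emphasized."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # global average pool per channel
        self.gate = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),  # per-channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.gate(self.pool(x))


class ResidualRefine(nn.Module):
    """Predict a residual correction for an intermediate defocus blur map."""

    def __init__(self, channels: int):
        super().__init__()
        self.attn = ChannelAttention(channels)
        self.residual = nn.Sequential(
            nn.Conv2d(channels + 1, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, kernel_size=3, padding=1),  # 1-channel residual map
        )

    def forward(self, feat: torch.Tensor, blur_map: torch.Tensor) -> torch.Tensor:
        feat = self.attn(feat)  # select discriminative channels before residual learning
        res = self.residual(torch.cat([feat, blur_map], dim=1))
        return blur_map + res  # refined blur map


if __name__ == "__main__":
    feat = torch.randn(1, 64, 80, 80)                   # hypothetical backbone feature map
    coarse = torch.sigmoid(torch.randn(1, 1, 80, 80))   # hypothetical coarse blur map
    refined = ResidualRefine(64)(feat, coarse)
    print(refined.shape)  # torch.Size([1, 1, 80, 80])

In the paper's described design, several such refining steps are chained in two directions (shallow-to-deep and deep-to-shallow) and the two branch outputs are fused; the sketch above shows only one refinement at a single scale.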

