Title: SACNN: Self-Attention Convolutional Neural Network for Low-Dose CT Denoising With Self-Supervised Perceptual Loss Network
Authors: Li, Meng
Hsu, William
Xie, Xiaodong
Cong, Jason
Gao, Wen
Affiliation: Peking Univ, Dept Elect Engn & Comp Sci, Beijing 100871, Peoples R China
Univ Calif Los Angeles, David Geffen Sch Med, Dept Radiol Sci, Los Angeles, CA 90024 USA
Univ Calif Los Angeles, Dept Comp Sci, Los Angeles, CA 90095 USA
Keywords: COMPUTED-TOMOGRAPHY
IMAGE-RECONSTRUCTION
NOISE-REDUCTION
ALGORITHM
Issue Date: Jul-2020
Publisher: IEEE Transactions on Medical Imaging
Abstract: Computed tomography (CT) is a widely used screening and diagnostic tool that allows clinicians to obtain a high-resolution, volumetric image of internal structures in a non-invasive manner. Increasingly, efforts have been made to improve the image quality of low-dose CT (LDCT) to reduce the cumulative radiation exposure of patients undergoing routine screening exams. The resurgence of deep learning has yielded a new approach to noise reduction: training a deep multi-layer convolutional neural network (CNN) to map low-dose CT images to their normal-dose counterparts. However, CNN-based methods rely heavily on convolutional kernels, which use fixed-size filters to process one local neighborhood within the receptive field at a time. As a result, they are not efficient at retrieving structural information across large regions. In this paper, we propose a novel 3D self-attention convolutional neural network for the LDCT denoising problem. Our 3D self-attention module leverages the 3D volume of CT images to capture a wide range of spatial information both within and between CT slices. With the help of the 3D self-attention module, CNNs can exploit pixels with stronger relationships regardless of their distance and achieve better denoising results. In addition, we propose a self-supervised learning scheme to train a domain-specific autoencoder as the perceptual loss function. We combine these two methods and demonstrate their effectiveness on both CNN-based and WGAN-based neural networks through comprehensive experiments. Tested on the AAPM-Mayo Clinic Low Dose CT Grand Challenge data set, our experiments demonstrate that the self-attention (SA) module and the autoencoder (AE) perceptual loss function can efficiently enhance traditional CNNs and achieve results comparable to or better than state-of-the-art methods.
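The abstract describes a 3D self-attention module that relates voxels across an entire CT sub-volume rather than a fixed local neighborhood. The sketch below is a minimal PyTorch illustration of that general idea, not the authors' SACNN implementation; the module name, the 1x1x1 projection sizes, and the zero-initialized residual scale are all illustrative assumptions.

```python
# Minimal sketch of a 3D self-attention (non-local) block, assuming a
# PyTorch setting. Names and hyperparameters are hypothetical, not taken
# from the SACNN paper.
import torch
import torch.nn as nn


class SelfAttention3D(nn.Module):
    """Self-attention over a 3D feature volume (depth x height x width)."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        inner = max(channels // reduction, 1)
        # 1x1x1 convolutions project features into query/key/value spaces.
        self.query = nn.Conv3d(channels, inner, kernel_size=1)
        self.key = nn.Conv3d(channels, inner, kernel_size=1)
        self.value = nn.Conv3d(channels, channels, kernel_size=1)
        # Learnable scale on the attention output, initialized to 0 so the
        # block starts as an identity mapping (a common stabilization trick).
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, d, h, w = x.shape
        n = d * h * w
        q = self.query(x).view(b, -1, n).permute(0, 2, 1)  # (b, n, c')
        k = self.key(x).view(b, -1, n)                      # (b, c', n)
        v = self.value(x).view(b, c, n)                     # (b, c, n)
        # Affinity between every pair of voxels, within and across slices.
        attn = torch.softmax(torch.bmm(q, k), dim=-1)       # (b, n, n)
        out = torch.bmm(v, attn.permute(0, 2, 1)).view(b, c, d, h, w)
        return x + self.gamma * out


if __name__ == "__main__":
    # Toy check on a small feature volume: 1 sample, 32 channels, 3 slices.
    block = SelfAttention3D(channels=32)
    y = block(torch.randn(1, 32, 3, 16, 16))
    print(y.shape)  # torch.Size([1, 32, 3, 16, 16])
```

In the same spirit, the self-supervised perceptual loss the abstract mentions could be realized by comparing encoder features of the denoised and normal-dose volumes from a pretrained autoencoder instead of raw pixels; the exact autoencoder architecture and feature layers used in the paper are not specified here.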
URI: http://hdl.handle.net/20.500.11897/590270
ISSN: 0278-0062
DOI: 10.1109/TMI.2020.2968472
Indexed: SCI(E)
Appears in Collections: College of Engineering (工学院)

Files in This Work
There are no files associated with this item.

License: See PKU IR operational policies.