Nucl Med Mol Imaging. 2023 Apr;57(2):86-93. doi: 10.1007/s13139-022-00745-7.

Automatic Lung Cancer Segmentation in [18F]FDG PET/CT Using a Two-Stage Deep Learning Approach

Affiliations
  • 1Department of Electrical and Computer Engineering, Seoul National University College of Engineering, Seoul 08826, Korea
  • 2Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul 03080, Korea
  • 3Department of Biomedical Sciences, Seoul National University College of Medicine, Seoul 03080, Korea
  • 4Artificial Intelligence Institute, Seoul National University, Seoul 08826, Korea
  • 5Brightonix Imaging Inc., Seoul 03080, Korea
  • 6Division of Nuclear Medicine, Department of Radiology, Seoul St Mary’s Hospital, The Catholic University of Korea, Seoul 06591, Korea
  • 7Department of Nuclear Medicine, Korea University Guro Hospital, 148 Gurodong-ro, Guro-gu, Seoul 08308, Korea
  • 8Institute of Radiation Medicine, Medical Research Center, Seoul National University College of Medicine, Seoul 03080, Korea

Abstract

Purpose
Accurate lung cancer segmentation is required to determine the functional volume of a tumor in [18F]FDG PET/CT. We therefore propose a two-stage U-Net architecture to enhance lung cancer segmentation performance in [18F]FDG PET/CT.
Methods
Whole-body [18F]FDG PET/CT scan data from 887 patients with lung cancer were retrospectively used for network training and evaluation. The ground-truth tumor volume of interest (VOI) was drawn using the LifeX software. The dataset was randomly partitioned into training, validation, and test sets: of the 887 PET/CT and VOI datasets, 730 were used to train the proposed models, 81 served as the validation set, and the remaining 76 were used to evaluate the model. In Stage 1, the global U-Net receives the 3D PET/CT volume as input and extracts the preliminary tumor area, generating a 3D binary volume as output. In Stage 2, the regional U-Net receives eight consecutive PET/CT slices around the slice selected by the global U-Net in Stage 1 and generates a 2D binary image as output.
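The coarse-to-fine data flow described above can be outlined as in the following sketch. This is an illustrative outline only, not the authors' implementation: the placeholder U-Nets, the (2, Z, H, W) PET/CT array layout, the slab centering, and the intensity thresholds are all assumptions made for the sketch.

```python
# Illustrative sketch of the two-stage data flow (NOT the authors' code).
# Both U-Nets are stood in for by placeholder callables; array layout,
# slab centering, and thresholds are assumptions for illustration only.
import numpy as np

def stage1_global_unet(pet_ct_volume):
    """Placeholder for the 3D global U-Net: returns a coarse binary tumor mask."""
    pet = pet_ct_volume[0]                      # channel 0: PET, channel 1: CT (assumed layout)
    return (pet > pet.mean() + 3 * pet.std()).astype(np.uint8)

def stage2_regional_unet(slab):
    """Placeholder for the 2D regional U-Net: refines the center slice of an 8-slice slab."""
    center = slab[0, slab.shape[1] // 2]        # PET channel, middle slice (assumed centering)
    return (center > center.mean() + 3 * center.std()).astype(np.uint8)

def two_stage_segmentation(pet_ct_volume, slab_size=8):
    """Run the coarse-to-fine pipeline on a (2, Z, H, W) PET/CT volume."""
    coarse = stage1_global_unet(pet_ct_volume)          # Stage 1: whole-volume coarse mask
    refined = np.zeros_like(coarse)
    _, n_slices, _, _ = pet_ct_volume.shape
    half = slab_size // 2
    for z in np.flatnonzero(coarse.any(axis=(1, 2))):   # axial slices flagged by Stage 1
        lo = int(np.clip(z - half, 0, n_slices - slab_size))
        slab = pet_ct_volume[:, lo:lo + slab_size]      # eight consecutive PET/CT slices
        refined[z] = stage2_regional_unet(slab)         # Stage 2: refined 2D mask for slice z
    return refined

if __name__ == "__main__":
    volume = np.random.rand(2, 64, 128, 128).astype(np.float32)  # synthetic PET/CT stand-in
    mask = two_stage_segmentation(volume)
    print("refined mask voxels:", int(mask.sum()))
```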
Results
The proposed two-stage U-Net architecture outperformed the conventional one-stage 3D U-Net in primary lung cancer segmentation. The two-stage U-Net model successfully predicted the detailed tumor margins, which were determined by manually drawing spherical VOIs and applying an adaptive threshold. Quantitative analysis using the Dice similarity coefficient confirmed the advantages of the two-stage U-Net.
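For reference, the Dice similarity coefficient used in the quantitative comparison can be computed on binary masks as in the minimal sketch below; the smoothing term and the toy masks are illustrative choices, not taken from the paper.

```python
# Minimal Dice similarity coefficient for binary masks (illustrative only).
import numpy as np

def dice_coefficient(pred, truth, eps=1e-7):
    """Dice = 2*|A∩B| / (|A| + |B|) for binary arrays of equal shape."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

# Example: two partially overlapping 2D masks
a = np.zeros((4, 4), dtype=np.uint8); a[1:3, 1:3] = 1
b = np.zeros((4, 4), dtype=np.uint8); b[1:3, 2:4] = 1
print(round(dice_coefficient(a, b), 3))  # 2*2 / (4 + 4) = 0.5
```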
Conclusion
The proposed method will be useful for reducing the time and effort required for accurate lung cancer segmentation in [18F]FDG PET/CT.

Keywords

Lung cancer; Deep learning; Segmentation; PET/CT