Enhancing Gamma Knife Cone-beam Computed Tomography Image Quality Using Pix2Pix Generative Adversarial Networks: A Deep Learning Approach.

Journal: Journal of Medical Physics
Abstract

The study aims to develop a modified Pix2Pix generative adversarial network framework to enhance the quality of cone-beam computed tomography (CBCT) images and to reduce Hounsfield unit (HU) variations, so that CBCT images more closely resemble the internal anatomy depicted in computed tomography (CT) images. We used datasets from 50 patients who underwent Gamma Knife treatment to develop a deep learning model that translates CBCT images into high-quality synthetic CT (sCT) images. Paired CBCT and ground-truth CT images, comprising 7,484 slices of 512 × 512 pixels, were used with the Pix2Pix model, with 40 patients for training and 10 for testing. The sCT images were evaluated against the ground-truth CT scans using image quality assessment metrics, including the structural similarity index (SSIM), mean absolute error (MAE), root mean square error (RMSE), peak signal-to-noise ratio (PSNR), normalized cross-correlation, and Dice similarity coefficient. The results demonstrate significant improvements in image quality for sCT compared with CBCT: SSIM increased from 0.85 ± 0.05 to 0.95 ± 0.03 and MAE dropped from 77.37 ± 20.05 to 18.81 ± 7.22 (p < 0.0001 for both). PSNR and RMSE also improved, from 26.50 ± 1.72 to 30.76 ± 2.23 and from 228.52 ± 53.76 to 82.30 ± 23.81, respectively (p < 0.0001). The sCT images show reduced noise and artifacts and closely match CT in HU values, demonstrating a high degree of similarity to CT images and highlighting the potential of deep learning to substantially improve CBCT image quality for radiosurgery applications.
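For context, the sketch below shows one way the slice-wise evaluation metrics named in the abstract (SSIM, MAE, RMSE, PSNR, normalized cross-correlation, and Dice) could be computed with NumPy and scikit-image. It is a minimal illustration, not the authors' implementation: the data_range value, the HU threshold used to build the Dice mask, and the function names are assumptions for demonstration only.

```python
# Minimal sketch of slice-wise sCT-vs-CT evaluation metrics.
# Assumes both slices are float NumPy arrays in Hounsfield units;
# data_range and the HU mask threshold are illustrative assumptions.
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio


def evaluate_slice(sct: np.ndarray, ct: np.ndarray,
                   data_range: float = 4000.0,
                   mask_threshold_hu: float = 150.0) -> dict:
    """Compare a synthetic CT slice against the ground-truth CT slice."""
    diff = sct - ct
    mae = float(np.mean(np.abs(diff)))
    rmse = float(np.sqrt(np.mean(diff ** 2)))
    ssim = structural_similarity(ct, sct, data_range=data_range)
    psnr = peak_signal_noise_ratio(ct, sct, data_range=data_range)

    # Normalized cross-correlation between the two slices.
    ncc = float(np.mean((ct - ct.mean()) * (sct - sct.mean()))
                / (ct.std() * sct.std() + 1e-8))

    # Dice similarity coefficient on a simple HU-thresholded mask;
    # the threshold is a placeholder, not taken from the paper.
    m_ct, m_sct = ct > mask_threshold_hu, sct > mask_threshold_hu
    dice = float(2.0 * np.logical_and(m_ct, m_sct).sum()
                 / (m_ct.sum() + m_sct.sum() + 1e-8))

    return {"SSIM": ssim, "MAE": mae, "RMSE": rmse,
            "PSNR": psnr, "NCC": ncc, "Dice": dice}


if __name__ == "__main__":
    # Toy 512 x 512 example with synthetic data, for illustration only.
    rng = np.random.default_rng(0)
    ct = rng.normal(0.0, 300.0, size=(512, 512)).astype(np.float32)
    sct = ct + rng.normal(0.0, 20.0, size=(512, 512)).astype(np.float32)
    print(evaluate_slice(sct, ct))
```

In practice, such metrics would be accumulated over all test slices and reported as mean ± standard deviation, as in the results above.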

Authors
Prabhakar Ramachandran, Darcie Anderson, Zachery Colbert, Daniel Arrington, Michael Huo, Mark Pinkham, Matthew Foote, Andrew Fielding