Electrical Engineering and Systems Science > Image and Video Processing
[Submitted on 23 Mar 2022 (v1), last revised 28 Jan 2024 (this version, v2)]
Title: Stable Optimization for Large Vision Model Based Deep Image Prior in Cone-Beam CT Reconstruction
Abstract: Large Vision Models (LVMs) have recently demonstrated great potential for medical imaging tasks, potentially enabling image enhancement for sparse-view Cone-Beam Computed Tomography (CBCT), despite requiring substantial amounts of training data. Meanwhile, Deep Image Prior (DIP) effectively guides an untrained neural network to generate high-quality CBCT images without any training data. However, the original DIP method relies on a well-defined forward model and a large-capacity backbone network, which is notoriously difficult to bring to convergence. In this paper, we propose a stable optimization method for a forward-model-free, LVM-based DIP model for sparse-view CBCT. Our approach has two main characteristics: (1) a multi-scale perceptual loss (MSPL) that measures the similarity of perceptual features between the reference and output images at multiple resolutions without requiring any forward model, and (2) a reweighting mechanism that stabilizes the iteration trajectory of MSPL. One-shot optimization is used to simultaneously and stably reweight MSPL and optimize the LVM. We evaluate our approach on two publicly available datasets: SPARE and Walnut. The results show significant improvements in both image quality metrics and visualization, demonstrating reduced streak artifacts. The source code is available upon request.
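The multi-scale perceptual loss described above can be sketched in a few lines. The toy code below is only illustrative: the paper's actual perceptual features come from a large vision model, and its reweighting mechanism is more elaborate than the fixed per-scale weights shown here. The filter bank, scale factors, and weight vector are all assumptions for the sketch.

```python
import numpy as np

def downsample(img, factor):
    """Average-pool a 2-D image by an integer factor (a simple resolution pyramid)."""
    h, w = img.shape
    h2, w2 = h // factor, w // factor
    return img[:h2 * factor, :w2 * factor].reshape(h2, factor, w2, factor).mean(axis=(1, 3))

def features(img, filters):
    """Toy 'perceptual' features: responses to a fixed 3x3 filter bank (valid convolution).
    A stand-in for the LVM feature maps used in the paper."""
    h, w = img.shape
    responses = []
    for f in filters:
        resp = np.zeros((h - 2, w - 2))
        for i in range(3):
            for j in range(3):
                resp += f[i, j] * img[i:i + h - 2, j:j + w - 2]
        responses.append(resp)
    return np.stack(responses)

def mspl(ref, out, filters, scales=(1, 2, 4), weights=None):
    """Multi-scale perceptual loss: mean-squared feature distance summed over
    resolutions, with a per-scale weight vector (here uniform by default)."""
    if weights is None:
        weights = np.full(len(scales), 1.0 / len(scales))
    loss = 0.0
    for w_s, s in zip(weights, scales):
        f_ref = features(downsample(ref, s), filters)
        f_out = features(downsample(out, s), filters)
        loss += w_s * np.mean((f_ref - f_out) ** 2)
    return loss

rng = np.random.default_rng(0)
filters = rng.standard_normal((4, 3, 3))   # hypothetical fixed filter bank
ref = rng.standard_normal((32, 32))
print(mspl(ref, ref, filters))             # identical images -> zero loss
```

Note that no CBCT forward model (projector) appears anywhere: the loss compares the reference and output images directly in feature space, which is the forward-model-free property the abstract emphasizes.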
Submission history
From: Hongxiang Lin, PhD
[v1] Wed, 23 Mar 2022 15:16:29 UTC (12,714 KB)
[v2] Sun, 28 Jan 2024 13:08:26 UTC (464 KB)