
Inverse Problems and Imaging

A special issue of Journal of Imaging (ISSN 2313-433X).

Deadline for manuscript submissions: closed (31 August 2021) | Viewed by 65888

Special Issue Editors


Guest Editor
Department of Mathematics, University of Bologna, 40127 Bologna, Italy
Interests: regularization algorithms; inverse problems in imaging; numerical optimization; parameter estimation; inversion algorithms for NMR relaxometry data; algorithms for sparse MRI
Special Issues, Collections and Topics in MDPI journals

Guest Editor
Department of Computer Science and Engineering, University of Bologna, 40127 Bologna, Italy
Interests: optimization and regularization algorithms; inverse problems in imaging; neural networks for deblurring and denoising problems; neural networks for image reconstruction from sparse data
Special Issues, Collections and Topics in MDPI journals

Special Issue Information

Dear Colleagues,

Inverse problems model applications that have a crucial impact on human life. Such models arise wherever data coming from scanners or sensors are used to obtain information about objects that cannot be measured directly. Visual representation of these objects is a fundamental tool for decision-making and analysis in application areas such as medicine, the life sciences, and technology, in both the public and private sectors. The development of new sensors and scanners leads to sophisticated mathematical models and requires efficient computational methods. Researchers are therefore increasing their efforts to develop new variational algorithms, as well as learning algorithms based on neural networks, to tackle the challenges posed by recent technological evolution.

We invite authors to submit original research papers on modern challenges in the solution of inverse problems in imaging and data inversion, with a focus on tomographic imaging, MRI, and NMR reconstruction problems. Papers focusing on optimization and regularization methods for inverse problems in imaging, computational optimization, and applications of regularization methods are equally welcome.

Prof. Dr. Fabiana Zama
Prof. Dr. Elena Loli Piccolomini
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Imaging is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • variational regularization algorithms
  • inverse problems
  • neural networks
  • learning algorithms in image processing
  • regularization algorithms for NMR data inversion
  • image deblurring
  • image denoising
  • tomographic imaging
  • MRI
  • optimization methods for image processing

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (22 papers)


Research

Jump to: Review

25 pages, 999 KiB  
Article
Perspective Shape-from-Shading Problem: A Unified Convergence Result for Several Non-Lambertian Models
by Silvia Tozza
J. Imaging 2022, 8(2), 36; https://doi.org/10.3390/jimaging8020036 - 1 Feb 2022
Cited by 1 | Viewed by 2313
Abstract
Shape-from-Shading represents the problem of computing the three-dimensional shape of a surface given a single gray-value image of it as input. In a recent paper, we showed that the introduction of an attenuation factor in the brightness equations related to various perspective Shape-from-Shading [...] Read more.
Shape-from-Shading represents the problem of computing the three-dimensional shape of a surface given a single gray-value image of it as input. In a recent paper, we showed that the introduction of an attenuation factor in the brightness equations related to various perspective Shape-from-Shading models allows us to ensure the well-posedness of the corresponding differential problems. Here, we propose a unified convergence result of a numerical scheme for several non-Lambertian reflectance models. This result is interesting since it can be easily extended to other non-Lambertian models in a unified and, therefore, powerful framework. Full article
(This article belongs to the Special Issue Inverse Problems and Imaging)
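The reflectance models compared in the paper differ in how image brightness depends on the surface normal and the light and viewing directions. As a rough illustration (not the paper's actual formulation; the parameter names k_A, k_D, k_S, and α follow the figure captions, and the Phong-style form below is only one common variant), a Lambertian and a Phong-style brightness can be evaluated pointwise as:

```python
import numpy as np

def lambertian(n, l, k_D=1.0):
    """Lambertian brightness: purely diffuse, depends only on n . l."""
    return k_D * max(np.dot(n, l), 0.0)

def phong(n, l, v, k_A=0.0, k_D=0.8, k_S=0.2, alpha=2):
    """Phong-style brightness: ambient + diffuse + specular terms.
    r is the mirror reflection of the light direction about the normal."""
    diffuse = max(np.dot(n, l), 0.0)
    r = 2.0 * np.dot(n, l) * n - l  # reflection of l about n
    specular = max(np.dot(r, v), 0.0) ** alpha
    return k_A + k_D * diffuse + k_S * specular

# Frontal light and viewer: both models give full brightness.
n = np.array([0.0, 0.0, 1.0])  # surface normal
l = np.array([0.0, 0.0, 1.0])  # light direction
v = np.array([0.0, 0.0, 1.0])  # viewing direction
print(lambertian(n, l))  # 1.0
print(phong(n, l, v))    # 1.0 (0.8 diffuse + 0.2 specular)
```

Tilting the normal away from the light lowers the diffuse term in both models, while the specular term decays with the exponent α; the perspective Shape-from-Shading problem inverts such brightness equations for the surface itself.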
Show Figures

Figure 1 (Test 1): exact and approximated images for the four reflectance models (Lambertian; ON-model with σ = 0.1; PH-model with k_A = 0, k_D = 0.8, k_S = 0.2, α = 2; BP-model with k_A = 0, k_D = 0.2, k_S = 0.8, c = 50).
Figure 2 (Test 1): exact and approximated scenes, defined according to Equation (3), for the same four models.
Figure 3 (Test 1): exact and approximated surface heights for the same four models.
Figure 4 (Test 1): RMSE plots for the four cases.
Figure 5 (Test 2): exact and approximated scenes for the PH- and BP-models (k_A = 0, k_D = 0.8, k_S = 0.2, α = 10, c = 50; focal length f = 5).
Figure 6 (Test 2): exact and approximated surface heights for the same two models.
Figure 7 (Test 2): exact and approximated images for the same two models.
Figure 8 (Test 2): RMSE plots for the two cases of Figure 6.
35 pages, 5964 KiB  
Article
Nearly Exact Discrepancy Principle for Low-Count Poisson Image Restoration
by Francesca Bevilacqua, Alessandro Lanza, Monica Pragliola and Fiorella Sgallari
J. Imaging 2022, 8(1), 1; https://doi.org/10.3390/jimaging8010001 - 23 Dec 2021
Cited by 7 | Viewed by 2771
Abstract
The effectiveness of variational methods for restoring images corrupted by Poisson noise strongly depends on the suitable selection of the regularization parameter balancing the effect of the regularization term(s) and the generalized Kullback–Leibler divergence data term. One of the approaches still commonly used [...] Read more.
The effectiveness of variational methods for restoring images corrupted by Poisson noise strongly depends on the suitable selection of the regularization parameter balancing the effect of the regularization term(s) and the generalized Kullback–Leibler divergence data term. One of the approaches still commonly used today for choosing the parameter is the discrepancy principle proposed by Zanella et al. in a seminal work. It relies on imposing a value of the data term approximately equal to its expected value and works well for mid- and high-count Poisson noise corruptions. However, the series truncation approximation used in the theoretical derivation of the expected value leads to poor performance for low-count Poisson noise. In this paper, we highlight the theoretical limits of the approach and then propose a nearly exact version of it based on Monte Carlo simulation and weighted least-square fitting. Several numerical experiments are presented, proving beyond doubt that in the low-count Poisson regime, the proposed modified, nearly exact discrepancy principle performs far better than the original, approximated one by Zanella et al., whereas it works similarly or slightly better in the mid- and high-count regimes. Full article
(This article belongs to the Special Issue Inverse Problems and Imaging)
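The discrepancy principle described in the abstract chooses the regularization parameter so that the KL data term matches its expected value, which the classical approach approximates by 1/2 per pixel. The following is a hedged sketch (not the authors' code) of estimating that expectation by Monte Carlo, as the paper proposes; it uses the standard per-pixel generalized-KL form f(y; λ) = y ln(y/λ) − y + λ, with the convention 0 · ln 0 = 0:

```python
import numpy as np

rng = np.random.default_rng(0)

def kl_term(y, lam):
    """Per-pixel generalized KL data term f(y; lam) = y*log(y/lam) - y + lam."""
    y = np.asarray(y, dtype=float)
    safe_y = np.where(y > 0, y, 1.0)  # avoids log(0); the log term is 0 when y == 0
    return y * np.log(safe_y / lam) - y + lam

def delta_mc(lam, n_samples=200_000):
    """Monte Carlo estimate of E[f(Y; lam)] for Y ~ Poisson(lam)."""
    y = rng.poisson(lam, size=n_samples)
    return kl_term(y, lam).mean()

print(delta_mc(100.0))  # ~0.5: the classical approximation holds at high counts
print(delta_mc(0.1))    # ~0.24: far from 0.5 in the low-count regime
```

This mirrors the paper's key observation: the expected per-pixel discrepancy is close to 1/2 only for mid-to-high counts, so at low counts the target value must be estimated (here by sampling; the paper additionally fits the estimates by weighted least squares).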
Show Figures

Figure 1: original images, observations corrupted by blur and Poisson noise, and restorations by the TV-KL model (8)-(9) with μ selected by the approximate DP and by the proposed nearly exact DP.
Figure 2: comparison between the approximation δ^(A) = 1/2 of δ^(E)(λ) = E{F(Y_λ)} used in (15) and Monte Carlo estimates for λ ∈ [0, 8].
Figure 3: first 34 coefficients γ_i^(∞) in (26) and the probability measure defined in (28) as a function of λ.
Figure 4: Monte Carlo simulation and weighted least-squares fitting for λ ∈ [0, 6], [6, 66], and [66, 250].
Figure 5: grayscale test images used in the numerical experiments.
Figures 6-17: for the test images cameraman, brain, phantom, and cells: discrepancy curves and ISNR/SSIM values for different μ-values, plus restorations by ADP-ADMM versus NEDP-ADMM under Gaussian blur (band = 5, sigma = 1 and band = 13, sigma = 3) with photon counts κ ranging from 3 to 1000.
14 pages, 1990 KiB  
Article
Learned Primal Dual Reconstruction for PET
by Alessandro Guazzo and Massimiliano Colarieti-Tosti
J. Imaging 2021, 7(12), 248; https://doi.org/10.3390/jimaging7120248 - 24 Nov 2021
Cited by 7 | Viewed by 2284
Abstract
We have adapted, implemented and trained the Learned Primal Dual algorithm suggested by Adler and Öktem and evaluated its performance in reconstructing projection data from our PET scanner. Learned Primal Dual reconstructions are compared to Maximum Likelihood Expectation Maximisation (MLEM) reconstructions. Different strategies [...] Read more.
We have adapted, implemented and trained the Learned Primal Dual algorithm suggested by Adler and Öktem and evaluated its performance in reconstructing projection data from our PET scanner. Learned Primal Dual reconstructions are compared to Maximum Likelihood Expectation Maximisation (MLEM) reconstructions. Different strategies for training are also compared. Whenever the noise level of the data to reconstruct is sufficiently represented in the training set, the Learned Primal Dual algorithm performs well on the recovery of the activity concentrations and on noise reduction as compared to MLEM. The algorithm is also shown to be robust against the appearance of artefacts, even when the images that are to be reconstructed present features that were not present in the training set. Once trained, the algorithm reconstructs images in a few seconds or less. Full article
(This article belongs to the Special Issue Inverse Problems and Imaging)
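The Learned Primal-Dual scheme alternates updates in the data (dual) and image (primal) spaces, with CNNs playing the role of the update operators. Below is a minimal sketch of the unrolled structure only, with simple hand-written updates standing in for the learned ones and a toy dense matrix standing in for the PET projector (both are assumptions for illustration, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy dense matrix standing in for the PET forward projector (an assumption).
A = rng.normal(size=(30, 20))

def dual_update(h, Ax, y, theta):
    """Hand-written stand-in for the learned dual (data-space) update:
    a damped accumulation of the data residual."""
    return (h + theta * (Ax - y)) / (1.0 + theta)

def primal_update(x, ATh, theta):
    """Hand-written stand-in for the learned primal (image-space) update:
    a gradient-style step along the back-projected dual variable."""
    return x - theta * ATh

def lpd_reconstruct(y, n_iter=100, theta=0.05):
    """Unrolled primal-dual loop: alternate data-space and image-space updates."""
    x = np.zeros(A.shape[1])
    h = np.zeros(A.shape[0])
    for _ in range(n_iter):
        h = dual_update(h, A @ x, y, theta)
        x = primal_update(x, A.T @ h, theta)
    return x

x_true = rng.normal(size=20)
y = A @ x_true                  # noiseless toy projection data
x_rec = lpd_reconstruct(y)
# The relative data residual shrinks well below 1 over the unrolled iterations.
print(np.linalg.norm(A @ x_rec - y) / np.linalg.norm(y))
```

In the actual algorithm, both update functions are small CNNs trained end to end and each receives the current and all previous iterates; the point of the sketch is only the alternating primal/dual structure being unrolled.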
Show Figures

Figure 1: graphical representation of the LPD algorithm in the three-step case.
Figure 2: architecture of the U-Net used at each step of the LPD algorithm; the first input has dimensions 147 × 147 × N, with N the step number, since the present and all previous iterates are fed into the CNN at each step.
Figures 3-7: synthetic images used for generating training data; training phantoms #1-#3 (objects filled with FDG activity in the concentrations of Table 2); experimental training data from miniPET-3; and the mouse-like test phantom (side and top views).
Figures 8-9: comparison of LPD reconstruction and 10-iteration MLEM on synthetic data; the target image is the 20-iteration MLEM reconstruction on the full data set, while MLEM(20) and the 3-iteration LPD use only 68% of the data.
Figures 10-12: target, MLEM_20, and LPD reconstructions of the mouse-like phantom from 68% of the full projection data set (transverse, coronal, and sagittal planes).
Figure 13: 3-iteration LPD reconstructions trained with synthetic data only, hybrid training, and miniPET-only training.
Figure 14: output of the first, second, and third iterations of the LPD algorithm.
Figure 15: target image, MLEM_20, and LPD reconstructions from 40% of the projection data.
27 pages, 2345 KiB  
Article
Conditional Invertible Neural Networks for Medical Imaging
by Alexander Denker, Maximilian Schmidt, Johannes Leuschner and Peter Maass
J. Imaging 2021, 7(11), 243; https://doi.org/10.3390/jimaging7110243 - 17 Nov 2021
Cited by 32 | Viewed by 4557
Abstract
Over recent years, deep learning methods have become an increasingly popular choice for solving tasks from the field of inverse problems. Many of these new data-driven methods have produced impressive results, although most only give point estimates for the reconstruction. However, especially in the analysis of ill-posed inverse problems, the study of uncertainties is essential. In our work, we apply generative flow-based models based on invertible neural networks to two challenging medical imaging tasks, i.e., low-dose computed tomography and accelerated magnetic resonance imaging. We test different architectures of invertible neural networks and provide extensive ablation studies. In most applications, a standard Gaussian is used as the base distribution for a flow-based model. Our results show that the choice of a radial distribution can improve the quality of reconstructions. Full article
(This article belongs to the Special Issue Inverse Problems and Imaging)
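The invertibility these flow-based models rely on comes from coupling layers, which apply an affine map to one half of the variables with parameters computed from the other half. A minimal numpy sketch of the idea (the scale-and-shift network `st` below is a hypothetical stand-in, not the authors' conditioned architecture):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((2, 4)) * 0.1  # toy parameters of the stand-in st network

def st(x_a):
    # hypothetical scale/shift network: maps the first half to (log-scale s, shift t)
    h = x_a @ W
    return h[..., :2], h[..., 2:]

def coupling_forward(x):
    x_a, x_b = x[..., :2], x[..., 2:]
    s, t = st(x_a)
    y_b = x_b * np.exp(s) + t          # affine transform of the second half
    log_det = s.sum(axis=-1)           # log|det J|, needed for the flow likelihood
    return np.concatenate([x_a, y_b], axis=-1), log_det

def coupling_inverse(y):
    y_a, y_b = y[..., :2], y[..., 2:]
    s, t = st(y_a)                     # y_a == x_a, so st can be recomputed exactly
    x_b = (y_b - t) * np.exp(-s)       # exact inverse, no numerical solve needed
    return np.concatenate([y_a, x_b], axis=-1)

x = rng.standard_normal((3, 4))
y, log_det = coupling_forward(x)
assert np.allclose(coupling_inverse(y), x)
```

Because the first half passes through unchanged, `st` can be recomputed at inversion time, so inversion is exact and cheap; the accumulated `log_det` terms enter the model's log-likelihood under the (Gaussian or radial) base distribution.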
Figure 1
<p>Input image (<b>a</b>) and output of checkerboard downsampling (<b>b</b>) and Haar downsampling (<b>c</b>). Inspired by [<a href="#B26-jimaging-07-00243" class="html-bibr">26</a>].</p>
Full article ">Figure 2
<p>Multi-scale architecture with conditioning network <span class="html-italic">H</span>. The conditioning network processes the conditioning input <math display="inline"><semantics> <msup> <mi>y</mi> <mi>δ</mi> </msup> </semantics></math> and outputs this to the respective conditional coupling layer.</p>
Full article ">Figure 3
<p>End-to-end invertible UNet with conditioning network <span class="html-italic">H</span>. The conditioning network processes the conditioning input <math display="inline"><semantics> <msup> <mi>y</mi> <mi>δ</mi> </msup> </semantics></math> and outputs this to the respective conditional coupling layer.</p>
Full article ">Figure 4
<p>Reconstruction and measurements for the low-dose LoDoPaB-CT data.</p>
Full article ">Figure 5
<p>Measurements and reconstruction for the single-coil fastMRI data.</p>
Full article ">Figure 6
<p>Conditioned mean and standard deviation for the different inversion layers.</p>
Full article ">Figure 7
<p>Samples from the posterior learned by the cINN. The ground-truth sample is shown in the upper-left corner. In (<b>a</b>), we used the conditioning based on the TV-regularized reconstruction, and in (<b>b</b>), the conditioning was chosen as the generalized inverse. It can be seen that individual samples from the generalized inverse conditioning do not look realistic.</p>
Full article ">Figure 8
<p>Cond. mean and point-wise standard deviation for the iUNet and the multi-scale architecture on the LoDoPaB-CT data.</p>
Full article ">Figure 9
<p>Cond. mean and point-wise standard deviation for the best-performing multi-scale architecture and iUNet on the fastMRI data. Both networks use the radial base distribution and no additional training noise, and the iUNet is trained with conditional loss.</p>
Full article ">Figure 10
<p>(<b>Left</b>) Moving average of loss during training. (<b>Right</b>) Ground-truth image from the LoDoPaB test dataset and the corresponding iUNet reconstruction. The pixels in white visualize exploding values in the reconstruction.</p>
Full article ">Figure A1
<p>Intermediate activations from a single layer in a conditioning UNet model. (<b>Top</b>) cINN model trained with just the log-likelihood. (<b>Bottom</b>) cINN model trained with an additional loss for the conditioning UNet. One can observe that the conditioning network focuses on different parts of the image if no special loss is used. Otherwise, it produces activations that are close to the final reconstruction. In addition, there are many empty activations.</p>
Full article ">
13 pages, 1403 KiB  
Article
Discretization of Learned NETT Regularization for Solving Inverse Problems
by Stephan Antholzer and Markus Haltmeier
J. Imaging 2021, 7(11), 239; https://doi.org/10.3390/jimaging7110239 - 15 Nov 2021
Cited by 8 | Viewed by 1845
Abstract
Deep learning based reconstruction methods deliver outstanding results for solving inverse problems and are therefore becoming increasingly important. A recently invented class of learning-based reconstruction methods is the so-called NETT (for Network Tikhonov Regularization), which contains a trained neural network as regularizer in generalized Tikhonov regularization. The existing analysis of NETT considers fixed operators and fixed regularizers and analyzes the convergence as the noise level in the data approaches zero. In this paper, we extend the frameworks and analysis considerably to reflect various practical aspects and take into account discretization of the data space, the solution space, the forward operator and the neural network defining the regularizer. We show the asymptotic convergence of the discretized NETT approach for decreasing noise levels and discretization errors. Additionally, we derive convergence rates and present numerical results for a limited data problem in photoacoustic tomography. Full article
(This article belongs to the Special Issue Inverse Problems and Imaging)
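The generalized Tikhonov functional at the heart of NETT, ||Ax − y||² + α R(x), can be minimized by plain gradient descent once ∇R is available. A toy sketch under assumptions: the operator `A` is a random matrix and `grad_R` is a hypothetical smooth regularizer standing in for the trained network:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 10))      # illustrative forward operator, not the paper's
x_true = rng.standard_normal(10)
y = A @ x_true                         # noiseless data, for the sketch only

def grad_R(x):
    # gradient of the stand-in regularizer R(x) = sum(x^2 / (1 + x^2));
    # in NETT this would be the gradient of a trained network functional
    return 2 * x / (1 + x**2) ** 2

alpha, step = 0.1, 1e-3
x = np.zeros(10)
for _ in range(2000):
    # gradient of ||A x - y||^2 + alpha * R(x)
    x -= step * (2 * A.T @ (A @ x - y) + alpha * grad_R(x))

assert np.linalg.norm(A @ x - y) < 0.1 * np.linalg.norm(y)
```

The paper's discretization analysis concerns exactly this setup when `A`, the solution space, and the network regularizer are each replaced by finite-dimensional approximations.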
Figure 1
<p>Top from left to right: phantom, masked phantom, and initial reconstruction <math display="inline"><semantics> <mrow> <msup> <mi mathvariant="bold">A</mi> <mo>+</mo> </msup> <mi mathvariant="bold">A</mi> <mi>x</mi> </mrow> </semantics></math>. The difference between the phantoms on the left and the middle one shows the mask region <math display="inline"><semantics> <mrow> <mi>I</mi> <mo>⊆</mo> <msub> <mi>D</mi> <mn>1</mn> </msub> </mrow> </semantics></math> where no data is generated. Bottom from left to right: data without noise, low noise <math display="inline"><semantics> <mrow> <mi>σ</mi> <mo>=</mo> <mn>0.01</mn> </mrow> </semantics></math>, and high noise <math display="inline"><semantics> <mrow> <mi>σ</mi> <mo>=</mo> <mn>0.1</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 2
<p>Top row: reconstructions using post-processing network <math display="inline"><semantics> <msup> <mo>Φ</mo> <mrow> <mo>(</mo> <mn>1</mn> <mo>)</mo> </mrow> </msup> </semantics></math>. Middle row: NETT reconstructions using <math display="inline"><semantics> <msup> <mi mathvariant="script">R</mi> <mrow> <mo>(</mo> <mn>1</mn> <mo>)</mo> </mrow> </msup> </semantics></math>. Bottom row: NETT reconstructions using <math display="inline"><semantics> <msup> <mi mathvariant="script">R</mi> <mrow> <mo>(</mo> <mn>3</mn> <mo>)</mo> </mrow> </msup> </semantics></math>. From Left to Right: Reconstructions from data without noise, low noise (<math display="inline"><semantics> <mrow> <mi>σ</mi> <mo>=</mo> <mn>0.01</mn> </mrow> </semantics></math>) and high noise (<math display="inline"><semantics> <mrow> <mi>σ</mi> <mo>=</mo> <mn>0.1</mn> <mo>)</mo> </mrow> </semantics></math>.</p>
Full article ">Figure 3
<p>Semilogarithmic plot of the mean squared errors of the NETT using <math display="inline"><semantics> <msup> <mi mathvariant="script">R</mi> <mrow> <mo>(</mo> <mn>1</mn> <mo>)</mo> </mrow> </msup> </semantics></math> and <math display="inline"><semantics> <msup> <mi mathvariant="script">R</mi> <mrow> <mo>(</mo> <mn>3</mn> <mo>)</mo> </mrow> </msup> </semantics></math> depending on the noise level. The crosses are the values for the phantoms in <a href="#jimaging-07-00239-f002" class="html-fig">Figure 2</a>.</p>
Full article ">Figure 4
<p>Left column: phantom with a structure not contained in the training data (<b>top</b>) and pseudo inverse reconstruction (<b>bottom</b>). Middle column: Post-processing reconstructions with <math display="inline"><semantics> <msup> <mo>Φ</mo> <mrow> <mo>(</mo> <mn>3</mn> <mo>)</mo> </mrow> </msup> </semantics></math> using exact (<b>top</b>) and noisy data (<b>bottom</b>). Right column: NETT reconstructions with <math display="inline"><semantics> <msup> <mi mathvariant="script">R</mi> <mrow> <mo>(</mo> <mn>3</mn> <mo>)</mo> </mrow> </msup> </semantics></math> using exact (<b>top</b>) and noisy data (<b>bottom</b>).</p>
Full article ">
10 pages, 616 KiB  
Article
Recovering the Magnetic Image of Mars from Satellite Observations
by Igor Kolotov, Dmitry Lukyanenko, Inna Stepanova, Yanfei Wang and Anatoly Yagola
J. Imaging 2021, 7(11), 234; https://doi.org/10.3390/jimaging7110234 - 9 Nov 2021
Cited by 10 | Viewed by 2092
Abstract
One of the possible approaches to reconstructing the map of the distribution of magnetization parameters in the crust of Mars from the data of the Mars MAVEN orbiter mission is considered. Possible ways of increasing the accuracy of reconstruction of the magnetic image of Mars are discussed. Full article
(This article belongs to the Special Issue Inverse Problems and Imaging)
Figure 1
<p>Planetary coordinate systems used to solve the problem.</p>
Full article ">Figure 2
<p>The location relative to Mars of the magnetic field measurement points by the MAVEN mission used in the calculations.</p>
Full article ">Figure 3
<p>The normalized value of the magnitude of the retrieved magnetic moment density <math display="inline"> <semantics> <mrow> <mi mathvariant="bold-italic">M</mi> <mo>(</mo> <mi>R</mi> <mo form="prefix">cos</mo> <mi>φ</mi> <mo form="prefix">sin</mo> <mi>θ</mi> <mo>,</mo> <mi>R</mi> <mo form="prefix">sin</mo> <mi>φ</mi> <mo form="prefix">sin</mo> <mi>θ</mi> <mo>,</mo> <mi>R</mi> <mo form="prefix">cos</mo> <mi>θ</mi> <mo>)</mo> </mrow> </semantics> </math>.</p>
Full article ">
29 pages, 25475 KiB  
Article
An Optimization-Based Meta-Learning Model for MRI Reconstruction with Diverse Dataset
by Wanyu Bian, Yunmei Chen, Xiaojing Ye and Qingchao Zhang
J. Imaging 2021, 7(11), 231; https://doi.org/10.3390/jimaging7110231 - 31 Oct 2021
Cited by 16 | Viewed by 3351
Abstract
This work aims at developing a generalizable Magnetic Resonance Imaging (MRI) reconstruction method in the meta-learning framework. Specifically, we develop a deep reconstruction network induced by a learnable optimization algorithm (LOA) to solve the nonconvex nonsmooth variational model of MRI image reconstruction. In this model, the nonconvex nonsmooth regularization term is parameterized as a structured deep network where the network parameters can be learned from data. We partition these network parameters into two parts: a task-invariant part for the common feature encoder component of the regularization, and a task-specific part to account for the variations in the heterogeneous training and testing data. We train the regularization parameters in a bilevel optimization framework which significantly improves the robustness of the training process and the generalization ability of the network. We conduct a series of numerical experiments using heterogeneous MRI data sets with various undersampling patterns, ratios, and acquisition settings. The experimental results show that our network yields greatly improved reconstruction quality over existing methods and can generalize well to new reconstruction problems whose undersampling patterns/trajectories are not present during training. Full article
(This article belongs to the Special Issue Inverse Problems and Imaging)
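A learnable optimization algorithm of this kind unrolls a proximal-gradient scheme and turns its step sizes and regularizer into trainable parameters. For orientation, here is the non-learned convex prototype, ISTA for min ||Ax − y||² + λ||x||₁, with an illustrative random `A` in place of an undersampled MRI forward operator:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 50))              # illustrative operator, not an MRI model
x_true = np.zeros(50)
x_true[:5] = rng.standard_normal(5)            # sparse ground truth
y = A @ x_true

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

lam = 0.05
Lip = 2 * np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the data-term gradient
x = np.zeros(50)
for _ in range(3000):
    # gradient step on ||Ax - y||^2, then prox of (lam/Lip) * ||.||_1
    x = soft_threshold(x - (2 / Lip) * (A.T @ (A @ x - y)), lam / Lip)

assert np.linalg.norm(A @ x - y) < 0.1 * np.linalg.norm(y)
```

In an unrolled LOA, each of these iterations becomes one network phase; the step size and the (here fixed, convex) thresholding would be replaced by learned, possibly nonconvex, components.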
Figure 1
<p>The pictures (from top to bottom) display the reconstruction results, zoomed-in details, point-wise errors with a color bar, and associated <b>radial</b> masks for meta-learning, conventional learning, and ISTA-Net<math display="inline"><semantics> <msup> <mrow/> <mo>+</mo> </msup> </semantics></math> with four different CS ratios of 10%, 20%, 30%, 40% (from left to right). The top-right image is the ground truth fully-sampled image.</p>
Full article ">Figure 2
<p>The pictures (from top to bottom) display the T2 brain image reconstruction results, zoomed-in details, point-wise errors with a color bar, and associated <b>radial</b> masks for meta-learning and conventional learning with four different CS ratios of 10%, 20%, 30%, 40% (from left to right). The top-right image is the ground truth fully-sampled image.</p>
Full article ">Figure 3
<p>The pictures (from top to bottom) display the T1 brain image reconstruction results, zoomed-in details, point-wise errors with a color bar, and associated <b>radial</b> masks for meta-learning and conventional learning. Meta-learning was trained with CS ratios of 10%, 20%, 30%, and 40% and tested with three different unseen CS ratios of 15%, 25%, and 35% (from left to right). Conventional learning was trained and tested with the same CS ratios of 15%, 25%, and 35%. The top-right image is the ground truth fully-sampled image.</p>
Full article ">Figure 4
<p>The pictures (from top to bottom) display the T2 brain image reconstruction results, zoomed-in details, point-wise errors with a color bar, and associated <b>radial</b> masks for meta-learning and conventional learning. Meta-learning was trained with CS ratios of 10%, 20%, 30%, and 40% and tested with three different unseen CS ratios of 15%, 25%, and 35% (from left to right). Conventional learning was trained and tested with the same CS ratios of 15%, 25%, and 35%. The top-right image is the ground truth fully-sampled image.</p>
Full article ">Figure 5
<p>The pictures (from top to bottom) display the T2 brain image reconstruction results, zoomed-in details, point-wise errors with a color bar, and associated <b>Cartesian</b> masks for meta-learning and conventional learning with four different CS ratios of 10%, 20%, 30%, and 40% (from left to right). The top-right image is the ground truth fully-sampled image.</p>
Full article ">Figure 6
<p>Convergence behavior of Algorithm 1 on T1 weighted MRI image reconstruction with four different CS ratios using radial mask. <b>Left</b>: Objective function value <math display="inline"><semantics> <mi>ϕ</mi> </semantics></math> versus phase number. <b>Right</b>: PSNR value versus phase number.</p>
Full article ">
15 pages, 16522 KiB  
Article
On a Variational and Convex Model of the Blake–Zisserman Type for Segmentation of Low-Contrast and Piecewise Smooth Images
by Liam Burrows, Anis Theljani and Ke Chen
J. Imaging 2021, 7(11), 228; https://doi.org/10.3390/jimaging7110228 - 28 Oct 2021
Cited by 3 | Viewed by 2254
Abstract
This paper proposes a new variational model for segmentation of low-contrast and piecewise smooth images. The model is motivated by the two-stage image segmentation work of Cai–Chan–Zeng (2013) for the Mumford–Shah model. To deal with low-contrast images more effectively, especially in treating higher-order discontinuities, we follow the idea of the Blake–Zisserman model instead of the Mumford–Shah. Two practical ideas are introduced here: first, a convex relaxation idea is used to derive an implementable formulation, and second, a game reformulation is proposed to reduce the strong dependence on coupling parameters. The proposed model is then analysed for existence and further solved by an ADMM solver. Numerical experiments show that the new model outperforms the current state-of-the-art models for some challenging and low-contrast images. Full article
(This article belongs to the Special Issue Inverse Problems and Imaging)
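For context, the ADMM pattern the abstract refers to alternates a quadratic x-update, a proximal z-update, and a dual update. A sketch on a simplified 1D surrogate, min ||x − f||² + λ||Dx||₁ with forward-difference operator `D` (only the solver pattern, not the paper's Blake–Zisserman energy):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 50
# piecewise-constant signal plus noise, a 1D stand-in for a low-contrast image
f = np.concatenate([np.zeros(25), np.ones(25)]) + 0.05 * rng.standard_normal(n)
D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]       # forward differences, (n-1) x n

lam, rho = 0.5, 1.0
x, z, u = f.copy(), D @ f, np.zeros(n - 1)
M = 2 * np.eye(n) + rho * D.T @ D              # x-update system matrix (fixed, prefactorable)
for _ in range(500):
    x = np.linalg.solve(M, 2 * f + rho * D.T @ (z - u))    # quadratic x-update
    v = D @ x + u
    z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0)  # prox of (lam/rho)*||.||_1
    u = v - z                                              # scaled dual update

assert np.linalg.norm(D @ x - z) < 1e-2        # primal residual small at convergence
```

The paper's model couples several such splittings (for the image and its gradient fields G_x, G_y), but each subproblem follows this same quadratic-solve/shrinkage/dual-ascent rhythm.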
Figure 1
<p>Results from an ultrasound image: (<b>a</b>) Clean image. (<b>b</b>) Noisy image used as input to the models. (<b>c</b>) Output of the CRCV model. (<b>d</b>) CRCV contour. (<b>e</b>) Output of CCZ. (<b>f</b>) CCZ after thresholding. (<b>g</b>) CCZ contour. (<b>h</b>) Output of CNC. (<b>i</b>) CNC after thresholding. (<b>j</b>) CNC contour. (<b>k</b>) Output of T-ROF. (<b>l</b>) T-ROF after thresholding. (<b>m</b>) T-ROF contour. (<b>n</b>) Output <span class="html-italic">g</span> of our model. (<b>o</b>) Output <math display="inline"><semantics> <msub> <mi mathvariant="bold">G</mi> <mi>x</mi> </msub> </semantics></math> of our model. (<b>p</b>) Output <math display="inline"><semantics> <msub> <mi mathvariant="bold">G</mi> <mi>y</mi> </msub> </semantics></math> of our model. (<b>q</b>) Ours after thresholding. (<b>r</b>) Our contour.</p>
Full article ">Figure 2
<p>Results from a blood vessel image: (<b>a</b>) Clean image. (<b>b</b>) Noisy image used as input to the models. (<b>c</b>) Output of the CRCV model. (<b>d</b>) CRCV contour. (<b>e</b>) Output of CCZ. (<b>f</b>) CCZ after thresholding. (<b>g</b>) CCZ contour. (<b>h</b>) Output of CNC. (<b>i</b>) CNC after thresholding. (<b>j</b>) CNC contour. (<b>k</b>) Output of T-ROF. (<b>l</b>) T-ROF after thresholding. (<b>m</b>) T-ROF contour. (<b>n</b>) Output <span class="html-italic">g</span> of our model. (<b>o</b>) Output <math display="inline"><semantics> <msub> <mi mathvariant="bold">G</mi> <mi>x</mi> </msub> </semantics></math> of our model. (<b>p</b>) Output <math display="inline"><semantics> <msub> <mi mathvariant="bold">G</mi> <mi>y</mi> </msub> </semantics></math> of our model. (<b>q</b>) Ours after thresholding. (<b>r</b>) Our contour.</p>
Full article ">Figure 3
<p>MRI segmentation: (<b>a</b>) Clean image. (<b>b</b>) Noisy image used as input to the models. (<b>c</b>) Output of CCZ. (<b>d</b>) CCZ after thresholding. (<b>e</b>) Output of CNC. (<b>f</b>) CNC after thresholding. (<b>g</b>) Output of T-ROF. (<b>h</b>) T-ROF after thresholding. (<b>i</b>) Output <span class="html-italic">g</span> of our model. (<b>j</b>) Output <math display="inline"><semantics> <msub> <mi mathvariant="bold">G</mi> <mi>x</mi> </msub> </semantics></math> of our model. (<b>k</b>) Output <math display="inline"><semantics> <msub> <mi mathvariant="bold">G</mi> <mi>y</mi> </msub> </semantics></math> of our model. (<b>l</b>) Ours after thresholding.</p>
Full article ">Figure 4
<p>MRI segmentation: (<b>a</b>) Clean image. (<b>b</b>) Noisy image used as input to the models. (<b>c</b>) Output of CCZ. (<b>d</b>) CCZ after thresholding. (<b>e</b>) Output of CNC. (<b>f</b>) CNC after thresholding. (<b>g</b>) Output of T-ROF. (<b>h</b>) T-ROF after thresholding. (<b>i</b>) Output <span class="html-italic">g</span> of our model. (<b>j</b>) Output <math display="inline"><semantics> <msub> <mi mathvariant="bold">G</mi> <mi>x</mi> </msub> </semantics></math> of our model. (<b>k</b>) Output <math display="inline"><semantics> <msub> <mi mathvariant="bold">G</mi> <mi>y</mi> </msub> </semantics></math> of our model. (<b>l</b>) Ours after thresholding.</p>
Full article ">Figure 4 Cont.
<p>MRI segmentation: (<b>a</b>) Clean image. (<b>b</b>) Noisy image used as input to the models. (<b>c</b>) Output of CCZ. (<b>d</b>) CCZ after thresholding. (<b>e</b>) Output of CNC. (<b>f</b>) CNC after thresholding. (<b>g</b>) Output of T-ROF. (<b>h</b>) T-ROF after thresholding. (<b>i</b>) Output <span class="html-italic">g</span> of our model. (<b>j</b>) Output <math display="inline"><semantics> <msub> <mi mathvariant="bold">G</mi> <mi>x</mi> </msub> </semantics></math> of our model. (<b>k</b>) Output <math display="inline"><semantics> <msub> <mi mathvariant="bold">G</mi> <mi>y</mi> </msub> </semantics></math> of our model. (<b>l</b>) Ours after thresholding.</p>
Full article ">Figure 5
<p>MRI segmentation: (<b>a</b>) Clean image. (<b>b</b>) Noisy image used as input to the models. (<b>c</b>) Output of CCZ. (<b>d</b>) CCZ after thresholding. (<b>e</b>) Output of CNC. (<b>f</b>) CNC after thresholding. (<b>g</b>) Output of T-ROF. (<b>h</b>) T-ROF after thresholding. (<b>i</b>) Output <span class="html-italic">g</span> of our model. (<b>j</b>) Output <math display="inline"><semantics> <msub> <mi mathvariant="bold">G</mi> <mi>x</mi> </msub> </semantics></math> of our model. (<b>k</b>) Output <math display="inline"><semantics> <msub> <mi mathvariant="bold">G</mi> <mi>y</mi> </msub> </semantics></math> of our model. (<b>l</b>) Ours after thresholding.</p>
Full article ">Figure 5 Cont.
<p>MRI segmentation: (<b>a</b>) Clean image. (<b>b</b>) Noisy image used as input to the models. (<b>c</b>) Output of CCZ. (<b>d</b>) CCZ after thresholding. (<b>e</b>) Output of CNC. (<b>f</b>) CNC after thresholding. (<b>g</b>) Output of T-ROF. (<b>h</b>) T-ROF after thresholding. (<b>i</b>) Output <span class="html-italic">g</span> of our model. (<b>j</b>) Output <math display="inline"><semantics> <msub> <mi mathvariant="bold">G</mi> <mi>x</mi> </msub> </semantics></math> of our model. (<b>k</b>) Output <math display="inline"><semantics> <msub> <mi mathvariant="bold">G</mi> <mi>y</mi> </msub> </semantics></math> of our model. (<b>l</b>) Ours after thresholding.</p>
Full article ">Figure 6
<p>(<b>a</b>) Clean image. (<b>b</b>) Noisy image used as input to the models. (<b>c</b>) Output of the CRCV model. (<b>d</b>) CCZ after thresholding. (<b>e</b>) CNC after thresholding. (<b>f</b>) T-ROF after thresholding. (<b>g</b>) DL Output. (<b>h</b>) Output <span class="html-italic">g</span> of our model. (<b>i</b>) Output <math display="inline"><semantics> <msub> <mi mathvariant="bold">G</mi> <mi>x</mi> </msub> </semantics></math> of our model. (<b>j</b>) Output <math display="inline"><semantics> <msub> <mi mathvariant="bold">G</mi> <mi>y</mi> </msub> </semantics></math> of our model. (<b>k</b>) Ours after thresholding. (<b>l</b>) Our contour.</p>
Full article ">Figure 7
<p>(<b>a</b>) Clean image. (<b>b</b>) Noisy image used as input to the models. (<b>c</b>) Output of the CRCV model. (<b>d</b>) CCZ after thresholding. (<b>e</b>) CNC after thresholding. (<b>f</b>) T-ROF after thresholding. (<b>g</b>) DL Output. (<b>h</b>) Output <span class="html-italic">g</span> of our model. (<b>i</b>) Output <math display="inline"><semantics> <msub> <mi mathvariant="bold">G</mi> <mi>x</mi> </msub> </semantics></math> of our model. (<b>j</b>) Output <math display="inline"><semantics> <msub> <mi mathvariant="bold">G</mi> <mi>y</mi> </msub> </semantics></math> of our model. (<b>k</b>) Ours after thresholding. (<b>l</b>) Our contour.</p>
Full article ">Figure 8
<p>(<b>a</b>) Clean image. (<b>b</b>) Noisy image used as input to the models. (<b>c</b>) Output of the CRCV model. (<b>d</b>) CCZ after thresholding. (<b>e</b>) CNC after thresholding. (<b>f</b>) T-ROF after thresholding. (<b>g</b>) DL Output. (<b>h</b>) Output <span class="html-italic">g</span> of our model. (<b>i</b>) Output <math display="inline"><semantics> <msub> <mi mathvariant="bold">G</mi> <mi>x</mi> </msub> </semantics></math> of our model. (<b>j</b>) Output <math display="inline"><semantics> <msub> <mi mathvariant="bold">G</mi> <mi>y</mi> </msub> </semantics></math> of our model. (<b>k</b>) Ours after thresholding. (<b>l</b>) Our contour.</p>
Full article ">Figure 9
<p>(<b>a</b>) Clean image. (<b>b</b>) Noisy image used as input to the models. (<b>c</b>) Output of the CRCV model. (<b>d</b>) CCZ after thresholding. (<b>e</b>) CNC after thresholding. (<b>f</b>) T-ROF after thresholding. (<b>g</b>) DL Output. (<b>h</b>) Output <span class="html-italic">g</span> of our model. (<b>i</b>) Output <math display="inline"><semantics> <msub> <mi mathvariant="bold">G</mi> <mi>x</mi> </msub> </semantics></math> of our model. (<b>j</b>) Output <math display="inline"><semantics> <msub> <mi mathvariant="bold">G</mi> <mi>y</mi> </msub> </semantics></math> of our model. (<b>k</b>) Ours after thresholding. (<b>l</b>) Our contour.</p>
Full article ">Figure 10
<p>Comparison of six methods (CRCV, CCZ, CNC, T-ROF, DL, ours): box plots of the quantitative results for <span class="html-italic">DICE</span> (<b>a</b>) and <span class="html-italic">JACCARD</span> (<b>b</b>) scores.</p>
Full article ">
26 pages, 4540 KiB  
Article
Fast Fiber Orientation Estimation in Diffusion MRI from kq-Space Sampling and Anatomical Priors
by Marica Pesce, Audrey Repetti, Anna Auría, Alessandro Daducci, Jean-Philippe Thiran and Yves Wiaux
J. Imaging 2021, 7(11), 226; https://doi.org/10.3390/jimaging7110226 - 27 Oct 2021
Cited by 2 | Viewed by 2326
Abstract
High spatio-angular resolution diffusion MRI (dMRI) has been shown to provide accurate identification of complex neuronal fiber configurations, albeit, at the cost of long acquisition times. We propose a method to recover intra-voxel fiber configurations at high spatio-angular resolution relying on a 3D kq-space under-sampling scheme to enable accelerated acquisitions. The inverse problem for the reconstruction of the fiber orientation distribution (FOD) is regularized by a structured sparsity prior promoting simultaneously voxel-wise sparsity and spatial smoothness of fiber orientation. Prior knowledge of the spatial distribution of white matter, gray matter, and cerebrospinal fluid is also leveraged. A minimization problem is formulated and solved via a stochastic forward–backward algorithm. Simulations and real data analysis suggest that accurate FOD mapping can be achieved from severe kq-space under-sampling regimes potentially enabling high spatio-angular resolution dMRI in the clinical setting. Full article
(This article belongs to the Special Issue Inverse Problems and Imaging)
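The forward-backward iteration used for the FOD fit alternates a gradient step on the data-fidelity term with a proximal step on the prior. A sketch for the simplified non-negative sparse model min_{x≥0} ||Ax − y||² + λ||x||₁, with an illustrative random `A` in place of the kq-space operator and without the stochastic block selection:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((40, 60))              # illustrative operator, not the kq-space model
x_true = np.zeros(60)
x_true[[3, 17, 42]] = [1.0, 0.5, 2.0]          # non-negative sparse coefficients (toy "FOD")
y = A @ x_true

lam = 0.01
step = 1.0 / (2 * np.linalg.norm(A, 2) ** 2)   # 1 / Lipschitz constant of the data term
x = np.zeros(60)
for _ in range(5000):
    grad = 2 * A.T @ (A @ x - y)                       # forward (gradient) step
    x = np.maximum(x - step * grad - step * lam, 0.0)  # backward (prox) step: shrink, project

assert np.all(x >= 0)
assert np.linalg.norm(A @ x - y) < 0.1 * np.linalg.norm(y)
```

The stochastic variant in the paper applies the same update to randomly selected blocks of voxels per iteration, which is what makes the large 3D problem tractable.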
Figure 1
<p>Schematic representation of the matrices <math display="inline"><semantics> <msub> <mi>S</mi> <mn>1</mn> </msub> </semantics></math>, <math display="inline"><semantics> <msub> <mi>S</mi> <mn>2</mn> </msub> </semantics></math> and <math display="inline"><semantics> <msub> <mi>S</mi> <mn>3</mn> </msub> </semantics></math> involved in the FOD reconstruction. Each column of <math display="inline"><semantics> <msub> <mi>S</mi> <mn>1</mn> </msub> </semantics></math> spans the FOD coefficients of the voxels containing white matter tissue (blue columns). Coefficients modeling gray matter and CSF are stored in <math display="inline"><semantics> <msub> <mi>S</mi> <mn>2</mn> </msub> </semantics></math> (green columns) and in <math display="inline"><semantics> <msub> <mi>S</mi> <mn>3</mn> </msub> </semantics></math> (yellow columns), respectively. The FOD reconstruction is performed by estimating <math display="inline"><semantics> <msub> <mi>S</mi> <mn>1</mn> </msub> </semantics></math>, <math display="inline"><semantics> <msub> <mi>S</mi> <mn>2</mn> </msub> </semantics></math> and <math display="inline"><semantics> <msub> <mi>S</mi> <mn>3</mn> </msub> </semantics></math>, whose active columns are provided by the tissue segmentation maps (top).</p>
Full article ">Figure 2
<p>Equatorial plane projection of the q-space under-sampling schemes considered for the synthetic data acquired with 6 (<b>a</b>), 10 (<b>b</b>), 15 (<b>c</b>), 20 (<b>d</b>) and 30 (<b>e</b>) diffusion gradients.</p>
Full article ">Figure 3
<p>First row: <math display="inline"><semantics> <msub> <mi>s</mi> <mn>0</mn> </msub> </semantics></math> image (<b>A</b>) and coil sensitivity maps (magnitude (<b>top B</b>) and phase (<b>bottom B</b>)) used in the experiments performed with synthetic data. (<b>C</b>) Example of phase map estimated with synthetic data when considering motion and magnetic field inhomogeneities. Second row: <math display="inline"><semantics> <msub> <mi>s</mi> <mn>0</mn> </msub> </semantics></math> image (<b>D</b>) and magnitude and phase estimated for two different coils (<b>E</b>,<b>F</b>) for the experiments performed with real data.</p>
Full article ">Figure 4
<p>Illustration of the acquisition of the 3D volume containing the brain. Each sub-volume is acquired considering a 3D two-phase EPI scheme (bottom line). The presented results were obtained considering a sequence of identical slice-by-slice 2D k-space under-sampling schemes (top line), which corresponds to a specific class of 3D k-space under-sampling schemes to which a Fourier transform in the third dimension is applied. The proposed pattern results from the combination of two different regular schemes: the fully sampled central zone (yellow lines) and the regularly under-sampled periphery (white lines).</p>
Full article ">Figure 5
<p>Quantitative evaluation of the fiber orientation estimated by using four different methods for the synthetic data from the ISMRM Tractography Challenge 2015. For the evaluation, the SR (<b>first row</b>), the mean angular error (<b>second row</b>), the number of <span class="html-italic">false positives</span> (<b>third row</b>) and <span class="html-italic">false negatives</span> (<b>fourth row</b>) are shown for each different method along each column. Reconstructions were performed considering different numbers of q-points (i.e., <math display="inline"><semantics> <mrow> <mi>M</mi> <mo>∈</mo> <mo>{</mo> <mn>6</mn> <mo>,</mo> <mn>10</mn> <mo>,</mo> <mn>15</mn> <mo>,</mo> <mn>20</mn> <mo>,</mo> <mn>30</mn> <mo>}</mo> </mrow> </semantics></math>), shown using different colors, and different <span class="html-italic">k-space under-sampling factors</span> (i.e., <math display="inline"><semantics> <mrow> <mi>N</mi> <mo>/</mo> <mi>K</mi> <mo>∈</mo> <mo>{</mo> <mn>1</mn> <mo>,</mo> <mo>…</mo> <mo>,</mo> <mn>10</mn> <mo>}</mo> </mrow> </semantics></math>).</p>
Full article ">Figure 6
<p>Visualization of the quantitative results obtained by using the four different methods described in <a href="#sec4dot1-jimaging-07-00226" class="html-sec">Section 4.1</a>, applied to the synthetic data of the ISMRM Tractography Challenge 2015. Mean angular error (<b>1A</b>–<b>4A</b>), success rate (<b>1B</b>–<b>4B</b>), <span class="html-italic">false positive</span> (<b>1C</b>–<b>4C</b>) and <span class="html-italic">false negative</span> (<b>1D</b>–<b>4D</b>) maps obtained from the reconstructions considering M = 15 q-points, with a <span class="html-italic">k-space under-sampling factor</span> N/K = 4. On the bottom right corner, values of the corresponding quantitative index are reported for the displayed slice.</p>
Full article ">Figure 7
<p>Quantitative evaluation of the fiber orientation reconstruction performed on the real data by using the proposed kq-space under-sampling framework. Reconstructions considering 15 diffusion gradients with full k-space (<b>first column</b>), 30 diffusion gradients with <span class="html-italic">k-space under-sampling factor</span> of 2 (<b>second column</b>), 45 diffusion gradients with <span class="html-italic">k-space under-sampling factor</span> of 3 (<b>third column</b>) and 60 diffusion gradients with <span class="html-italic">k-space under-sampling factor</span> of 4 (<b>fourth column</b>) were compared with the fiber configuration obtained using 60 q-points and full k-space. The maps of the mean angular error (<b>first row</b>), success rate (<b>second row</b>), <span class="html-italic">false positives</span> (<b>third row</b>) and <span class="html-italic">false negatives</span> (<b>fourth row</b>) are considered for the evaluation.</p>
Figure 8
<p>Qualitative evaluation of the fiber orientation reconstruction performed on the real data by using the proposed kq-space under-sampling framework. The fiber configurations displayed correspond to the region highlighted by the white frame in <a href="#jimaging-07-00226-f007" class="html-fig">Figure 7</a>. Reconstructions were performed considering 60 diffusion gradients with full k-space (<b>A</b>), and with a <span class="html-italic">k-space under-sampling factor</span> of 4 (<b>B</b>), considering 45 diffusion gradients with a <span class="html-italic">k-space under-sampling factor</span> of 3 (<b>C</b>), 30 diffusion gradients with a <span class="html-italic">k-space under-sampling factor</span> of 2 (<b>D</b>) and 15 diffusion gradients with full k-space (<b>E</b>).</p>
24 pages, 2863 KiB  
Article
Flexible Krylov Methods for Edge Enhancement in Imaging
by Silvia Gazzola, Sebastian James Scott and Alastair Spence
J. Imaging 2021, 7(10), 216; https://doi.org/10.3390/jimaging7100216 - 18 Oct 2021
Cited by 2 | Viewed by 2567
Abstract
Many successful variational regularization methods employed to solve linear inverse problems in imaging applications (such as image deblurring, image inpainting, and computed tomography) aim at enhancing edges in the solution, and often involve non-smooth regularization terms (e.g., total variation). Such regularization methods can be treated as iteratively reweighted least squares problems (IRLS), which are usually solved by the repeated application of a Krylov projection method. This approach gives rise to an inner–outer iterative scheme where the outer iterations update the weights and the inner iterations solve a least squares problem with fixed weights. Recently, flexible or generalized Krylov solvers, which avoid inner–outer iterations by incorporating iteration-dependent weights within a single approximation subspace for the solution, have been devised to efficiently handle IRLS problems. Indeed, substantial computational savings are generally possible by avoiding the repeated application of a traditional Krylov solver. This paper aims to extend the available flexible Krylov algorithms in order to handle a variety of edge-enhancing regularization terms, with computationally convenient adaptive regularization parameter choice. In order to tackle both square and rectangular linear systems, flexible Krylov methods based on the so-called flexible Golub–Kahan decomposition are considered. Some theoretical results are presented (including a convergence proof) and numerical comparisons with other edge-enhancing solvers show that the new methods compute solutions of similar or better quality, with increased speedup. Full article
(This article belongs to the Special Issue Inverse Problems and Imaging)
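The inner–outer IRLS scheme that the abstract contrasts with flexible Krylov methods can be sketched as follows. This is a minimal dense NumPy illustration on a 1D deblurring toy problem; the operator sizes, the regularization parameter lam, and the weight floor eps are illustrative assumptions, and the paper's flexible Krylov solvers precisely avoid re-solving these systems from scratch.

```python
import numpy as np

def irls_tv(A, b, L, lam=1e-2, n_outer=25, eps=1e-4):
    """IRLS for the edge-enhancing model min ||A x - b||^2 + lam ||L x||_1:
    the 1-norm is replaced by a weighted 2-norm, the outer loop updates the
    weights, and the inner step solves a fixed-weight least squares problem."""
    x = A.T @ b                                    # smooth initial guess
    for _ in range(n_outer):
        w = 1.0 / np.maximum(np.abs(L @ x), eps)   # small gradients -> large weights
        M = A.T @ A + lam * L.T @ (w[:, None] * L)
        x = np.linalg.solve(M, A.T @ b)            # inner weighted LS solve
    return x

# 1D deblurring toy problem with a piecewise-constant signal
n = 64
x_true = np.zeros(n); x_true[20:40] = 1.0
i = np.arange(n)
A = np.exp(-0.5 * ((i[:, None] - i[None, :]) / 2.0) ** 2)
A /= A.sum(axis=1, keepdims=True)                  # normalized Gaussian blur
L = np.diff(np.eye(n), axis=0)                     # finite-difference operator
b = A @ x_true + 0.01 * np.random.default_rng(0).standard_normal(n)
x_rec = irls_tv(A, b, L)
```

In the flexible Krylov setting, the iteration-dependent weights w would instead be folded into a single growing approximation subspace rather than triggering a fresh inner solve.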
Figure 1
<p>Experiment 1. Simple geometric pattern blurred by a Gaussian blur and corrupted by Gaussian noise. From the left, the first and second frames are for the <tt>N = 32</tt> case, and the third and fourth frames are for the <tt>N = 256</tt> case.</p>
Figure 2
<p>Experiment 1, with <tt>N = 32</tt>. (<b>Left frame</b>): History of the relative error norms, comparing FBTV, F-TV, F-TV(p), and LSQR-L. (<b>Right frame</b>): History of the automatically selected regularization parameter for the projected problem (for the F-TV, F-TV(p), and LSQR-L methods), according to the discrepancy principle (<a href="#FD29-jimaging-07-00216" class="html-disp-formula">29</a>). Diamond markers indicate the iteration at which the stopping criterion (<a href="#FD31-jimaging-07-00216" class="html-disp-formula">31</a>) is satisfied.</p>
Figure 3
<p>Experiment 1, with <tt>N = 32</tt>. Reconstructed solutions from LSQR-L, FBTV, F-TV, and F-TV(p) when the respective stopping criterion has been satisfied. The RRE and corresponding stopping iteration are displayed in the frame titles.</p>
Figure 4
<p>Experiment 1, with <tt>N = 256</tt>. Reconstructions achieved by various methods at the stopping iteration (if a stopping criterion is used, else the maximum iteration). The RRE and stopping iteration for each reconstruction are displayed in the frame titles.</p>
Figure 5
<p>Experiment 1, with <tt>N = 256</tt>. (<b>Top row</b>): History of the relative error norms, comparing F-TV, F-aTV, F-diag, FBTV, and LSQR-L. Diamond markers indicate the iteration at which the stopping criterion (<a href="#FD31-jimaging-07-00216" class="html-disp-formula">31</a>) is satisfied. (<b>Second row</b>, <b>left</b>): history of the relative error for the new F-TV method implemented with iteration-dependent regularization parameter <math display="inline"><semantics> <msup> <mi>λ</mi> <mrow> <mo>(</mo> <mi>k</mi> <mo>)</mo> </mrow> </msup> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>k</mi> <mo>=</mo> <mn>1</mn> <mo>,</mo> <mn>2</mn> <mo>,</mo> <mo>…</mo> </mrow> </semantics></math>, set according to the discrepancy principle (<a href="#FD30-jimaging-07-00216" class="html-disp-formula">30</a>), and fixed regularization parameter <math display="inline"><semantics> <mrow> <mi>λ</mi> <mo>=</mo> <msup> <mi>λ</mi> <mrow> <mo>(</mo> <mn>200</mn> <mo>)</mo> </mrow> </msup> </mrow> </semantics></math>. (<b>Second row</b>, <b>right</b>): progress of the F-TV, LSQR-L, FBTV, GKS and IRN-TV relative errors versus elapsed computational time. (<b>Third row</b>): SSIMs values versus iteration number for the methods listed in <a href="#jimaging-07-00216-t001" class="html-table">Table 1</a>. (<b>Bottom row</b>): history of the total variation of the reconstructions computed by each method.</p>
Figure 6
<p>Experiment 2. Test problem involving shaking blur followed by an inpainting operator.</p>
Figure 7
<p>Experiment 2. Reconstructions achieved by various methods. The RRE and stopping iteration for each reconstruction are displayed in the frame title.</p>
Figure 8
<p>Experiment 2. (<b>Top row</b>): history of the relative error norms, comparing F-TV, F-aTV, F-diag, FBTV, and LSQR-L. Diamond markers indicate the iteration at which the stopping criterion (<a href="#FD31-jimaging-07-00216" class="html-disp-formula">31</a>) is satisfied. (<b>Second row</b>, <b>left</b>): progress of the F-TV, LSQR-L, FBTV, GKS and IRN-TV relative errors versus elapsed computational time. (<b>Second row</b>, <b>right</b>): SSIMs values versus iteration number for F-TV, LSQR-L, FBTV, GKS and IRN-TV. (<b>Bottom row</b>): history of the total variation of the reconstructions computed by each method.</p>
Figure 9
<p>Experiment 3. Computed tomography test problem: the true object to be imaged (phantom) is displayed on the left and the collected data (sinogram) are displayed on the right.</p>
Figure 10
<p>Experiment 3. (<b>Top row</b>): history of the relative error norms, comparing F-TV, F-aTV, F-diag, FBTV, and LSQR-L. Diamond markers indicate the iteration at which the stopping criterion (<a href="#FD31-jimaging-07-00216" class="html-disp-formula">31</a>) is satisfied. (<b>Second row</b>, <b>left</b>): progress of the F-TV, LSQR-L, FBTV, GKS and IRN-TV relative errors versus elapsed computational time. (<b>Second row</b>, <b>right</b>): SSIMs values versus iteration number for F-TV, LSQR-L, FBTV, GKS and IRN-TV. (<b>Bottom row</b>): history of the total variation of the reconstructions computed by each method.</p>
Figure 11
<p>Experiment 3. Reconstructions achieved by various methods. The RRE and corresponding stopping iteration are displayed in the frame titles.</p>
23 pages, 7547 KiB  
Article
Mitral Valve Segmentation Using Robust Nonnegative Matrix Factorization
by Hannah Dröge, Baichuan Yuan, Rafael Llerena, Jesse T. Yen, Michael Moeller and Andrea L. Bertozzi
J. Imaging 2021, 7(10), 213; https://doi.org/10.3390/jimaging7100213 - 16 Oct 2021
Cited by 5 | Viewed by 3004
Abstract
Analyzing and understanding the movement of the mitral valve is of vital importance in cardiology, as the treatment and prevention of several serious heart diseases depend on it. Unfortunately, large amounts of noise as well as a highly varying image quality make the automatic tracking and segmentation of the mitral valve in two-dimensional echocardiographic videos challenging. In this paper, we present a fully automatic and unsupervised method for segmentation of the mitral valve in two-dimensional echocardiographic videos, independently of the echocardiographic view. We propose a bias-free variant of the robust non-negative matrix factorization (RNMF) along with a window-based localization approach that is able to identify the mitral valve in several challenging situations. We improve the average f1-score on our dataset of 10 echocardiographic videos by 0.18, to an f1-score of 0.56. Full article
(This article belongs to the Special Issue Inverse Problems and Imaging)
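The robust NMF decomposition underlying the segmentation (low-rank background plus a sparse component capturing moving structures such as the valve) can be sketched as below. The alternating multiplicative/soft-thresholding updates and all parameter values are illustrative assumptions, not the authors' bias-free RNMF variant.

```python
import numpy as np

def robust_nmf(D, rank=2, lam=0.5, n_iter=200, eps=1e-9):
    """Robust NMF sketch: D ~ W H + S with nonnegative factors W, H and a
    sparse outlier matrix S obtained by soft-thresholding the residual."""
    rng = np.random.default_rng(0)
    m, n = D.shape
    W = rng.random((m, rank)); H = rng.random((rank, n))
    S = np.zeros_like(D)
    for _ in range(n_iter):
        R = np.maximum(D - S, 0.0)                        # data minus outliers
        W *= (R @ H.T) / (W @ (H @ H.T) + eps)            # multiplicative updates
        H *= (W.T @ R) / ((W.T @ W) @ H + eps)
        E = D - W @ H
        S = np.sign(E) * np.maximum(np.abs(E) - lam, 0.0) # sparse residual part
    return W, H, S

# Low-rank nonnegative "background" plus two large sparse outliers
rng = np.random.default_rng(1)
L0 = rng.random((30, 2)) @ rng.random((2, 40))
S0 = np.zeros((30, 40)); S0[5, 7] = 5.0; S0[20, 30] = 4.0
W, H, S = robust_nmf(L0 + S0)
```

In the echocardiographic setting, each column of D would be a vectorized video frame, W H the quasi-static background, and S the moving valve.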
Figure 1
<p>Overview of the proposed method. The white filling of the box shows the input and output of our method. The dark purple fill shows the mandatory steps, while the light green fill indicates the optional step of our algorithm.</p>
Figure 2
<p><math display="inline"><semantics> <msup> <mover accent="true"> <mi>S</mi> <mo stretchy="false">˜</mo> </mover> <mn>1</mn> </msup> </semantics></math> of Algorithm 3 (<b>a</b>) and <math display="inline"><semantics> <msup> <mover accent="true"> <mi>S</mi> <mo stretchy="false">˜</mo> </mover> <mn>11</mn> </msup> </semantics></math> with the calculated centroids displayed by circles, after 11 iterations ((<b>b</b>), zoomed in).</p>
Figure 3
<p>Three frames showing the extraction of the mitral valve (red) from the mis-segmented structures (light gray) by the refinement step.</p>
Figure 4
<p>Three-dimensional view of the mitral valve (red), detected by the refinement step.</p>
Figure 5
<p>Dataset of 10 echocardiographic videos, with their number of frames and their spatial resolution.</p>
Figure 6
<p><math display="inline"><semantics> <mrow> <mo>(</mo> <mover accent="true"> <mi>W</mi> <mo stretchy="false">^</mo> </mover> <mover accent="true"> <mi>H</mi> <mo stretchy="false">^</mo> </mover> <mo>)</mo> </mrow> </semantics></math> (<b>a</b>) and <math display="inline"><semantics> <mover accent="true"> <mi>S</mi> <mo stretchy="false">^</mo> </mover> </semantics></math> (<b>b</b>) when using Bregman iteration and when using standard robust non-negative matrix factorization (RNMF), for different values of the regularization parameter <math display="inline"><semantics> <msub> <mi>λ</mi> <mrow> <mi>r</mi> <mi>n</mi> <mi>m</mi> <mi>f</mi> </mrow> </msub> </semantics></math>.</p>
Figure 7
<p>F1-score for automatic segmentation depending on <math display="inline"><semantics> <msub> <mi>λ</mi> <mrow> <mi>r</mi> <mi>n</mi> <mi>m</mi> <mi>f</mi> </mrow> </msub> </semantics></math>, where the dashed lines show the minimum and maximum f1-score values, and the standard deviation is indicated by the filled range.</p>
Figure 8
<p>Segmentation of the mitral valve (red contour), depending on <math display="inline"><semantics> <msub> <mi>λ</mi> <mn>2</mn> </msub> </semantics></math> and <math display="inline"><semantics> <msub> <mi>λ</mi> <mn>3</mn> </msub> </semantics></math> of Equation (<a href="#FD8-jimaging-07-00213" class="html-disp-formula">8</a>), where the entries in <math display="inline"><semantics> <mrow> <mover accent="true"> <mi>W</mi> <mo stretchy="false">^</mo> </mover> <mover accent="true"> <mi>H</mi> <mo stretchy="false">^</mo> </mover> </mrow> </semantics></math> appear in green and the entries in <math display="inline"><semantics> <mover accent="true"> <mi>S</mi> <mo stretchy="false">^</mo> </mover> </semantics></math> in blue; the ground truth is highlighted with a green contour.</p>
Figure 9
<p>F1-score of segmentation result over ten videos for segmentation with windowing (<b>a</b>), automatic segmentation (<b>b</b>), segmentation by Dukler et al. (<b>c</b>) and segmentation by Corinzia et al. (<b>d</b>).</p>
Figure 10
<p>One of the best (<b>a</b>) and the worst (<b>b</b>) segmentation results by automatic segmentation (red), with the corresponding ground truth (green).</p>
Figure 11
<p>Effect of changing segmentation parameter for automatic segmentation, where the dashed lines show the minimum and maximum f1-score values and the standard deviation is indicated by the filled range.</p>
Figure 12
<p>Recall of mitral valve detection over ten videos for Windowing (<b>a</b>), detection by Dukler et al. (<b>b</b>) and detection by Corinzia et al. (<b>c</b>).</p>
Figure 13
<p>Robustness of windowing parameters, where the dashed lines show the minimum and maximum recall values and the standard deviation is indicated by the filled range.</p>
Figure 14
<p>Effect of applying windowing on segmentation. The red marks indicate the segmentation. (<b>a</b>) without windowing; (<b>b</b>) with windowing.</p>
Figure 15
<p>Ascending ordered difference of the f1-score for 10 videos: f1-score of segmentation with windowing subtracted by the f1-score of Dukler et al. (blue) and f1-score of segmentation with windowing subtracted by the f1-score of Corinzia et al. (red).</p>
Figure 16
<p>Segmentation in the cases of the largest differences between the f1-scores of our approach (segmentation with windowing) (<b>a</b>,<b>c</b>) and the one of Dukler et al. (<b>b</b>,<b>d</b>). The results in (<b>a</b>,<b>b</b>) belong to the blue rightmost bar of <a href="#jimaging-07-00213-f015" class="html-fig">Figure 15</a>, the results in (<b>c</b>,<b>d</b>) belong to the leftmost bar.</p>
Figure 17
<p>Best (<b>a</b>), worst (<b>b</b>) and an average (<b>c</b>) detection and segmentation result in terms of the recall and f1-score. The green contour indicates the ground truth segmentation, the red contour indicates the detected window location and the calculated segmentation.</p>
Figure 18
<p>Segmentation in the cases of the largest differences between the f1-scores of our approach (segmentation with windowing) (<b>a</b>,<b>c</b>) and the one of Corinzia et al. (<b>b</b>,<b>d</b>). The results in (<b>a</b>,<b>b</b>) belong to the red rightmost bar of <a href="#jimaging-07-00213-f015" class="html-fig">Figure 15</a>, the results in (<b>c</b>,<b>d</b>) belong to the leftmost bar.</p>
Figure 19
<p>Types of segmentation failures. (<b>a</b>) segmentation of Corinzia et al.; (<b>b</b>) segmentation with windowing.</p>
27 pages, 9722 KiB  
Article
Bayesian Activity Estimation and Uncertainty Quantification of Spent Nuclear Fuel Using Passive Gamma Emission Tomography
by Ahmed Karam Eldaly, Ming Fang, Angela Di Fulvio, Stephen McLaughlin, Mike E. Davies, Yoann Altmann and Yves Wiaux
J. Imaging 2021, 7(10), 212; https://doi.org/10.3390/jimaging7100212 - 14 Oct 2021
Cited by 3 | Viewed by 2619
Abstract
In this paper, we address the problem of activity estimation in passive gamma emission tomography (PGET) of spent nuclear fuel. Two different noise models are considered and compared, namely, the isotropic Gaussian and the Poisson noise models. The problem is formulated within a Bayesian framework as a linear inverse problem and prior distributions are assigned to the unknown model parameters. In particular, a Bernoulli-truncated Gaussian prior model is considered to promote sparse pin configurations. A Markov chain Monte Carlo (MCMC) method, based on a split and augmented Gibbs sampler, is then used to sample the posterior distribution of the unknown parameters. The proposed algorithm is first validated by simulations conducted using synthetic data, generated using the nominal models. We then consider more realistic data simulated using a bespoke simulator, whose forward model is non-linear and not available analytically. In that case, the linear models used are mis-specified and we analyse their robustness for activity estimation. The results demonstrate superior performance of the proposed approach in estimating the pin activities in different assembly patterns, in addition to being able to quantify their uncertainty measures, in comparison with existing methods. Full article
(This article belongs to the Special Issue Inverse Problems and Imaging)
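The role of the Bernoulli-truncated Gaussian prior can be illustrated on a scalar toy model y = x + n with Gaussian noise, where the posterior probability that a pin is present is available in closed form. The function below and all its parameter values (p, mu0, s02) are hypothetical simplifications for illustration; the paper handles the full linear model with an MCMC sampler.

```python
from math import erf, exp, pi, sqrt

def spike_slab_posterior(y, sigma2, p=0.5, mu0=1.0, s02=1.0):
    """Posterior probability that a pin is active for the scalar model
    y = x + n, n ~ N(0, sigma2), under the Bernoulli-truncated-Gaussian
    prior x ~ (1 - p) * delta_0 + p * N(mu0, s02) restricted to x >= 0."""
    def Phi(z):                      # standard normal CDF
        return 0.5 * (1.0 + erf(z / sqrt(2.0)))
    s2 = sigma2 + s02
    mu_post = (y * s02 + mu0 * sigma2) / s2
    sp2 = sigma2 * s02 / s2
    # marginal likelihood of y under the (renormalized) truncated slab ...
    m_slab = (exp(-(y - mu0) ** 2 / (2 * s2)) / sqrt(2 * pi * s2)
              * Phi(mu_post / sqrt(sp2)) / Phi(mu0 / sqrt(s02)))
    # ... and under the spike at x = 0
    m_spike = exp(-y ** 2 / (2 * sigma2)) / sqrt(2 * pi * sigma2)
    return p * m_slab / (p * m_slab + (1 - p) * m_spike)

p_on = spike_slab_posterior(1.0, 0.05)   # measurement near nominal activity
p_off = spike_slab_posterior(0.0, 0.05)  # measurement near zero
```

In the paper this kind of conditional is embedded in a split-and-augmented Gibbs sampler over all pins jointly, rather than evaluated per pixel in isolation.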
Figure 1
<p>Examples of binary masks of the simulated fuel assemblies. (<b>a</b>) Case “1”, (<b>b</b>) Case “2”, and (<b>c</b>) Case “3”. In Case “1”, the mock-up fuel assembly hosted 331 <math display="inline"><semantics> <msup> <mrow/> <mn>60</mn> </msup> </semantics></math>Co pins of 8.879 g/cm<math display="inline"><semantics> <msup> <mrow/> <mn>3</mn> </msup> </semantics></math> density, which emitted 1.17 and 1.33 MeV gamma rays. In Cases “2” and “3”, 10% <math display="inline"><semantics> <msup> <mrow/> <mn>60</mn> </msup> </semantics></math>Co pins were removed at the center or uniformly.</p>
Figure 2
<p>Simulated dataset. (<b>a</b>) The admissible support, (<b>b</b>) original binary mask, (<b>c</b>) ground truth activity profile when the peak value is set to 220, (<b>d</b>) sinogram generated with an iid Gaussian noise model with peak value = 220 and noise variance <math display="inline"><semantics> <mrow> <msup> <mi>σ</mi> <mn>2</mn> </msup> <mo>=</mo> <mn>2</mn> <mo>×</mo> <msup> <mn>10</mn> <mrow> <mo>−</mo> <mn>3</mn> </mrow> </msup> </mrow> </semantics></math>, and (<b>e</b>) sinogram generated with a Poisson noise model with peak value = 220.</p>
Figure 3
<p>Normalized mean squared error (NMSE) plots for the (<b>a</b>) Poisson noise model with different peak values, (<b>b</b>) the Gaussian noise model when the maximum intensity is set to 220 and different noise variances.</p>
Figure 4
<p>Results of the proposed approach using the Gaussian noise model, when the peak value is set to 220 and noise variance is set to <math display="inline"><semantics> <mrow> <msup> <mi>σ</mi> <mn>2</mn> </msup> <mo>=</mo> <mn>2</mn> <mo>×</mo> <msup> <mn>10</mn> <mrow> <mo>−</mo> <mn>3</mn> </mrow> </msup> </mrow> </semantics></math>. Column 1: posterior mean, Column 2: marginal standard deviation and Column 3: ratio of marginal standard deviation divided by posterior mean. Row 1: All pixels, and Row 2: pin pixels averaging.</p>
Figure 5
<p>Results of the proposed approach using the Poisson noise model when the peak value is set to 220. Column 1: posterior mean; column 2: marginal standard deviation; and column 3: ratio of marginal standard deviation divided by posterior mean. Row 1: all pixels; row 2: pin pixels averaging.</p>
Figure 6
<p>Probability of pin presence for (<b>a</b>) the Poisson noise case when peak value is set to 10, (<b>b</b>) the Gaussian noise case for peak value of 220 and noise variance <math display="inline"><semantics> <mrow> <msup> <mi>σ</mi> <mn>2</mn> </msup> <mo>=</mo> <mn>1</mn> <mo>×</mo> <msup> <mn>10</mn> <mrow> <mo>−</mo> <mn>2</mn> </mrow> </msup> </mrow> </semantics></math>, (<b>c</b>) the Poisson noise case when peak value is set to 220, and (<b>d</b>) the Gaussian noise case when the peak value is set to 220 and the noise variance is set to <math display="inline"><semantics> <mrow> <msup> <mi>σ</mi> <mn>2</mn> </msup> <mo>=</mo> <mn>2</mn> <mo>×</mo> <msup> <mn>10</mn> <mrow> <mo>−</mo> <mn>3</mn> </mrow> </msup> </mrow> </semantics></math>.</p>
Figure 7
<p>Pin activity estimation using SPA-P-<math display="inline"><semantics> <msub> <mo>ℓ</mo> <mn>1</mn> </msub> </semantics></math>, using the Poisson noise model when peak value is set to 220. The results using the Gaussian noise model (SPA-G-<math display="inline"><semantics> <msub> <mo>ℓ</mo> <mn>1</mn> </msub> </semantics></math>) using peak value = 220 and noise variance <math display="inline"><semantics> <mrow> <msup> <mi>σ</mi> <mn>2</mn> </msup> <mo>=</mo> <mn>2</mn> <mo>×</mo> <msup> <mn>10</mn> <mrow> <mo>−</mo> <mn>3</mn> </mrow> </msup> </mrow> </semantics></math> present similar trends and hence are not presented here.</p>
Figure 8
<p>Pin activity estimation using PIDAL-<math display="inline"><semantics> <msub> <mo>ℓ</mo> <mn>1</mn> </msub> </semantics></math>, using the Poisson noise model when the peak value is set to 220. (<b>Left</b>): all pin pixels; (<b>right</b>): average of pin pixels.</p>
Figure 9
<p>Actual binary masks of the simulated fuel assemblies. (<b>a</b>) Case “1”, (<b>b</b>) Case “2”, (<b>c</b>) Case “3”, (<b>d</b>) Case “4”, (<b>e</b>) Case “5”. In Case “1”, the mock-up fuel assembly hosted 331 <math display="inline"><semantics> <msup> <mrow/> <mn>60</mn> </msup> </semantics></math>Co pins of 8.879 g/cm<math display="inline"><semantics> <msup> <mrow/> <mn>3</mn> </msup> </semantics></math> density, which emitted 1.17 and 1.33 MeV gamma rays. In Cases “2” and “3”, 10% <math display="inline"><semantics> <msup> <mrow/> <mn>60</mn> </msup> </semantics></math>Co pins were removed at the center or uniformly. In Cases “4” and “5”, 10% <math display="inline"><semantics> <msup> <mrow/> <mn>60</mn> </msup> </semantics></math>Co pins were replaced by depleted uranium pins of the same size but higher density (10.4 g/cm<math display="inline"><semantics> <msup> <mrow/> <mn>3</mn> </msup> </semantics></math>) and no activity.</p>
Figure 10
<p>Simulated sinogram of Case “1”. The colour scale represents the detected photon counts.</p>
Figure 11
<p>The IDEAL response matrix: Pin pixels averaging of the proposed approach for Case “1”. (<b>Left-hand column</b>): posterior mean, (<b>middle column</b>): marginal standard deviation, and (<b>right-hand column</b>): the ratio of marginal standard deviation divided by posterior mean.</p>
Figure 12
<p>The IDEAL response matrix: Pin pixels averaging of the proposed approach for Case “2” and Case “3”. (<b>Left-hand column</b>): posterior mean, (<b>middle column</b>): marginal standard deviation, and (<b>right-hand column</b>): the ratio of marginal standard deviation divided by posterior mean.</p>
Figure 13
<p>The IDEAL response matrix: Pin pixels averaging of the proposed approach for Case “4” and Case “5”. (<b>Left-hand column</b>): posterior mean, (<b>middle column</b>): marginal standard deviation, and (<b>right-hand column</b>): the ratio of marginal standard deviation divided by posterior mean.</p>
Figure 14
<p>The IDEAL response matrix: Probability of presence of pins using the proposed approach for the Poisson noise model. (<b>a</b>) Case “1”; (<b>b</b>) Case “2”, (<b>c</b>) Case “3”, (<b>d</b>) Case “4” and (<b>e</b>) Case “5”. The Gaussian noise model showed similar trends and hence is not presented.</p>
Figure 15
<p>The IDEAL response matrix: Activity estimation using pin pixels averaging using existing methods. The regularization parameter <math display="inline"><semantics> <mi>β</mi> </semantics></math> was set to <math display="inline"><semantics> <mrow> <mn>1</mn> <mo>×</mo> <msup> <mn>10</mn> <mrow> <mo>−</mo> <mn>5</mn> </mrow> </msup> </mrow> </semantics></math> in the four methods.</p>
Figure 16
<p>The FULL response matrix: pin pixels, averaging of the proposed approach for Case 1 “Full assembly”. (<b>Left-hand column</b>): posterior mean; (<b>middle column</b>): marginal standard deviation; and (<b>right-hand column</b>): the ratio of marginal standard deviation divided by posterior mean.</p>
Figure 17
<p>The FULL response matrix: pin pixels averaging of the proposed approach for Case 2 “Uniform Missing Assembly” and Case 3 “Center Missing Assembly”. (<b>Left-hand column</b>): posterior mean; (<b>middle column</b>): marginal standard deviation (in log-scale); and (<b>right-hand column</b>): the ratio of marginal standard deviation divided by posterior mean.</p>
Figure 18
<p>The FULL response matrix: pin pixels averaging of the proposed approach for Case “4” and Case “5”. (<b>Left-hand column</b>): posterior mean; (<b>middle column</b>): marginal standard deviation; and (<b>right-hand column</b>): the ratio of marginal standard deviation divided by posterior mean.</p>
Figure 19
<p>The FULL response matrix: probability of presence of pins using the proposed approach. (<b>a</b>) Case “1”; (<b>b</b>) Case “2”, (<b>c</b>) Case “3”, (<b>d</b>) Case “4”, and (<b>e</b>) Case “5”.</p>
Figure 20
<p>The FULL response matrix: activity estimation using pin pixels averaging using existing methods for the five cases. The regularization parameter <math display="inline"><semantics> <mi>β</mi> </semantics></math> was set to <math display="inline"><semantics> <mrow> <mn>1</mn> <mo>×</mo> <msup> <mn>10</mn> <mrow> <mo>−</mo> <mn>5</mn> </mrow> </msup> </mrow> </semantics></math> in the four methods.</p>
Figure 21
<p>The EMPIRICAL response matrix: pin pixels averaging of the proposed approach for Case “4” and Case “5” using the “EMPIRICAL” response matrix. (<b>Left-hand column</b>): posterior mean; (<b>middle column</b>): marginal standard deviation; and (<b>right-hand column</b>): the ratio of marginal standard deviation divided by posterior mean.</p>
Figure 22
<p>The EMPIRICAL response matrix: probability of presence of pins using the proposed approach. (<b>Left</b>): Case “4”; (<b>right</b>): Case “5”.</p>
Figure 23
<p>The EMPIRICAL response matrix: activity estimation using pin pixels averaging using existing methods, for Cases “4” and “5”. The regularization parameter <math display="inline"><semantics> <mi>β</mi> </semantics></math> was set to <math display="inline"><semantics> <mrow> <mn>1</mn> <mo>×</mo> <msup> <mn>10</mn> <mrow> <mo>−</mo> <mn>5</mn> </mrow> </msup> </mrow> </semantics></math> in the four methods.</p>
14 pages, 826 KiB  
Article
Sparsity-Based Recovery of Three-Dimensional Photoacoustic Images from Compressed Single-Shot Optical Detection
by Dylan Green, Anne Gelb and Geoffrey P. Luke
J. Imaging 2021, 7(10), 201; https://doi.org/10.3390/jimaging7100201 - 2 Oct 2021
Cited by 5 | Viewed by 2669
Abstract
Photoacoustic (PA) imaging combines optical excitation with ultrasonic detection to achieve high-resolution imaging of biological samples. A high-energy pulsed laser is often used for imaging at multi-centimeter depths in tissue. These lasers typically have a low pulse repetition rate, so to acquire images in real-time, only one pulse of the laser can be used per image. This single pulse necessitates the use of many individual detectors and receive electronics to adequately record the resulting acoustic waves and form an image. Such requirements make many PA imaging systems both costly and complex. This investigation proposes and models a method of volumetric PA imaging using a state-of-the-art compressed sensing approach to achieve real-time acquisition of the initial pressure distribution (IPD) at a reduced level of cost and complexity. In particular, a single exposure of an optical image sensor is used to capture an entire Fabry–Pérot interferometric acoustic sensor. Time-resolved encoding is achieved through spatial sweeping with a galvanometer. This optical system further makes use of a random binary mask to set a predetermined subset of pixels to zero, thus enabling recovery of the time-resolved signals. The Two-Step Iterative Shrinking and Thresholding algorithm is used to reconstruct the IPD, harnessing the sparsity naturally occurring in the IPD as well as the additional structure provided by the binary mask. We conduct experiments on simulated data and analyze the performance of our new approach. Full article
(This article belongs to the Special Issue Inverse Problems and Imaging)
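The shrinkage iteration at the heart of the two-step algorithm named in the abstract can be sketched in its basic one-step form; TwIST itself additionally combines the two previous iterates to accelerate convergence on ill-conditioned problems. The random sensing matrix, sparsity level, and the value of lam below are illustrative assumptions, not the paper's PA forward model.

```python
import numpy as np

def soft(z, t):
    """Soft-thresholding (shrinkage) operator."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ist(A, b, lam=0.01, n_iter=2000):
    """One-step iterative shrinkage/thresholding for
    min 0.5 ||A x - b||^2 + lam ||x||_1; with ||A||_2 <= 1 a
    unit gradient step is stable."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft(x + A.T @ (b - A @ x), lam)   # gradient step + shrinkage
    return x

# Compressed-sensing toy problem: recover a 3-sparse signal from 40 of 100 dims
rng = np.random.default_rng(0)
m, n = 40, 100
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, 2)                      # normalize spectral norm to 1
x0 = np.zeros(n); x0[[7, 30, 71]] = [1.0, -1.5, 2.0]
b = A @ x0                                     # noiseless compressed data
x_rec = ist(A, b)
```

In the paper, A would additionally encode the galvanometer sweep and the binary mask applied on the image sensor.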
Figure 1
<p>The proposed optical system. SMF = single-mode fiber, OI = optical isolator, COL = collimator, PBS = polarized beam splitter, <math display="inline"><semantics> <mi>λ</mi> </semantics></math>/4 = quarter wave plate, L = lens, CAM = camera, DMD = digital micromirror device, LP = linear polarizer, FPE = Fabry–Pérot etalon.</p>
Figure 2
<p>A pictorial representation of the imaging process as individual forward operations.</p>
Figure 3
<p>(<b>a</b>) A cross-section of the ground truth impulse IPD and (<b>b</b>–<b>d</b>) cross-sections of a no-noise reconstruction of the impulse IPD. Here we use regularization parameter <math display="inline"><semantics> <mrow> <mi>λ</mi> <mo>=</mo> <mn>2.5</mn> <mo>×</mo> <msup> <mn>10</mn> <mrow> <mo>−</mo> <mn>4</mn> </mrow> </msup> </mrow> </semantics></math> in (<a href="#FD6-jimaging-07-00201" class="html-disp-formula">6</a>).</p>
Figure 4
<p>(<b>a</b>) The base case cylindrical IPD and (<b>b</b>) an example of a vessel-like IPD.</p>
Figure 5
<p>Average MS-SSIM (<b>a</b>) and average MSE (<b>b</b>) of a cylinder IPD with a radius of six perpendicular to the direction of shearing. Average MS-SSIM (<b>c</b>) and average MSE (<b>d</b>) of a cylinder IPD with a radius of 6 h parallel to the direction of shearing. Average MS-SSIM (<b>e</b>) and average MSE (<b>f</b>) of a vessel IPD with ten vessels present. Each point is averaged over five trials for each SNR value considered and is plotted against the SNR value used to calculate the additive Gaussian noise.</p>
Figure 6
<p>Average MSE (<b>a</b>) and average MS-SSIM (<b>b</b>) of the reconstructed cylinder IPD with varying radius, averaged over five trials for each radius value considered. A cross section of the ground truth cylinder with radius of five (<b>c</b>), ten (<b>d</b>) and fifteen (<b>e</b>) voxel widths and a cross section of the reconstruction of the same cylinder, respectively, (<b>f</b>–<b>h</b>). Each cylinder considered is orthogonal to the <math display="inline"><semantics> <mrow> <mi>x</mi> <mi>z</mi> </mrow> </semantics></math>-plane.</p>
Figure 7
<p>Average MSE (<b>a</b>) and average MS-SSIM (<b>b</b>) of the reconstructed vessel IPDs with a varying number of vessels present, averaged over twenty IPDs for each number of vessels considered. Ground truth projection onto the xy-plane of a four vessel IPD (<b>c</b>), eight vessel IPD (<b>d</b>), and twelve vessel IPD (<b>e</b>), and the reconstruction of the same IPDs, respectively, (<b>f</b>–<b>h</b>). In images (<b>c</b>–<b>h</b>), hue represents depth in the z-dimension, with the colorbar indicating pixel lengths away from the FPE, while intensity is proportional to the value of the voxels after being thresholded at 0.15.</p>
Full article ">Figure 8
<p>Average MSE (<b>a</b>) and average MS-SSIM (<b>b</b>) of the reconstructed cylinder IPD as well as average MSE (<b>c</b>) and average MS-SSIM (<b>d</b>) of the reconstructed vessel-like IPD, averaged over five trials for each value of the regularization parameter considered. Low, medium, and high values of SNR are considered for comparison.</p>
Full article ">
31 pages, 6589 KiB  
Article
Spatially Coherent Clustering Based on Orthogonal Nonnegative Matrix Factorization
by Pascal Fernsel
J. Imaging 2021, 7(10), 194; https://doi.org/10.3390/jimaging7100194 - 28 Sep 2021
Cited by 2 | Viewed by 2136
Abstract
Classical approaches in cluster analysis are typically based on a feature space analysis. However, many applications lead to datasets with additional spatial information and a ground truth with spatially coherent classes, which will not necessarily be reconstructed well by standard clustering methods. Motivated by applications in hyperspectral imaging, we introduce in this work clustering models based on Orthogonal Nonnegative Matrix Factorization (ONMF), which include an additional Total Variation (TV) regularization procedure on the cluster membership matrix to enforce the needed spatial coherence in the clusters. We propose several approaches with different optimization techniques, where the TV regularization is either performed as a subsequent post-processing step or included into the clustering algorithm. Finally, we provide a numerical evaluation of 12 different TV regularized ONMF methods on a hyperspectral dataset obtained from a matrix-assisted laser desorption/ionization imaging measurement, which leads to significantly better clustering results compared to classical clustering models. Full article
(This article belongs to the Special Issue Inverse Problems and Imaging)
Show Figures

Figure 1
<p>Histological image of the considered MALDI dataset after H&amp;E staining (<b>a</b>) and the histological annotation (<b>b</b>).</p>
Full article ">Figure 2
<p>Ground truth and clusterings of all considered separated methods without TV regularization. The best-performing replicate of each method is chosen based on the normalized van Dongen criterion.</p>
Full article ">Figure 3
<p>Ground truth and clusterings of all considered methods, including the TV regularization. The best-performing replicate is chosen based on the normalized van Dongen criterion.</p>
Full article ">Figure 4
<p>Box plots of the normalized van Dongen criterion (VD<sub>n</sub>) and the normalized variation of information (VI<sub>n</sub>) of all performed experiments.</p>
Full article ">Figure 5
<p>Box plot of the Entropy (E) of all performed experiments.</p>
Full article ">Figure 6
<p>Box plot of the computational times in minutes of all performed experiments.</p>
Full article ">
14 pages, 6437 KiB  
Article
A Green Prospective for Learned Post-Processing in Sparse-View Tomographic Reconstruction
by Elena Morotti, Davide Evangelista and Elena Loli Piccolomini
J. Imaging 2021, 7(8), 139; https://doi.org/10.3390/jimaging7080139 - 7 Aug 2021
Cited by 13 | Viewed by 2822
Abstract
Deep Learning is developing tools that are of great interest for inverse imaging applications. In this work, we consider a medical imaging reconstruction task from subsampled measurements, which is an active research field where Convolutional Neural Networks have already revealed their great potential. However, the commonly used architectures are very deep and, hence, prone to overfitting and infeasible for clinical use. Inspired by the ideas of the green AI literature, we propose a shallow neural network to perform efficient Learned Post-Processing on images roughly reconstructed by the filtered backprojection algorithm. The results show that the proposed inexpensive network computes images of comparable (or even higher) quality in about one-fourth of the time and is more robust than the widely used and very deep ResUNet for tomographic reconstructions from sparse-view protocols. Full article
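The "green" argument is essentially a parameter and compute budget one. As a back-of-the-envelope check (with hypothetical channel counts, not the exact architectures of the paper), one can compare the weight count of a three-layer convolutional post-processor against a five-level ResUNet-style encoder–decoder:

```python
def conv2d_params(c_in, c_out, k=3):
    # weights + biases of one k x k 2D convolution layer
    return c_in * c_out * k * k + c_out

# hypothetical shallow net: 1 -> 64 -> 32 -> 1 channels, three conv layers
shallow = conv2d_params(1, 64) + conv2d_params(64, 32) + conv2d_params(32, 1)

# hypothetical ResUNet-style net: two convs per level, channels doubling over
# five encoder levels and mirrored in the decoder (skip connections ignored)
chans = [64, 128, 256, 512, 1024]
deep = conv2d_params(1, chans[0])
for c_prev, c in zip(chans, chans[1:]):
    deep += conv2d_params(c_prev, c) + conv2d_params(c, c)   # encoder
for c_prev, c in zip(chans[::-1], chans[::-1][1:]):
    deep += conv2d_params(c_prev, c) + conv2d_params(c, c)   # decoder
deep += conv2d_params(chans[0], 1)                           # output conv
# the shallow net is orders of magnitude smaller than the deep one
```

With these (assumed) channel counts, the shallow network carries well under one percent of the deep network's weights, which is the kind of gap that drives both the training cost and the robustness claims.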
(This article belongs to the Special Issue Inverse Problems and Imaging)
Show Figures

Figure 1
<p>The three different CT geometric protocols. In the traditional setting (<b>a</b>), the X-ray source and the detector traverse a full-circle trajectory with a very small angular step, which is enlarged in sparse-view CT (<b>b</b>), whereas in limited-angle CT (<b>c</b>), the source-detector rotation is restricted to a C-shape path.</p>
Full article ">Figure 2
<p>Graphical draft of the considered two-step workflow for tomographic reconstruction from sparse-view data.</p>
Full article ">Figure 3
<p>On the right: graphical representation of the ResUNet architecture; on the left: details on the maximum receptive fields for each of the five levels of the network encoder (RF percentage with respect to the input 512 × 512 image and size of RF).</p>
Full article ">Figure 4
<p>On the <b>left</b>: graphical representation of the 3L-SSNet architecture; on the <b>right</b>: details on the receptive fields for each of the three layers of the network (RF percentage with respect to the input 512 × 512 image and size of RF). The name of the three layers follows the notation in [<a href="#B34-jimaging-07-00139" class="html-bibr">34</a>].</p>
Full article ">Figure 5
<p>Ground truth image (<b>a</b>) and the two considered zoomed-in regions (<b>b</b>,<b>c</b>), indicated by the red squares on the full image (<b>a</b>).</p>
Full article ">Figure 6
<p>Full-range geometry reconstructions. The results obtained with FBP (<b>left column</b>), ResUNet (<b>central column</b>) and 3L-SSNet (<b>right column</b>). Below each image are the values of its RE and SSIM metrics.</p>
Full article ">Figure 7
<p>Half-range geometry reconstructions. The results obtained with FBP (<b>left column</b>), ResUNet (<b>central column</b>) and 3L-SSNet (<b>right column</b>). Below each image are the values of its RE and SSIM metrics.</p>
Full article ">Figure 8
<p>Crops of the reconstructions of the test patient with unseen noise and half-range geometry. ResUNet in (<b>a</b>,<b>c</b>), 3L-SSNet in (<b>b</b>,<b>d</b>).</p>
Full article ">Figure 9
<p>XCAT phantom test image with half-range geometry. From left to right: (<b>first column</b>) ground truth image (<b>a</b>) and the considered zoomed-in regions (<b>e</b>,<b>i</b>); reconstructions from half-range geometry with FBP (<b>second column</b>), ResUNet (<b>third column</b>) and 3L-SSNet (<b>fourth column</b>).</p>
Full article ">
18 pages, 5669 KiB  
Article
Directional TGV-Based Image Restoration under Poisson Noise
by Daniela di Serafino, Germana Landi and Marco Viola
J. Imaging 2021, 7(6), 99; https://doi.org/10.3390/jimaging7060099 - 16 Jun 2021
Cited by 17 | Viewed by 2553
Abstract
We are interested in the restoration of noisy and blurry images where the texture mainly follows a single direction (i.e., directional images). Problems of this type arise, for example, in microscopy or computed tomography for carbon or glass fibres. In order to deal with these problems, the Directional Total Generalized Variation (DTGV) was developed by Kongskov et al. in 2017 and 2019, in the case of impulse and Gaussian noise. In this article we focus on images corrupted by Poisson noise, extending the DTGV regularization to image restoration models where the data fitting term is the generalized Kullback–Leibler divergence. We also propose a technique for the identification of the main texture direction, which improves upon the techniques used in the aforementioned work about DTGV. We solve the problem by an ADMM algorithm with proven convergence and subproblems that can be solved exactly at a low computational cost. Numerical results on both phantom and real images demonstrate the effectiveness of our approach. Full article
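The Poisson data-fitting term named in the abstract, the generalized Kullback–Leibler divergence between observed counts y and expected counts z = Hx + b, is easy to write down explicitly. A minimal sketch, with the standard 0·log 0 := 0 convention (the discretization details here are illustrative, not the paper's exact code):

```python
import numpy as np

def gen_kl(z, y):
    """Generalized Kullback-Leibler divergence KL(y || z) used as a Poisson
    data-fidelity term: sum( y*log(y/z) - y + z ), with 0*log(0) := 0."""
    z = np.asarray(z, dtype=float)
    y = np.asarray(y, dtype=float)
    # evaluate y*log(y/z) only where y > 0; elsewhere the term is 0
    ratio = np.where(y > 0, y / np.maximum(z, 1e-300), 1.0)
    t = np.where(y > 0, y * np.log(ratio), 0.0)
    return float(np.sum(t - y + z))
```

The divergence is nonnegative and vanishes exactly when z = y, which is why it plays the role that the squared residual plays under Gaussian noise.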
(This article belongs to the Special Issue Inverse Problems and Imaging)
Show Figures

Figure 1
<p>Workflow of Algorithm 1 on a random directional image.</p>
Full article ">Figure 2
<p>Test problem <tt>phantom</tt>: original and corrupted images. The yellow dash-dotted line indicates the direction estimated by Algorithm 1 and the red dashed line the direction estimated by the method in [<a href="#B9-jimaging-07-00099" class="html-bibr">9</a>].</p>
Full article ">Figure 3
<p>Test problem <tt>grass</tt>: original and corrupted images. The yellow dash-dotted line indicates the direction estimated by Algorithm 1 and the red dashed line the direction estimated by the method in [<a href="#B9-jimaging-07-00099" class="html-bibr">9</a>].</p>
Full article ">Figure 4
<p>Test problem <tt>leaves</tt>: original and corrupted images. The yellow dash-dotted line indicates the direction estimated by Algorithm 1 and the red dashed line the direction estimated by the method in [<a href="#B9-jimaging-07-00099" class="html-bibr">9</a>].</p>
Full article ">Figure 5
<p>Test problem <tt>carbon</tt>: original and corrupted images. The yellow dash-dotted line indicates the direction estimated by Algorithm 1 and the red dashed line the direction estimated by the method in [<a href="#B9-jimaging-07-00099" class="html-bibr">9</a>].</p>
Full article ">Figure 6
<p>Direction estimation for <tt>phantom</tt> with SNR <math display="inline"><semantics> <mrow> <mo>=</mo> <mn>35</mn> <mo>,</mo> <mn>37</mn> <mo>,</mo> <mn>39</mn> <mo>,</mo> <mn>41</mn> <mo>,</mo> <mn>43</mn> </mrow> </semantics></math> dB (from <b>left</b> to <b>right</b>) and out-of-focus blur with radius R <math display="inline"><semantics> <mrow> <mo>=</mo> <mn>5</mn> <mo>,</mo> <mn>7</mn> <mo>,</mo> <mn>9</mn> </mrow> </semantics></math> (from <b>top</b> to <b>bottom</b>).</p>
Full article ">Figure 7
<p>Direction estimation for <tt>phantom</tt> with SNR <math display="inline"><semantics> <mrow> <mo>=</mo> <mn>37</mn> <mo>,</mo> <mn>43</mn> </mrow> </semantics></math> (<b>top</b>, <b>bottom</b>) and out-of-focus blur with radius R <math display="inline"><semantics> <mrow> <mo>=</mo> <mn>5</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 8
<p>Test problem <tt>phantom</tt>: images restored with DTGV<math display="inline"><semantics> <msup> <mrow/> <mn>2</mn> </msup> </semantics></math> (<b>top</b>) and TGV<math display="inline"><semantics> <msup> <mrow/> <mn>2</mn> </msup> </semantics></math> (<b>bottom</b>).</p>
Full article ">Figure 9
<p>Test problem <tt>grass</tt>: images restored with DTGV<math display="inline"><semantics> <msup> <mrow/> <mn>2</mn> </msup> </semantics></math> (<b>top</b>) and TGV<math display="inline"><semantics> <msup> <mrow/> <mn>2</mn> </msup> </semantics></math> (<b>bottom</b>).</p>
Full article ">Figure 10
<p>Test problem <tt>leaves</tt>: images restored with DTGV<math display="inline"><semantics> <msup> <mrow/> <mn>2</mn> </msup> </semantics></math> (<b>top</b>) and TGV<math display="inline"><semantics> <msup> <mrow/> <mn>2</mn> </msup> </semantics></math> (<b>bottom</b>).</p>
Full article ">Figure 11
<p>Test problem <tt>carbon</tt>: images restored with DTGV<math display="inline"><semantics> <msup> <mrow/> <mn>2</mn> </msup> </semantics></math> (<b>top</b>) and TGV<math display="inline"><semantics> <msup> <mrow/> <mn>2</mn> </msup> </semantics></math> (<b>bottom</b>).</p>
Full article ">Figure 12
<p>Test problem <tt>carbon</tt>: difference images with DTGV<math display="inline"><semantics> <msup> <mrow/> <mn>2</mn> </msup> </semantics></math> (<b>top</b>) and TGV<math display="inline"><semantics> <msup> <mrow/> <mn>2</mn> </msup> </semantics></math> (<b>bottom</b>).</p>
Full article ">Figure 13
<p>Test problem <tt>carbon</tt>: RMSE history for the KL-DGTV<math display="inline"><semantics> <msup> <mrow/> <mn>2</mn> </msup> </semantics></math> (continuous line) and KL-TGV<math display="inline"><semantics> <msup> <mrow/> <mn>2</mn> </msup> </semantics></math> (dashed line) models.</p>
Full article ">
20 pages, 6782 KiB  
Article
Calibration-Less Multi-Coil Compressed Sensing Magnetic Resonance Image Reconstruction Based on OSCAR Regularization
by Loubna El Gueddari, Chaithya Giliyar Radhakrishna, Emilie Chouzenoux and Philippe Ciuciu
J. Imaging 2021, 7(3), 58; https://doi.org/10.3390/jimaging7030058 - 19 Mar 2021
Cited by 5 | Viewed by 3245
Abstract
Over the last decade, the combination of compressed sensing (CS) with acquisition over multiple receiver coils in magnetic resonance imaging (MRI) has allowed the emergence of faster scans while maintaining a good signal-to-noise ratio (SNR). Self-calibrating techniques, such as ESPIRiT, have become the standard approach to estimating the coil sensitivity maps prior to the reconstruction stage. In this work, we proceed differently and introduce a new calibration-less multi-coil CS reconstruction method. Calibration-less techniques no longer require the prior extraction of sensitivity maps to perform multi-coil image reconstruction but usually alternate between sensitivity map estimation and image reconstruction. Here, to avoid the nonconvexity of the latter approach, we reconstruct as many MR images as the number of coils. To compensate for the ill-posedness of this inverse problem, we leverage structured sparsity of the multi-coil images in a wavelet transform domain while adapting to variations in SNR across coils owing to the OSCAR (octagonal shrinkage and clustering algorithm for regression) regularization. Coil-specific complex-valued MR images are thus obtained by minimizing a convex but nonsmooth objective function using the proximal primal-dual Condat-Vù algorithm. Comparison and validation on retrospective Cartesian and non-Cartesian studies based on the Brain fastMRI data set demonstrate that the proposed reconstruction method significantly outperforms the state-of-the-art (ℓ1-ESPIRiT, calibration-less AC-LORAKS and CaLM methods) on magnitude images for the T1 and FLAIR contrasts. Additionally, further validation operated on 8 to 20-fold prospectively accelerated high-resolution ex vivo human brain MRI data collected at 7 Tesla confirms the retrospective results. 
Overall, OSCAR-based regularization preserves phase information more accurately (both visually and quantitatively) compared to other approaches, an asset that can only be assessed on real prospective experiments. Full article
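The OSCAR penalty combines an ℓ1 term with pairwise ℓ∞ terms, λ1 Σ|xi| + λ2 Σ_{i&lt;j} max(|xi|, |xj|); it is exactly a sorted weighted ℓ1 norm with linearly decreasing weights, which is what makes cheap proximal evaluation possible inside a primal-dual scheme. A small self-checking sketch of the two equivalent forms (parameter names are ours, not the paper's notation):

```python
import numpy as np

def oscar_pairwise(x, lam1, lam2):
    """OSCAR penalty in its original form: l1 plus pairwise max terms."""
    a = np.abs(np.asarray(x, dtype=float))
    n = a.size
    pair = sum(max(a[i], a[j]) for i in range(n) for j in range(i + 1, n))
    return float(lam1 * a.sum() + lam2 * pair)

def oscar_sorted(x, lam1, lam2):
    """Equivalent sorted weighted-l1 form: the r-th largest magnitude
    gets weight lam1 + lam2*(n - r)."""
    a = np.sort(np.abs(np.asarray(x, dtype=float)))[::-1]  # descending
    n = a.size
    w = lam1 + lam2 * (n - 1 - np.arange(n))
    return float(w @ a)
```

The sorted form costs O(n log n) instead of O(n²), and its proximal operator reduces to the prox of an ordered weighted ℓ1 norm.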
(This article belongs to the Special Issue Inverse Problems and Imaging)
Show Figures

Figure 1
<p>Map of (<b>a</b>) structural similarity (SSIM) (<b>b</b>) peak signal-to-noise ratio (pSNR) score as a function of hyperparameters <math display="inline"><semantics> <mrow> <mo>(</mo> <mi>λ</mi> <mo>,</mo> <mi>γ</mi> <mo>)</mo> </mrow> </semantics></math> involved in OSCAR-band (b-OSCAR) regularization using 20-fold prospectively accelerated Sparkling sampling scheme.</p>
Full article ">Figure 2
<p>Phase processing for better visualization of the brain structure and hence easier comparisons. (<b>a</b>) Raw wrapped phase image. (<b>b</b>) Unwrapped phase image. (<b>c</b>) High-pass filtered phase image; notice that the brain structures are more visible here. (<b>d</b>) Contrast stretching between the 2nd and 98th intensity percentiles, which enhances contrast and improves visualization.</p>
Full article ">Figure 3
<p>Retrospective results for Cartesian mask (<b>A</b>) and non-Cartesian undersampling pattern (<b>B</b>) on T1 weighted (<b>i</b>) and FLAIR (<b>ii</b>) images of the brain fastMRI data set. The results are presented in the form of box plots, computed over <math display="inline"><semantics> <mrow> <mi>S</mi> <mo>=</mo> <mn>5</mn> </mrow> </semantics></math> slices. From left to right in each boxplot, we compared the subbandwise OSCAR (b-OSCAR), scalewise OSCAR (s-OSCAR), global OSCAR (g-OSCAR), CaLM, L1-ESPIRiT and AC-LORAKS reconstruction methods.</p>
Full article ">Figure 4
<p>Retrospective results for a single slice of FLAIR (<b>top row</b>) and T1 weighted (<b>bottom row</b>) images of the brain fastMRI data set obtained using the Cartesian mask shown in <a href="#jimaging-07-00058-f003" class="html-fig">Figure 3</a>A with <math display="inline"><semantics> <mrow> <mi>UF</mi> <mo>=</mo> <mn>4</mn> </mrow> </semantics></math>, corresponding to <math display="inline"><semantics> <mrow> <mi>AF</mi> <mo>≃</mo> <mn>4</mn> </mrow> </semantics></math>. The fully sampled Cartesian reference and the different methods (zero-filled inverse, g-OSCAR, CaLM, L1-ESPIRiT and AC-LORAKS) are shown from left to right and the SSIM scores are indicated to reflect the performance of each method.</p>
Full article ">Figure 5
<p>Retrospective results for a single slice of FLAIR (<b>top row</b>) and T1 weighted (<b>bottom row</b>) images of the brain fastMRI data set obtained using the Non-Cartesian sampling pattern shown in <a href="#jimaging-07-00058-f003" class="html-fig">Figure 3</a>B with <math display="inline"><semantics> <mrow> <mi>AF</mi> <mo>=</mo> <mn>16</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>UF</mi> <mo>=</mo> <mn>1.66</mn> </mrow> </semantics></math>. The fully sampled Cartesian reference and the different methods (Density Compensated (DC) adjoint NUFFT, s-OSCAR, CaLM, L1-ESPIRiT and AC-LORAKS) are shown from left to right and the SSIM scores are indicated to reflect the performance of each method.</p>
Full article ">Figure 6
<p>(<b>Top Row</b>) Reconstructed MR images (magnitude) from 20-fold accelerated SPARKLING acquisitions using different methods. The scan time for the Cartesian reference was 4 min 41 s, while the scan time of accelerated SPARKLING was 15 s. (<b>a</b>) Cartesian reference. (<b>b</b>) The density compensated adjoint of raw k-space data (DC adj-NUFFT). (<b>c</b>) Reconstruction based on the b-OSCAR formulation. (<b>d</b>) Calibration-less reconstruction based on CaLM or group-LASSO regularization. (<b>e</b>) Self-calibrating <math display="inline"><semantics> <msub> <mo>ℓ</mo> <mn>1</mn> </msub> </semantics></math>-ESPIRiT reconstruction. (<b>f</b>) Auto-calibrated (AC) LORAKS reconstruction. (<b>Second Row</b>) Respective zooms in the red frame. (<b>Third Row</b>) Enhanced structures in phase images obtained by the method described in <a href="#sec4dot5-jimaging-07-00058" class="html-sec">Section 4.5</a> on the virtual coil reconstructions of each method. The respective MSE of the phase images with respect to the Cartesian reference are also reported. (<b>Bottom Row</b>) Respective zooms to highlight details.</p>
Full article ">
18 pages, 1343 KiB  
Article
Data-Driven Regularization Parameter Selection in Dynamic MRI
by Matti Hanhela, Olli Gröhn, Mikko Kettunen, Kati Niinimäki, Marko Vauhkonen and Ville Kolehmainen
J. Imaging 2021, 7(2), 38; https://doi.org/10.3390/jimaging7020038 - 20 Feb 2021
Cited by 1 | Viewed by 2636
Abstract
In dynamic MRI, sufficient temporal resolution can often only be obtained using imaging protocols which produce undersampled data for each image in the time series. This has led to the popularity of compressed sensing (CS) based reconstructions. One problem in CS approaches is determining the regularization parameters, which control the balance between data fidelity and regularization. We propose a data-driven approach for the total variation regularization parameter selection, where reconstructions yield expected sparsity levels in the regularization domains. The expected sparsity levels are obtained from the measurement data for temporal regularization and from a reference image for spatial regularization. Two formulations are proposed: a simultaneous search for a parameter pair yielding the expected sparsity in both domains (S-surface), and a sequential parameter selection using the S-curve method (Sequential S-curve). The approaches are evaluated using simulated and experimental DCE-MRI. In the simulated test case, both methods produce a parameter pair and reconstruction that is close to the root mean square error (RMSE) optimal pair and reconstruction. In the experimental test case, the methods produce almost equal parameter selections, and the reconstructions are of high perceived quality. Both methods lead to a highly feasible selection of the regularization parameters in both test cases, while the sequential method is computationally more efficient. Full article
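The S-curve idea is that the regularization functional's value (the sparsity level of the reconstruction) is monotone in the regularization parameter, so the parameter matching a prescribed sparsity level can be found by a one-dimensional search. A toy sketch of this mechanism, using soft-thresholding of a signal as a stand-in for the TV-regularized reconstruction (function names and the bisection setup are ours):

```python
import numpy as np

def soft(y, lam):
    """Soft-thresholding: the prox of the l1 norm, a toy 'reconstruction'."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

def s_curve_lambda(y, target_nnz, lo=0.0, hi=None, iters=60):
    """Bisect the regularization parameter so that the solution's sparsity
    (here, the nonzero count of the thresholded signal) matches a target
    level, mimicking the S-curve selection principle."""
    if hi is None:
        hi = float(np.max(np.abs(y)))
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if np.count_nonzero(soft(y, mid)) > target_nnz:
            lo = mid          # too many nonzeros: regularize harder
        else:
            hi = mid          # sparse enough: try a smaller parameter
    return hi
```

In the paper the "sparsity level" is a TV norm expected from the data or a reference image, and the monotone quantity is searched over a grid or curve rather than a toy threshold, but the selection logic is the same.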
(This article belongs to the Special Issue Inverse Problems and Imaging)
Show Figures

Figure 1
<p>Vascular and tumor regions and the three template signals used in the simulation. (<b>Top left</b>) The simulated image with the vascular region of interest (ROI) marked in blue and labeled ‘1’, and the tumor ROI marked in red and labeled ‘2’. (<b>Top right</b>) Simulated vascular ROI signal template. (<b>Bottom left</b>) Simulated tumor ROI signal template. (<b>Bottom right</b>) Simulated signal template in the rest of the tissue. Each image pixel was multiplied with the corresponding template and the result was added to the image to create the simulated time series.</p>
Full article ">Figure 2
<p>Cartesian gradient-echo pulse sequence full data reconstructions with spatial total variation (TV) regularization from before and after contrast injection used as a reference. The two images have the same adjusted color scale.</p>
Full article ">Figure 3
<p>The joint sparsity contour used in the S-surface parameter selection as well as all the parameter selection curves for Sequential S-curve, L-curve and MC-SURE in the simulation with noise level 5%. The red dots mark the computational grid of <math display="inline"><semantics> <mi>α</mi> </semantics></math> and <math display="inline"><semantics> <mi>β</mi> </semantics></math> values in the contour, and the red squares mark the 1D grid of regularization parameters in the parameter selection curves. Note that the spatial TV norm for the Sequential S-curve is from a single image frame whereas the spatial TV norm for the L-curve is from the whole image series.</p>
Full article ">Figure 4
<p>Joint RMSE contours with noise levels 2%, 5% and 10% with the parameters of the four different reconstructions marked and the jRMSE values of the chosen reconstructions. The images show that the regularization parameter pairs obtained with the sparsity based S-surface and Sequential S-curve methods are close to the MinRMSE parameter pair.</p>
Full article ">Figure 5
<p>Top rows: Single frame images of the simulated true target and reconstructions of the simulation with noise level 5% at <math display="inline"><semantics> <mrow> <mi>t</mi> <mo>≈</mo> <mn>90</mn> </mrow> </semantics></math> s (marked in the signal curves with a dashed line) with parameters chosen with different methods; minimum joint RMSE (MinRMSE), simultaneous parameter selection according to joint sparsity (S-surface), sequential parameter selection according to sparsity (Seq. S-curve) and sequential parameter selection by the L-curve and MC-SURE methods. Bottom row: Single pixel signals from the vascular and tumour areas as well as healthy tissue from the right side of the cortex with the different methods compared to the phantom and RMS errors of the corresponding regions.</p>
Full article ">Figure 6
<p>The joint sparsity contour used in the S-surface parameter selection as well as all the parameter selection curves for the Sequential S-curve, L-curve and MC-SURE methods in the experimental data case. The red dots mark the computational grid of <math display="inline"><semantics> <mi>α</mi> </semantics></math> and <math display="inline"><semantics> <mi>β</mi> </semantics></math> values in the contour, and the red squares mark the 1D grid of regularization parameters in the parameter selection curves. Note that the spatial TV norm for the Sequential S-curve is from a single image frame whereas the spatial TV norm for the L-curve is from the whole image series.</p>
Full article ">Figure 7
<p>Top rows: Single frame images of the cartesian reference image with spatial regularization from before contrast agent injection and the reconstructions of the experimental data at <math display="inline"><semantics> <mrow> <mi>t</mi> <mo>≈</mo> <mn>200</mn> </mrow> </semantics></math> s (marked below with a dashed line) with parameters chosen with the S-surface, Sequential S-curve, L-curve and MC-SURE methods. Bottom row: Single pixel signals from the tumour and vascular areas with the four different methods.</p>
Full article ">
21 pages, 9628 KiB  
Article
A Model-Based Optimization Framework for Iterative Digital Breast Tomosynthesis Image Reconstruction
by Elena Loli Piccolomini and Elena Morotti
J. Imaging 2021, 7(2), 36; https://doi.org/10.3390/jimaging7020036 - 13 Feb 2021
Cited by 9 | Viewed by 3596
Abstract
Digital Breast Tomosynthesis is an X-ray imaging technique that allows a volumetric reconstruction of the breast from a small number of low-dose two-dimensional projections. Although it is already used in the clinical setting, enhancing the quality of the recovered images is still a subject of research. The aim of this paper was to propose and compare, in a general optimization framework, three slightly different models and corresponding accurate iterative algorithms for Digital Breast Tomosynthesis image reconstruction, characterized by a convergent behavior. The suggested model-based implementations are specifically aligned to Digital Breast Tomosynthesis clinical requirements and take advantage of a Total Variation regularizer. We also tune a fully-automatic strategy to set a proper regularization parameter. We assess our proposals on real data, acquired from a breast accreditation phantom and a clinical case. The results confirm the effectiveness of the presented framework in reconstructing breast volumes, with particular focus on the masses and microcalcifications, in a few iterations, and in enhancing the image quality with prolonged execution. Full article
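The Total Variation regularizer shared by the three models is, for a 2D slice u, the sum over pixels of the gradient magnitude. A minimal smoothed (differentiable) version with forward differences; the eps smoothing and replicated boundary are our own illustrative choices, and the paper's exact discretization may differ:

```python
import numpy as np

def total_variation(u, eps=1e-8):
    """Smoothed isotropic TV of a 2D image: sum of sqrt(dx^2 + dy^2 + eps^2)
    with forward differences and replicated last row/column (zero boundary
    gradient)."""
    u = np.asarray(u, dtype=float)
    dx = np.diff(u, axis=1, append=u[:, -1:])  # horizontal differences
    dy = np.diff(u, axis=0, append=u[-1:, :])  # vertical differences
    return float(np.sum(np.sqrt(dx**2 + dy**2 + eps**2)))
```

A constant image has (essentially) zero TV, while a unit step across a column contributes the jump height times the edge length, which is why TV preserves sharp edges like microcalcification boundaries while smoothing noise.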
(This article belongs to the Special Issue Inverse Problems and Imaging)
Show Figures

Figure 1
<p>Scheme of the Digital Breast Tomosynthesis (DBT) reconstruction process. On the left, a draft of the frontal (coronal) section of a DBT system acquiring projection images of the breast; in the center, a chart representing the <span class="html-italic">k</span>th iteration of the algorithm computing the sequence <math display="inline"><semantics> <msub> <mrow> <mo>{</mo> <msup> <mi>x</mi> <mrow> <mo>(</mo> <mi>k</mi> <mo>)</mo> </mrow> </msup> <mo>}</mo> </mrow> <mi>k</mi> </msub> </semantics></math> of approximate solutions by solving the model-based minimization problem; on the right, the evaluation of reconstructed volumes by inspection of cancer objects of interest.</p>
Full article ">Figure 2
<p>Schematic drawing representing the Distance Driven approach to compute the system matrix. (<b>a</b>) View on the <math display="inline"><semantics> <mrow> <mi>Y</mi> <mi>Z</mi> </mrow> </semantics></math> plane of an X-ray projection onto a single pixel, from a fixed angle. The intersection of the X-ray beam with the volume is highlighted in magenta. (<b>b</b>) The magenta area represents the backward projection of the blue recording unit onto a volume slice parallel to the <math display="inline"><semantics> <mrow> <mi>X</mi> <mi>Y</mi> </mrow> </semantics></math> plane.</p>
Full article ">Figure 3
<p>Reconstructions of microcalcification cluster number 3 in BR3D phantom obtained with Scaled Gradient Projection (SGP), Fixed Point (FP) and Chambolle–Pock (CP) methods. In the upper row, reconstructions in 5 iterations; in the bottom row, reconstructions in 15 iterations.</p>
Full article ">Figure 4
<p>Plots of the Plane Profile on the left and of the Artifact Spread Function (ASF) vectors on the right, taken over one microcalcification of cluster number 3 in the BR3D phantom. In all the plots the red line corresponds to SGP method, the blue line to FP method and the green line to CP method.</p>
Full article ">Figure 5
<p>Objective function values vs. the iteration number for the SGP execution on the phantom test. The convergence has been reached after 44 iterations by satisfying condition (<a href="#FD29-jimaging-07-00036" class="html-disp-formula">29</a>). The red labels outline the function values at 5, 15 and 30 iterations.</p>
Full article ">Figure 6
<p>SGP results on BR3D phantom. (<b>a</b>–<b>c</b>) Reconstructions of MC cluster number 5 obtained after 5, 15 and 30 iterations. (<b>d</b>–<b>f</b>) Reconstructions of mass number 2 obtained after 5, 15 and 30 iterations.</p>
Full article ">Figure 7
<p>SGP results on BR3D phantom. Plots of the Plane Profile on the left and of the ASF vectors on the right, taken over one microcalcification of cluster 5 (<b>upper row</b>) and over mass number 2 (<b>bottom row</b>). In all the plots the black line corresponds to 5 iterations, red line to 15 iterations and blue line to 30 iterations.</p>
Full article ">Figure 8
<p>Results obtained after 5, 15 and 30 SGP iterations on a human breast data set. (<b>a</b>–<b>c</b>) Reconstructions of a 440 × 400 pixels region presenting both a spherical mass (pointed by the arrow) and a microcalcification (identified by the circle). (<b>d</b>,<b>e</b>) Plane profiles on the mass and on the microcalcification. In the plots: black line corresponds to 5 iterations and blue line to 30 iterations.</p>
Full article ">Figure 9
<p>Results obtained after 5, 15 and 30 SGP iterations on a human breast data set. The reported 558 × 480 pixels crops present two spiculated masses.</p>
Full article ">Figure 10
<p>Plot of the set of values <math display="inline"><semantics> <msub> <mi>λ</mi> <mi>k</mi> </msub> </semantics></math> versus the iteration <span class="html-italic">k</span> (blue line) in the SGP execution on the phantom test. The red straight line represents the constant value <math display="inline"><semantics> <mrow> <mi>λ</mi> <mo>=</mo> <mn>0.005</mn> </mrow> </semantics></math> used in SGP for the experiments presented in the previous sections.</p>
Full article ">Figure 11
<p>Plane Profiles on one microcalcification of cluster number 3 of the phantom, obtained with SGP with different regularization parameters, in 5 and 15 iterations. In all the plots the red line corresponds to the fixed parameter and the blue line to the adaptive choice of <math display="inline"><semantics> <mi>λ</mi> </semantics></math>.</p>
23 pages, 2380 KiB  
Article
A New Hybrid Inversion Method for 2D Nuclear Magnetic Resonance Combining TSVD and Tikhonov Regularization
by Germana Landi, Fabiana Zama and Villiam Bortolotti
J. Imaging 2021, 7(2), 18; https://doi.org/10.3390/jimaging7020018 - 28 Jan 2021
Cited by 4 | Viewed by 2598
Abstract
This paper is concerned with the reconstruction of relaxation time distributions in Nuclear Magnetic Resonance (NMR) relaxometry. This is a large-scale and ill-posed inverse problem with many potential applications in biology, medicine, chemistry, and other disciplines. However, the large amount of data and the consequently long inversion times, together with the high sensitivity of the solution to the value of the regularization parameter, still represent a major obstacle to the applicability of NMR relaxometry. We present a method for two-dimensional data inversion (2DNMR) which combines Truncated Singular Value Decomposition (TSVD) and Tikhonov regularization in order to reduce the inversion time and the sensitivity to the value of the regularization parameter. The Discrete Picard condition is used to jointly select the SVD truncation and Tikhonov regularization parameters. We evaluate the performance of the proposed method on both simulated and real NMR measurements. Full article
(This article belongs to the Special Issue Inverse Problems and Imaging)
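The hybrid filtering idea described in the abstract can be sketched in a few lines of NumPy: truncate the SVD of the kernel matrix, then apply Tikhonov filter factors to the retained components. This is an illustrative sketch, not the authors' implementation: the function and parameter names (`hybrid_tsvd_tikhonov`, `tau`) are assumptions, and a simple relative threshold on the singular values stands in for the paper's Discrete-Picard-based selection of the truncation level.

```python
import numpy as np

def hybrid_tsvd_tikhonov(K, s, alpha, tau=1e-2):
    """Sketch of a hybrid TSVD + Tikhonov inversion.

    K     : kernel matrix, s : measured data
    alpha : Tikhonov regularization parameter
    tau   : relative singular-value cutoff (stand-in for the
            Discrete-Picard-based choice used in the paper)
    """
    U, sig, Vt = np.linalg.svd(K, full_matrices=False)
    beta = U.T @ s                          # Fourier coefficients u_i^T s
    k = int(np.sum(sig > tau * sig[0]))     # truncation level
    # Tikhonov filter factors applied only to the retained spectrum.
    phi = sig[:k]**2 / (sig[:k]**2 + alpha)
    return Vt[:k].T @ (phi * beta[:k] / sig[:k])
```

With a tiny `alpha` and no truncation this reduces to the naive SVD solution; raising `tau` discards the small singular values that amplify noise.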
Graphical abstract
Figure 1
<p>The Hybrid (blue line), TSVD (red dashed line) and Tikhonov (black dashdotted line) filter factors <math display="inline"><semantics> <msub> <mi>ϕ</mi> <mi>i</mi> </msub> </semantics></math> versus <math display="inline"><semantics> <msub> <mi>σ</mi> <mi>i</mi> </msub> </semantics></math> for <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>2.5</mn> <mo>·</mo> <msup> <mn>10</mn> <mn>5</mn> </msup> </mrow> </semantics></math>.</p>
Figure 2
<p><b>Top</b>: simulated <math display="inline"><semantics> <msub> <mi>T</mi> <mn>1</mn> </msub> </semantics></math>–<math display="inline"><semantics> <msub> <mi>T</mi> <mn>2</mn> </msub> </semantics></math> map and surface distribution. <b>Bottom</b>: projections along the <math display="inline"><semantics> <msub> <mi>T</mi> <mn>1</mn> </msub> </semantics></math> and <math display="inline"><semantics> <msub> <mi>T</mi> <mn>2</mn> </msub> </semantics></math> dimensions.</p>
Figure 3
<p>The singular values of <math display="inline"><semantics> <mi mathvariant="bold">K</mi> </semantics></math> (blue line), <math display="inline"><semantics> <msub> <mi mathvariant="bold">K</mi> <mn>1</mn> </msub> </semantics></math> (red dashed line) and <math display="inline"><semantics> <msub> <mi mathvariant="bold">K</mi> <mn>2</mn> </msub> </semantics></math> (black dashdotted line). For easier visualization, the singular values of <math display="inline"><semantics> <msub> <mi mathvariant="bold">K</mi> <mn>1</mn> </msub> </semantics></math> and <math display="inline"><semantics> <msub> <mi mathvariant="bold">K</mi> <mn>2</mn> </msub> </semantics></math> are plotted versus the integers <math display="inline"><semantics> <mrow> <mn>1</mn> <mo>,</mo> <msup> <mn>100</mn> <mn>1</mn> </msup> <mo>,</mo> <msup> <mn>100</mn> <mn>2</mn> </msup> <mo>,</mo> <mo>…</mo> <mo>,</mo> <msup> <mn>100</mn> <mn>100</mn> </msup> </mrow> </semantics></math>.</p>
Figure 4
<p>Singular values of <math display="inline"><semantics> <msub> <mi mathvariant="bold">K</mi> <mi>k</mi> </msub> </semantics></math> obtained for <math display="inline"><semantics> <mrow> <mi>τ</mi> <mo>=</mo> <mn>0.1</mn> </mrow> </semantics></math> (blue line) and the singular values of <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="bold">K</mi> <mrow> <msub> <mi>k</mi> <mn>1</mn> </msub> <mo>,</mo> <mn>1</mn> </mrow> </msub> <mo>⊗</mo> <msub> <mi mathvariant="bold">K</mi> <mrow> <msub> <mi>k</mi> <mn>2</mn> </msub> <mo>,</mo> <mn>2</mn> </mrow> </msub> </mrow> </semantics></math> for <math display="inline"><semantics> <mrow> <msub> <mi>τ</mi> <mn>1</mn> </msub> <mo>=</mo> <msub> <mi>τ</mi> <mn>2</mn> </msub> <mo>=</mo> <mn>0.5</mn> </mrow> </semantics></math> (blue dotted line), <math display="inline"><semantics> <mrow> <msub> <mi>τ</mi> <mn>1</mn> </msub> <mo>=</mo> <msub> <mi>τ</mi> <mn>2</mn> </msub> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math> (black dashed line) and for <math display="inline"><semantics> <mrow> <msub> <mi>τ</mi> <mn>1</mn> </msub> <mo>=</mo> <msub> <mi>τ</mi> <mn>2</mn> </msub> <mo>=</mo> <mn>5</mn> </mrow> </semantics></math> (red dashdotted line).</p>
Figure 5
<p>The Picard plot for the simulated NMR data. Singular values <math display="inline"><semantics> <msub> <mi>σ</mi> <mi>i</mi> </msub> </semantics></math> (black dashed line), Fourier coefficients <math display="inline"><semantics> <mrow> <msubsup> <mi mathvariant="bold">u</mi> <mi>i</mi> <mi>T</mi> </msubsup> <mi mathvariant="bold">s</mi> </mrow> </semantics></math> (blue continuous line) and solution coefficients <math display="inline"><semantics> <mrow> <msubsup> <mi mathvariant="bold">u</mi> <mi>i</mi> <mi>T</mi> </msubsup> <mi mathvariant="bold">s</mi> <mo>/</mo> <msub> <mi>σ</mi> <mi>i</mi> </msub> </mrow> </semantics></math> (red circles) for <math display="inline"><semantics> <mrow> <mi>i</mi> <mo>=</mo> <mn>1</mn> <mo>,</mo> <mo>…</mo> <mo>,</mo> <mn>500</mn> </mrow> </semantics></math>.</p>
Figure 6
<p>Relative error behaviour (<b>left</b>) and the computational time in seconds (<b>right</b>) versus the regularization parameter values.</p>
Figure 7
<p>Estimated distributions from the Hybrid (<b>top row</b>), Tikhonov (<b>middle row</b>) and VSH (<b>bottom row</b>) models for <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math> (<b>left column</b>), <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <msup> <mn>10</mn> <mrow> <mo>−</mo> <mn>2</mn> </mrow> </msup> </mrow> </semantics></math> (<b>middle column</b>) and <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <msup> <mn>10</mn> <mrow> <mo>−</mo> <mn>4</mn> </mrow> </msup> </mrow> </semantics></math> (<b>right column</b>).</p>
Figure 8
<p>Projections along the <math display="inline"><semantics> <mrow> <mi>T</mi> <mn>1</mn> </mrow> </semantics></math> dimension from the Hybrid (<b>top row</b>), Tikhonov (<b>middle row</b>) and VSH (<b>bottom row</b>) models for <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math> (<b>left column</b>), <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <msup> <mn>10</mn> <mrow> <mo>−</mo> <mn>2</mn> </mrow> </msup> </mrow> </semantics></math> (<b>middle column</b>) and <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <msup> <mn>10</mn> <mrow> <mo>−</mo> <mn>4</mn> </mrow> </msup> </mrow> </semantics></math> (<b>right column</b>). The red dashed and blue continuous lines respectively represent the exact projections and the computed ones.</p>
Figure 9
<p>Projections along the <math display="inline"><semantics> <mrow> <mi>T</mi> <mn>2</mn> </mrow> </semantics></math> dimension from the Hybrid (<b>top row</b>), Tikhonov (<b>middle row</b>) and VSH (<b>bottom row</b>) models for <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math> (<b>left column</b>), <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <msup> <mn>10</mn> <mrow> <mo>−</mo> <mn>2</mn> </mrow> </msup> </mrow> </semantics></math> (<b>middle column</b>) and <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <msup> <mn>10</mn> <mrow> <mo>−</mo> <mn>4</mn> </mrow> </msup> </mrow> </semantics></math> (<b>right column</b>). The red dashed and blue continuous lines respectively represent the exact projections and the computed ones.</p>
Figure 10
<p>Hybrid method: restored <math display="inline"><semantics> <msub> <mi>T</mi> <mn>1</mn> </msub> </semantics></math>–<math display="inline"><semantics> <msub> <mi>T</mi> <mn>2</mn> </msub> </semantics></math> distribution and projections along the T1 and T2 dimensions (<math display="inline"><semantics> <mrow> <mi>τ</mi> <mo>=</mo> <mn>0.05</mn> </mrow> </semantics></math>).</p>
Figure 11
<p>VSH method: restored <math display="inline"><semantics> <msub> <mi>T</mi> <mn>1</mn> </msub> </semantics></math>–<math display="inline"><semantics> <msub> <mi>T</mi> <mn>2</mn> </msub> </semantics></math> distribution and projections along the T1 and T2 dimensions (<math display="inline"><semantics> <mrow> <mi>τ</mi> <mo>=</mo> <mn>0.1</mn> </mrow> </semantics></math>).</p>
Figure 12
<p>From <b>top</b> to <b>bottom</b>: UPEN, Hybrid and VSH restored <math display="inline"><semantics> <msub> <mi>T</mi> <mn>1</mn> </msub> </semantics></math>–<math display="inline"><semantics> <msub> <mi>T</mi> <mn>2</mn> </msub> </semantics></math> maps (<b>left</b>) and surface distributions (<b>right</b>).</p>
Figure 13
<p><b>Left</b>: singular values (black dashed line) of <math display="inline"><semantics> <mi mathvariant="bold">K</mi> </semantics></math> and solution coefficients (red circles). Only the first 300 singular values are depicted. <b>Right</b>: Singular values of <math display="inline"><semantics> <msub> <mi mathvariant="bold">K</mi> <mn>1</mn> </msub> </semantics></math> (red dashed line) and <math display="inline"><semantics> <msub> <mi mathvariant="bold">K</mi> <mn>2</mn> </msub> </semantics></math> (black dashdotted line). The horizontal line corresponds to the value <math display="inline"><semantics> <mrow> <mi>τ</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math>.</p>
Figure 14
<p>From <b>top</b> to <b>bottom</b>: UPEN, Hybrid and VSH projections along the T1 (<b>left</b>) and T2 (<b>right</b>) dimensions.</p>
Figure 14 Cont.
<p>From <b>top</b> to <b>bottom</b>: UPEN, Hybrid and VSH projections along the T1 (<b>left</b>) and T2 (<b>right</b>) dimensions.</p>
26 pages, 3307 KiB  
Article
A Computationally Efficient Reconstruction Algorithm for Circular Cone-Beam Computed Tomography Using Shallow Neural Networks
by Marinus J. Lagerwerf, Daniël M. Pelt, Willem Jan Palenstijn and Kees Joost Batenburg
J. Imaging 2020, 6(12), 135; https://doi.org/10.3390/jimaging6120135 - 8 Dec 2020
Cited by 11 | Viewed by 3505
Abstract
Circular cone-beam (CCB) Computed Tomography (CT) has become an integral part of industrial quality control, materials science and medical imaging. The need to acquire and process each scan in a short time naturally leads to trade-offs between speed and reconstruction quality, creating a need for fast reconstruction algorithms capable of producing accurate reconstructions from limited data. In this paper, we introduce the Neural Network Feldkamp–Davis–Kress (NN-FDK) algorithm. This algorithm adds a machine learning component to the FDK algorithm to improve its reconstruction accuracy while maintaining its computational efficiency. Moreover, the NN-FDK algorithm is designed such that it has low training data requirements and is fast to train. This ensures that the proposed algorithm can be used to improve image quality in high-throughput CT scanning settings, where FDK is currently used to keep pace with the acquisition speed using readily available computational resources. We compare the NN-FDK algorithm to two standard CT reconstruction algorithms and to two popular deep neural networks trained to remove reconstruction artifacts from the 2D slices of an FDK reconstruction. We show that the NN-FDK reconstruction algorithm is substantially faster in computing a reconstruction than all the tested alternative methods except for the standard FDK algorithm, and that it can compute accurate CCB CT reconstructions in cases of high noise, a low number of projection angles, or large cone angles. Moreover, we show that the training time of an NN-FDK network is orders of magnitude lower than that of the considered deep neural networks, with only a slight reduction in reconstruction accuracy. Full article
(This article belongs to the Special Issue Inverse Problems and Imaging)
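As a rough illustration of the "shallow network" component the abstract refers to: NN-FDK evaluates, per voxel, a two-layer perceptron over a small number of FDK-type filtered inputs. The sketch below applies such a network to a precomputed feature matrix; the shapes, the sigmoid choice, and all names are illustrative assumptions, and in the actual method the hidden-node inputs are FDK reconstructions with learned filters rather than a generic feature matrix.

```python
import numpy as np

def shallow_nnfdk_like(features, Wh, bh, wo, bo):
    """Voxel-wise shallow network in the spirit of NN-FDK.

    features : (n_voxels, N_e) precomputed filtered inputs per voxel
    Wh, bh   : hidden-layer weights (N_h, N_e) and biases (N_h,)
    wo, bo   : output weights (N_h,) and bias (scalar)
    Returns one value per voxel, squashed into (0, 1).
    """
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    hidden = sigmoid(features @ Wh.T + bh)  # (n_voxels, N_h) perceptrons
    return sigmoid(hidden @ wo + bo)        # scalar output per voxel
```

Because the network is this small, training it requires far fewer examples than a deep network, which is consistent with the low training-data requirements claimed above.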
Figure 1
<p>Schematic representation of the NN-FDK network, <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="sans-serif">N</mi> <mi>θ</mi> </msub> <mo>:</mo> <msup> <mi mathvariant="double-struck">R</mi> <msub> <mi>N</mi> <mi>e</mi> </msub> </msup> <mo>→</mo> <mi mathvariant="double-struck">R</mi> </mrow> </semantics></math>, with <math display="inline"><semantics> <msub> <mi>N</mi> <mi>h</mi> </msub> </semantics></math> hidden nodes. Note that if we take <math display="inline"><semantics> <mrow> <mi>q</mi> <mo>=</mo> <msub> <mrow> <mo>(</mo> <msub> <mi>F</mi> <mi mathvariant="bold">y</mi> </msub> <mi>E</mi> <mo>)</mo> </mrow> <mrow> <mi>v</mi> <mo>:</mo> </mrow> </msub> </mrow> </semantics></math> we get <math display="inline"><semantics> <mrow> <mi>q</mi> <mo>·</mo> <msubsup> <mi mathvariant="bold">h</mi> <mi>e</mi> <mi>k</mi> </msubsup> <mo>=</mo> <msub> <mrow> <mo>(</mo> <mi>FDK</mi> <mrow> <mo>(</mo> <mi mathvariant="bold">y</mi> <mo>,</mo> <mi>E</mi> <msubsup> <mi mathvariant="bold">h</mi> <mi>e</mi> <mi>k</mi> </msubsup> <mo>)</mo> </mrow> <mo>)</mo> </mrow> <mi>v</mi> </msub> </mrow> </semantics></math> in the perceptrons of the hidden layer and the output of the network is equal to the <span class="html-italic">v</span>-th voxel of the NN-FDK reconstruction algorithm.</p>
Figure 2
<p>Examples of simulated data. (<b>a</b>) Slices, (<b>Left</b>) <math display="inline"><semantics> <mrow> <mi>z</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math>, (<b>Right</b>) <math display="inline"><semantics> <mrow> <mi>x</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math>, of the Fourshape test phantom. This phantom is designed such that at least one of each object can clearly be observed in the slices. (<b>b</b>) The <math display="inline"><semantics> <mrow> <mi>x</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math> slice for a Random Defrise phantom (<b>Left</b>) and the standard Defrise phantom without alternating intensities from [<a href="#B53-jimaging-06-00135" class="html-bibr">53</a>] (<b>Right</b>).</p>
Figure 3
<p>The <math display="inline"><semantics> <mrow> <mi>z</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math> (<b>Left</b>) and <math display="inline"><semantics> <mrow> <mi>y</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math> (<b>Right</b>) slice of the gold standard reconstruction of the high-dose dataset of the 21st walnut with full number of projection angles. The projection data are acquired using the FleX-ray scanner located at the CWI [<a href="#B55-jimaging-06-00135" class="html-bibr">55</a>]. The horizontal line artifacts visible in the right figure are due to imperfections in the projection data and not due to the reconstruction process.</p>
Figure 4
<p>The required memory to store all intermediate images for applying a 2D and 3D U-net and MSD-net as a function of the input image size.</p>
Figure 5
<p>The test set error (TSE) as a function of the number of voxels the training process has seen. We report the lowest TSE until that point. The networks are trained on randomly generated Fourshape phantoms with size <math display="inline"><semantics> <mrow> <mi>N</mi> <mo>=</mo> <mn>1024</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>N</mi> <mi>a</mi> </msub> <mo>=</mo> <mn>32</mn> </mrow> </semantics></math> projection angles and no noise. (<b>Left</b>) Linear scaling in the number of voxels ranging from 1 epoch for the NN-FDK (<math display="inline"><semantics> <msup> <mn>10</mn> <mn>6</mn> </msup> </semantics></math> voxels), to one full 3D dataset (<math display="inline"><semantics> <msup> <mn>10</mn> <mn>9</mn> </msup> </semantics></math> voxels). (<b>Right</b>) Logarithmic scaling in the number of voxels. Ranging from 1 epoch for the NN-FDK network (<math display="inline"><semantics> <msup> <mn>10</mn> <mn>6</mn> </msup> </semantics></math> voxels) to 5 epochs for a DNN (<math display="inline"><semantics> <mrow> <mn>5</mn> <mo>×</mo> <msup> <mn>10</mn> <mn>10</mn> </msup> </mrow> </semantics></math> voxels).</p>
Figure 6
<p>The average and standard deviation of the TSE and structural similarity index (SSIM). These results are discussed in <a href="#sec5dot2-jimaging-06-00135" class="html-sec">Section 5.2</a>. For each number of projection angles, noise level, cone angle and training scenario, one specific network is trained and used to evaluate the 20 reconstruction problems. The NN-FDK reconstruction time is 4–10 times lower than that of U-net and MSD-net, and approximately 40 times lower than that of SIRT<math display="inline"><semantics> <msubsup> <mrow/> <mn>200</mn> <mo>+</mo> </msubsup> </semantics></math>. (<b>a</b>) The average and standard deviation of the TSE and SSIM as a function of the number of projection angles <math display="inline"><semantics> <msub> <mi>N</mi> <mi>a</mi> </msub> </semantics></math> computed over 20 randomly generated phantoms of the Fourshape family. (<b>b</b>) The average and standard deviation of the TSE and SSIM as a function of the emitted photon count <math display="inline"><semantics> <msub> <mi>I</mi> <mn>0</mn> </msub> </semantics></math> computed over 20 randomly generated phantoms of the Fourshape family. (<b>c</b>) The average and standard deviation of the average TSE and SSIM as a function of the cone angle computed over 20 randomly generated phantoms of the Defrise family.</p>
Figure 7
<p>Line profile through the center of the <math display="inline"><semantics> <mrow> <mi>z</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math> slice of the Fourshape test phantom. We show the ground truth profile, the profile of the FDK reconstruction with lowest emitted photon count <math display="inline"><semantics> <mrow> <msub> <mi>I</mi> <mn>0</mn> </msub> <mo>=</mo> <mn>256</mn> </mrow> </semantics></math>, and the profile of the FDK reconstruction with the highest emitted photon count <math display="inline"><semantics> <mrow> <msub> <mi>I</mi> <mn>0</mn> </msub> <mo>=</mo> <mn>8196</mn> </mrow> </semantics></math>.</p>
Figure 8
<p>Two-dimensional slices of the reconstructions for the considered reconstruction methods. (Top) Slice <math display="inline"><semantics> <mrow> <mi>x</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math> of the Fourshape test phantom reconstruction problem with <math display="inline"><semantics> <mrow> <msub> <mi>N</mi> <mi>a</mi> </msub> <mo>=</mo> <mn>360</mn> </mrow> </semantics></math> projection angles and <math display="inline"><semantics> <mrow> <msub> <mi>I</mi> <mn>0</mn> </msub> <mo>=</mo> <mn>1024</mn> </mrow> </semantics></math> emitted photon count. (Middle) Slice <math display="inline"><semantics> <mrow> <mi>z</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math> of the Fourshape test phantom reconstruction problem with <math display="inline"><semantics> <mrow> <msub> <mi>N</mi> <mi>a</mi> </msub> <mo>=</mo> <mn>32</mn> </mrow> </semantics></math> projection angles. (Bottom) Slice <math display="inline"><semantics> <mrow> <mi>x</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math> of the Defrise reconstruction problem with <math display="inline"><semantics> <mrow> <msub> <mi>N</mi> <mi>a</mi> </msub> <mo>=</mo> <mn>360</mn> </mrow> </semantics></math> projection angles and a cone angle of 11.5 degrees.</p>
Figure 9
<p>Slices <math display="inline"><semantics> <mrow> <mi>z</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>x</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math> of several reconstruction methods of the high-dose dataset of the 21st walnut with 32 projection angles. (<b>a</b>) FDK<math display="inline"><semantics> <msub> <mrow/> <mi>HN</mi> </msub> </semantics></math>. (<b>b</b>) SIRT<math display="inline"><semantics> <msubsup> <mrow/> <mn>200</mn> <mo>+</mo> </msubsup> </semantics></math> reconstruction. (<b>c</b>) NN-FDK<math display="inline"><semantics> <msub> <mrow/> <mn>4</mn> </msub> </semantics></math> reconstruction. (<b>d</b>) MSD-net reconstruction.</p>

Review


33 pages, 3286 KiB  
Review
Off-The-Grid Variational Sparse Spike Recovery: Methods and Algorithms
by Bastien Laville, Laure Blanc-Féraud and Gilles Aubert
J. Imaging 2021, 7(12), 266; https://doi.org/10.3390/jimaging7120266 - 6 Dec 2021
Cited by 7 | Viewed by 3086
Abstract
Gridless sparse spike reconstruction is a rather new research field with significant results for the super-resolution problem, where the aim is to retrieve fine-scale details from a noisy, filtered acquisition. To tackle this problem, we are interested in optimisation under some prior, typically sparsity, i.e., the assumption that the source is composed of spikes. Following the seminal work on the generalised LASSO for measures, called the Beurling-Lasso (BLASSO), we review the chief theoretical and numerical breakthroughs of the off-the-grid inverse problem, and we illustrate its usefulness for the super-resolution problem in Single Molecule Localisation Microscopy (SMLM) through new reconstruction metrics and tests on synthetic and real SMLM data performed for this review. Full article
(This article belongs to the Special Issue Inverse Problems and Imaging)
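The greedy flavour of the Sliding Frank-Wolfe algorithm reviewed in this article can be caricatured in a few lines: at each step, find where the correlation of the residual with the forward kernel (a proxy for the dual certificate) peaks, add a spike there, and refit the amplitudes. The toy below searches over a candidate grid instead of sliding positions continuously, and every name and parameter in it is an illustrative assumption rather than the paper's implementation.

```python
import numpy as np

def sfw_toy(y, grid, sigma=0.05, lam=0.0, n_iter=2):
    """Toy greedy spike recovery for 1D Gaussian deconvolution,
    loosely in the spirit of Sliding Frank-Wolfe / BLASSO."""
    kernel = lambda t: np.exp(-(grid[:, None] - np.atleast_1d(t)[None, :])**2
                              / (2 * sigma**2))
    pos = np.empty(0)
    amp = np.empty(0)
    for _ in range(n_iter):
        resid = y - kernel(pos) @ amp               # current data misfit
        eta = kernel(grid).T @ resid                # certificate proxy on the grid
        pos = np.append(pos, grid[np.argmax(np.abs(eta))])
        amp, *_ = np.linalg.lstsq(kernel(pos), y, rcond=None)  # refit amplitudes
        amp = np.sign(amp) * np.maximum(np.abs(amp) - lam, 0)  # soft-threshold
    return pos, amp
```

In the genuine algorithm the new spike position is found by a continuous (off-the-grid) optimisation, and the final "sliding" step jointly adjusts positions and amplitudes, which is what removes the grid bias illustrated in Figure 1 below.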
Figure 1
<p>(<b>a</b>) Discrete reconstruction, which can be seen as spikes with support constrained on a grid; (<b>b</b>) off-the-grid reconstruction, where the spikes can move continuously on the line. The red line is the acquisition <span class="html-italic">y</span>, orange spikes are the source (the cause we want to retrieve), blue spikes are the discrete reconstruction constrained on a grid and green spikes can move freely since they are off-the-grid. Note that, when a source spike is between two grid points, two spikes will be recovered in the discrete reconstruction.</p>
Figure 2
<p>(<b>a</b>) Certificates associated with acquisition <span class="html-italic">y</span> and noiseless <math display="inline"><semantics> <msub> <mi>y</mi> <mn>0</mn> </msub> </semantics></math>, result of three <math display="inline"><semantics> <mi>δ</mi> </semantics></math>-peaks (in black, plotted with 10 times their ground-truth amplitudes) through a Fourier measurement of cut-off frequency <math display="inline"><semantics> <mrow> <msub> <mi>f</mi> <mi mathvariant="normal">c</mi> </msub> <mo>=</mo> <mn>6</mn> </mrow> </semantics></math>; (<b>b</b>) localisation of the roots of the certificate associated with the dual <span class="html-italic">maximum</span>. All the roots (the three ground-truths and the spurious spike on the right) on the unit circle are interpreted as the support of the <math display="inline"><semantics> <mi>δ</mi> </semantics></math>-peaks.</p>
Figure 3
<p>Reconstruction with the Interior Point Method for <math display="inline"><semantics> <mrow> <mi>λ</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math>. The algorithm detected a spurious spike near 0.05; otherwise, amplitudes and positions of the peaks are correctly estimated.</p>
Figure 4
<p>Reconstruction by <span class="html-italic">Sliding Frank-Wolfe</span> for a 1D Fourier operator, with the same settings (<span class="html-italic">y</span>, noise realisations, <math display="inline"><semantics> <mrow> <mi>λ</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math>) as the former section. All ground-truth spikes are reconstructed, no spurious spike is detected.</p>
Figure 5
<p>(<b>a</b>) First iterate <math display="inline"><semantics> <mrow> <mi>k</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math>; (<b>b</b>) mid-computation <math display="inline"><semantics> <mrow> <mi>k</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math>; (<b>c</b>) end of the computation <math display="inline"><semantics> <mrow> <mi>k</mi> <mo>=</mo> <mn>2</mn> </mrow> </semantics></math>, results for SFW reconstruction on the domain <math display="inline"><semantics> <mrow> <mi mathvariant="script">X</mi> <mo>=</mo> <msup> <mrow> <mo>[</mo> <mn>0</mn> <mo>,</mo> <mn>1</mn> <mo>]</mo> </mrow> <mn>2</mn> </msup> </mrow> </semantics></math> for the Gaussian kernel with spread-factor <math display="inline"><semantics> <mrow> <mi>σ</mi> <mo>=</mo> <mn>0.1</mn> </mrow> </semantics></math> and additive Gaussian noise of variance <math display="inline"><semantics> <mrow> <mn>0.1</mn> </mrow> </semantics></math>. All <math display="inline"><semantics> <mi>δ</mi> </semantics></math>-peaks are successfully recovered only thanks to the acquisition, <math display="inline"><semantics> <mrow> <mi>λ</mi> <mo>=</mo> <mrow> <mn>3</mn> <mo>×</mo> <msup> <mn>10</mn> <mrow> <mo>−</mo> <mn>2</mn> </mrow> </msup> </mrow> </mrow> </semantics></math>.</p>
Figure 6
<p>Digest of the important quantities mentioned in [<a href="#B3-jimaging-07-00266" class="html-bibr">3</a>,<a href="#B23-jimaging-07-00266" class="html-bibr">23</a>]: <span style="color:#a35f45">red refers to <math display="inline"><semantics> <mrow> <msup> <mi mathvariant="script">M</mi> <mo>+</mo> </msup> <mfenced open="(" close=")"> <mi mathvariant="script">X</mi> </mfenced> </mrow> </semantics></math> quantities</span>, <span style="color:#6f9e41">green to <math display="inline"><semantics> <mrow> <msup> <mi mathvariant="sans-serif">Ω</mi> <mi>N</mi> </msup> <mover> <mo>=</mo> <mrow> <mi>def</mi> <mo>.</mo> </mrow> </mover> <mrow> <mo>(</mo> <msup> <mi>R</mi> <mo>+</mo> </msup> <mo>×</mo> <mi mathvariant="script">X</mi> <mo>)</mo> </mrow> </mrow> </semantics></math></span> and <span style="color:#332f9e">blue to the Wasserstein space <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="script">P</mi> <mn>2</mn> </msub> <mrow> <mo>(</mo> <mi mathvariant="sans-serif">Ω</mi> <mo>)</mo> </mrow> </mrow> </semantics></math> and theoretical results</span>. Dashed lines correspond to the theoretical section, and continuous lines indicate the numerical part.</p>
Figure 7
<p>Reconstruction by <span class="html-italic">Conic Particle Gradient Descent</span> for a 1D Fourier operator in a noiseless setting, with the same ground-truth spikes as the former section. Implementation is an adaptation of [<a href="#B23-jimaging-07-00266" class="html-bibr">23</a>], <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mi>β</mi> <mo>=</mo> <mrow> <mn>1</mn> <mo>×</mo> <msup> <mn>10</mn> <mrow> <mo>−</mo> <mn>3</mn> </mrow> </msup> </mrow> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>λ</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math> for 1000 iterations.</p>
Figure 8
<p>(<b>a</b>) Initialisation <math display="inline"><semantics> <mrow> <mi>k</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math>; (<b>b</b>) mid-computation <math display="inline"><semantics> <mrow> <mi>k</mi> <mo>=</mo> <mn>150</mn> </mrow> </semantics></math>; (<b>c</b>) end of the computation <math display="inline"><semantics> <mrow> <mi>k</mi> <mo>=</mo> <mn>1000</mn> </mrow> </semantics></math>. <span class="html-italic">Conic Particle Gradient Descent</span> applied for 2D Gaussian deconvolution, the red dots are the particle measure <math display="inline"><semantics> <msup> <mi>ν</mi> <mrow> <mo>(</mo> <mi>k</mi> <mo>)</mo> </mrow> </msup> </semantics></math> (size of dot proportional with amplitude), the three white dots are the source measure, the image in the background is the noiseless acquisition <math display="inline"><semantics> <msub> <mi>y</mi> <mn>0</mn> </msub> </semantics></math> and the black lines are the paths of the particles <math display="inline"><semantics> <msup> <mi>ν</mi> <mrow> <mo>(</mo> <mi>k</mi> <mo>)</mo> </mrow> </msup> </semantics></math>—all the paths constitute the gradient flow <math display="inline"><semantics> <msub> <mrow> <mo>(</mo> <msub> <mi>ν</mi> <mi>t</mi> </msub> <mo>)</mo> </mrow> <mrow> <mi>t</mi> <mo>≥</mo> <mn>0</mn> </mrow> </msub> </semantics></math>. Implementation is an adaptation of [<a href="#B23-jimaging-07-00266" class="html-bibr">23</a>], <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mi>β</mi> <mo>=</mo> <mrow> <mn>1</mn> <mo>×</mo> <msup> <mn>10</mn> <mrow> <mo>−</mo> <mn>2</mn> </mrow> </msup> </mrow> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>λ</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math>.</p>
Figure 9
<p>(<b>a</b>) Ground-truth tubulins, two excerpts of the stack in the square below: convolution + all noise described before; (<b>b</b>) reconstructed measure by <span class="html-italic">Sliding Frank-Wolfe</span> visualised through Gaussian kernel with a smaller <math display="inline"><semantics> <mi>σ</mi> </semantics></math> (see text).</p>
Figure 10
<p>(<b>top left</b>) Excerpt of the stack; (<b>top right</b>) mean of the stack; (<b>bottom left</b>) reconstruction by off-the-grid method; (<b>bottom right</b>) Deep-STORM.</p>