A Rehabilitation of Pixel-Based Spectral Reconstruction from RGB Images
Figure 1
<p>RGB sensing coarsely sums the spectral intensities into 3 values per pixel. Conversely, spectral reconstruction recovers the lost spectral information from the RGB image.</p>

Figure 2
<p>The SR mean-relative-absolute error (MRAE) maps of (<b>A</b>) the leading deep neural network (DNN) “AWAN” [<a href="#B35-sensors-23-04155" class="html-bibr">35</a>], (<b>B</b>) our data-augmented AWAN and (<b>C</b>) our pixel-based “A++”, under the original, rotation and blur conditions. The error maps of the “rotation” experiments are rotated back to the upright orientation to ease comparison.</p>

Figure 3
<p>The effectiveness of our data augmentation setups for AWAN. The AWAN-aug result refers to augmenting input images with one random condition (a combined rotation-and-blur condition), while AWAN-aug3 augments 3 random conditions per image. The results are shown in mean per-image-mean MRAE.</p>

Figure 4
<p>An example of the hyperspectral image reconstruction performance of all compared methods. One scene from the ICVL database [<a href="#B44-sensors-23-04155" class="html-bibr">44</a>], shown in the left-most column, is tested under the original (<b>top row</b>), rotation (<b>middle row</b>) and two Gaussian blur conditions (<b>bottom 2 rows</b>). The error maps for the rotation condition are rotated back to the upright orientation to ease comparison.</p>

Figure 5
<p>Visualization of selected ground-truth and recovered spectra (continued in <a href="#sensors-23-04155-f006" class="html-fig">Figure 6</a>). <b>Left:</b> 3 pixels specified in an example scene. <b>Middle:</b> Legend for the spectral plots—in all plots in <a href="#sensors-23-04155-f005" class="html-fig">Figure 5</a> and <a href="#sensors-23-04155-f006" class="html-fig">Figure 6</a>, the ground truth (gt) is shown in black, A++ in red, AWAN in green and HSCNN-D in blue. <b>Right:</b> The recovery of spectra in the “sky” region (i.e., region ➀ in the example scene) under the Original, Rot90, Blur10 and Blur20 imaging conditions.</p>

Figure 6
<p>Visualization of the ground-truth and recovered spectra in regions ➁ and ➂ of the example scene in <a href="#sensors-23-04155-f005" class="html-fig">Figure 5</a>. The legend for the colored curves is the same as in <a href="#sensors-23-04155-f005" class="html-fig">Figure 5</a>: the ground truth (gt) is shown in black, A++ in red, AWAN in green and HSCNN-D in blue. Region ➁ refers to the “building” and region ➂ to the “plants”.</p>

Figure 7
<p>The top 5 Characteristic Vector Analysis (CVA) characteristic vectors of the ground-truth (gt; black curve), A++-recovered (red), AWAN-recovered (green) and HSCNN-D-recovered spectra (blue) in the testing image set. All recovered spectra are from original testing images without rotation or blurring.</p>

Figure 8
<p>The original (<b>left</b>) and relighted scenes (<b>middle</b> and <b>right</b>) shown in sRGB colors.</p>

Figure 9
<p>CIE Illuminant <span class="html-italic">A</span> scene relighting error heat maps in <math display="inline"><semantics> <mrow> <mo>Δ</mo> <msub> <mi>E</mi> <mn>00</mn> </msub> </mrow> </semantics></math>. The ground-truth relighted scene is shown in sRGB in the leftmost column. From the top to the bottom row, the tested imaging condition is, in turn, the original, rotation, and two Gaussian blur conditions.</p>

Figure 10
<p>CIE Illuminant <span class="html-italic">E</span> scene relighting error heat maps in <math display="inline"><semantics> <mrow> <mo>Δ</mo> <msub> <mi>E</mi> <mn>00</mn> </msub> </mrow> </semantics></math>. The ground-truth relighted scene is shown in sRGB in the leftmost column. From the top to the bottom row, the tested imaging condition is, in turn, the original, rotation, and two Gaussian blur conditions.</p>
Abstract
1. Introduction
2. Related Works
3. A++ Pixel-Based Spectral Reconstruction
3.1. Preliminaries
3.2. Overview of A+ and A++
3.3. Primary SR Algorithm
3.4. Clustering Step
3.5. Local Linear SR Maps
3.5.1. Training
3.5.2. Testing
4. Experiments
4.1. Dataset
4.2. Training, Validation and Testing
4.3. Evaluation Setup
4.4. Tuning Our A++ Sparse Coding Architecture
4.5. DNN Data Augmentation
- one of four image orientations: the original, or 90, 180 or 270 degrees clockwise, and
- a factor for the Gaussian filter, drawn from a uniform distribution.
4.6. Results
Characteristic Vector Analysis Test
4.7. Discussion and Limitations
5. Demonstration: Spectral Reconstruction for Scene Relighting
5.1. “Ground-Truth” Scene Relighting
5.2. Experiment: SR Relighting vs. RGB Diagonal Model Relighting
5.2.1. Evaluation Metric
5.2.2. Results
6. Conclusions
Supplementary Materials
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Wandell, B. The synthesis and analysis of color images. IEEE Trans. Pattern Anal. Mach. Intell. 1987, PAMI-9, 2–13. [Google Scholar] [CrossRef]
- Wang, W.; Ma, L.; Chen, M.; Du, Q. Joint Correlation Alignment-Based Graph Neural Network for Domain Adaptation of Multitemporal Hyperspectral Remote Sensing Images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 3170–3184. [Google Scholar] [CrossRef]
- Torun, O.; Yuksel, S. Unsupervised segmentation of LiDAR fused hyperspectral imagery using pointwise mutual information. Int. J. Remote Sens. 2021, 42, 6465–6480. [Google Scholar] [CrossRef]
- Tu, B.; Zhou, C.; Liao, X.; Zhang, G.; Peng, Y. Spectral–spatial hyperspectral classification via structural-kernel collaborative representation. IEEE Geosci. Remote Sens. Lett. 2020, 18, 861–865. [Google Scholar] [CrossRef]
- Inamdar, D.; Kalacska, M.; Leblanc, G.; Arroyo-Mora, J. Characterizing and mitigating sensor generated spatial correlations in airborne hyperspectral imaging data. Remote Sens. 2020, 12, 641. [Google Scholar] [CrossRef]
- Xie, W.; Fan, S.; Qu, J.; Wu, X.; Lu, Y.; Du, Q. Spectral Distribution-Aware Estimation Network for Hyperspectral Anomaly Detection. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–12. [Google Scholar] [CrossRef]
- Zhang, L.; Cheng, B. A combined model based on stacked autoencoders and fractional Fourier entropy for hyperspectral anomaly detection. Int. J. Remote Sens. 2021, 42, 3611–3632. [Google Scholar] [CrossRef]
- Li, X.; Zhao, C.; Yang, Y. Hyperspectral anomaly detection based on the distinguishing features of a redundant difference-value network. Int. J. Remote Sens. 2021, 42, 5459–5477. [Google Scholar] [CrossRef]
- Zhang, X.; Ma, X.; Huyan, N.; Gu, J.; Tang, X.; Jiao, L. Spectral-Difference Low-Rank Representation Learning for Hyperspectral Anomaly Detection. IEEE Trans. Geosci. Remote Sens. 2021, 59, 10364–10377. [Google Scholar] [CrossRef]
- Lv, M.; Chen, T.; Yang, Y.; Tu, T.; Zhang, N.; Li, W.; Li, W. Membranous nephropathy classification using microscopic hyperspectral imaging and tensor patch-based discriminative linear regression. Biomed. Opt. Express 2021, 12, 2968–2978. [Google Scholar] [CrossRef] [PubMed]
- Courtenay, L.; González-Aguilera, D.; Lagüela, S.; del Pozo, S.; Ruiz-Mendez, C.; Barbero-García, I.; Román-Curto, C.; Cañueto, J.; Santos-Durán, C.; Cardeñoso-Álvarez, M.; et al. Hyperspectral imaging and robust statistics in non-melanoma skin cancer analysis. Biomed. Opt. Express 2021, 12, 5107–5127. [Google Scholar] [CrossRef]
- Chen, Z.; Wang, J.; Wang, T.; Song, Z.; Li, Y.; Huang, Y.; Wang, L.; Jin, J. Automated in-field leaf-level hyperspectral imaging of corn plants using a Cartesian robotic platform. Comput. Electron. Agric. 2021, 183, 105996. [Google Scholar] [CrossRef]
- Gomes, V.; Mendes-Ferreira, A.; Melo-Pinto, P. Application of Hyperspectral Imaging and Deep Learning for Robust Prediction of Sugar and pH Levels in Wine Grape Berries. Sensors 2021, 21, 3459. [Google Scholar] [CrossRef]
- Pane, C.; Manganiello, G.; Nicastro, N.; Cardi, T.; Carotenuto, F. Powdery Mildew Caused by Erysiphe cruciferarum on Wild Rocket (Diplotaxis tenuifolia): Hyperspectral Imaging and Machine Learning Modeling for Non-Destructive Disease Detection. Agriculture 2021, 11, 337. [Google Scholar] [CrossRef]
- Picollo, M.; Cucci, C.; Casini, A.; Stefani, L. Hyper-spectral imaging technique in the cultural heritage field: New possible scenarios. Sensors 2020, 20, 2843. [Google Scholar] [CrossRef]
- Grillini, F.; Thomas, J.; George, S. Mixing models in close-range spectral imaging for pigment mapping in cultural heritage. In Proceedings of the International Colour Association (AIC) Conference, Avignon, France, 26–27 November 2020; pp. 372–376. [Google Scholar]
- Gat, N. Imaging spectroscopy using tunable filters: A review. Wavelet Appl. VII 2000, 4056, 50–64. [Google Scholar]
- Green, R.; Eastwood, M.; Sarture, C.; Chrien, T.; Aronsson, M.; Chippendale, B.; Faust, J.; Pavri, B.; Chovit, C.; Solis, M.; et al. Imaging spectroscopy and the airborne visible/infrared imaging spectrometer (AVIRIS). Remote Sens. Environ. 1998, 65, 227–248. [Google Scholar] [CrossRef]
- Cao, X.; Du, H.; Tong, X.; Dai, Q.; Lin, S. A prism-mask system for multispectral video acquisition. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 2423–2435. [Google Scholar]
- Takatani, T.; Aoto, T.; Mukaigawa, Y. One-shot hyperspectral imaging using faced reflectors. In Proceedings of the Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4039–4047. [Google Scholar]
- Wang, L.; Xiong, Z.; Gao, D.; Shi, G.; Wu, F. Dual-camera design for coded aperture snapshot spectral imaging. Appl. Opt. 2015, 54, 848–858. [Google Scholar] [CrossRef]
- Zhao, Y.; Guo, H.; Ma, Z.; Cao, X.; Yue, T.; Hu, X. Hyperspectral Imaging With Random Printed Mask. In Proceedings of the Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 10149–10157. [Google Scholar]
- Garcia, H.; Correa, C.; Arguello, H. Multi-resolution compressive spectral imaging reconstruction from single pixel measurements. IEEE Trans. Image Process. 2018, 27, 6174–6184. [Google Scholar] [CrossRef]
- Galvis, L.; Lau, D.; Ma, X.; Arguello, H.; Arce, G. Coded aperture design in compressive spectral imaging based on side information. Appl. Opt. 2017, 56, 6332–6340. [Google Scholar] [CrossRef] [PubMed]
- Rueda, H.; Arguello, H.; Arce, G. DMD-based implementation of patterned optical filter arrays for compressive spectral imaging. J. Opt. Soc. Am. A 2015, 32, 80–89. [Google Scholar] [CrossRef] [PubMed]
- Brainard, D.; Freeman, W. Bayesian color constancy. J. Opt. Soc. Am. A 1997, 14, 1393–1411. [Google Scholar] [CrossRef] [PubMed]
- Heikkinen, V.; Lenz, R.; Jetsu, T.; Parkkinen, J.; Hauta-Kasari, M.; Jääskeläinen, T. Evaluation and unification of some methods for estimating reflectance spectra from RGB images. J. Opt. Soc. Am. A 2008, 25, 2444–2458. [Google Scholar] [CrossRef] [PubMed]
- Maloney, L.; Wandell, B. Color constancy: A method for recovering surface spectral reflectance. J. Opt. Soc. Am. A 1986, 3, 29–33. [Google Scholar] [CrossRef]
- Arad, B.; Ben-Shahar, O.; Timofte, R.; Gool, L.V.; Zhang, L.; Yang, M.; Xiong, Z.; Chen, C.; Shi, Z.; Li, D.; et al. NTIRE 2018 challenge on spectral reconstruction from RGB images. In Proceedings of the Conference on Computer Vision and Pattern Recognition Workshops, Salt Lake City, UT, USA, 18–22 June 2018; pp. 929–938. [Google Scholar]
- Arad, B.; Timofte, R.; Ben-Shahar, O.; Lin, Y.; Finlayson, G.; Givati, S.; Li, J.; Wu, C.; Song, R.; Li, Y.; et al. NTIRE 2020 challenge on spectral reconstruction from an RGB Image. In Proceedings of the Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA, 14–19 June 2020. [Google Scholar]
- Finlayson, G.; Morovic, P. Metamer sets. J. Opt. Soc. Am. A 2005, 22, 810–819. [Google Scholar] [CrossRef]
- Lin, Y.; Finlayson, G. On the Optimization of Regression-Based Spectral Reconstruction. Sensors 2021, 21, 5586. [Google Scholar] [CrossRef]
- Stiebel, T.; Merhof, D. Brightness Invariant Deep Spectral Super-Resolution. Sensors 2020, 20, 5789. [Google Scholar] [CrossRef]
- Lin, Y.; Finlayson, G. Exposure Invariance in Spectral Reconstruction from RGB Images. In Proceedings of the Color and Imaging Conference, Paris, France, 21–25 October 2019; Volume 2019, pp. 284–289. [Google Scholar]
- Li, J.; Wu, C.; Song, R.; Li, Y.; Liu, F. Adaptive weighted attention network with camera spectral sensitivity prior for spectral reconstruction from RGB images. In Proceedings of the Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA, 14–19 June 2020; pp. 462–463. [Google Scholar]
- Lin, Y.; Finlayson, G. Physically Plausible Spectral Reconstruction. Sensors 2020, 20, 6399. [Google Scholar] [CrossRef]
- Aeschbacher, J.; Wu, J.; Timofte, R. In defense of shallow learned spectral reconstruction from RGB images. In Proceedings of the International Conference on Computer Vision Workshops, Venice, Italy, 22–29 October 2017; pp. 471–479. [Google Scholar]
- Agahian, F.; Amirshahi, S.; Amirshahi, S. Reconstruction of reflectance spectra using weighted principal component analysis. Color Res. Appl. 2008, 33, 360–371. [Google Scholar] [CrossRef]
- Hardeberg, J. On the spectral dimensionality of object colours. In Proceedings of the Conference on Colour in Graphics, Imaging, and Vision. Society for Imaging Science and Technology, Poitiers, France, 2–5 April 2002; Volume 2002, pp. 480–485. [Google Scholar]
- Parkkinen, J.; Hallikainen, J.; Jaaskelainen, T. Characteristic spectra of Munsell colors. J. Opt. Soc. Am. A 1989, 6, 318–322. [Google Scholar] [CrossRef]
- Chakrabarti, A.; Zickler, T. Statistics of real-world hyperspectral images. In Proceedings of the Conference on Computer Vision and Pattern Recognition, Colorado Springs, CO, USA, 20–25 June 2011; pp. 193–200. [Google Scholar]
- Connah, D.; Hardeberg, J. Spectral recovery using polynomial models. In Proceedings of the Color Imaging X: Processing, Hardcopy, and Applications. International Society for Optics and Photonics, San Jose, CA, USA, 16–20 January 2005; Volume 5667, pp. 65–75. [Google Scholar]
- Morovic, P.; Finlayson, G. Metamer-set-based approach to estimating surface reflectance from camera RGB. J. Opt. Soc. Am. A 2006, 23, 1814–1822. [Google Scholar] [CrossRef]
- Arad, B.; Ben-Shahar, O. Sparse recovery of hyperspectral signal from natural RGB images. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016; pp. 19–34. [Google Scholar]
- Nguyen, R.; Prasad, D.; Brown, M. Training-based spectral reconstruction from a single RGB image. In Proceedings of the European Conference on Computer Vision, Zurich, Switzerland, 6–12 September 2014; pp. 186–201. [Google Scholar]
- Sharma, G.; Wang, S. Spectrum recovery from colorimetric data for color reproductions. In Proceedings of the Color Imaging: Device-Independent Color, Color Hardcopy, and Applications VII, San Jose, CA, USA, 19–25 January 2002; Volume 4663, pp. 8–14. [Google Scholar]
- Ribés, A.; Schmit, F. Reconstructing spectral reflectances with mixture density networks. In Proceedings of the Conference on Colour in Graphics, Imaging, and Vision, Poitiers, France, 2–5 April 2002; Volume 2002, pp. 486–491. [Google Scholar]
- Arun, P.; Buddhiraju, K.; Porwal, A.; Chanussot, J. CNN based spectral super-resolution of remote sensing images. Signal Process. 2020, 169, 107394. [Google Scholar] [CrossRef]
- Joslyn Fubara, B.; Sedky, M.; Dyke, D. RGB to Spectral Reconstruction via Learned Basis Functions and Weights. In Proceedings of the Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA, 14–19 June 2020; pp. 480–481. [Google Scholar]
- Shi, Z.; Chen, C.; Xiong, Z.; Liu, D.; Wu, F. Hscnn+: Advanced cnn-based hyperspectral recovery from RGB images. In Proceedings of the Conference on Computer Vision and Pattern Recognition Workshops, Salt Lake City, UT, USA, 18–22 June 2018; pp. 939–947. [Google Scholar]
- Zhao, Y.; Po, L.; Yan, Q.; Liu, W.; Lin, T. Hierarchical regression network for spectral reconstruction from RGB images. In Proceedings of the Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA, 14–19 June 2020; pp. 422–423. [Google Scholar]
- Li, Y.; Wang, C.; Zhao, J. Locally Linear Embedded Sparse Coding for Spectral Reconstruction From RGB Images. IEEE Signal Process. Lett. 2017, 25, 363–367. [Google Scholar] [CrossRef]
- Lin, Y.; Finlayson, G. Investigating the Upper-Bound Performance of Sparse-Coding-Based Spectral Reconstruction from RGB Images. In Proceedings of the Color and Imaging Conference, Online, 1–4 November 2021. [Google Scholar]
- Lin, Y.; Finlayson, G. Recovering Real-World Spectra from RGB Images under Radiance Mondrian-World Assumption. In Proceedings of the International Colour Association (AIC) Conference, Milan, Italy, 30 August–3 September 2021. [Google Scholar]
- Yasuma, F.; Mitsunaga, T.; Iso, D.; Nayar, S. Generalized assorted pixel camera: Postcapture control of resolution, dynamic range, and spectrum. IEEE Trans. Image Process. 2010, 19, 2241–2253. [Google Scholar] [CrossRef] [PubMed]
- Aharon, M.; Elad, M.; Bruckstein, A. K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation. IEEE Trans. Signal Process. 2006, 54, 4311–4322. [Google Scholar] [CrossRef]
- Tikhonov, A.; Goncharsky, A.; Stepanov, V.; Yagola, A. Numerical Methods for the Solution of Ill-Posed Problems; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2013; Volume 328. [Google Scholar]
- Galatsanos, N.; Katsaggelos, A. Methods for choosing the regularization parameter and estimating the noise variance in image restoration and their relation. IEEE Trans. Image Process. 1992, 1, 322–336. [Google Scholar] [CrossRef]
- CIE 2019, CIE 1964 Colour-Matching Functions, 10 Degree Observer, (Data Table), International Commission on Illumination (CIE), Vienna, Austria. Available online: https://cie.co.at/datatable/cie-1964-colour-matching-functions-10-degree-observer (accessed on 12 April 2023).
- Virtanen, P.; Gommers, R.; Oliphant, T.; Haberland, M.; Reddy, T.; Cournapeau, D.; Burovski, E.; Peterson, P.; Weckesser, W.; Bright, J.; et al. SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python. Nat. Methods 2020, 17, 261–272. [Google Scholar] [CrossRef]
- Maloney, L.T. Evaluation of linear models of surface spectral reflectance with small numbers of parameters. J. Opt. Soc. Am. A 1986, 3, 1673–1683. [Google Scholar] [CrossRef]
- Finlayson, G.; Drew, M.; Funt, B. Diagonal transforms suffice for color constancy. In Proceedings of the International Conference on Computer Vision, Berlin, Germany, 11–14 May 1993; pp. 164–171. [Google Scholar]
- Land, E. The retinex theory of color vision. Sci. Am. 1977, 237, 108–129. [Google Scholar] [CrossRef]
- Schanda, J. Colorimetry: Understanding the CIE System; John Wiley & Sons: Hoboken, NJ, USA, 2007. [Google Scholar]
- Brainard, D.; Wandell, B. Analysis of the retinex theory of color vision. J. Opt. Soc. Am. A 1986, 3, 1651–1661. [Google Scholar] [CrossRef] [PubMed]
- Sharma, G.; Wu, W.; Dalal, E. The CIEDE2000 color-difference formula: Implementation notes, supplementary test data, and mathematical observations. Color Res. Appl. 2005, 30, 21–30. [Google Scholar] [CrossRef]
- Robertson, A. The CIE 1976 color-difference formulae. Color Res. Appl. 1977, 2, 7–11. [Google Scholar] [CrossRef]
- Süsstrunk, S.; Buckley, R.; Swen, S. Standard RGB color spaces. In Proceedings of the Color and Imaging Conference, Scottsdale, AZ, USA, 16–19 November 1999; Volume 1999, pp. 127–134. [Google Scholar]
| Training Steps | Testing (Reconstruction) Steps |
| --- | --- |
| 1. Obtain primary SR estimates of all training RGBs | 1. Obtain the primary SR estimate of each testing RGB |
| 2. Run K-SVD clustering on the primary estimates | 2. Find the closest cluster center of this primary estimate |
| 3. For each cluster, find the N RGBs in the training set whose primary estimates are closest to the cluster center | 3. Retrieve the trained local linear SR map associated with this cluster |
| 4. Train a linear SR map for this cluster using the found N RGBs and their ground-truth spectra | 4. Apply this map to the testing RGB to reconstruct its spectrum |
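The training and testing steps in the table above can be sketched end-to-end. This is a toy illustration of the pipeline's shape, not the authors' method: plain k-means substitutes for the paper's K-SVD clustering, and a single global least-squares map stands in for the primary SR algorithm.

```python
import numpy as np

def train_app(rgbs, spectra, K=4, N=50, iters=20, seed=0):
    """Toy A++-style training: primary SR -> cluster -> local linear maps.
    (k-means here is a stand-in for K-SVD; the primary SR is a global map.)"""
    # Step 1: primary SR estimates of all training RGBs.
    M0 = np.linalg.lstsq(rgbs, spectra, rcond=None)[0]          # 3 x bands
    primary = rgbs @ M0
    # Step 2: cluster the primary estimates.
    rng = np.random.default_rng(seed)
    centers = primary[rng.choice(len(primary), K, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((primary[:, None] - centers) ** 2).sum(-1), axis=1)
        for k in range(K):
            if np.any(labels == k):
                centers[k] = primary[labels == k].mean(0)
    # Steps 3-4: per cluster, fit a local linear map on the N nearest RGBs.
    maps = []
    for k in range(K):
        d = ((primary - centers[k]) ** 2).sum(-1)
        idx = np.argsort(d)[:N]
        maps.append(np.linalg.lstsq(rgbs[idx], spectra[idx], rcond=None)[0])
    return M0, centers, maps

def reconstruct(rgb, M0, centers, maps):
    """Testing steps 1-4: primary estimate -> closest center -> local map."""
    primary = rgb @ M0
    k = np.argmin(((centers - primary) ** 2).sum(-1))
    return rgb @ maps[k]
```

On synthetic data generated by an exactly linear RGB-to-spectrum relation, every local map recovers that relation and the reconstruction is exact up to numerical precision.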
| Method | Number of Parameters | Training Time | Testing Time (per Image) |
| --- | --- | --- | --- |
| HSCNN-D | 9.3 × | 2.7 days | 13.3 min |
| AWAN | 1.7 × | 2.8 days | 20.1 min |
| A+ | 9.5 × | 26.9 min | 17.8 s |
| PR-RELS | 2.6 × | 15.1 min | 6.5 s |
| A++ (Ours) | 7.6 × | 3.4 h | 5.4 min |
| K (with N fixed at 8192) | 1024 | 2048 | 4096 | 8192 | 10,240 |
| --- | --- | --- | --- | --- | --- |
| MRAE (%) | 1.88 | 1.82 | 1.78 | 1.76 | 1.78 |

| N (with K fixed at 8192) | 512 | 1024 | 2048 | 4096 | 8192 |
| --- | --- | --- | --- | --- | --- |
| MRAE (%) | 1.70 | 1.69 | 1.70 | 1.72 | 1.76 |
Mean per-image-mean MRAE (%) ("Mean") and mean per-image-99-pt. MRAE (%) ("99-pt."):

| Approach | Method | Mean: Orig | Mean: Rot90 | Mean: Blur10 | Mean: Blur20 | 99-pt.: Orig | 99-pt.: Rot90 | 99-pt.: Blur10 | 99-pt.: Blur20 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| DNN | HSCNN-D | 1.71 | 1.91 | 1.70 | 1.70 | 7.18 | 7.76 | 6.97 | 6.54 |
| DNN | AWAN | 1.20 | 2.12 | 2.72 | 2.78 | 6.15 | 8.08 | 10.75 | 10.34 |
| DNN | AWAN-aug3 | 2.11 | 2.01 | 1.95 | 2.01 | 9.60 | 9.17 | 9.51 | 9.20 |
| Pixel-based | A+ | 3.81 | 3.81 | 3.70 | 3.71 | 15.52 | 15.52 | 14.36 | 13.47 |
| Pixel-based | PR-RELS | 1.86 | 1.86 | 1.70 | 1.70 | 7.56 | 7.56 | 6.80 | 6.32 |
| Pixel-based | A++ (Ours) | 1.69 | 1.69 | 1.53 | 1.54 | 8.11 | 8.11 | 7.30 | 6.85 |
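The two statistics in the tables can be computed with a minimal NumPy sketch, assuming (as the headers suggest) that MRAE is averaged over spectral bands per pixel and then aggregated per image by its mean and its 99th percentile; the `eps` guard against division by zero is our addition.

```python
import numpy as np

def mrae_map(gt, rec, eps=1e-8):
    """Per-pixel mean relative absolute error across spectral bands."""
    return np.mean(np.abs(gt - rec) / (np.abs(gt) + eps), axis=-1)

def table_metrics(gts, recs):
    """Aggregate as in the tables: mean per-image-mean MRAE and
    mean per-image-99th-percentile MRAE, both as percentages."""
    means, p99s = [], []
    for gt, rec in zip(gts, recs):
        m = mrae_map(gt, rec)
        means.append(m.mean())
        p99s.append(np.percentile(m, 99))
    return 100 * np.mean(means), 100 * np.mean(p99s)
```

The 99th-percentile statistic highlights worst-case pixels, which is where the rotation and blur conditions separate the methods most clearly.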
Explained Variance (CVA Eigenvalue):

| Method | #1 | #2 | #3 | #4 | #5 |
| --- | --- | --- | --- | --- | --- |
| HSCNN-D | | | | | |
| AWAN | | | | | |
| A++ (Ours) | | | | | |
| Ground-Truth | | | | | |
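The characteristic vector analysis behind Figure 7 and the table above can be approximated by a PCA of the spectra. This sketch is our reading, not the paper's exact procedure: it uses an SVD of the mean-centred data and treats the squared singular values as the CVA eigenvalues.

```python
import numpy as np

def characteristic_vectors(spectra, n=5):
    """PCA-style characteristic vector analysis: the top-n eigenvectors of
    the spectra's covariance, plus the variance fraction each explains."""
    X = spectra - spectra.mean(axis=0)
    # Right singular vectors of the centred data are the characteristic
    # vectors; squared singular values are (up to scale) the eigenvalues.
    _, s, Vt = np.linalg.svd(X, full_matrices=False)
    explained = (s ** 2) / (s ** 2).sum()
    return Vt[:n], explained[:n]
```

Comparing the top vectors of the recovered spectra against those of the ground truth shows whether a method reproduces the dominant spectral structure of the test set, not just per-pixel error.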
Relighting to CIE Illuminant A: mean per-image-mean ("Mean") and mean per-image-99-pt. ("99-pt.") errors:

| Approach | Method | Mean: Orig | Mean: Rot | Mean: Blur10 | Mean: Blur20 | 99-pt.: Orig | 99-pt.: Rot | 99-pt.: Blur10 | 99-pt.: Blur20 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Baseline | RGB Diagonal | 0.83 | 0.83 | 0.82 | 0.81 | 2.63 | 2.63 | 2.30 | 2.20 |
| DNN-based SR | HSCNN-D | 0.30 | 0.30 | 0.24 | 0.24 | 2.28 | 2.36 | 1.82 | 1.71 |
| DNN-based SR | AWAN | 0.10 | 0.20 | 0.32 | 0.32 | 1.39 | 1.90 | 1.91 | 1.81 |
| DNN-based SR | AWAN-aug3 | 0.23 | 0.23 | 0.15 | 0.15 | 2.33 | 2.35 | 1.95 | 1.84 |
| Pixel-based SR | A+ | 0.30 | 0.30 | 0.26 | 0.26 | 2.41 | 2.41 | 2.08 | 1.93 |
| Pixel-based SR | PR-RELS | 0.19 | 0.19 | 0.16 | 0.16 | 1.97 | 1.97 | 1.79 | 1.66 |
| Pixel-based SR | A++ (Ours) | 0.15 | 0.15 | 0.13 | 0.13 | 1.84 | 1.84 | 1.70 | 1.62 |
Relighting to CIE Illuminant E: mean per-image-mean ("Mean") and mean per-image-99-pt. ("99-pt.") errors:

| Approach | Method | Mean: Orig | Mean: Rot | Mean: Blur10 | Mean: Blur20 | 99-pt.: Orig | 99-pt.: Rot | 99-pt.: Blur10 | 99-pt.: Blur20 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Baseline | RGB Diagonal | 1.35 | 1.35 | 1.35 | 1.35 | 3.39 | 3.39 | 3.24 | 3.18 |
| DNN-based SR | HSCNN-D | 0.33 | 0.34 | 0.27 | 0.26 | 2.58 | 2.74 | 2.03 | 1.92 |
| DNN-based SR | AWAN | 0.12 | 0.21 | 0.24 | 0.24 | 1.64 | 2.11 | 2.21 | 2.14 |
| DNN-based SR | AWAN-aug3 | 0.27 | 0.27 | 0.23 | 0.24 | 3.00 | 2.97 | 2.92 | 2.85 |
| Pixel-based SR | A+ | 0.40 | 0.40 | 0.36 | 0.35 | 3.17 | 3.17 | 2.74 | 2.59 |
| Pixel-based SR | PR-RELS | 0.17 | 0.17 | 0.16 | 0.15 | 1.87 | 1.87 | 1.78 | 1.68 |
| Pixel-based SR | A++ (Ours) | 0.16 | 0.16 | 0.13 | 0.13 | 1.99 | 1.99 | 1.81 | 1.73 |
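The contrast between the two relighting routes in the tables can be sketched as follows. This is an illustration under stated assumptions, not the paper's implementation: the sensitivities, illuminants and reflectance here are random stand-ins rather than CIE data, and the baseline is a generic von Kries-style diagonal map fitted on a white surface.

```python
import numpy as np

def spectral_relight(reflectance, illum_dst, sensitivities):
    """SR-based relighting: with the (recovered) reflectance in hand,
    simply swap in the target illuminant before integrating to RGB."""
    return (reflectance * illum_dst) @ sensitivities    # radiance -> RGB

def diagonal_relight(rgb, illum_src, illum_dst, sensitivities, white):
    """Baseline: a von Kries-style diagonal map, scaling each channel by
    the ratio of a white surface's RGBs under the two illuminants."""
    w_src = (white * illum_src) @ sensitivities
    w_dst = (white * illum_dst) @ sensitivities
    return rgb * (w_dst / w_src)
```

The diagonal map is exact only for surfaces whose reflectance matches the reference white; for other spectra it incurs the residual errors that the baseline rows of the tables quantify.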
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Lin, Y.-T.; Finlayson, G.D. A Rehabilitation of Pixel-Based Spectral Reconstruction from RGB Images. Sensors 2023, 23, 4155. https://doi.org/10.3390/s23084155