Search Results (101)

Search Parameters:
Keywords = color correction techniques

23 pages, 13442 KiB  
Article
From Play to Understanding: Large Language Models in Logic and Spatial Reasoning Coloring Activities for Children
by Sebastián Tapia-Mandiola and Roberto Araya
AI 2024, 5(4), 1870-1892; https://doi.org/10.3390/ai5040093 - 11 Oct 2024
Viewed by 451
Abstract
Visual thinking leverages spatial mechanisms in animals for navigation and reasoning. Therefore, given the challenge of abstract mathematics and logic, spatial reasoning-based teaching strategies can be highly effective. Our previous research verified that innovative box-and-ball coloring activities help teach elementary school students complex notions like quantifiers, logical connectors, and dynamic systems. However, given the richness of the activities, correction is slow, error-prone, and demands high attention and cognitive load from the teacher. Moreover, feedback to the teacher should be immediate. Thus, we propose to provide the teacher with real-time help from LLMs. We explored various prompting techniques with and without context (Zero-Shot, Few-Shot, Chain of Thought, Visualization of Thought, Self-Consistency, LogicLM, and emotional prompting) to test GPT-4o's visual, logical, and correction capabilities. We found that the Visualization of Thought and Self-Consistency techniques enabled GPT-4o to correctly evaluate 90% of the logical–spatial problems that we tested. Additionally, we propose a novel prompt combining some of these techniques that achieved 100% accuracy on a testing sample, excelling in spatial problems and enhancing logical reasoning. Full article
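Since the abstract leans on Self-Consistency, a minimal sketch of that voting idea is given below; `ask_model` is a hypothetical placeholder for whatever GPT-4o call is used, and the prompt wording is only illustrative, not the authors' actual prompt.

```python
from collections import Counter

def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for a GPT-4o call that returns 'correct' or 'incorrect'."""
    raise NotImplementedError

def self_consistent_verdict(activity_description: str, student_answer: str, n_samples: int = 5) -> str:
    """Sample several independent evaluations and return the majority verdict,
    mirroring the Self-Consistency idea of voting over multiple reasoning paths."""
    prompt = (
        "You are grading a box-and-ball coloring activity.\n"
        f"Instruction: {activity_description}\n"
        f"Student solution: {student_answer}\n"
        "Reason step by step, then answer with exactly one word: correct or incorrect."
    )
    votes = [ask_model(prompt).strip().lower() for _ in range(n_samples)]
    return Counter(votes).most_common(1)[0][0]
```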
Show Figures

Figure 1: Plot obtained from [32]. We can see the growth in the visual reasoning index, surpassing the human baseline in recent years.
Figure 2: Examples of the completed exercises by students with their instructions. Note that from left to right, the first two are correct, and the third is incorrect. In the blank box below the activity, the student must explain in his or her own words how he or she solved the coloring problem.
Figure 3: Corresponding blank worksheet where each student has to write instructions to pose a problem to a peer.
Figure 4: Ways in which the boxes and balls were introduced in the prompts. On the left, the colors were given in parentheses, while on the right, colored emojis were used to give a visual aid for VofT and Self-Consistency.
Figure 5: Examples used in Self-Consistency and Chain-of-Thought prompts. For Chain of Thought, the emojis are replaced with text, as in Figure 4.
Figure 6: Some other problems tested in this work. The remaining are listed in Appendix A.
Figure 7: Example for instruction 1.1 by directly entering the image in Figure 2 with the proposed solution solved by a child.
Figure 8: Results obtained for each technique in each problem. Cells in green indicate correct answers with correct reasoning, in yellow correct answers with incorrect reasoning, and in red incorrect answers. ZS stands for Zero-Shot, FS stands for Few-Shot, CofT stands for Chain of Thought, VofT stands for Visualization of Thought, and SC and V stand for Self-Consistency and Visualization, respectively.
Figure 9: Individual scores for Self-Consistency and the two versions of our prompt. The vertical axis shows the number of correct answers given a total of 5 generated answers.
Figure 10: Error in correcting problem 1.3 while using logic Zero-Shot with no context. The machine interprets the instruction as "For every box, there exists a color" when in fact, it is "There exists a color for every box".
Figure 11: Early conclusion error in correcting problem 1.2 while using Zero-Shot without context. ChatGPT initially concludes that the solution is incorrect (which is accurate) but then reverses its decision, incorrectly stating that the solution is correct.
Figure A1: Problems for instruction 1.
Figure A2: Problems for instruction 2.
Figure A3: Problems for instruction 3.
Figure A4: Problems for instruction 4. We omit the complete instruction in the images because it is too long.
Figure A5: Problems for instruction 5. We omit the complete instruction in the images because it is too long.
13 pages, 7413 KiB  
Article
A Study on Enhancing the Visual Fidelity of Aviation Simulators Using WGAN-GP for Remote Sensing Image Color Correction
by Chanho Lee, Hyukjin Kwon, Hanseon Choi, Jonggeun Choi, Ilkyun Lee, Byungkyoo Kim, Jisoo Jang and Dongkyoo Shin
Appl. Sci. 2024, 14(20), 9227; https://doi.org/10.3390/app14209227 - 11 Oct 2024
Viewed by 366
Abstract
When implementing outside-the-window (OTW) visuals in aviation tactical simulators, maintaining terrain image color consistency is critical for enhancing pilot immersion and focus. However, due to various environmental factors, inconsistent image colors in terrain can cause visual confusion and diminish realism. To address these issues, a color correction technique based on a Wasserstein Generative Adversarial Network with Gradient Penalty (WGAN-GP) is proposed. The proposed WGAN-GP model utilizes multi-scale feature extraction and Wasserstein distance to effectively measure and adjust the color distribution difference between the input image and the reference image. This approach can preserve the texture and structural characteristics of the image while maintaining color consistency. In particular, by converting Bands 2, 3, and 4 of the BigEarthNet-S2 dataset into RGB images as the reference image and preprocessing the reference image to serve as the input image, it is demonstrated that the proposed WGAN-GP model can handle large-scale remote sensing images containing various lighting conditions and color differences. The experimental results showed that the proposed WGAN-GP model outperformed traditional methods, such as histogram matching and color transfer, and was effective in reflecting the style of the reference image to the target image while maintaining the structural elements of the target image during the training process. Quantitative analysis demonstrated that the mid-stage model achieved a PSNR of 28.93 dB and an SSIM of 0.7116, which significantly outperforms traditional methods. Furthermore, the LPIPS score was reduced to 0.3978, indicating improved perceptual similarity. This approach can contribute to improving the visual elements of the simulator to enhance pilot immersion and has the potential to significantly reduce time and costs compared to the manual methods currently used by the Republic of Korea Air Force. Full article
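For readers unfamiliar with the gradient-penalty term that distinguishes WGAN-GP from a plain WGAN, here is a minimal PyTorch-style sketch; tensor shapes (a batch of images) and the penalty weight are assumptions, and this is a generic illustration rather than the authors' implementation.

```python
import torch

def gradient_penalty(critic, real, fake, lambda_gp=10.0):
    """WGAN-GP gradient penalty: E[(||grad D(x_hat)||_2 - 1)^2] on interpolated samples."""
    batch = real.size(0)
    eps = torch.rand(batch, 1, 1, 1, device=real.device)    # per-sample mixing factor
    x_hat = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    d_hat = critic(x_hat)
    grads = torch.autograd.grad(
        outputs=d_hat, inputs=x_hat,
        grad_outputs=torch.ones_like(d_hat),
        create_graph=True, retain_graph=True,
    )[0]
    grad_norm = grads.view(batch, -1).norm(2, dim=1)
    return lambda_gp * ((grad_norm - 1.0) ** 2).mean()

# Critic loss sketch: E[D(fake)] - E[D(real)] + gradient_penalty(critic, real, fake)
```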
(This article belongs to the Special Issue Applications of Machine Learning Algorithms in Remote Sensing)
Show Figures

Figure 1: An overview of the architecture of the WGAN-GP model.
Figure 2: Architecture of the Generator and Critic in the WGAN-GP model. (a) Architecture of the Generator; (b) architecture of the Critic.
Figure 3: BigEarthNet-S2 RGB images (reference images).
Figure 4: BigEarthNet-S2 preprocessing images (target images).
Figure 5: Color matching results of BigEarthNet-S2 RGB images and BigEarthNet-S2 preprocessing images. (a) Generated image by model at the early stage of training, (b) generated image by model at the mid-stage of training, and (c) generated image by fully trained model.
Figure 6: Precise texture reproduction results. (a) Generated image by model at the early stage of training, (b) generated image by model at the mid-stage of training, and (c) generated image by fully trained model.
Figure 7: Comparison with other methods' results. (a) Image processed using histogram matching, showing limitations in maintaining color consistency and significant information loss in texture and detail; (b) image processed using the color transfer technique, which also shows limitations in maintaining the ground truth's color consistency and lacks texture reproduction; (c) image generated by the early-stage WGAN-GP-based model, where color distribution is irregular and texture representation is still underdeveloped; (d) image generated by the mid-stage model, demonstrating improved color matching and texture reproduction, with textures becoming more similar to the ground truth; and (e) image generated by the fully trained WGAN-GP model, showing a slight decrease in color consistency compared to the mid-stage model but offering superior texture reproduction compared to the other methods.
25 pages, 5085 KiB  
Article
Enhancing Underwater Images through Multi-Frequency Detail Optimization and Adaptive Color Correction
by Xiujing Gao, Junjie Jin, Fanchao Lin, Hongwu Huang, Jiawei Yang, Yongfeng Xie and Biwen Zhang
J. Mar. Sci. Eng. 2024, 12(10), 1790; https://doi.org/10.3390/jmse12101790 - 8 Oct 2024
Viewed by 403
Abstract
This paper presents a novel underwater image enhancement method addressing the challenges of low contrast, color distortion, and detail loss prevalent in underwater photography. Unlike existing methods that may introduce color bias or blur during enhancement, our approach leverages a two-pronged strategy. First, an Efficient Fusion Edge Detection (EFED) module preserves crucial edge information, ensuring detail clarity even in challenging turbidity and illumination conditions. Second, a Multi-scale Color Parallel Frequency-division Attention (MCPFA) module integrates multi-color space data with edge information. This module dynamically weights features based on their frequency domain positions, prioritizing high-frequency details and areas affected by light attenuation. Our method further incorporates a dual multi-color space structural loss function, optimizing the performance of the network across RGB, Lab, and HSV color spaces. This approach enhances structural alignment and minimizes color distortion, edge artifacts, and detail loss often observed in existing techniques. Comprehensive quantitative and qualitative evaluations using both full-reference and no-reference image quality metrics demonstrate that our proposed method effectively suppresses scattering noise, corrects color deviations, and significantly enhances image details. In terms of objective evaluation metrics, our method achieves the best performance in the test dataset of EUVP with a PSNR of 23.45, SSIM of 0.821, and UIQM of 3.211, indicating that it outperforms state-of-the-art methods in improving image quality. Full article
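The PSNR figure quoted above can be reproduced from a reference/enhanced image pair with a few lines of NumPy; this is the standard definition, assuming 8-bit images (SSIM and UIQM are more involved and usually taken from an image-quality library).

```python
import numpy as np

def psnr(reference: np.ndarray, enhanced: np.ndarray, max_value: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between a reference and an enhanced image."""
    mse = np.mean((reference.astype(np.float64) - enhanced.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_value ** 2 / mse)
```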
(This article belongs to the Section Ocean Engineering)
Show Figures

Figure 1: Raw underwater images. Underwater images commonly suffer from (a) color casts, (b) artifacts, and (c) blurred details.
Figure 2: The overview of our framework. First, the EFED module detects edge information in the image using an efficient network architecture. Subsequently, the original image and the extracted edge map are fed into the MCPFA module. The MCPFA module leverages an attention mechanism to fuse information from different color spaces and scales, enhancing the image and ultimately producing the enhanced result.
Figure 3: Pixel difference convolution flowchart [36]; * denotes point multiplication. First, the difference between a target pixel and its neighboring pixels is calculated; these differences are then multiplied by the corresponding weights in the convolution kernel and summed; finally, the sum is output as the feature value of the target pixel.
Figure 4: Edge detection structure diagram. First, the original image undergoes multiple downsampling layers within the backbone network, extracting multi-scale edge features. Subsequently, these features are fed into four parallel auxiliary networks. The auxiliary networks utilize dilated convolutions to enlarge the receptive field, sampling global information and fusing features from different scales. This process enables refined edge processing. Finally, the auxiliary networks output a high-quality edge map.
Figure 5: MCSF module. It integrates information from HSV, Lab, and RGB color spaces, along with edge information, to provide comprehensive features for subsequent image enhancement steps.
Figure 6: CF-MHA architecture. First, the input feature map is divided into frequency bands based on scale channels. Then, each band undergoes multi-head attention computation independently. Color-aware weights are learned based on the attenuation levels of different colors at different locations. Finally, the multi-head attention outputs, adjusted by the color-aware weights, are fused to produce the final enhanced feature, effectively mitigating the color attenuation issue in underwater images.
Figure 7: Visual comparison of the full-reference data on the test dataset of EUVP. From left to right: (a) original underwater image, (b) UDCP [10], (c) HE [47], (d) CLAHE [11], (e) LRS [48], (f) FUnIE-GAN [3], (g) U-shape [41], (h) Semi-UIR [49], (i) our method, and (j) reference image (regarded as ground truth (GT)).
Figure 8: Visual comparison of non-reference data from RUIE on the UCCS, UTTS, and UIQS datasets. The rows show (1) bluish-biased, (2) bluish-green-biased, and (3) greenish-biased image data from the UCCS dataset with different color biases; (4) underwater image quality data from the UIQS dataset, which contains underwater images of various qualities; and (5) underwater target mission data from the UTTS dataset for a specific underwater mission. From left to right: (a) original underwater image, (b) UDCP [10], (c) HE [47], (d) CLAHE [11], (e) LRS [48], (f) FUnIE-GAN [3], (g) U-shape [41], (h) Semi-UIR [49], and (i) our method.
Figure 9: Visual comparison of reference data on the test dataset of EUVP. From left to right: (a) the original image, (b) Sobel [19], (c) Canny [22], (d) Laplace [21], (e) RCF [53], (f) ours, and (g) ours on ground truth.
Figure 10: Results of color space selection evaluation. Tests are performed on the test dataset of EUVP to obtain PSNR and SSIM results for each color space model tested.
Figure 11: Results of ablation experiments on different components, with zoomed-in local details. From left to right: (a) Input, (b) U-net, (c) U + EFED, (d) U + MCSF, (e) U + CF-MHA, (f) U + EFED + MCSF, (g) U + MCSF + CF-MHA, (h) U + CF-MHA + EFED, (i) MCPFA, (j) GT.
Figure 12: The results of underwater target recognition. From left to right: (a) original underwater image, (b) UDCP [10], (c) HE [47], (d) CLAHE [11], (e) LRS [48], (f) FUnIE-GAN [3], (g) U-shape [41], (h) Semi-UIR [49], and (i) our method.
Figure 13: The results of the Segment Anything Model. From left to right: (a) original underwater image, (b) UDCP [10], (c) HE [47], (d) CLAHE [11], (e) LRS [48], (f) FUnIE-GAN [3], (g) U-shape [41], (h) Semi-UIR [49], and (i) our method.
Figure 14: Enhancement results of a real underwater cage environment. From left to right: (a) original underwater image, (b) UDCP [10], (c) HE [47], (d) CLAHE [11], (e) LRS [48], (f) FUnIE-GAN [3], (g) U-shape [41], (h) Semi-UIR [49], and (i) our method.
12 pages, 7796 KiB  
Article
A Multi-Fruit Recognition Method for a Fruit-Harvesting Robot Using MSA-Net and Hough Transform Elliptical Detection Compensation
by Shengxue Wang and Tianhong Luo
Horticulturae 2024, 10(10), 1024; https://doi.org/10.3390/horticulturae10101024 - 26 Sep 2024
Viewed by 467
Abstract
In the context of agricultural modernization and intelligentization, automated fruit recognition is of significance for improving harvest efficiency and reducing labor costs. The variety of fruits commonly planted in orchards and the fluctuations in market prices require farmers to adjust the types of crops they plant flexibly. However, the differences in size, shape, and color among different types of fruits make fruit recognition quite challenging. If each type of fruit requires a separate visual model, it becomes time-consuming and labor intensive to train and deploy these models, as well as increasing system complexity and maintenance costs. Therefore, developing a general visual model capable of recognizing multiple types of fruits has great application potential. Existing multi-fruit recognition methods mainly include traditional image processing techniques and deep learning models. Traditional methods perform poorly in dealing with complex backgrounds and diverse fruit morphologies, while current deep learning models may struggle to effectively capture and recognize targets of different scales. To address these challenges, this paper proposes a general fruit recognition model based on the Multi-Scale Attention Network (MSA-Net) and a Hough Transform localization compensation mechanism. By generating multi-scale feature maps through a multi-scale attention mechanism, the model enhances feature learning for fruits of different sizes. In addition, the Hough Transform ellipse detection compensation mechanism uses the shape features of fruits and combines them with MSA-Net recognition results to correct the initial positioning of spherical fruits and improve positioning accuracy. Experimental results show that the MSA-Net model achieves a precision of 97.56, a recall of 92.21, and an mAP@0.5 of 94.81 on a comprehensive dataset containing blueberries, lychees, strawberries, and tomatoes, demonstrating the ability to accurately recognize multiple types of fruits. Moreover, the introduction of the Hough Transform mechanism reduces the average localization error by 8.8 pixels and 3.5 pixels for fruit images at different distances, effectively improving the accuracy of fruit localization. Full article
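To make the Hough-based localization compensation concrete, the sketch below refines a detection box for a roughly spherical fruit with OpenCV's Hough circle transform; the paper uses ellipse detection, so circles serve only as a simplified stand-in, and all parameter values are illustrative guesses.

```python
import cv2
import numpy as np

def refine_center(image_bgr, box):
    """Refine a (x, y, w, h) detection box for a round fruit with a Hough circle fit.
    Returns the circle center in image coordinates, or the box center as a fallback."""
    x, y, w, h = box
    roi = cv2.cvtColor(image_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    roi = cv2.medianBlur(roi, 5)  # suppress speckle before the gradient-based Hough step
    circles = cv2.HoughCircles(
        roi, cv2.HOUGH_GRADIENT, dp=1.2, minDist=min(w, h),
        param1=100, param2=30,
        minRadius=int(0.25 * min(w, h)), maxRadius=int(0.6 * max(w, h)),
    )
    if circles is None:
        return (x + w // 2, y + h // 2)
    cx, cy, _ = np.round(circles[0, 0]).astype(int)
    return (x + cx, y + cy)
```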
Show Figures

Figure 1: Dataset examples: (a) distant tomatoes, (b) exposed tomatoes, (c) close-up of tomatoes, (d) backlit strawberries, (e) strawberries under natural light, (f) exposed strawberries, (g) close-up of lychees, (h) distant lychees, (i) exposed lychees, (j) exposed blueberries, (k) distant blueberries, and (l) backlit blueberries.
Figure 2: The backbone structure of MSA-Net.
Figure 3: The overall structure of MSA-Net.
Figure 4: The Hough Transform compensation mechanism.
Figure 5: The mAP@0.5 variation during the training process.
Figure 6: Identification results of different fruits: (a–d) identification results of strawberries, (e–h) tomato recognition results, (i–l) lychee recognition results, and (m–p) blueberry recognition results.
Figure 7: Hough Transform compensation results for different fruits: (a) blueberry; (b) lychee; (c) tomato; (d) strawberry.
15 pages, 1561 KiB  
Article
Dental Color-Matching Ability: Comparison between Visual Determination and Technology
by Maria Menini, Lorenzo Rivolta, Jordi Manauta, Massimo Nuvina, Zsolt M. Kovacs-Vajna and Paolo Pesce
Dent. J. 2024, 12(9), 284; https://doi.org/10.3390/dj12090284 - 3 Sep 2024
Viewed by 589
Abstract
Background: The choice of the correct color is of paramount importance in esthetic dentistry; however, there is still no consensus on the best technique to determine it. The aim of the present study is to compare the accuracy of a recently introduced colorimeter in shade matching with human vision. In addition, possible variables affecting color matching by the human eye have been analysed. Methods: 18 disc-shaped composite samples with identical size and shape were produced from a composite flow system (Enamel plus HriHF, Micerium): nine were considered control samples (UD 0-UD 6), and nine were test samples with flow composite shades identical to the control ones. In parallel, 70 individuals (dental students and dental field professionals) were individually instructed to sit in a dark room illuminated with D55 light and to perform visual shade matching between control and test discs. An error matrix of ΔE94 values between control and test discs was generated, comprising four match clusters depending on perceptibility and acceptability thresholds. The frequency and severity of errors were examined. Results: The colorimeter achieved 100% perfect matching, while individuals achieved only 78%. A higher occurrence of mismatches was noted for intermediate composite shades, without a statistically significant difference. No statistically significant differences were reported for age, sex, and experience. A statistically significant difference was present between the Optishade match and the visual determination. Conclusions: The instrumental shade-matching evaluation proved to be significantly more reliable than the human visual system. Further research is needed to determine whether the same outcomes are achieved in a clinical setting directly on patients. Full article
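For context on the ΔE94 error matrix mentioned in the Methods, a small Python sketch of the CIE94 color difference between two CIELAB values follows, using the common graphic-arts weighting constants; the study's exact computation and thresholds may differ.

```python
import math

def delta_e94(lab1, lab2, kL=1.0, K1=0.045, K2=0.015):
    """CIE94 color difference between two (L*, a*, b*) triples (graphic-arts constants)."""
    L1, a1, b1 = lab1
    L2, a2, b2 = lab2
    dL = L1 - L2
    C1 = math.hypot(a1, b1)
    C2 = math.hypot(a2, b2)
    dC = C1 - C2
    da, db = a1 - a2, b1 - b2
    dH_sq = max(da ** 2 + db ** 2 - dC ** 2, 0.0)  # ΔH*^2, clamped for numerical safety
    sL, sC, sH = 1.0, 1.0 + K1 * C1, 1.0 + K2 * C1
    return math.sqrt((dL / (kL * sL)) ** 2 + (dC / sC) ** 2 + dH_sq / sH ** 2)
```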
(This article belongs to the Special Issue Esthetic Dentistry: Current Perspectives and Future Prospects)
Show Figures

Figure 1: Nine control discs and nine test discs arranged following the Micerium chromatic scale (1-2-3-4-5-6-7-8-9) (incremental order).
Figure 2: Study setup with the supporting table placed at a 30-degree inclination for all subjects involved in the research.
Figure 3: The chart shows the distribution of incremental and assorted numbers of matches. No significant difference between incremental and assorted numbers of matches was observed at a significance level of 5%.
26 pages, 6173 KiB  
Article
Enhancing Underwater Object Detection and Classification Using Advanced Imaging Techniques: A Novel Approach with Diffusion Models
by Prabhavathy Pachaiyappan, Gopinath Chidambaram, Abu Jahid and Mohammed H. Alsharif
Sustainability 2024, 16(17), 7488; https://doi.org/10.3390/su16177488 - 29 Aug 2024
Viewed by 1025
Abstract
Underwater object detection and classification pose significant challenges due to environmental factors such as water turbidity and variable lighting conditions. This research proposes a novel approach that integrates advanced imaging techniques with diffusion models to address these challenges effectively, aligning with Sustainable Development Goal (SDG) 14: Life Below Water. The methodology leverages the Convolutional Block Attention Module (CBAM), Modified Swin Transformer Block (MSTB), and Diffusion model to enhance the quality of underwater images, thereby improving the accuracy of object detection and classification tasks. This study utilizes the TrashCan dataset, comprising diverse underwater scenes and objects, to validate the proposed method’s efficacy. This study proposes an advanced imaging technique YOLO (you only look once) network (AIT-YOLOv7) for detecting objects in underwater images. This network uses a modified U-Net, which focuses on informative features using a convolutional block channel and spatial attentions for color correction and a modified swin transformer block for resolution enhancement. A novel diffusion model proposed using modified U-Net with ResNet understands the intricate structures in images with underwater objects, which enhances detection capabilities under challenging visual conditions. Thus, AIT-YOLOv7 net precisely detects and classifies different classes of objects present in this dataset. These improvements are crucial for applications in marine ecology research, underwater archeology, and environmental monitoring, where precise identification of marine debris, biological organisms, and submerged artifacts is essential. The proposed framework advances underwater imaging technology and supports the sustainable management of marine resources and conservation efforts. The experimental results demonstrate that state-of-the-art object detection methods, namely SSD, YOLOv3, YOLOv4, and YOLOTrashCan, achieve mean accuracies (mAP@0.5) of 57.19%, 58.12%, 59.78%, and 65.01%, respectively, whereas the proposed AIT-YOLOv7 net reaches a mean accuracy (mAP@0.5) of 81.4% on the TrashCan dataset, showing a 16.39% improvement. Due to this improvement in the accuracy and efficiency of underwater object detection, this research contributes to broader marine science and technology efforts, promoting the better understanding and management of aquatic ecosystems and helping to prevent and reduce marine pollution, as emphasized in SDG 14. Full article
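The mAP@0.5 values above treat a detection as correct when its intersection over union (IoU) with a ground-truth box reaches 0.5; a minimal IoU check for boxes in (x1, y1, x2, y2) form is sketched below.

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def is_true_positive(pred_box, gt_box, threshold=0.5):
    """True when a prediction matches the ground truth at the mAP@0.5 criterion."""
    return iou(pred_box, gt_box) >= threshold
```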
Show Figures

Figure 1: Proposed System architecture.
Figure 2: Selection of images, highlighting the dataset's diversity.
Figure 3: Process flow description for the color correction module of CBAM.
Figure 4: Unprocessed images.
Figure 5: Output image after color correction from the input image.
Figure 6: U-Net architecture.
Figure 7: (a) Modified Swin Transformer Block (MSTB) for enhancing image resolution. (b) Process flow of MSTB in underwater image resolution enhancement.
Figure 8: Enhanced images.
Figure 9: Process flow in the diffusion model.
Figure 10: Corrupted image.
Figure 11: Image generated using the Diffusion model.
Figure 12: MS COCO object detection from [5].
Figure 13: Output image with underwater objects detected and classified using AIT-YOLOv7.
Figure 14: Precision, recall, and mAP metrics for the proposed AIT-YOLOv7.
Figure 15: (a) Input image from the dataset. (b) Convolutional block attention with modified Swin Transformer Block. (c) Diffusion model. (d) Detected and classified with high precision.
19 pages, 1137 KiB  
Article
A Bayesian Tensor Decomposition Method for Joint Estimation of Channel and Interference Parameters
by Yuzhe Sun, Wei Wang, Yufan Wang and Yuanfeng He
Sensors 2024, 24(16), 5284; https://doi.org/10.3390/s24165284 - 15 Aug 2024
Viewed by 595
Abstract
Bayesian tensor decomposition has been widely applied in channel parameter estimations, particularly in cases with the presence of interference. However, the types of interference are not considered in Bayesian tensor decomposition, making it difficult to accurately estimate the interference parameters. In this paper, we present a robust tensor variational method using a CANDECOMP/PARAFAC (CP)-based additive interference model for multiple input–multiple output (MIMO) with orthogonal frequency division multiplexing (OFDM) systems. A more realistic interference model compared to traditional colored noise is considered in terms of co-channel interference (CCI) and front-end interference (FEI). In contrast to conventional algorithms that filter out interference, the proposed method jointly estimates the channel and interference parameters in the time–frequency domain. Simulation results validate the correctness of the proposed method by the evidence lower bound (ELBO) and reveal the fact that the proposed method outperforms traditional information-theoretic methods, tensor decomposition models, and robust model based on CP (RCP) in terms of estimation accuracy. Further, the interference parameter estimation technique has profound implications for anti-interference applications and dynamic spectrum allocation. Full article
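To make the CANDECOMP/PARAFAC (CP) structure underlying the interference model concrete, here is a plain NumPy sketch of a rank-R CP decomposition of a 3-way tensor by alternating least squares; it illustrates only the factorization itself, not the Bayesian variational inference or the additive interference terms of the proposed method.

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Khatri-Rao product of A (I x R) and B (J x R), giving an (I*J x R) matrix."""
    I, R = A.shape
    J, _ = B.shape
    return (A[:, None, :] * B[None, :, :]).reshape(I * J, R)

def cp_als(Y, rank, n_iter=100, seed=0):
    """Rank-R CP decomposition of a 3-way tensor Y (I x J x K) by alternating least squares."""
    I, J, K = Y.shape
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    Y1 = Y.reshape(I, J * K)                     # mode-1 unfolding
    Y2 = np.moveaxis(Y, 1, 0).reshape(J, I * K)  # mode-2 unfolding
    Y3 = np.moveaxis(Y, 2, 0).reshape(K, I * J)  # mode-3 unfolding
    for _ in range(n_iter):
        A = Y1 @ np.linalg.pinv(khatri_rao(B, C).T)
        B = Y2 @ np.linalg.pinv(khatri_rao(A, C).T)
        C = Y3 @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C
```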
(This article belongs to the Special Issue Integrated Localization and Communication: Advances and Challenges)
Show Figures

Figure 1: A typical traffic scenario.
Figure 2: The power composition of the received tensor.
Figure 3: Probabilistic graphical model.
Figure 4: (a) The changes in the number of paths and the three variations of ELBO for RCP-APH. (b) The probability density function (PDF) of the interference item power distribution and other estimated parameters for RCP and RCP-APH.
Figure 5: For different interference item ratios, a comparison of rank and parameter estimation performance is conducted for interference powers of 5σ_Noise² and 10σ_Noise². (a) Rank estimation. (b) Angle estimation. (c) Delay estimation. Here, (b,c) share a common legend.
Figure 6: Study on the performance of interference estimation for the RCP-APH. Green indicates accurately estimated interference positions, while blue represents unestimated interference positions. (a) True interference; (b) matrix unfolding of true interference; (c) estimated interference; (d) matrix unfolding of estimated interference.
Figure 7: The estimations of the interference positions are compared between two variational algorithms under different interference ratios. To clearly depict the performance differences between the algorithms, coordinate annotations for all subplots are omitted. The first row illustrates the estimated noise precision and the PDF of the interference item power for both the RCP-APH and RCP algorithms; the coordinate scales are consistent with Figure 4b. The second row represents the actual interference, while the third and fourth rows depict the estimations of the interference positions for both algorithms; the coordinate scales align with those in Figure 6b.
Figure 8: Under different interference item ratios, a comparison of interference estimation is conducted for interference powers of 5σ_Noise² and 10σ_Noise². (a) Recall. (b) Precision. (c) F1 score. Here, all subplots share a common legend.
Figure 9: Interference estimation performance is compared for different interference item ratios for both 10 dB and 20 dB of ρ. (a) Recall. (b) Precision. (c) F1 score. Here, all subplots share a common legend.
Figure 10: Performance metrics for interference estimation for different spatial structures and interference characteristics. (a) Different sampling K. (b) Different ratio of CCI. (c) Different bandwidth of FEI. All subplots share a common legend.
22 pages, 30798 KiB  
Article
Underwater Image Enhancement Fusion Method Guided by Salient Region Detection
by Jiawei Yang, Hongwu Huang, Fanchao Lin, Xiujing Gao, Junjie Jin and Biwen Zhang
J. Mar. Sci. Eng. 2024, 12(8), 1383; https://doi.org/10.3390/jmse12081383 - 13 Aug 2024
Cited by 1 | Viewed by 1110
Abstract
Exploring and monitoring underwater environments pose unique challenges due to water’s complex optical properties, which significantly impact image quality. Challenges like light absorption and scattering result in color distortion and decreased visibility. Traditional underwater image acquisition methods face these obstacles, highlighting the need for advanced techniques to solve the image color shift and image detail loss caused by the underwater environment in the image enhancement process. This study proposes a salient region-guided underwater image enhancement fusion method to alleviate these problems. First, this study proposes an advanced dark channel prior method to reduce haze effects in underwater images, significantly improving visibility and detail. Subsequently, a comprehensive RGB color correction restores the underwater scene’s natural appearance. The innovation of our method is that it fuses through a combination of Laplacian and Gaussian pyramids, guided by salient region coefficients, thus preserving and accentuating the visually significant elements of the underwater environment. Comprehensive subjective and objective evaluations demonstrate our method’s superior performance in enhancing contrast, color depth, and overall visual quality compared to existing methods. Full article
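The fusion step described above blends Laplacian pyramids of the enhanced inputs under Gaussian pyramids of weight maps; the OpenCV sketch below shows that generic mechanism, with normalized weight maps standing in for the salient-region coefficients, and is not the authors' exact pipeline.

```python
import cv2
import numpy as np

def gaussian_pyramid(img, levels):
    pyr = [img.astype(np.float32)]
    for _ in range(levels - 1):
        pyr.append(cv2.pyrDown(pyr[-1]))
    return pyr

def laplacian_pyramid(img, levels):
    gp = gaussian_pyramid(img, levels)
    lp = []
    for i in range(levels - 1):
        h, w = gp[i].shape[:2]
        lp.append(gp[i] - cv2.pyrUp(gp[i + 1], dstsize=(w, h)))
    lp.append(gp[-1])  # coarsest residual
    return lp

def fuse(inputs, weights, levels=5):
    """Weighted multi-scale fusion of 8-bit color inputs: Laplacian pyramids of the
    inputs are blended with Gaussian pyramids of the normalized float weight maps."""
    total = sum(weights)  # elementwise sum of the (H, W) weight maps
    weights = [w / (total + 1e-8) for w in weights]
    fused = None
    for img, w in zip(inputs, weights):
        lp = laplacian_pyramid(img, levels)
        gw = gaussian_pyramid(w, levels)
        blended = [l * g[..., None] if l.ndim == 3 else l * g for l, g in zip(lp, gw)]
        fused = blended if fused is None else [f + b for f, b in zip(fused, blended)]
    out = fused[-1]  # collapse the fused pyramid from coarse to fine
    for lvl in range(levels - 2, -1, -1):
        h, w = fused[lvl].shape[:2]
        out = cv2.pyrUp(out, dstsize=(w, h)) + fused[lvl]
    return np.clip(out, 0, 255).astype(np.uint8)
```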
(This article belongs to the Section Ocean Engineering)
Show Figures

Figure 1: Schematic diagram of underwater imaging.
Figure 2: A flowchart of our proposed method for enhancing underwater images. Firstly, we perform single-image dehazing on the input image. Then, we remove the color deviation from the input image. After that, we detect the salient regions in the previously enhanced maps and calculate the weights. Finally, we fuse the images according to the obtained parameters to achieve the final enhancement result.
Figure 3: The results of each key stage in the process: (a) original underwater image, (b) single-image dehazing component, (c) salient region detection component, (d) removed color deviation component, (e) CIELAB component, (f) salient region detection component, (g) enhanced underwater image.
Figure 4: Qualitative comparison results of various methods on the UIEB.
Figure 5: Qualitative comparison results of various methods on the UIQS.
Figure 6: Qualitative comparison results of various methods on the UCCS.
Figure 7: Detail enhancement comparisons.
Figure 8: Qualitative ablation results for each key component of our method on the UIEB, UCCS, and UIQS datasets. (a) Original image. (b) -w/o SID. (c) -w/o CCR. (d) -w/o SGF. (e) Our proposed method.
Figure 9: Additional data validation. (a,b) The top row displays the original image, and the bottom row shows the results of our enhanced underwater image.
Figure 10: The results of feature matching.
Figure 11: The results of image stitching. (a,b,e,f) The original sequence image; (c,d,g,h) the stitching result of the enhanced sequence image.
Figure 12: The results of target recognition. (a) Original target recognition results; (b) target recognition results of our method.
Figure 13: Results of our method in enhancing hazy and low-light images. (a) Comparison of hazy image enhancement; (b) comparison of low-light image enhancement.
12 pages, 8410 KiB  
Article
Enhancing Retina Images by Lowpass Filtering Using Binomial Filter
by Mofleh Hannuf AlRowaily, Hamzah Arof, Imanurfatiehah Ibrahim, Haniza Yazid and Wan Amirul Mahyiddin
Diagnostics 2024, 14(15), 1688; https://doi.org/10.3390/diagnostics14151688 - 5 Aug 2024
Viewed by 721
Abstract
This study presents a method to enhance the contrast and luminosity of fundus images with boundary reflection. In this work, 100 retina images taken from online databases are utilized to test the performance of the proposed method. First, the red, green and blue channels are read and stored in separate arrays. Then, the area of the eye also called the region of interest (ROI) is located by thresholding. Next, the ratios of R to G and B to G at every pixel in the ROI are calculated and stored along with copies of the R, G and B channels. Then, the RGB channels are subjected to average filtering using a 3 × 3 mask to smoothen the RGB values of pixels, especially along the border of the ROI. In the background brightness estimation stage, the ROI of the three channels is filtered by binomial filters (BFs). This step creates a background brightness (BB) surface of the eye region by levelling the foreground objects like blood vessels, fundi, optic discs and blood spots, thus allowing the estimation of the background illumination. In the next stage, using the BB, the luminosity of the ROI is equalized so that all pixels will have the same background brightness. This is followed by a contrast adjustment of the ROI using CLAHE. Afterward, details of the adjusted green channel are enhanced using information from the adjusted red and blue channels. In the color correction stage, the intensities of pixels in the red and blue channels are adjusted according to their original ratios to the green channel before the three channels are reunited. The resulting color image resembles the original one in color distribution and tone but shows marked improvement in luminosity and contrast. The effectiveness of the approach is tested on the test images and enhancement is noticeable visually and quantitatively in greyscale and color. On average, this method manages to increase the contrast and luminosity of the images. The proposed method was implemented using MATLAB R2021b on an AMD 5900HS processor and the average execution time was less than 10 s. The performance of the filter is compared to those of two other filters and it shows better results. This technique can be a useful tool for ophthalmologists who perform diagnoses on the eyes of diabetic patients. Full article
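As an illustration of the binomial-filter (BF) smoothing used for background-brightness estimation, the sketch below builds a normalized 1-D binomial kernel and applies it separably to one channel; the kernel order and boundary handling are assumptions rather than the paper's settings.

```python
import numpy as np
import cv2

def binomial_kernel(order: int) -> np.ndarray:
    """1-D binomial kernel (a row of Pascal's triangle), normalized to sum to 1."""
    k = np.array([1.0])
    for _ in range(order):
        k = np.convolve(k, [1.0, 1.0])
    return k / k.sum()

def estimate_background(channel: np.ndarray, order: int = 16) -> np.ndarray:
    """Separable binomial low-pass filtering of one image channel, used here as a
    background-brightness estimate that levels foreground detail such as vessels."""
    k = binomial_kernel(order).astype(np.float32)
    return cv2.sepFilter2D(channel.astype(np.float32), ddepth=-1, kernelX=k, kernelY=k)
```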
(This article belongs to the Special Issue Advances in Medical Image Processing, Segmentation and Classification)
Show Figures

Figure 1: Two samples of retina images with boundary reflection.
Figure 2: Flow of stages in the enhancement process.
Figure 3: The green channel of an image, its background brightness and fully adjusted form.
Figure 4: Three retina images processed by BF with α of 1, 0.5, 0.2, 0.1 and 0.01, row-wise from top to bottom.
Figure 5: The original, binomial-filtered, median-filtered and Gaussian-filtered samples, column-wise from left to right. (a) Original, (b) binomial, (c) median and (d) Gaussian.
22 pages, 5638 KiB  
Article
A Method for Defogging Sea Fog Images by Integrating Dark Channel Prior with Adaptive Sky Region Segmentation
by Kongchi Hu, Qingyan Zeng, Junyan Wang, Jianqing Huang and Qi Yuan
J. Mar. Sci. Eng. 2024, 12(8), 1255; https://doi.org/10.3390/jmse12081255 - 25 Jul 2024
Viewed by 667
Abstract
Due to the detrimental impact of fog on image quality, dehazing maritime images is essential for applications such as safe maritime navigation, surveillance, environmental monitoring, and marine research. Traditional dehazing techniques, which depend on presupposed conditions, often fail to perform effectively, particularly when processing sky regions within marine fog images in which these conditions are not met. This study proposes a dark channel prior marine image dehazing method with adaptive sky area segmentation. It effectively addresses challenges associated with traditional marine image dehazing methods, improving dehazing results affected by bright targets in the sky area and mitigating the grayish appearance caused by the dark channel. The method exploits the grayscale discontinuity characteristics of region boundaries: the grayscale value with the fewest discontinuity areas in the grayscale histogram is taken as a segmentation threshold adapted to the characteristics of the sea fog image to segment bright areas such as the sky, and grayscale gradients are then used to identify grayscale differences between different bright areas, accurately distinguishing boundaries between sky and non-sky areas. By comparing area parameters, non-sky blocks are filled; this adaptively eliminates interference from other bright non-sky areas and accurately locks onto the sky area. Furthermore, this study proposes an enhanced dark channel prior approach that optimizes transmittance locally within sky areas and globally across the image. This is achieved using a transmittance optimization algorithm combined with guided filtering technology. The atmospheric light estimation is refined through iterative adjustments, ensuring consistency in brightness between the dehazed and original images. The image reconstruction employs the calculated atmospheric light and transmittance values through an atmospheric scattering model. Finally, the use of gamma-correction technology ensures that images more accurately replicate natural colors and brightness levels. Experimental outcomes demonstrate substantial improvements in the contrast, color saturation, and visual clarity of marine fog images. Additionally, a dataset of foggy marine images is developed for monitoring purposes. Compared with traditional dark channel prior dehazing techniques, this new approach significantly improves fog removal. This advancement enhances the clarity of images obtained from maritime equipment and effectively mitigates the risk of maritime transportation accidents. Full article
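For reference, the classic dark channel prior restoration that this method builds on can be sketched as follows; it is a generic He-et-al-style implementation and omits the adaptive sky segmentation, guided-filter transmittance refinement, iterative atmospheric-light adjustment, and gamma correction described above.

```python
import numpy as np
import cv2

def dark_channel(img, patch=15):
    """Dark channel: per-pixel minimum over RGB, then a minimum filter over a patch."""
    min_rgb = img.min(axis=2)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(min_rgb, kernel)

def dehaze(img, omega=0.95, t0=0.1, patch=15):
    """Basic dark-channel-prior dehazing of an 8-bit RGB/BGR image."""
    I = img.astype(np.float64) / 255.0
    dc = dark_channel(I, patch)
    # Atmospheric light: mean color of the brightest 0.1% dark-channel pixels.
    n = max(int(dc.size * 0.001), 1)
    idx = np.unravel_index(np.argsort(dc, axis=None)[-n:], dc.shape)
    A = I[idx].mean(axis=0)
    # Transmission estimate and scene radiance recovery via the scattering model.
    t = 1.0 - omega * dark_channel(I / A, patch)
    t = np.clip(t, t0, 1.0)[..., None]
    J = (I - A) / t + A
    return np.clip(J * 255, 0, 255).astype(np.uint8)
```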
(This article belongs to the Section Ocean Engineering)
Show Figures

Figure 1: A comparison of boundaries between maritime and terrestrial sky and non-sky areas. (b,c) are magnified portions of (a), and (e) is a magnified portion of (d). (a) is the maritime image. (b) shows the magnified boundary between the sky and maritime areas in the maritime image. (c) shows the magnified boundary between the sky and terrestrial areas in a maritime image. (d) is the terrestrial image. (e) shows the magnified boundary between the sky and terrestrial areas in the terrestrial image.
Figure 2: Comparison of land-based defogging algorithms applied to maritime images. The input image in (a) is defogged using the algorithms of (b) Liu et al. [18], (c) T. M. Bui et al. [19], (d) He et al. [11], and (e) Wang et al. [20], as well as (f) the proposed algorithm.
Figure 3: A detailed depiction of the dehazing algorithm process outlined in this study.
Figure 4: Explanation of threshold selection. (a) Distribution of optimal segmentation thresholds for 25 foggy maritime images; (b) histogram of grayscale values for the foggy maritime images.
Figure 5: L(i, j) represents the central pixel, with its surrounding eight pixels grouped into four sets labeled (1), (2), (3), and (4).
Figure 6: Linear regression fitting depicting the relationship between the sky transmittance range and the adjustment factor γ.
Figure 7: Qualitative comparison of different methods for defogging maritime images. The foggy input images (first column) were restored using the algorithms of He et al. [11] (second column), T. V. Nguyen et al. [22] (third column), Hu et al. [21] (fourth column), Kaplan N.H. et al. [27] (fifth column), and Liu et al. [18] (sixth column), as well as the proposed algorithm (seventh column). Each row, labeled from (a–e), corresponds to distinct maritime scenes, providing a side-by-side comparison of the effectiveness of each algorithm under varying fog conditions.
Figure 8: Comparison of demisting outcomes for the misty maritime images and their amplified segments. The second and fourth columns correspondingly exhibit enlarged portions of the crimson rectangles in the initial and subsequent columns.
Figure 9: The defogging outcomes of the proposed algorithm for varying α weights: (a) α = 5; (b) α = 25; (c) α = 75. The first and third rows display the outcomes of threshold-based segmentation, while the second and third columns display the corresponding defogged images.
Figure 10: The defogging outcomes of the proposed algorithm for varying β weights: (a) β = 0.5; (b) β = 2.5; (c) β = 5; (d) β = 10. The first row displays the results of threshold segmentation, the second row shows the recognition of grayscale gradient outcomes, the third row illustrates connected region filling, and the fourth row presents the defogged images.
Figure 11: The defogging outcomes of the proposed algorithm for varying μ weights: (a) μ = 0.5; (b) μ = 1; (c) μ = 5; (d) μ = 15; (e) μ = 20. The transmission maps are presented in the first row, and the corresponding defogged images are displayed in the second row.
Figure 12: The defogging outcomes of the proposed algorithm for varying η weights: (a) η = 0; (b) η = 0.5; (c) η = 0.5; (d) η = 0.75; (e) η = 1.
Figure 13: Defogging outcomes of land fog images, presenting input images in the top row and corresponding defogged results in the bottom row. (a,b) Land images without sky regions and (c–f) land images with sky regions.
14 pages, 4040 KiB  
Article
THz Generation by Two-Color Plasma: Time Shaping and Ultra-Broadband Polarimetry
by Domenico Paparo, Anna Martinez, Andrea Rubano, Jonathan Houard, Ammar Hideur and Angela Vella
Sensors 2024, 24(13), 4265; https://doi.org/10.3390/s24134265 - 30 Jun 2024
Viewed by 732
Abstract
The generation of terahertz radiation via laser-induced plasma from two-color femtosecond pulses in air has been extensively studied due to its broad emission spectrum and significant pulse energy. However, precise control over the temporal properties of these ultra-broadband terahertz pulses, as well as the measurement of their polarization state, remain challenging. In this study, we review our latest findings on these topics and present additional results not previously reported in our earlier works. First, we investigate the impact of chirping on the fundamental wave and the effect of manipulating the phase difference between the fundamental wave and the second-harmonic wave on the properties of generated terahertz pulses. We demonstrate that we can tune the time shape of terahertz pulses, causing them to reverse polarity or become bipolar by carefully selecting the correct combination of chirp and phase. Additionally, we introduce a novel technique for polarization characterization, termed terahertz unipolar polarimetry, which utilizes a weak probe beam and avoids the systematic errors associated with traditional methods. This technique is effective for detecting polarization-structured terahertz beams and the longitudinal component of focused terahertz beams. Our findings contribute to the improved control and characterization of terahertz radiation, enhancing its application in fields such as nonlinear optics, spectroscopy, and microscopy. Full article
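Purely as an illustration of the quantities discussed above (FW chirp and the relative phase Δφ between FW and SHW), a two-color driving field can be parameterized as below; the pulse duration, field ratio, and chirp rate are arbitrary example values, the SHW chirp is neglected, and this is not the photocurrent model used by the authors.

```python
import numpy as np

def two_color_field(t, tau=50e-15, omega0=2.35e15, beta=0.0, ratio=0.3, dphi=np.pi / 2):
    """Gaussian FW with linear chirp beta plus an SHW offset by a relative phase dphi.
    All parameter values are illustrative; the SHW is taken as unchirped for simplicity."""
    envelope = np.exp(-(t / tau) ** 2)
    fw = envelope * np.cos(omega0 * t + beta * t ** 2)
    shw = ratio * envelope * np.cos(2.0 * omega0 * t + dphi)
    return fw + shw

t = np.linspace(-200e-15, 200e-15, 4001)
E = two_color_field(t, beta=1e27, dphi=7 * np.pi / 9)  # chirped FW, Δφ as in Figure 2
```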
(This article belongs to the Special Issue Research Development in Terahertz and Infrared Sensing Technology)
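The two control parameters discussed in the abstract are the chirp imposed on the fundamental wave (FW) and the relative phase ∆ϕ between the FW and its second harmonic (SHW). As a purely illustrative sketch, the snippet below constructs such a two-color driving field with both knobs exposed; it does not reproduce the plasma photocurrent or four-wave-mixing model that maps this field onto the emitted THz waveform, and the pulse duration, wavelength, and chirp value are arbitrary assumptions. The phase value 7π/9 echoes the setting quoted in the Figure 2 caption.

```python
import numpy as np

def two_color_field(t, tau=30e-15, lam=800e-9, chirp=0.0, delta_phi=0.0, ratio=0.3):
    """Chirped fundamental wave (FW) plus its second harmonic (SHW).

    t         : time axis in seconds
    tau       : FW Gaussian pulse duration (illustrative value)
    lam       : FW central wavelength (illustrative value)
    chirp     : linear chirp coefficient added to the FW phase (rad/s^2)
    delta_phi : relative phase between FW and SHW
    ratio     : SHW field amplitude relative to the FW
    """
    w0 = 2.0 * np.pi * 3.0e8 / lam                      # FW angular frequency
    env = np.exp(-t**2 / (2.0 * tau**2))                # common Gaussian envelope
    fw = env * np.cos(w0 * t + 0.5 * chirp * t**2)
    # Thin-crystal assumption: the SHW phase is twice the FW phase plus delta_phi.
    shw = ratio * env * np.cos(2.0 * w0 * t + chirp * t**2 + delta_phi)
    return fw + shw

t = np.linspace(-100e-15, 100e-15, 4001)
field = two_color_field(t, chirp=5e26, delta_phi=7 * np.pi / 9)
```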
Show Figures

Figure 1
<p>Intensity of the generated THz field as a function of the FW intensity for the two mechanisms: four-wave mixing (orange line) and plasma photocurrent (blue line). The values have been normalized to the maximum yield obtained with the photocurrent mechanism.</p>
Full article ">Figure 2
<p>Panel (<b>a</b>) shows THz waveforms for different values of chirp. Panel (<b>b</b>) shows normalized power spectra for different chirp values. In the inset, the frequency of the maximum power as a function of the chirp is shown. For all graphs, <math display="inline"><semantics> <mrow> <mo>∆</mo> <mi>ϕ</mi> <mo>=</mo> <mn>7</mn> <mi>π</mi> <mo>/</mo> <mn>9</mn> <mo>.</mo> </mrow> </semantics></math></p>
Full article ">Figure 3
<p>(<b>a</b>) Experimental scheme of the apparatus. FW and THz beams are collinearly focused on the same point, respectively, by a lens and a hole-drilled off-axis parabolic mirror. SHW is analyzed with a combination of a half-waveplate and a polarizing beam splitter (PBS), with the axis parallel to the <math display="inline"><semantics> <mrow> <mover accent="true"> <mrow> <mi>x</mi> </mrow> <mo>^</mo> </mover> </mrow> </semantics></math> axis. A monochromator (MC) further rejects spurious signals, and a photomultiplier tube (PMT) measures the SHW intensity. Panel (<b>b</b>) shows the geometry of the different polarizations and the half-waveplate-analyzer axis: FW (red line); half-wave plate axis (dotted black line). [Reprinted from <span class="html-italic">Appl. Phys. Lett.</span> 2023, 123, 071101 [<a href="#B30-sensors-24-04265" class="html-bibr">30</a>], with the permission of AIP Publishing].</p>
Full article ">Figure 4
<p>Schematic diagram of the experimental setup: HWP, half-waveplate; QWP, quarter-waveplate; WP, Wollaston prism. The pump beam passes through a BBO crystal to generate a second harmonic. Both beams are focused in air to form a plasma that produces a strong THz pulse. The probe beam is directed to the post-compression system to enable EO sampling. Note that the pump beam can be chirped independently of the probe beam.</p>
Full article ">Figure 5
<p>In panels (<b>a</b>–<b>c</b>), we present the measured (blue solid line) and simulated (red dashed line) THz waveforms for different values of chirp (positive) and <math display="inline"><semantics> <mrow> <mo>∆</mo> <mi>ϕ</mi> </mrow> </semantics></math>. In panels (<b>d</b>–<b>f</b>), the corresponding power spectra are shown. [Partly adapted from <span class="html-italic">Appl. Phys. Lett.</span> 2024, 124, 021105 [<a href="#B29-sensors-24-04265" class="html-bibr">29</a>], with the permission of AIP Publishing].</p>
Full article ">Figure 6
<p>In panels (<b>a</b>–<b>c</b>), we present the measured (blue solid line) and simulated (red dashed line) THz waveforms for different values of chirp (negative) and <math display="inline"><semantics> <mrow> <mo>∆</mo> <mi>ϕ</mi> </mrow> </semantics></math>. In panels (<b>d</b>–<b>f</b>), the corresponding power spectra are shown. [Partly adapted from <span class="html-italic">Appl. Phys. Lett.</span> 2024, 124, 021105 [<a href="#B29-sensors-24-04265" class="html-bibr">29</a>], with the permission of AIP Publishing].</p>
Full article ">Figure 7
<p>The energy of the generated THz pulses is shown as a function of <math display="inline"><semantics> <mrow> <mo>∆</mo> <mi>ϕ</mi> </mrow> </semantics></math> and chirp. The different colors correspond to different sets of measurements.</p>
Full article ">Figure 8
<p>In the upper row of the figure, the plots show the quantity <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>I</mi> </mrow> <mrow> <mn>2</mn> <mi>ω</mi> </mrow> </msub> <mo stretchy="false">(</mo> <mi mathvariant="sans-serif">α</mi> <mo>,</mo> <mi mathvariant="sans-serif">β</mi> <mo stretchy="false">)</mo> </mrow> </semantics></math> as a function of <math display="inline"><semantics> <mrow> <mi mathvariant="sans-serif">β</mi> </mrow> </semantics></math> for various values of <math display="inline"><semantics> <mrow> <mi>α</mi> </mrow> </semantics></math>. The data points are represented by open circles, while the solid lines correspond to the fitting curves, as described in the main text. In the lower row of the figure, the plots depict the quantity <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>I</mi> </mrow> <mrow> <mn>2</mn> <mi>ω</mi> </mrow> </msub> <mo stretchy="false">(</mo> <mi mathvariant="sans-serif">α</mi> <mo>,</mo> <mi mathvariant="sans-serif">β</mi> <mo stretchy="false">)</mo> </mrow> </semantics></math> as a function of <math display="inline"><semantics> <mrow> <mi>α</mi> </mrow> </semantics></math> for various values of <math display="inline"><semantics> <mrow> <mi>β</mi> </mrow> </semantics></math>. [Adapted from <span class="html-italic">Appl. Phys. Lett.</span> 2023, 123, 071101 [<a href="#B30-sensors-24-04265" class="html-bibr">30</a>], with the permission of AIP Publishing].</p>
Full article ">Figure 9
<p>Panels (<b>a</b>–<b>c</b>) indicate the polarization states of the THz beam (see the explanation in the main text). Panels (<b>d</b>–<b>f</b>) show the corresponding normalized intensity distributions of the SHW. In panel (<b>d</b>), the blue curve has been slightly rescaled for clarity; actually, the two interferograms overlap exactly. In panel (<b>f</b>), <math display="inline"><semantics> <mrow> <mi>I</mi> <mfenced separators="|"> <mrow> <mi>α</mi> <mo>,</mo> <mn>0</mn> </mrow> </mfenced> </mrow> </semantics></math> is calculated for different values of the ratio between the longitudinal, <span class="html-italic">c</span>, and the transverse, <span class="html-italic">b</span>, components. The blue dotted square highlights the values for <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mi>π</mi> </mrow> </semantics></math>, where the curves show significant differences, to allow measurement of the axial component.</p>
Full article ">
19 pages, 12242 KiB  
Article
Reconstructing the Colors of Underwater Images Based on the Color Mapping Strategy
by Siyuan Wu, Bangyong Sun, Xiao Yang, Wenjia Han, Jiahai Tan and Xiaomei Gao
Mathematics 2024, 12(13), 1933; https://doi.org/10.3390/math12131933 - 21 Jun 2024
Viewed by 667
Abstract
Underwater imagery plays a vital role in ocean development and conservation efforts. However, underwater images often suffer from chromatic aberration and low contrast due to the attenuation and scattering of visible light in the complex medium of water. To address these issues, we propose an underwater image enhancement network called CM-Net, which utilizes color mapping techniques to remove noise and restore the natural brightness and colors of underwater images. Specifically, CM-Net consists of a three-step solution: adaptive color mapping (ACM), local enhancement (LE), and global generation (GG). Inspired by the principles of color gamut mapping, the ACM enhances the network’s adaptive response to regions with severe color attenuation. ACM enables the correction of the blue-green cast in underwater images by combining color constancy theory with the power of convolutional neural networks. To account for inconsistent attenuation in different channels and spatial regions, we designed a multi-head reinforcement module (MHR) in the LE step. The MHR enhances the network’s attention to channels and spatial regions with more pronounced attenuation, further improving contrast and saturation. Compared to the best candidate models on the EUVP and UIEB datasets, CM-Net improves PSNR by 18.1% and 6.5% and SSIM by 5.9% and 13.3%, respectively. At the same time, CIEDE2000 decreases by 25.6% and 1.3%. Full article
(This article belongs to the Special Issue Advances in Computer Vision and Machine Learning, 2nd Edition)
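The abstract traces the blue-green cast to wavelength-dependent attenuation and notes that the ACM step combines color constancy theory with a convolutional network. For orientation only, the snippet below implements the classical gray-world color constancy correction, the simplest instance of the per-channel rebalancing that such a cast calls for; it is not the learned ACM module of CM-Net, which adapts this kind of mapping per region.

```python
import numpy as np

def gray_world_correction(img):
    """Classical gray-world color constancy: rescale each channel so the
    per-channel means become equal. img is an HxWx3 float image in [0, 1]."""
    means = img.reshape(-1, 3).mean(axis=0)          # per-channel means (R, G, B)
    gains = means.mean() / np.maximum(means, 1e-6)   # boosts the attenuated (red) channel
    return np.clip(img * gains, 0.0, 1.0)
```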
Show Figures

Figure 1
<p>The comparison of underwater and normal images: (<b>a</b>) underwater image and corresponding chromaticity diagram; (<b>b</b>) normal images and corresponding chromaticity diagram.</p>
Full article ">Figure 2
<p>The diagram of underwater light attenuation.</p>
Full article ">Figure 3
<p>The architecture of the proposed CM-Net.</p>
Full article ">Figure 4
<p>The architecture of the base module.</p>
Full article ">Figure 5
<p>The architecture of the conditional module.</p>
Full article ">Figure 6
<p>The architecture of the MHR.</p>
Full article ">Figure 7
<p>The architecture of the REB.</p>
Full article ">Figure 8
<p>Visual comparison on the S_500 dataset: (<b>a</b>) raw; (<b>b</b>) GDCP; (<b>c</b>) Fusion; (<b>d</b>) Red; (<b>e</b>) Retinex; (<b>f</b>) UWCNN; (<b>g</b>) W-Net; (<b>h</b>) F-GAN; (<b>i</b>) U-color; (<b>j</b>) U-Shape; (<b>k</b>) CM-Net; (<b>l</b>) ground truth.</p>
Full article ">Figure 9
<p>Visual comparison on the R_90 dataset: (<b>a</b>) raw; (<b>b</b>) GDCP; (<b>c</b>) Fusion; (<b>d</b>) Red; (<b>e</b>) Retinex; (<b>f</b>) UWCNN; (<b>g</b>) W-Net; (<b>h</b>) F-GAN; (<b>i</b>) U-color; (<b>j</b>) U-Shape; (<b>k</b>) CM-Net; (<b>l</b>) ground truth.</p>
Full article ">Figure 10
<p>Visual comparison on the C_60 dataset: (<b>a</b>) raw; (<b>b</b>) GDCP; (<b>c</b>) Fusion; (<b>d</b>) Red; (<b>e</b>) Retinex; (<b>f</b>) UDCP; (<b>g</b>) UWCNN; (<b>h</b>) W-Net; (<b>i</b>) F-GAN; (<b>j</b>) U-color; (<b>k</b>) U-Shape; (<b>l</b>) CM-Net.</p>
Full article ">Figure 11
<p>Visual comparison and corresponding chromaticity diagram on the S_500 dataset: (<b>a</b>) raw; (<b>b</b>) GDCP; (<b>c</b>) Fusion; (<b>d</b>) Red; (<b>e</b>) Retinex; (<b>f</b>) UWCNN; (<b>g</b>) W-Net; (<b>h</b>) F-GAN; (<b>i</b>) U-color; (<b>j</b>) U-Shape; (<b>k</b>) CM-Net; (<b>l</b>) ground truth.</p>
Full article ">Figure 12
<p>Visual comparison and corresponding chromaticity diagram on the R_90 dataset: (<b>a</b>) raw; (<b>b</b>) GDCP; (<b>c</b>) Fusion; (<b>d</b>) Red; (<b>e</b>) Retinex; (<b>f</b>) UWCNN; (<b>g</b>) W-Net; (<b>h</b>) F-GAN; (<b>i</b>) U-color; (<b>j</b>) U-Shape; (<b>k</b>) CM-Net; (<b>l</b>) ground truth.</p>
Full article ">Figure 13
<p>Visual comparison and corresponding chromaticity diagram on the C_60 dataset: (<b>a</b>) raw; (<b>b</b>) GDCP; (<b>c</b>) Fusion; (<b>d</b>) Red; (<b>e</b>) Retinex; (<b>f</b>) UDCP; (<b>g</b>) UWCNN; (<b>h</b>) W-Net; (<b>i</b>) F-GAN; (<b>j</b>) U-color; (<b>k</b>) U-Shape; (<b>l</b>) CM-Net.</p>
Full article ">Figure 14
<p>Visual comparison of different components on R_90 dataset: (<b>a</b>) raw; (<b>b</b>) ACC; (<b>c</b>) ACC + LE; (<b>d</b>) LE + ACC + GG; (<b>e</b>) CM-Net; (<b>f</b>) ground truth.</p>
Full article ">Figure 15
<p>Visual comparison of different MHRB numbers on R_90 dataset: (<b>a</b>) raw; (<b>b</b>) 3 MHRB; (<b>c</b>) 5 MHRB; (<b>d</b>) CM-Net; (<b>e</b>) ground truth.</p>
Full article ">
13 pages, 4456 KiB  
Article
3D Printed Hydrogel Sensor for Rapid Colorimetric Detection of Salivary pH
by Magdalena B. Łabowska, Agnieszka Krakos and Wojciech Kubicki
Sensors 2024, 24(12), 3740; https://doi.org/10.3390/s24123740 - 8 Jun 2024
Viewed by 1042
Abstract
Salivary pH is one of the crucial biomarkers used for non-invasive diagnosis of intraoral diseases, as well as general health conditions. However, standard pH sensors are usually too bulky, expensive, and impractical for routine use outside laboratory settings. Herein, a miniature hydrogel sensor, which enables quick and simple colorimetric detection of pH level, is shown. The sensor structure was manufactured from non-toxic hydrogel ink and patterned in the form of a matrix with 5 mm × 5 mm × 1 mm individual sensing pads using a 3D printing technique (bioplotting). The authors’ ink composition, which contains sodium alginate, polyvinylpyrrolidone, and bromothymol blue indicator, enables repeatable and stable color response to different pH levels. The developed analysis software with an easy-to-use graphical user interface extracts the R(ed), G(reen), and B(lue) components of the color image of the hydrogel pads, and evaluates the pH value in a second. A calibration curve used for the analysis was obtained in a pH range of 3.5 to 9.0 using a laboratory pH meter as a reference. Validation of the sensor was performed on samples of artificial saliva for medical use and its mixtures with beverages of different pH values (lemon juice, coffee, black and green tea, bottled and tap water), and correct responses to acidic and alkaline solutions were observed. The matrix of square sensing pads used in this study provided multiple parallel responses for parametric tests, but the applied 3D printing method and ink composition enable easy adjustment of the shape of the sensing layer to other desired patterns and sizes. Additional mechanical tests of the hydrogel layers confirmed the relatively high quality and durability of the sensor structure. The solution presented here, comprising 3D printed hydrogel sensor pads, simple colorimetric detection, and graphical software for signal processing, opens the way to development of miniature and biocompatible diagnostic devices in the form of flexible, wearable, or intraoral sensors for prospective application in personalized medicine and point-of-care diagnosis. Full article
(This article belongs to the Special Issue Eurosensors 2023 Selected Papers)
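The analysis software described in the abstract extracts the R, G, B components of each sensing pad, converts them to hue, and reads the pH off a fitted calibration curve. A minimal sketch of that readout chain is given below; the ROI coordinates and the calibration pairs are placeholders rather than the article's values (the real curve is fitted against a laboratory pH meter over pH 3.5 to 9.0), and the calibration is inverted here so that pH is evaluated as a function of hue.

```python
import colorsys
import numpy as np

def roi_mean_rgb(image, x, y, w, h):
    """Mean (R, G, B) of a rectangular sensing pad; image is an HxWx3 array, 0-255."""
    patch = image[y:y + h, x:x + w].reshape(-1, 3).astype(float)
    return patch.mean(axis=0)

def rgb_to_hue(rgb):
    """Hue in [0, 1) computed from an (R, G, B) triple in 0-255."""
    r, g, b = (c / 255.0 for c in rgb)
    return colorsys.rgb_to_hsv(r, g, b)[0]

# Calibration: fit a polynomial to (hue, reference pH) pairs measured against a pH meter.
# The pairs below are placeholders, not values reported in the article.
hues_ref = np.array([0.12, 0.18, 0.25, 0.33, 0.42, 0.50])
ph_ref = np.array([3.5, 4.5, 5.5, 6.5, 7.5, 9.0])
coeffs = np.polyfit(hues_ref, ph_ref, deg=3)

def hue_to_ph(hue):
    """Evaluate the inverted calibration curve: pH as a function of hue."""
    return float(np.polyval(coeffs, hue))

# Readout for one pad (hypothetical ROI coordinates):
# ph_value = hue_to_ph(rgb_to_hue(roi_mean_rgb(frame, 120, 80, 40, 40)))
```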
Show Figures

Figure 1
<p>Preparation of the hydrogel pH sensors: (<b>a</b>) 3D printed substrate (outer dimensions: 54 × 51 × 1.1 mm<sup>3</sup>) containing a matrix of 25 bioplotted 5 × 5 mm<sup>2</sup> hydrogel sensor pads, (<b>b</b>) hydrogel pattern bioplotting process utilizing a BioX Cellink printer.</p>
Full article ">Figure 2
<p>Tensile strength of the hydrogel matrices; the maximum tensile strength was visualized with Emperor Force software v1.18-408.</p>
Full article ">Figure 3
<p>Front panel of the developed LabVIEW application for image colorimetry. RGB plots are automatically processed for selected rectangular ROIs selected in the image of the sensor matrix.</p>
Full article ">Figure 4
<p>Tensile strength tests of the hydrogel sensor. (<b>a</b>) Young’s modulus (E) of hydrogel and hydrogel with pH indicator added (N = 10), (<b>b</b>) comparison of the compressive modulus of elasticity (E<sub>c</sub>) of hydrogel exposed to different pH (3.5, 6.5, 9.0) (N = 10).</p>
Full article ">Figure 5
<p>Colorimetric response of the hydrogel sensor to pH values in the range 3.5 to 9.0. (<b>a</b>) Visible color tunability of the hydrogel sensors 30 s after application of solutions of various pH, (<b>b</b>) calibration curve of hue as a function of pH with a fitted polynomial, 30 s after application (mean values, N = 5).</p>
Full article ">Figure 6
<p>Triangle charts of RGB components for color changes for various pH 3.5–9.0 (mean values, N = 5) at (<b>a</b>) 0 min, (<b>b</b>) 1 min, (<b>c</b>) 5 min, (<b>d</b>) 15 min.</p>
Full article ">Figure 7
<p>Radar chart of hydrogel sensor responses at 0–15 min, illustrating the stability of the hydrogel sensor (example for a pH 6.5 solution; mean values, N = 5).</p>
Full article ">Figure 8
<p>Hue values of beverages as a function of pH after 30 s of measurement (mean values, N = 5).</p>
Full article ">Figure 9
<p>Triangle chart of the RGB component for hydrogel sensor responses to saliva and saliva-beverages mixtures after 15 min of application (mean values, N = 5); (<b>a</b>) beverages, 100% concentration; (<b>b</b>) beverages and saliva mixtures, 50% concentration; (<b>c</b>) beverages and saliva mixtures, 20% concentration; (<b>d</b>) beverages and saliva mixtures, 10% concentration.</p>
Full article ">
13 pages, 2233 KiB  
Article
Aspects of Lighting and Color in Classifying Malignant Skin Cancer with Deep Learning
by Alan R. F. Santos, Kelson R. T. Aires and Rodrigo M. S. Veras
Appl. Sci. 2024, 14(8), 3297; https://doi.org/10.3390/app14083297 - 14 Apr 2024
Viewed by 916
Abstract
Malignant skin cancers are common in emerging countries, with excessive sun exposure and genetic predispositions being the main causes. Variations in lighting and color, resulting from the diversity of devices and lighting conditions during image capture, pose a challenge for automated diagnosis through digital images. Deep learning techniques emerge as promising solutions to improve the accuracy of identifying malignant skin lesions. This work aims to investigate the impact of lighting and color correction methods on automated skin cancer diagnosis using deep learning architectures, focusing on the relevance of these characteristics for accuracy in identifying malignant skin cancer. The developed methodology includes steps for hair removal, lighting and color correction, definition of the region of interest, and classification using deep neural network architectures. We employed deep learning techniques such as LCDPNet, LLNeRF, and DSN for lighting and color correction, which had not previously been tested in this context. The results emphasize the importance of image preprocessing, especially lighting and color adjustments, with the best configurations yielding an accuracy increase of 3–4%. We observed that different deep neural network architectures react variably to lighting and color corrections. Some architectures are more sensitive to variations in these characteristics, while others are more robust. Advanced lighting and color correction can thus significantly improve the accuracy of malignant skin cancer diagnosis. Full article
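The methodology opens with hair removal before lighting/color correction, ROI definition, and classification. As a generic illustration of that first step only (not the authors' component-combination and bilinear-interpolation procedure shown in Figure 2), the snippet below uses the common morphological black-hat plus inpainting approach; OpenCV is assumed to be available, and the kernel size and threshold are illustrative defaults.

```python
import cv2

def remove_hair(image_bgr, kernel_size=17, threshold=10):
    """Generic hair removal for dermoscopic images: detect thin dark structures
    with a morphological black-hat filter and fill them in by inpainting."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernel_size, kernel_size))
    blackhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)
    _, hair_mask = cv2.threshold(blackhat, threshold, 255, cv2.THRESH_BINARY)
    return cv2.inpaint(image_bgr, hair_mask, 3, cv2.INPAINT_TELEA)
```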
Show Figures

Figure 1
<p>Steps of the suggested experimental methodology.</p>
Full article ">Figure 2
<p>Removal of excess hair. The first image (<b>a</b>) is the input to the method. The second image (<b>b</b>) is the mask produced by combining the components. The third image (<b>c</b>) is the corrected image with bilinear interpolation.</p>
Full article ">Figure 3
<p>Definition of the region of interest. The Encoder processes the input image with atrous convolution at multiple rates. The Decoder integrates the captured features to create a segmentation mask. Then, a bounding box is defined over the lesion to isolate the region of interest cropped from the original image.</p>
Full article ">Figure 4
<p>Suggested lighting and color correction methods: (<b>a</b>) uncorrected images; (<b>b</b>) images corrected with the LCDPNet method; (<b>c</b>) images corrected with the LLNeRF method; (<b>d</b>) images corrected with the DSN method.</p>
Full article ">
3 pages, 414 KiB  
Abstract
The Application of Back-Compatible Color QR Codes to Colorimetric Sensors
by Ismael Benito-Altamirano, Ferran Crugeira, Míriam Marchena and J. Daniel Prades
Proceedings 2024, 97(1), 3; https://doi.org/10.3390/proceedings2024097003 - 13 Mar 2024
Viewed by 573
Abstract
We present the application of QR Codes as carriers for colorimetric dyes. This refined version of machine-readable patterns applied to colorimetric sensing remains back-compatible with the QR Code standard: the QR Code still encodes digital data (readable with a standard QR Code decoder) alongside a hundred colorimetric references and the dyes. We also discuss in detail the effectiveness of different color correction methods in attaining color accuracy levels suited for colorimetric sensing. Moreover, we illustrate how color correction techniques can exploit the hundreds of available color references, using the example of a printed CO2 sensor that monitors the integrity of modified atmosphere packaging (MAP). Full article
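Having hundreds of embedded color references is what makes camera-side color correction practical. As one standard example of such a correction (the paper compares several methods, which may differ from this one), the snippet below estimates a 3×4 affine color correction matrix from measured versus reference patch colors by least squares and applies it to new pixels.

```python
import numpy as np

def fit_affine_color_correction(measured, reference):
    """Least-squares 3x4 affine color correction.

    measured  : Nx3 RGB values of the reference patches as captured by the camera
    reference : Nx3 known RGB values of the same patches
    Returns M such that reference ≈ [r, g, b, 1] @ M.T for each patch.
    """
    X = np.hstack([measured, np.ones((measured.shape[0], 1))])   # Nx4, homogeneous
    M, *_ = np.linalg.lstsq(X, reference, rcond=None)            # 4x3 solution
    return M.T                                                   # 3x4

def apply_color_correction(M, pixels):
    """Apply the 3x4 correction matrix to an Nx3 array of RGB pixels."""
    X = np.hstack([pixels, np.ones((pixels.shape[0], 1))])
    return np.clip(X @ M.T, 0.0, 255.0)
```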
Show Figures

Figure 1
<p>Evolution of machine-readable patterns embedding colorimetric dyes from 2018 to 2023. (<b>a</b>) Our first proposal for such patterns, presented at Eurosensors in 2018 [<a href="#B1-proceedings-97-00003" class="html-bibr">1</a>]; (<b>b</b>) our second attempt to fabricate the patterns [<a href="#B2-proceedings-97-00003" class="html-bibr">2</a>]; (<b>c</b>) the proposal by Escobedo et al. [<a href="#B3-proceedings-97-00003" class="html-bibr">3</a>] to embed sensors in a pattern with digital data; and (<b>d</b>) our proposal implementing a similar concept while maximizing back-compatibility [<a href="#B4-proceedings-97-00003" class="html-bibr">4</a>].</p>
Full article ">