Search Results (2,963)

Search Parameters:
Keywords = artifacts

24 pages, 6380 KiB  
Article
Multi-Type Self-Attention-Based Convolutional-Neural-Network Post-Filtering for AV1 Codec
by Woowoen Gwun, Kiho Choi and Gwang Hoon Park
Mathematics 2024, 12(18), 2874; https://doi.org/10.3390/math12182874 - 15 Sep 2024
Abstract
Over the past few years, there has been substantial interest and research activity surrounding the application of Convolutional Neural Networks (CNNs) for post-filtering in video coding. Most current research efforts have focused on using CNNs with various kernel sizes for post-filtering, primarily concentrating on High-Efficiency Video Coding/H.265 (HEVC) and Versatile Video Coding/H.266 (VVC). This narrow focus has limited the exploration and application of these techniques to other video coding standards such as AV1, developed by the Alliance for Open Media, which offers excellent compression efficiency, reducing bandwidth usage and improving video quality, making it highly attractive for modern streaming and media applications. This paper introduces a novel approach that extends beyond traditional CNN methods by integrating three different self-attention layers into the CNN framework. Applied to the AV1 codec, the proposed method significantly improves video quality by incorporating these distinct self-attention layers. This enhancement demonstrates the potential of self-attention mechanisms to revolutionize post-filtering techniques in video coding beyond the limitations of convolution-based methods. The experimental results show that the proposed network achieves an average BD-rate reduction of 10.40% for the Luma component and 19.22% and 16.52% for the Chroma components compared to the AV1 anchor. Visual quality assessments further validated the effectiveness of our approach, showcasing substantial artifact reduction and detail enhancement in videos.
(This article belongs to the Special Issue New Advances and Applications in Image Processing and Computer Vision)
Figures:
Figure 1: (a) Illustration showing where the in-loop filter is located in the video codec pipeline; (b) illustration showing where the post-filter is located in the pipeline.
Figure 2: Proposed MTSA-based CNN.
Figure 3: (a) RCB; (b) CWSA.
Figure 4: (a) Simplified feature map with channel size of 3 and height and width sizes of 4; (b) feature map unfolded into smaller blocks; (c) feature map permuted and reshaped.
Figure 5: (a) BWSSA; (b) PWSA.
Figure 6: R-D curves by SVT-AV1 and MTSA. (a) Class A1; (b) class A2; (c) class A3; (d) class A4; (e) class A5.
Figure 7: Example sequence of Class A1 PierSeaSide. (a) Original image from the AVM-CTC sequence; (b) detail inside the yellow box from (a) in the original image; (c) detail inside the yellow box from (a) in the compressed image using SVT-AV1 with QP55; (d) detail inside the yellow box from (a) after applying the post-filter using the proposed network.
Figure 8: Example sequence of Class A1 Tango. (a) Original image from the AVM-CTC sequence; (b) detail inside the yellow box from (a) in the original image; (c) detail inside the yellow box from (a) in the compressed image using SVT-AV1 with QP55; (d) detail inside the yellow box from (a) after applying the post-filter using the proposed network.
Figure 9: Example sequence of Class A2 RushFieldCuts. (a) Original image from the AVM-CTC sequence; (b) detail inside the yellow box from (a) in the original image; (c) detail inside the yellow box from (a) in the compressed image using SVT-AV1 with QP43; (d) detail inside the yellow box from (a) after applying the post-filter using the proposed network.
Figure 10: Methods to handle empty spaces for edge patches: (a) empty spaces filled with zero values; (b) empty spaces filled with extended edge pixel values.
Figure 11: Network wrongly turning edge pixels into darker values: (a) pixel value difference between the original video frame and the AV1-encoded frame; (b) pixel value difference between the original video frame and the AV1-encoded frame processed by the proposed network, with larger positive pixel differences in Y (indicating that the processed frame is darker) at the bottom of the image.
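Although the paper's exact MTSA architecture is not reproduced in this listing, the core idea of inserting a self-attention layer into a convolutional post-filter can be sketched in a few lines. The following PyTorch fragment is a minimal illustration, assuming a single-channel (luma) input and invented layer sizes; it is not the authors' network.

```python
# Hypothetical sketch: a tiny convolutional post-filter with one
# channel-wise self-attention layer. Sizes are illustrative only.
import torch
import torch.nn as nn

class ChannelSelfAttention(nn.Module):
    """Self-attention over the channel dimension of a feature map."""
    def __init__(self, channels: int):
        super().__init__()
        self.qkv = nn.Conv2d(channels, channels * 3, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        q, k, v = self.qkv(x).flatten(2).chunk(3, dim=1)   # each (b, c, h*w)
        attn = torch.softmax(q @ k.transpose(1, 2) / (h * w) ** 0.5, dim=-1)
        return x + (attn @ v).view(b, c, h, w)             # residual connection

class PostFilter(nn.Module):
    def __init__(self, channels: int = 32):
        super().__init__()
        self.head = nn.Conv2d(1, channels, 3, padding=1)
        self.attn = ChannelSelfAttention(channels)
        self.tail = nn.Conv2d(channels, 1, 3, padding=1)

    def forward(self, decoded_luma):
        f = torch.relu(self.head(decoded_luma))
        return decoded_luma + self.tail(self.attn(f))      # predict a residual

frame = torch.rand(1, 1, 64, 64)     # stand-in for a decoded luma patch
print(PostFilter()(frame).shape)     # torch.Size([1, 1, 64, 64])
```

The attention matrix here is channel-by-channel rather than pixel-by-pixel, which keeps the cost modest; the paper combines three distinct attention types rather than this single one.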
17 pages, 59483 KiB  
Article
Augmented Reality- and Geographic Information System-Based Inspection of Brick Details in Heritage Warehouses
by Naai-Jung Shih and Yu-Chen Wu
Appl. Sci. 2024, 14(18), 8316; https://doi.org/10.3390/app14188316 - 15 Sep 2024
Abstract
Brick warehouses represent interdisciplinary heritage sites developed by social, cultural, and economic impacts. This study aimed to connect warehouse details and GIS maps in augmented reality (AR) based on the former Camphor Refinery Workshop Warehouse. AR was applied as an innovative interface to communicate the differences between construction details, providing a feasible on-site solution for articulating historical brick engineering technology. A complex warehouse cluster was georeferenced by the AR models of brick details. The map was assisted by a smartphone-based comparison of the details of adjacent warehouses. Sixty AR models of warehouse details exemplified the active and sustainable preservation of the historical artifacts. The side-by-side allocation of warehouse details in AR facilitated cross-comparisons of construction differences. We found that a second reconstruction integrated AR and reality in a novel manner through smartphone-based AR. GIS and AR facilitated management efforts using webpages and cloud access from a remote site. The vocabulary of building details can be enriched and better presented in AR.
(This article belongs to the Topic Innovation, Communication and Engineering)
Figures:
Figure 1: Former Camphor Refinery Workshop Warehouse: (a) geo-referenced map in QGIS® marked with 60 brick details (red dots); (b) field images; (c) relative location to old urban fabric in 1930 map [1]; (d) same as in (c) but for 1983 map [1].
Figure 2: Building components under inspection: (a) red bricks; (b) buttresses; (c) corners; (d) openings; (e) decorations; (f) downspouts; and (g) wall finishes.
Figure 3: Creation and interaction of AR models in GIS.
Figure 4: The process of creating and interacting with AR models: (a) field image taking; (b) AR model uploading and conversion; (c) AR database for QGIS®; (d) field access (facilitated by scanning a QR code); (e) moving the smartphone to define the ground plane; (f) deploying the AR model; (g) adjusting the model’s location; (h) adjusting the model’s scale; (i) documenting the process via a screenshot; (j) spreadsheet of details; (k) brick detail webpage with altitude data, longitude data, and a link to the AR model converted from QGIS®; and (l) AR inspection and scaling in portrait and landscape views.
Figure 5: Examples of field images, 3D color models, and plain models: (a) main entrance; (b) corner; (c) main entrance with buttress; and (d) facades. Examples of smartphone screenshots of AR models.
Figure 6: Examples of AR inspection for (a) utilities; (b) brick corner and pavement; (c) ground window finish with pavement; (d) offset crack between brick opening and corner; (e) ventilation windows above entrance; (f) sealed opening; and (g,h) scale model in front of real stone fence.
Figure 7: Examples of screenshots of warehouse models: (a) facades; (b) gables.
Figure 8: Secondary reconstruction of AR model and field scene: (a) screenshots of an AR model placed in front of a different opening style; (b) the second reconstructed scene in Zephyr®; and (c) a 3D model exported from Zephyr®.
Figure 9: Secondary reconstruction of AR model and physical 3D-printed model: (a) the model in front is a 3D color-printed one, while the model in the back is an AR model which can only be seen on a smartphone screen; (b) photogrammetric modeling was carried out using Zephyr; (c) a 3D model exported from Zephyr®.
Figure 10: Screenshots of video communication using Line®: (a) Line® video call; (b) QR code scanning; (c) moving the smartphone to define a working plane; (d) a model was inserted and placed next to the original building shown in (c); (e) view from the right-hand side.
Figure 11: Redefined transparency of AR model to highlight brick edge and layout.
Figure 12: Cross-warehouse comparison for checking the alignment of the building corner finishes.
Figure 13: The 3D photogrammetric modeling loop.
Figure A1: Images of the 60 3D models.
13 pages, 2706 KiB  
Article
Application of Weighting Algorithm for Enhanced Broadband Vector Network Analyzer Measurements
by Sang-hee Shin and James Skinner
Mathematics 2024, 12(18), 2871; https://doi.org/10.3390/math12182871 - 14 Sep 2024
Abstract
A weighting algorithm for application in the Thru-Reflect-Line (TRL) calibration technique is presented to enhance the accuracy and reliability of vector network analyzer (VNA) measurements over broad frequency bands. The method addresses the inherent limitations of the traditional TRL calibration, particularly the step changes observed in banded-TRL approaches when multiple Line standards are used. By introducing a bespoke weighting function that assigns phase-dependent weights to each Line standard, smoother transitions and improved S-parameter measurements can be achieved. Experimental validation using measurements of both 3.5 mm and Type-N devices demonstrates the effectiveness of the weighted-TRL method in eliminating discontinuities and calibration artifacts across a wide range of frequencies. The results reveal the improved calibration of S-parameters this approach can yield compared to traditional TRL calibration methods. The developed weighted-TRL calibration technique offers a significant advancement in metrology-grade measurements, enabling more precise characterization of high-frequency devices across broad frequency bands. By mitigating a key limitation of the TRL calibration, this method provides a valuable tool for enhancing the accuracy and reliability of VNA measurements for precision metrology applications.
(This article belongs to the Special Issue Mathematical Applications in Electrical Engineering)
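The abstract describes assigning phase-dependent weights to each Line standard. A minimal numerical sketch of that idea follows, assuming a sin²-shaped weight that peaks where a line is a quarter-wavelength long; the paper's bespoke weighting function, line lengths, and calibration math are not reproduced here.

```python
# Illustrative phase-dependent weighting across multiple Line standards.
# The sin^2 weight and the line lengths below are assumptions.
import numpy as np

c0 = 299_792_458.0                     # speed of light, m/s
freqs = np.linspace(1e9, 50e9, 491)    # evaluation grid, 1-50 GHz
line_lengths = [0.030, 0.010, 0.003]   # three hypothetical Line standards, m

def line_phase(f_hz, length_m):
    """Electrical phase delay of an air line, in radians."""
    return 2.0 * np.pi * f_hz * length_m / c0

def weight(phase):
    """Largest where the line is ~90 deg long; vanishes near 0/180 deg,
    where the TRL equations become ill-conditioned."""
    return np.sin(phase) ** 2

# Stand-ins for the per-line banded-TRL calibration coefficients.
cal = np.array([np.exp(-1j * line_phase(freqs, L)) for L in line_lengths])

w = np.array([weight(line_phase(freqs, L)) for L in line_lengths]) + 1e-12
w /= w.sum(axis=0)                     # normalize the weights at each frequency

combined = (w * cal).sum(axis=0)       # smooth, step-free combination
print(combined.shape)                  # (491,)
```

Because the weights fade each Line standard in and out gradually rather than switching bands abruptly, the combined result avoids the step changes the abstract attributes to banded TRL.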
14 pages, 5815 KiB  
Article
The Evaluation and Analysis of the Anti-Corrosion Performance of the Sealing Material B72 for Metal Artifacts Based on Electrochemical Noise
by Hao Xu, Minghao Jia, Pei Hu, Shengyu Liu and Gang Hu
Coatings 2024, 14(9), 1190; https://doi.org/10.3390/coatings14091190 - 14 Sep 2024
Abstract
Paraloid B-72 (B72), as a transparent, colorless polymer material, has good film-forming ability when dissolved in acetone and is widely used as a sealing material for metal artifacts. In order to analyze and evaluate the preservation performance of B72 as a sealing material on the substrate of metal artifacts, a variety of electrochemical methods, mainly electrochemical noise (EN), and scanning electron microscopy (SEM) were applied to evaluate the B72 coating. The results showed that the B72 coating had a good preservation effect at the initial stage, but its poor water resistance led to the loss of its effectiveness after a few days of immersion. Compared with conventional electrochemical methods, electrochemical noise is non-destructive: it does not cause new corrosion on the metal substrate, it characterizes the corrosion rate of the test system well, and the results of its time-domain and frequency-domain analyses correspond well with the polarization resistance and impedance spectra. Electrochemical noise is an effective method for evaluating the anti-corrosion performance of preservation coatings.
(This article belongs to the Special Issue New Trends in Conservation and Restoration of Cultural Heritage)
Figures:
Figure 1: Different ways of coating the B72 film.
Figure 2: (a) The salt bridge arrangement for ENM; (b) the arrangement for EIS and polarization curves.
Figure 3: Time records of (a) the potential and (b) the current of the blank group after de-trending.
Figure 4: Time records of (a) the potential and (b) the current of the B72 group after de-trending.
Figure 5: Polarization curves of (a) the blank group and (b) the B72 group at different immersion times.
Figure 6: (a) σV and σI of the blank group at different immersion times; (b) Rn and Rp of the blank group at different immersion times.
Figure 7: (a) σV and σI of the B72 group at different immersion times; (b) Rn and Rp of the B72 group at different immersion times.
Figure 8: The blank group’s PSDs of (a) the current and (b) the potential, and (c) Zn at different immersion times.
Figure 9: The B72 group’s PSDs of (a) the current and (b) the potential, and (c) Zn at different immersion times.
Figure 10: The comparison of (a) the current PSD and (b) the noise impedance Zn of the two groups at different immersion times (0 d and 6 d).
Figure 11: (a) Impedance modulus and (b) phase of the blank group at different immersion times.
Figure 12: (a) Impedance modulus and (b) phase of the B72 group at different immersion times.
Figure 13: The noise impedance Zn with impedance modulus of (a) the blank group and (b) the B72 group at different immersion times.
Figure 14: (a) The blank group and (b) the B72 group without immersion.
Figure 15: (a) The blank group, (b) the pitting hole, and (c) the wrinkle of the B72 group after immersion.
Figure 16: The scanning area and the elemental distribution of the B72 group.
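The time-domain analysis referenced above (Figures 6 and 7) centers on the noise resistance, conventionally Rn = σV/σI computed from de-trended potential and current records. A short sketch on synthetic data, assuming simple linear drift removal:

```python
# Minimal sketch of the time-domain EN analysis: noise resistance Rn as
# the ratio of potential to current noise standard deviations after
# de-trending. The records below are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0, 600, 0.5)                                  # 20 min at 2 Hz
potential = 1e-4 * t + 2e-5 * rng.standard_normal(t.size)   # V, with drift
current = 5e-9 * t + 4e-9 * rng.standard_normal(t.size)     # A, with drift

def detrend(x, t):
    """Remove a linear drift by least-squares fit."""
    slope, intercept = np.polyfit(t, x, 1)
    return x - (slope * t + intercept)

sigma_v = np.std(detrend(potential, t))
sigma_i = np.std(detrend(current, t))
r_n = sigma_v / sigma_i                                     # noise resistance, ohm
print(f"sigma_V={sigma_v:.2e} V, sigma_I={sigma_i:.2e} A, Rn={r_n:.2e} ohm")
```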
24 pages, 60637 KiB  
Article
SAR-NTV-YOLOv8: A Neural Network Aircraft Detection Method in SAR Images Based on Despeckling Preprocessing
by Xiaomeng Guo and Baoyi Xu
Remote Sens. 2024, 16(18), 3420; https://doi.org/10.3390/rs16183420 - 14 Sep 2024
Abstract
Monitoring aircraft using synthetic aperture radar (SAR) images is a very important task. Given its coherent imaging characteristics, there is a large amount of speckle interference in the image. This phenomenon leads to the scattering information of aircraft targets being masked in SAR images and easily confused with background scattering points. Therefore, automatic detection of aircraft targets in SAR images remains a challenging task. For this task, this paper proposes a framework for speckle reduction preprocessing of SAR images, followed by the use of an improved deep learning method to detect aircraft in SAR images. Firstly, to address the problem of introducing artifacts or excessive smoothing in speckle reduction using total variation (TV) methods, this paper proposes a new nonconvex total variation (NTV) method. This method aims to ensure the effectiveness of speckle reduction while preserving the original scattering information as much as possible. Next, we present a framework for aircraft detection based on You Only Look Once v8 (YOLOv8) for SAR images; the complete framework is therefore called SAR-NTV-YOLOv8. A high-resolution small-target feature head is proposed to mitigate the impact of scale changes and loss of depth feature details on detection accuracy. An efficient multi-scale attention module is also proposed, aimed at effectively establishing short-term and long-term dependencies between feature grouping and multi-scale structures. In addition, the progressive feature pyramid network was chosen to avoid information loss or degradation in multi-level transmission during the bottom-up feature extraction process in the backbone. Sufficient comparative experiments, speckle reduction experiments, and ablation experiments were conducted on the SAR-Aircraft-1.0 and SADD datasets. The results demonstrate the effectiveness of SAR-NTV-YOLOv8, which achieves the most advanced performance compared to other mainstream algorithms.
Figures:
Figure 1: The overall network structure of SAR-NTV-YOLOv8.
Figure 2: Overall structure of the SAR-YOLOv8 aircraft target detection network.
Figure 3: EMA module structure based on cross-spatial learning.
Figure 4: Structure diagram based on adaptive spatial feature fusion.
Figure 5: The adaptive spatial feature fusion process.
Figure 6: Comparison of NTV with some popular methods for the despeckling of real SAR image 1. (a) Original SAR image; (b) MIDAL; (c) TV; (d) SAR-CNN-M-xUnit; (e) NTV.
Figure 7: Comparison of NTV with some popular methods for the despeckling of real SAR image 2. (a) Original SAR image; (b) MIDAL; (c) TV; (d) SAR-CNN-M-xUnit; (e) NTV.
Figure 8: Comparison of detection results based on different speckle reduction methods using SAR-YOLOv8. The green, yellow, and red rectangles represent detected results, missed alarms, and false alarms, respectively. (a) Original SAR image; (b) MIDAL; (c) TV; (d) SAR-CNN-M-xUnit; (e) NTV.
Figure 9: Comparison with other methods. (a) AP50 curves; (b) PR curves.
Figure 10: The detection results of each algorithm in four typical SAR scenarios in the SAR-AIRcraft-1.0 dataset. The green, yellow, and red rectangles represent detected results, missed alarms, and false alarms, respectively. Each row from top to bottom shows the experimental results of FCOS, YOLOv7, LMSD-YOLO, PFF-ADN, SEFEPNet, and SAR-NTV-YOLOv8, respectively.
Figure 11: The detection results of each algorithm in four typical SAR scenarios in the SADD dataset. The green, yellow, and red rectangles represent detected results, missed alarms, and false alarms, respectively. Each row from top to bottom shows the experimental results of FCOS, YOLOv7, LMSD-YOLO, PFF-ADN, SEFEPNet, and SAR-NTV-YOLOv8, respectively.
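As a rough illustration of the nonconvex-TV idea (penalizing gradients with a function that grows slower than the absolute value, so strong edges and scatterers are smoothed less than in convex TV), here is a toy variational denoiser; the log penalty, weights, and optimizer are illustrative assumptions, not the paper's NTV formulation.

```python
# Toy variational despeckling sketch with a nonconvex edge penalty,
# using autograd for the gradient steps. Parameters are invented.
import torch

def nonconvex_tv(u, eps=0.05):
    dx = u[:, 1:] - u[:, :-1]
    dy = u[1:, :] - u[:-1, :]
    # log(1 + s^2) saturates for large s, so strong edges are penalized
    # less than under convex TV, reducing over-smoothing.
    return torch.log1p((dx / eps) ** 2).sum() + torch.log1p((dy / eps) ** 2).sum()

noisy = torch.rand(64, 64)                     # stand-in speckled image
u = noisy.clone().requires_grad_(True)
opt = torch.optim.Adam([u], lr=0.01)

for _ in range(200):
    opt.zero_grad()
    loss = 0.5 * ((u - noisy) ** 2).sum() + 0.002 * nonconvex_tv(u)
    loss.backward()
    opt.step()

denoised = u.detach()
print(float(((denoised - noisy) ** 2).mean()))
```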
13 pages, 1407 KiB  
Article
Evaluation of Spontaneous Overtime Methemoglobin Formation in Post-Mortem Blood Samples from Real Cases in Critical Storage Conditions
by Sara Gariglio, Maria Chiara David, Alessandro Mattia, Francesca Consalvo, Matteo Scopetti, Martina Padovano, Stefano D’Errico, Donato Morena, Paola Frati, Alessandro Santurro and Vittorio Fineschi
Toxics 2024, 12(9), 670; https://doi.org/10.3390/toxics12090670 - 14 Sep 2024
Abstract
Nitrite/nitrate poisoning is an emerging problem, with an ongoing escalation of reported self-administration with suicidal intent in several countries. Nitrite toxicity mainly consists of the interaction of nitrites with hemoglobin (Hb), causing its oxidization to methemoglobin (MetHb). In order to support the correct procedures for the analysis of these cases, this study aims to evaluate spontaneous sample degradation and consequent MetHb formation under the typical storage conditions of a forensic toxicology laboratory. Two different types of samples were used in this study: the first stage of our study consisted of a retrospective analysis of blood samples obtained from judicial autopsies already stored in the toxicology laboratory, collected over four years (2018–2021), while the samples used for the second stage were purposely collected during judicial autopsies. The data obtained by the application of a derivative spectrophotometry method to these samples suggest that there does not seem to be a maximum threshold for MetHb formation within which it is possible to state with a sufficient grade of certainty that the concentration of MetHb found is consistent with ante-mortem formation and is not the result of an artifact due to sample degradation and storage conditions. On the other hand, the results suggest that MetHb formation depends on the time elapsed between sample collection and analysis, so prompt sample processing, performed as soon as the samples are received in the laboratory, is crucial to obtain the maximum reliability and diagnostic value from the data when MetHb quantitation is necessary.
(This article belongs to the Special Issue The Identification of Narcotic and Psychotropic Drugs)
Figures:
Figure 1: Dispersion graph with detailed regression equations for both solution A and solution B.
Figure 2: Distribution of MetHb percentage concentration in blood samples collected over four years (2018–2021) during judicial autopsies and stored in the toxicology laboratory. The years indicate the time of sampling and the start of storage. Boxes are the median concentrations and interquartile range; whiskers are the 5% and 95% percentiles. * Significant difference between the 2018 and 2021 groups (Kruskal–Wallis H test; p < 0.001).
Figure 3: Increase of MetHb concentration over time for (A) fresh samples unfrozen in short three-day analytical periods (days 0, 1, 2, 6, 7, 8, 13, 14, and 15), and (B) fresh samples unfrozen every week.
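For intuition, derivative spectrophotometry reduces to differentiating the absorbance spectrum (which suppresses a slowly varying baseline) and reading a calibration line like the regression equations in Figure 1. The sketch below uses an invented spectrum and invented calibration coefficients; it is not the authors' validated method.

```python
# Hedged sketch of derivative spectrophotometry: differentiate the
# absorbance spectrum, then apply an assumed linear calibration.
# Wavelengths, band shape, and coefficients are invented.
import numpy as np

wavelengths = np.arange(500, 700, 1.0)                    # nm
# Synthetic absorbance: a MetHb-like band near 630 nm on a sloping baseline.
absorbance = 0.002 * wavelengths + 0.3 * np.exp(-((wavelengths - 630) / 12) ** 2)

d1 = np.gradient(absorbance, wavelengths)                 # first derivative dA/dlambda
signal = d1.max() - d1.min()                              # peak-to-trough amplitude

slope, intercept = 250.0, 0.5                             # assumed calibration fit
methb_percent = slope * signal + intercept
print(f"derivative signal = {signal:.4f}, estimated MetHb = {methb_percent:.1f}%")
```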
26 pages, 7340 KiB  
Article
Versatile Video Coding-Post Processing Feature Fusion: A Post-Processing Convolutional Neural Network with Progressive Feature Fusion for Efficient Video Enhancement
by Tanni Das, Xilong Liang and Kiho Choi
Appl. Sci. 2024, 14(18), 8276; https://doi.org/10.3390/app14188276 - 13 Sep 2024
Abstract
Advanced video codecs such as High Efficiency Video Coding/H.265 (HEVC) and Versatile Video Coding/H.266 (VVC) are vital for streaming high-quality online video content, as they compress and transmit data efficiently. However, these codecs can occasionally degrade video quality by adding undesirable artifacts such as blockiness, blurriness, and ringing, which can detract from the viewer’s experience. To ensure a seamless and engaging video experience, it is essential to remove these artifacts, which improves viewer comfort and engagement. In this paper, we propose a deep feature fusion based convolutional neural network (CNN) architecture (VVC-PPFF) as a post-processing approach to further enhance the performance of VVC. The proposed network, VVC-PPFF, harnesses the power of CNNs to enhance decoded frames, significantly improving the coding efficiency of the state-of-the-art VVC video coding standard. By combining deep features from early and later convolution layers, the network learns to extract both low-level and high-level features, resulting in more generalized outputs that adapt to different quantization parameter (QP) values. The proposed VVC-PPFF network achieves outstanding performance, with Bjøntegaard Delta Rate (BD-Rate) improvements of 5.81% and 6.98% for the luma component in random access (RA) and low-delay (LD) configurations, respectively, while also boosting peak signal-to-noise ratio (PSNR).
Figures:
Figure 1: Enhancing video quality with CNN-based post-processing in the conventional VVC coding workflow.
Figure 2: MP4 to YUV conversion and reconstruction using VVenC and VVdeC.
Figure 3: Illustration of the video-to-image conversion process: (a) original videos converted to original images using FFmpeg, and (b) reconstructed videos converted to reconstructed images using FFmpeg.
Figure 4: Illustration of the conversion process from YUV 4:2:0 format to YUV 4:4:4 format before feeding data into the deep learning network.
Figure 5: Illustration of the down-sampling process of the neural network output from YUV 4:4:4 to YUV 4:2:0 format.
Figure 6: Architecture of the proposed CNN-based post-filtering method, integrating multiple feature extractions for enhanced output refinement.
Figure 7: Comparative visualization of (b) reconstructed frames from the anchor VVC and (c) the proposed method for the DaylightRoad2 sequence at QP 42 in the RA configuration, alongside (a) the original uncompressed reference frame.
Figure 8: Comparative visualization of (b) reconstructed frames from the anchor VVC and (c) the proposed method for the FourPeople sequence at QP 42 in the LD configuration, alongside (a) the original uncompressed reference frame.
Figure 9: RD curve performance comparison for five different test sequences in the RA configuration.
Figure 10: RD curve performance comparison for four different test sequences in the LD configuration.
Figure 11: Visual quality comparison of the proposed method with 8 feature extraction blocks for RA and LD scenarios at QP 42: (a) MarketPlace sequence and (b) PartyScene sequence.
Figure 12: Visual quality comparison of the proposed method with 12 feature extraction blocks for RA and LD scenarios at QP 42: (a) RitualDance sequence and (b) Cactus sequence.
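Both this entry and the AV1 paper above quote Bjøntegaard Delta Rate figures. BD-Rate has a standard definition and can be computed from four (bitrate, PSNR) points per codec via a cubic fit in the log-rate domain; the numbers below are invented for illustration, not the paper's measurements.

```python
# Sketch of the standard BD-Rate metric: average bitrate difference (%)
# at equal PSNR, via cubic fits of log-rate as a function of PSNR.
import numpy as np

def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
    lr_a, lr_t = np.log10(rate_anchor), np.log10(rate_test)
    p_a = np.polyfit(psnr_anchor, lr_a, 3)      # log-rate as cubic in PSNR
    p_t = np.polyfit(psnr_test, lr_t, 3)
    lo = max(min(psnr_anchor), min(psnr_test))  # overlapping PSNR interval
    hi = min(max(psnr_anchor), max(psnr_test))
    int_a = np.polyval(np.polyint(p_a), hi) - np.polyval(np.polyint(p_a), lo)
    int_t = np.polyval(np.polyint(p_t), hi) - np.polyval(np.polyint(p_t), lo)
    avg_diff = (int_t - int_a) / (hi - lo)      # mean log10 rate difference
    return (10 ** avg_diff - 1) * 100

anchor = ([1000, 2000, 4000, 8000], [32.0, 34.5, 37.0, 39.5])  # kbps, dB
test   = ([950, 1850, 3700, 7500], [32.2, 34.8, 37.3, 39.8])
print(f"BD-Rate: {bd_rate(anchor[0], anchor[1], test[0], test[1]):+.2f}%")
```

A negative BD-Rate means the test codec needs less bitrate for the same quality, which is why the papers report reductions/improvements as percentages.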
22 pages, 15255 KiB  
Article
Permanent Human Occupation of the Western Tibetan Plateau in the Early Holocene
by Hongliang Lu and Ziyan Li
Land 2024, 13(9), 1484; https://doi.org/10.3390/land13091484 - 13 Sep 2024
Abstract
Archaeological investigations worldwide have focused on when and how humans permanently settled in high-altitude environments. Recent evidence from Xiada Co, Qusongguo, and Dingzhonghuzhuzi in western Tibet, where lithic artifacts and radiocarbon dates with original deposits were first accessed, provides new insights into human activities in this extreme environment during the early Holocene. This paper examines the mobility and land-use patterns of foragers in western Tibet from the perspective of lithic analysis. Assemblages from the three sites suggest homogeneous technologies and raw material use, as well as a potential interaction network of hunter-gatherers within the plateau during the early Holocene. It further argues that the material exponents and travel cost models of site location support permanent occupation of the western Tibetan Plateau in this period.
Figures:
Figure 1: Map of the research area and sites mentioned in the text. (a) Location of archaeological sites on the Tibetan Plateau and in its adjacent areas: 1. Xiada Co (4350 masl); 2. Dingzhonghuzhuzi (4285 masl); 3. Qusongguo (4305 masl); 4. Nwya Devu (4600 masl); 5. Chusang (4230 masl); 6. Tshem gzhung kha thog (4100 masl); 7. Jiangxigou 1&2, 93-13 (3330 masl); 8. 151 (3397 masl); 9. Heimahe 1 (3202 masl), Heimahe 3 (3210 masl); 10. Layihai (2600 masl); 11. Baishiya (3280 masl); 12. Oshhona (4100 masl); 13. general location of Beshkent, Javan, Mullo-Nijaz, and Makoni-Mor; 14. Dzamathang (3101 masl). (b) Locations of the three archaeological sites studied in this paper.
Figure 2: Calibrated dates for each site. The dates of the 14C samples were calibrated using OxCal 4.4 [22] and the IntCal20 curve [23].
Figure 3: (a) The landscape of Xiada Co. The arrow indicates the test pit location, and the dashed line outlines the surface collection area. (b) Stratigraphy of the test pit at the Xiada Co site.
Figure 4: (a) The landscape of Qusongguo. The arrow indicates the test pit location, and the dashed line outlines the surface collection area. (b) The test pit and the near-circular concentration of cobbles (hearth) at the Qusongguo site. (c) Stratigraphy of the test pit.
Figure 5: The landscape of Dingzhonghuzhuzi.
Figure 6: Plot of length and width of complete flakes from the three sites.
Figure 7: Xiada Co: examples of the core-flake assemblage. (1, 3) Double-platform cores; (2) single-platform core; (4–6) complete flakes. Nos. 1 and 4 were excavated from the test pit.
Figure 8: Xiada Co: examples of the microblade assemblage. (1) Preform of a boat-shaped microblade core; (2–4) wedge-shaped microblade cores; (5) overpassed flake; (6–13) microblades. Nos. 9–13 were excavated from the test pit.
Figure 9: Xiada Co: examples of retouched pieces. (1, 2) Endscrapers made on thick flakes with steep retouching; (3, 4) circular endscrapers; (5) single-edged sidescraper; (6) convergent sidescraper; (7) notched piece; (8) bifacial piece. No. 1 was excavated from the test pit. Arrows indicate striking directions and the presence/absence of platforms, same below.
Figure 10: Qusongguo: examples of the core-flake assemblage. (1, 2) Multi-platform cores; (3) single-platform core; (4) double-platform core; (5) complete flake. All cores were collected from the surface.
Figure 11: Qusongguo: examples of the microblade assemblage. (1, 5) Wedge-shaped microblade cores; (3, 6) microblade cores or endscrapers; (4, 9) core tablets; (2, 7, 8, 9) overpassed flakes. Nos. 3–4 were excavated from the test pit; No. 2 was excavated from the feature.
Figure 12: Qusongguo: examples of retouched pieces. (1) Convergent sidescraper; (2) bifacial piece; (3) endscraper; (4) double-edged sidescraper; (5) notched piece. All tools were collected from the surface.
Figure 13: Dingzhonghuzhuzi: examples of the core-flake assemblage. (1, 2) Multi-platform cores; (3) single-platform core; (4–6) complete flakes.
Figure 14: Dingzhonghuzhuzi: examples of the microblade assemblage. (1, 5) Wedge-shaped microblade cores; (2, 3) microblade cores (shapes undetermined); (4, 5) blanks of microblade cores.
Figure 15: Dingzhonghuzhuzi: examples of retouched pieces. (1, 2) Endscrapers; (3) notched piece; (4) bifacial piece; (5, 6) sidescrapers.
Figure 16: Frequencies of the types of raw material found in the different assemblages at the three sites.
Figure 17: (a) Simulated least-cost path from the Xiada Co site to the nearest random point in the adjacent lowland area; (b) simulated travel time from the Xiada Co site to the seven potential points; (c) straight-line distance to the contemporaneous sites with similar technology in adjacent lowlands in Central Asia.
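The travel-cost modelling mentioned in the abstract and in Figure 17 typically rests on a slope-dependent walking-speed function. A common choice is Tobler's hiking function, sketched below; the paper's exact cost surface and parameters are not given here, so treat this as a generic illustration.

```python
# Generic travel-cost sketch using Tobler's hiking function (1993),
# a standard choice for least-cost path analysis; the trip numbers
# below are invented.
import numpy as np

def tobler_speed(slope):
    """Walking speed (km/h) as a function of terrain slope (rise/run)."""
    return 6.0 * np.exp(-3.5 * np.abs(slope + 0.05))

def travel_time_hours(distance_km, elevation_change_km):
    slope = elevation_change_km / distance_km
    return distance_km / tobler_speed(slope)

# Illustrative trip: 40 km horizontal distance with a 1.2 km descent.
print(f"{travel_time_hours(40.0, -1.2):.1f} h")
```

In a full least-cost analysis this per-cell cost is accumulated over a digital elevation model to find the cheapest route, as in the simulated paths of Figure 17.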
15 pages, 8206 KiB  
Article
Fundus Image Deep Learning Study to Explore the Association of Retinal Morphology with Age-Related Macular Degeneration Polygenic Risk Score
by Adam Sendecki, Daniel Ledwoń, Aleksandra Tuszy, Julia Nycz, Anna Wąsowska, Anna Boguszewska-Chachulska, Andrzej W. Mitas, Edward Wylęgała and Sławomir Teper
Biomedicines 2024, 12(9), 2092; https://doi.org/10.3390/biomedicines12092092 - 13 Sep 2024
Abstract
Background: Age-related macular degeneration (AMD) is a complex eye disorder with an environmental and genetic origin, affecting millions worldwide. The study aims to explore the association between retinal morphology and the polygenic risk score (PRS) for AMD using fundus images and deep learning techniques. Methods: The study used and pre-processed 23,654 fundus images from 332 subjects (235 patients with AMD and 97 controls), ultimately selecting 558 high-quality images for analysis. The fine-tuned DenseNet121 deep learning model was employed to estimate PRS from single fundus images. After training, deep features were extracted, fused, and used in machine learning regression models to estimate PRS for each subject. The Grad-CAM technique was applied to examine the relationship between areas of increased model activity and the retina’s morphological features specific to AMD. Results: Using the hybrid approach improved the results obtained by DenseNet121 in 5-fold cross-validation. The final evaluation metrics for all predictions from the best model from each fold are MAE = 0.74, MSE = 0.85, RMSE = 0.92, R2 = 0.18, MAPE = 2.41. Grad-CAM heatmap evaluation showed that the model decisions rely on lesion area, focusing mostly on the presence of drusen. The proposed approach was also shown to be sensitive to artifacts present in the image. Conclusions: The findings indicate an association between fundus images and AMD PRS, suggesting that deep learning models may effectively estimate genetic risk for AMD from retinal images, potentially aiding in early detection and personalized treatment strategies.
(This article belongs to the Special Issue Emerging Issues in Retinal Degeneration)
Figures:
Figure 1: Flow chart of fundus image quality assessment and selection.
Figure 2: Flow chart of training and validation procedures in the hybrid model for PRS estimation based on fundus images.
Figure 3: Scatter plots for the results of all folds’ test sets comparing true and estimated PRS values and the distributions in the control and AMD groups (a), and a linear regression model fit (b).
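The evaluation metrics quoted above (MAE, MSE, RMSE, R2, MAPE) are standard regression measures; for reference, they reduce to the following one-liners on toy data (the PRS values below are invented, not the study's).

```python
# Standard regression metrics on toy predictions.
import numpy as np

y_true = np.array([0.8, -0.2, 1.5, 0.3, -1.1])    # toy PRS targets
y_pred = np.array([0.6, 0.1, 1.2, 0.5, -0.7])     # toy model estimates

err = y_pred - y_true
mae = np.mean(np.abs(err))
mse = np.mean(err ** 2)
rmse = np.sqrt(mse)
r2 = 1 - np.sum(err ** 2) / np.sum((y_true - y_true.mean()) ** 2)
mape = np.mean(np.abs(err / y_true)) * 100        # unstable near zero targets

print(f"MAE={mae:.2f} MSE={mse:.2f} RMSE={rmse:.2f} R2={r2:.2f} MAPE={mape:.1f}%")
```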
11 pages, 1496 KiB  
Article
An Improved Retinex-Based Approach Based on Attention Mechanisms for Low-Light Image Enhancement
by Shan Jiang, Yingshan Shi, Yingchun Zhang and Yulin Zhang
Electronics 2024, 13(18), 3645; https://doi.org/10.3390/electronics13183645 - 13 Sep 2024
Abstract
Captured images often suffer from issues like color distortion, detail loss, and significant noise. Therefore, it is necessary to improve image quality for reliable threat detection. Balancing brightness enhancement with the preservation of natural colors and details is particularly challenging in low-light image enhancement. To address these issues, this paper proposes an unsupervised low-light image enhancement approach using a U-net neural network with Retinex theory and a Convolutional Block Attention Module (CBAM). This method leverages Retinex-based decomposition to separate and enhance the reflectance map, ensuring visibility and contrast without introducing artifacts. A local adaptive enhancement function improves the brightness of the reflection map, while the designed loss function addresses illumination smoothness, brightness enhancement, color restoration, and denoising. Experiments validate the effectiveness of our method, revealing improved image brightness, reduced color deviation, and superior color restoration compared to leading approaches.
(This article belongs to the Special Issue Network Security Management in Heterogeneous Networks)
Figures:
Figure 1: U-net network with attention mechanisms.
Figure 2: CBAM module.
Figure 3: Comparison of different methods.
Figure 4: Comparison of details of different methods.
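For readers unfamiliar with Retinex theory: it models an image as reflectance multiplied by illumination, so estimating and removing a smooth illumination map recovers a contrast-enhanced reflectance. The classical hand-crafted single-scale version is sketched below for intuition; the paper instead learns the decomposition with a U-net, so this is not their method.

```python
# Classical single-scale Retinex sketch: estimate illumination with a
# Gaussian blur, take the reflectance in the log domain.
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(image, sigma=30.0):
    img = image.astype(np.float64) + 1.0          # avoid log(0)
    illumination = gaussian_filter(img, sigma)    # smooth illumination map
    reflectance = np.log(img) - np.log(illumination)
    r = reflectance - reflectance.min()           # rescale to [0, 1] for display
    return r / (r.max() + 1e-12)

low_light = (np.random.rand(128, 128) * 40).astype(np.uint8)  # dark test image
enhanced = single_scale_retinex(low_light)
print(enhanced.min(), enhanced.max())
```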
9 pages, 1206 KiB  
Article
When Undergoing Thoracic CT (Computerized Tomography) Angiographies for Congenital Heart Diseases, Is It Possible to Identify Coronary Artery Anomalies?
by Cigdem Uner, Ali Osman Gulmez, Hasibe Gokce Cinar, Hasan Bulut, Ozkan Kaya, Fatma Dilek Gokharman and Sonay Aydin
Diagnostics 2024, 14(18), 2022; https://doi.org/10.3390/diagnostics14182022 - 12 Sep 2024
Abstract
Introduction and Objective: The aim of this study was to evaluate the coronary arteries in patients undergoing thoracic CT angiography for congenital heart disease, to determine the frequency of detection of coronary artery anomalies in congenital heart diseases, and to determine which type of anomaly is more common in which disease. Materials and Methods: In our investigation, a 128-detector multidetector computed tomography machine was used to perform thorax CT angiography. The acquisition parameters were set to 80–100 kVp based on the patient’s age, with the mAs determined automatically by the device based on the patient’s weight. During the examination, an intravenous (IV) nonionic contrast material dose of 1–1.5 mL/kg was employed. An automated injector was used to inject contrast material at a rate of 1.5–2 mL/s. Sections of 2.5 mm were acquired in the axial plane and reconstructed at a section thickness of 0.625 mm. Results: Between October 2022 and May 2024, 132 patients who were diagnosed with congenital heart disease by echocardiography and underwent thorax CT angiography in our department were retrospectively evaluated. Of the evaluated patients, 32 were excluded based on criteria such as age younger than 3 months or older than 18 years, insufficient contrast enhancement in imaging, thin vascular structures, and motion and contrast artifacts; the remaining 100 patients were included in this study. The age range of these patients was 3 months to 18 years (mean age 4.4 years). Conclusion: In congenital heart diseases, attention to the coronary arteries on thoracic CT angiography examination in the presence of possible coronary anomalies may provide useful information.
(This article belongs to the Special Issue Advances in Cardiovascular Diseases: Diagnosis and Management)
Figures:
Figure 1: (a–d) Examples of several 3D images showing normal anatomical structure and the absence of anomalies.
Figure 2: (a,b) Operated TGA (arterial switch), 4-month-old male patient: (a) thorax CT angiography; (b) in MIP images, the Cx artery (red arrows) separates from the right coronary artery and shows a retroaortic course.
Figure 3: (a,b) Operated TGA (Rastelli), 11-year-old male patient. In thorax CT angiography MIP images, the left coronary artery (red arrows) separates from the right coronary artery and shows an interarterial course between the main pulmonary artery and the aorta.
Figure 4: (a,b) VSD, pulmonary artery stenosis, and single-ventricle morphology (Fontan), 11-year-old male patient: (a) thorax CT angiography; (b) in MIP images, the Cx artery (red arrow) separates from the right coronary sinus and shows a retroaortic course. The right coronary artery branches off from the right sinus of Valsalva just inferiorly (not shown in the picture).
Figure 5: Operated aortic stenosis and pulmonary stenosis (ROSS); in the MIP image of thorax angiography, the left main coronary artery (red arrow) separates from the right coronary sinus.
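The acquisition protocol above reduces to simple weight- and rate-based arithmetic. A toy helper restating it, using the dose and rate ranges quoted in the abstract with midpoints chosen arbitrarily (this is an illustration, not clinical guidance):

```python
# Toy contrast-plan arithmetic: dose 1-1.5 mL/kg, injection 1.5-2 mL/s.
def contrast_plan(weight_kg, dose_ml_per_kg=1.2, rate_ml_per_s=2.0):
    volume_ml = weight_kg * dose_ml_per_kg
    injection_s = volume_ml / rate_ml_per_s
    return volume_ml, injection_s

volume, seconds = contrast_plan(weight_kg=15)   # e.g., a 15 kg child
print(f"{volume:.0f} mL over {seconds:.0f} s")
```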
29 pages, 7566 KiB  
Article
Construction of Cultural Heritage Knowledge Graph Based on Graph Attention Neural Network
by Yi Wang, Jun Liu, Weiwei Wang, Jian Chen, Xiaoyan Yang, Lijuan Sang, Zhiqiang Wen and Qizhao Peng
Appl. Sci. 2024, 14(18), 8231; https://doi.org/10.3390/app14188231 - 12 Sep 2024
Abstract
To address the challenges posed by the vast and complex knowledge information in cultural heritage design, such as low knowledge retrieval efficiency and limited visualization, this study proposes a method for knowledge extraction and knowledge graph construction based on graph attention neural networks (GAT). Using Tang Dynasty gold and silver artifacts as samples, we establish a joint knowledge extraction model based on GAT. The model employs the BERT pretraining model to encode collected textual knowledge data, conducts sentence dependency analysis, and utilizes GAT to allocate weights among entities, thereby enhancing the identification of target entities and their relationships. Comparative experiments on public datasets demonstrate that this model significantly outperforms baseline models in extraction effectiveness. Finally, the proposed method is applied to the construction of a knowledge graph for Tang Dynasty gold and silver artifacts. Taking the Gilded Musician Pattern Silver Cup as an example, this method provides designers with a visualized and interconnected knowledge collection structure.
(This article belongs to the Special Issue Intelligent Interaction in Cultural Heritage)
Figures:
Figure 1: Unified data modeling for knowledge information based on knowledge graphs.
Figure 2: Entity-relationship joint extraction model based on segmental attention fusion mechanism.
Figure 3: Pooling attention mechanism.
Figure 4: Segmental attention fusion mechanism.
Figure 5: Training accuracy trend over 19 epochs.
Figure 6: Architecture diagram of the unified data model for knowledge information on Tang Dynasty gold and silver artifacts.
Figure 7: Framework of the Tang Dynasty gold and silver artifacts knowledge retrieval system.
Figure 8: Knowledge upload interface.
Figure 9: View knowledge upload status.
Figure 10: Related entities of “Gilded Musician Pattern Silver Cup”.
Figure 11: Knowledge comparison.
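The weight allocation among entities that the abstract attributes to GAT follows the standard graph-attention update (Veličković et al., 2018): score each edge with a LeakyReLU of a learned linear form, softmax over neighbours, then aggregate. A single-head sketch on a toy graph, not the paper's model:

```python
# Minimal single-head GAT layer; dimensions and the toy graph are
# illustrative assumptions.
import torch
import torch.nn.functional as F

def gat_layer(h, adj, W, a):
    """h: (N, F_in) node features; adj: (N, N) adjacency with self-loops;
    W: (F_in, F_out) projection; a: (2*F_out,) attention vector."""
    z = h @ W                                        # (N, F_out)
    n = z.size(0)
    pairs = torch.cat([z.repeat_interleave(n, 0),    # z_i for every pair
                       z.repeat(n, 1)], dim=1)       # z_j for every pair
    e = F.leaky_relu(pairs @ a, 0.2).view(n, n)      # raw attention scores
    e = e.masked_fill(adj == 0, float("-inf"))       # keep only real edges
    alpha = torch.softmax(e, dim=1)                  # weights over neighbours
    return alpha @ z                                 # aggregated features

h = torch.rand(4, 8)                                 # 4 entities, 8 features
adj = torch.eye(4) + torch.tensor([[0, 1, 1, 0],
                                   [1, 0, 0, 1],
                                   [1, 0, 0, 0],
                                   [0, 1, 0, 0.]])
out = gat_layer(h, adj, torch.rand(8, 16), torch.rand(32))
print(out.shape)                                     # torch.Size([4, 16])
```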
33 pages, 10615 KiB  
Review
Large-Format Material Extrusion Additive Manufacturing for Circular Economy Practices: A Focus on Product Applications with Materials from Recycled Plastics and Biomass Waste
by Alessia Romani and Marinella Levi
Sustainability 2024, 16(18), 7966; https://doi.org/10.3390/su16187966 - 12 Sep 2024
Abstract
Additive Manufacturing has significantly impacted circular design, expanding the opportunities for designing new artifacts following circular economy principles, e.g., using secondary raw materials. Small-format 3D printing has reached a broader audience of stakeholders, including end-users, when dealing with filament feedstocks from plastic and biomass waste. However, using large-format extrusion-based additive manufacturing with recycled feedstocks remains challenging, resulting in limited applications and awareness among practitioners. This work analyzes the most relevant product applications using large-format material extrusion additive manufacturing with recycled plastics and biomass waste feedstocks. It reviews the case studies from 2010 to mid-2024 dealing with new materials and applications from academic research and practical contexts. The applications were analyzed to outline the current situation and trends for large-format 3D printing with recycled plastics- and biomass-based feedstocks, focusing on secondary raw materials, manufacturability, impact on product aesthetics, application fields, and products. Despite more consolidated sectors, new technical applications using granulate feedstock systems, e.g., transportation, are emerging. Academic research studies new secondary raw materials and distributed practices through large-format 3D printing. Practitioners are exploiting different approaches to design products, optimizing building times, costs, and material usage through different manufacturing strategies, strengthening the product identity by highlighting circularity. Spreading specific expertise could enlarge the range of application sectors and products, as well as foster real-world collaborations and scaling-up. Thanks to this work, new synergies between the research and practical contexts can be encouraged for new circular economy practices, detecting and exploring new scraps, material categories, or Additive Manufacturing processes in the future.
(This article belongs to the Section Waste and Recycling)
Figures:
Graphical abstract.
Figure 1: The clusters used for the product analysis: the two form-giving approaches (formal analysis), which means (a) primitive and free forms; and (b) additive, integrative, and integral forms adapted from Ferraris et al. [55]; four approaches to product shape and their manufacturability with MEX LFAM (manufacturability analysis), which means (c) main external shape; (d) 3D printing building mode; (e) surface appearance and finishing; and (f) planar or nonplanar slicing.
Figure 2: General analysis of the research (light blue) and practical contexts (dark blue) according to the selected case studies (products) per year for the reviewed period (from 2010 to mid-2024).
Figure 3: Scraps, secondary raw material, and AM process analysis of the research context: (a) scraps and/or byproduct category; (b) scrap and/or byproduct type; (c) secondary raw material category; (d) secondary raw material feedstock type, with a focus on bio-based and recycled feedstocks; (e) MEX LFAM process; and (f) AM system.
Figure 4: Application analysis of the research context: (a) study of new applications; (b) application fields; and (c) product types.
Figure 5: Examples of demo products studied in the experimentations from the research context: (a) kayak paddles made of recycled ABS [43]; (b) “Gathering Chandelier” made in recycled PET [75]; and (c) coffee table made in virgin PLA filled with spent coffee grounds [67].
Figure 6: Scraps, secondary raw material, and AM process analysis of the practical context: (a) scraps and/or byproduct category; (b) scrap and/or byproduct type; (c) secondary raw material category; (d) secondary raw material feedstock type, with a focus on bio-based and recycled feedstocks; (e) MEX LFAM process; and (f) AM system.
Figure 7: Application analysis of the practical context: (a) application fields; and (b) product types.
Figure 8: Examples of products retrieved from the practice context, i.e., design practice and industrial activities: (a) “Chubby” chairs by Dirk Van Der Kooij (reprinted with the permission from Studio Kooij) [81] and (b) “Velaskello” stool by SuperForma (reprinted with the permission from SuperForma S.r.l.) [87].
Figure 9: Product analysis of the practical context: formal analysis on (a) primitive vs. freeform shape form-giving approach, and (b) additive vs. integral shape form-giving approach; manufacturability analysis on (c) main external shape, (d) 3D printing building mode, (e) surface appearance and finishing, and (f) planar or nonplanar slicing.
Figure A1: PRISMA flow diagram showing the selection process of the literature review (academic research context) starting from the records identified through the query strings defined in this work.
Figure A2: PRISMA flow diagram showing the selection process of the companies/studios (practice context, i.e., design practice and industrial activities) starting from the records identified through the query strings defined in this work (websites in the picture: Dezeen.com; Designboom.com; Materialdistrict.com; 3Dprinting.com; All3dp.com; 3Dprintingindustry.com, accessed on 8 September 2024).
17 pages, 1647 KiB  
Article
Advanced Necklace for Real-Time PPG Monitoring in Drivers
by Anna Lo Grasso, Pamela Zontone, Roberto Rinaldo and Antonio Affanni
Sensors 2024, 24(18), 5908; https://doi.org/10.3390/s24185908 - 12 Sep 2024
Abstract
Monitoring heart rate (HR) through photoplethysmography (PPG) signals is a challenging task due to the complexities involved, even during routine daily activities. These signals can indeed be heavily contaminated by significant motion artifacts resulting from the subjects’ movements, which can lead to inaccurate heart rate estimations. In this paper, our objective is to present an innovative necklace sensor that employs low-computational-cost algorithms for heart rate estimation in individuals performing non-abrupt movements, specifically drivers. Our solution facilitates the acquisition of signals with limited motion artifacts and provides acceptable heart rate estimations at a low computational cost. More specifically, we propose a wearable sensor necklace for assessing a driver’s well-being by providing information about the driver’s physiological condition and potential stress indicators through HR data. This innovative necklace enables real-time HR monitoring within a sleek and ergonomic design, facilitating seamless and continuous data gathering while driving. Prioritizing user comfort, the necklace’s design ensures ease of wear, allowing for extended use without disrupting driving activities. The collected physiological data can be transmitted wirelessly to a mobile application for instant analysis and visualization. To evaluate the sensor’s performance, two algorithms for estimating the HR from PPG signals are implemented in a microcontroller: a modified version of the mountaineer’s algorithm and a sliding discrete Fourier transform. The goal of these algorithms is to detect meaningful peaks corresponding to each heartbeat by using signal processing techniques to remove noise and motion artifacts. The developed design is validated through experiments conducted in a simulated driving environment in our lab, during which drivers wore the sensor necklace. These experiments demonstrate the reliability of the wearable sensor necklace in capturing dynamic changes in HR levels associated with driving-induced stress. The algorithms integrated into the sensor are optimized for low computational cost and effectively remove motion artifacts that occur when users move their heads.
Figures:
Figure 1: Block diagram of the developed sensor.
Figure 2: PCB realization of the sensor: (a) top layer and (b) bottom layer. The sensing element, which comes into contact with the skin, is located exclusively on the bottom layer. (c) The necklace in its 3D-printed case. The elastic band can be adjusted using a buckle.
Figure 3: (a) Example of PPG and ECG waveform signals and their characteristic parameters; (b) example of PPG signals acquired by the necklace sensor from the R and IR channels.
Figure 4: Raw data acquired from the R and IR channels. The peaks corresponding to heartbeats have a much smaller amplitude compared to the overall signal amplitude, as do the baseline and motion artifacts.
Figure 5: Processed data for peak detection from the R channel. Blue line: band-passed R data; red line: envelope detector; black markers: detected peaks.
Figure 6: Flowchart of the algorithm implemented to extract the heart rate.
Figure 7: Comparison between the reference heart rate (obtained from a simultaneous ECG) and the estimation results for Recording 5.
Figure 8: A participant in the driving test scenario using our simulator.
Figure 9: Comparison of the reference heart rate (obtained from a simultaneous ECG) and the estimation results for Recording 12.
Figure 10: Comparison of the reference heart rate (obtained from a simultaneous ECG) and the estimation results for Recording 2.
Figure 11: Comparison of the reference heart rate (obtained from a simultaneous ECG) and the estimation results for Recording 9.
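Of the two HR estimators mentioned, the sliding discrete Fourier transform is easy to sketch: one frequency bin is updated per incoming sample in O(1), which is what makes it suitable for a microcontroller. Window length, sampling rate, and the tracked bin below are assumptions for illustration, not the firmware's values.

```python
# Sliding DFT sketch: recursive per-sample update of a single bin,
# X_k <- (X_k + x_new - x_old) * exp(j*2*pi*k/N).
import numpy as np

fs = 50.0          # assumed sampling rate, Hz
N = 256            # sliding window length
k = 5              # tracked bin -> fs*k/N ~ 0.98 Hz ~ 59 bpm

def sliding_dft(samples, n=N, bin_k=k):
    twiddle = np.exp(2j * np.pi * bin_k / n)
    window = np.zeros(n)           # circular buffer of the last n samples
    xk = 0.0 + 0.0j
    out = []
    for i, x in enumerate(samples):
        old = window[i % n]
        window[i % n] = x
        xk = (xk + x - old) * twiddle
        out.append(abs(xk))        # bin magnitude after each sample
    return np.array(out)

t = np.arange(0, 20, 1 / fs)
ppg = np.sin(2 * np.pi * 1.0 * t)  # synthetic 60 bpm pulse wave
print(sliding_dft(ppg)[-1])        # strong response at the ~1 Hz bin
```

Scanning a handful of bins in the physiological band and picking the strongest gives an HR estimate at a tiny fraction of the cost of a full FFT per update.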
22 pages, 30010 KiB  
Article
AmazingFT: A Transformer and GAN-Based Framework for Realistic Face Swapping
by Li Liu, Dingli Tong, Wenhua Shao and Zhiqiang Zeng
Electronics 2024, 13(18), 3589; https://doi.org/10.3390/electronics13183589 - 10 Sep 2024
Abstract
Current face-swapping methods often suffer from issues of detail blurriness and artifacts in generating high-quality images due to the inherent complexity in detail processing and feature mapping. To overcome these challenges, this paper introduces the Amazing Face Transformer (AmazingFT), an advanced face-swapping model built upon Generative Adversarial Networks (GANs) and Transformers. The model is composed of three key modules: the Face Parsing Module, which segments facial regions and generates semantic masks; the Amazing Face Feature Transformation Module (ATM), which leverages Transformers to extract and transform features from both source and target faces; and the Amazing Face Generation Module (AGM), which utilizes GANs to produce high-quality swapped face images. Experimental results demonstrate that AmazingFT outperforms existing state-of-the-art (SOTA) methods, significantly enhancing detail fidelity and occlusion handling, ultimately achieving movie-grade face-swapping results.
(This article belongs to the Section Artificial Intelligence)
Figures:
Figure 1: The face-swapping results generated by AmazingFT. The swapped face results replace the face in the target image with the face from the source image.
Figure 2: Overall architecture of our network.
Figure 3: Amazing Generation Module network framework.
Figure 4: Comparison experiments of AmazingFT with other SOTA methods.
Figure 5: Face-swapping results of AmazingFT with different frame counts in videos.
Figure 6: Comparison experiments of models with and without GAN and Transformer.
Figure 7: Detail rendering of eyes and mouth.
Figure 8: Comparison experiments of the same source face with different target faces.
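The mask-driven pipeline described above ends in a compositing step: blending the generated face into the target frame using the parsing mask. A toy sketch with random stand-in arrays (in the real system the mask and face are network outputs, and the blending is learned rather than hand-crafted):

```python
# Toy mask-based compositing: composite = mask*generated + (1-mask)*target.
import numpy as np
from scipy.ndimage import uniform_filter

h, w = 256, 256
target = np.random.rand(h, w, 3)       # stand-in target frame
generated = np.random.rand(h, w, 3)    # stand-in swapped-face output
mask = np.zeros((h, w, 1))
mask[64:192, 64:192] = 1.0             # face region from a parsing mask

# Soft-edge the mask so the seam is less visible (simple box feathering).
mask = uniform_filter(mask, size=(15, 15, 1))

composite = mask * generated + (1 - mask) * target
print(composite.shape)                 # (256, 256, 3)
```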