Search Results (6,863)

Search Parameters:
Keywords = uncertainty analysis

14 pages, 1708 KiB  
Article
Time-Synchronized Convergence Control for n-DOF Robotic Manipulators with System Uncertainties
by Duansong Wang, Gang Zhang, Tan Zhang, Jinzhong Zhang and Rui Chen
Sensors 2024, 24(18), 5986; https://doi.org/10.3390/s24185986 - 15 Sep 2024
Abstract
A time-synchronized (TS) convergence control method for robotic manipulators is proposed. In contrast to finite-time control, a notion of time-synchronized convergence is introduced based on the ratio persistence property, which ensures that all system components converge simultaneously in finite time. First, a robust disturbance observer is constructed to be compatible with the time-synchronized control framework and to precisely estimate system uncertainties. We then design a (finite) time-synchronized controller that drives all states of the robotic manipulator to an equilibrium point at the same instant, irrespective of initial conditions. Stability analysis establishes the feasibility of the proposed TS control method. Finally, simulations are performed on a two-link rehabilitation robotic system, and the comparison results indicate its superiority. Full article
(This article belongs to the Section Sensors and Robotics)
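The time-synchronized convergence property described in the abstract can be illustrated with a minimal numerical sketch (a toy first-order system, not the paper's observer-based manipulator controller): a norm-normalized feedback u = -c x/||x|| contracts the state along a fixed direction, so all components of the error vector reach zero at the same finite time T = ||x(0)||/c, whatever their initial ratio.

```python
import numpy as np

def simulate_ts(x0, c=1.0, dt=1e-4, t_end=3.0):
    """Euler simulation of dx/dt = -c * x / ||x||.

    The norm-normalized feedback shrinks x along a fixed direction
    (the 'ratio persistence' property), so every component reaches
    the origin at the same instant T = ||x0|| / c."""
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    for _ in range(int(t_end / dt)):
        n = np.linalg.norm(x)
        if n <= c * dt:            # within one step of the origin: stop
            x = np.zeros_like(x)
        else:
            x = x + dt * (-c * x / n)
        traj.append(x.copy())
    return np.array(traj)

# Two "joint errors" with different initial values converge together
# at T = sqrt(2^2 + 1^2) / c, not one after the other.
traj = simulate_ts([2.0, -1.0], c=1.0)
```

Contrast this with componentwise finite-time laws such as dx_i/dt = -c sign(x_i), under which the smaller component reaches zero first and synchronization is lost.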
Show Figures

Figure 1: The difference between TS control and FTC. (a) TS control performance. (b) FTC performance.
Figure 2: The TS control scheme.
Figure 3: Model of the knee rehabilitation robotic manipulator.
Figure 4: The tracking performance of joint 1 under the effect of TS control.
Figure 5: The tracking performance of joint 2 under the effect of TS control.
Figure 6: Error convergence curves of the two joints.
Figure 7: Sliding surface in (31).
Figure 8: Control input of the TS method.
Figure 9: The real and estimated values of the observer.
Figure 10: Tracking performance of joint 1 with FTC.
Figure 11: Tracking performance of joint 2 with FTC.
Figure 12: Error convergence curve with FTC.
21 pages, 10053 KiB  
Article
Sensitivity Analysis of Fatigue Life for Cracked Carbon-Fiber Structures Based on Surrogate Sampling and Kriging Model under Distribution Parameter Uncertainty
by Haodong Liu, Zheng Liu, Liang Tu, Jinlong Liang and Yuhao Zhang
Appl. Sci. 2024, 14(18), 8313; https://doi.org/10.3390/app14188313 - 15 Sep 2024
Viewed by 110
Abstract
The quality and reliability of wind turbine blades, as core components of wind turbines, are crucial for the operational safety of the entire system. Carbon fiber is the primary material for wind turbine blades. However, during the manufacturing process, manual intervention inevitably introduces minor defects, which can lead to crack propagation under complex working conditions. Due to limited understanding and measurement capabilities of the input variables of structural systems, the distribution parameters of these variables often exhibit uncertainty. Therefore, it is essential to assess the impact of distribution parameter uncertainty on the fatigue performance of carbon-fiber structures with initial cracks and quickly identify the key distribution parameters affecting their reliability through global sensitivity analysis. This paper proposes a sensitivity analysis method based on surrogate sampling and the Kriging model to address the computational challenges and engineering application difficulties in distribution parameter sensitivity analysis. First, fatigue tests were conducted on carbon-fiber structures with initial cracks to study the dispersion of their fatigue life under different initial crack lengths. Next, based on the Hashin fatigue failure criterion, a simulation analysis method for the fatigue cumulative damage life of cracked carbon-fiber structures was proposed. By introducing uncertainty parameters into the simulation model, a training sample set was obtained, and a Kriging model describing the relationship between distribution parameters and fatigue life was established. Finally, an efficient input variable sampling method using the surrogate sampling probability density function was introduced, and a Sobol sensitivity analysis method based on surrogate sampling and the Kriging model was proposed. 
The results show that this method significantly reduces the computational burden of distribution parameter sensitivity analysis while ensuring computational accuracy. Full article
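The final step of the pipeline above can be sketched compactly: fit a Kriging (Gaussian-process) surrogate to a handful of model runs, then estimate first-order Sobol indices on the cheap surrogate with a pick-freeze Monte Carlo estimator. The analytic test function, RBF kernel, and sample sizes below are illustrative assumptions, not the paper's finite-element model or exact estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the expensive fatigue simulation: a cheap
# analytic response of two input variables (not the paper's FE model).
def model(X):
    return 3.0 * X[:, 0] + 1.0 * X[:, 1]

# --- minimal Kriging (Gaussian-process) surrogate with an RBF kernel ---
def rbf(A, B, ell=0.3):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

Xtr = rng.uniform(0.0, 1.0, (50, 2))
ytr = model(Xtr)
K = rbf(Xtr, Xtr) + 1e-6 * np.eye(len(Xtr))   # nugget for conditioning
alpha = np.linalg.solve(K, ytr)

def surrogate(X):
    return rbf(X, Xtr) @ alpha

# --- first-order Sobol indices on the surrogate via pick-freeze ---
N = 4000
A = rng.uniform(0.0, 1.0, (N, 2))
B = rng.uniform(0.0, 1.0, (N, 2))
fA = surrogate(A)
S = []
for i in range(2):
    Bi = B.copy()
    Bi[:, i] = A[:, i]                 # "freeze" input i at A's values
    S.append(np.cov(fA, surrogate(Bi))[0, 1] / fA.var())
# For 3*x1 + x2 with uniform inputs, the analytic indices are
# S1 = 9/10 and S2 = 1/10.
```

All N Sobol evaluations hit only the surrogate, which is the source of the computational saving the abstract reports.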
Show Figures

Figure 1: Manufacturing Process of Wind Turbine Blades.
Figure 2: Schematic Diagram of Distribution Parameter Uncertainty Transfer.
Figure 3: Sensitivity Index Solving Process Based on Kriging and Surrogate Sampling.
Figure 4: Geometry of the Specimen.
Figure 5: Experimental Procedure. (a) Tensile strength testing; (b) fatigue testing; (c) fracture details.
Figure 6: Experimental Data.
Figure 7: Finite Element Model Setup.
Figure 8: Cumulative Fatigue Damage Flow Chart.
Figure 9: Stress Cloud of Cracked Carbon Fibers.
Figure 10: Comparison of Fatigue Life Simulation and Experimental Results.
Figure 11: Flowchart of Cyclic Calculation.
Figure 12: Model Prediction Results.
Figure 13: Comparison of Life Prediction Results from Different Models.
Figure 14: Fatigue Life Frequency Fitting Curves.
Figure 15: Comparison of Sensitivity Index Results.
16 pages, 1294 KiB  
Article
A Generalized Method for Deriving Steady-State Behavior of Consistent Fuzzy Priority for Interdependent Criteria
by Jih-Jeng Huang and Chin-Yi Chen
Mathematics 2024, 12(18), 2863; https://doi.org/10.3390/math12182863 - 14 Sep 2024
Viewed by 236
Abstract
Interdependent criteria play a crucial role in complex decision-making across various domains, yet traditional methods often struggle to evaluate and prioritize criteria with intricate dependencies. This paper introduces a generalized method integrating the analytic network process (ANP), the decision-making trial and evaluation laboratory (DEMATEL), and the consistent fuzzy analytic hierarchy process (CFAHP) in a fuzzy environment. The Drazin inverse technique is applied to derive a fuzzy total priority matrix, and the row sums are normalized to obtain the steady-state fuzzy priorities. A numerical example from the information systems (IS) industry demonstrates the approach's real-world applicability. The proposed method yields narrower fuzzy spreads than past fuzzy analytic network process (FANP) approaches, minimizing objective uncertainty. Total priority interdependent maps provide insights into the relationships among complex technical and usability criteria. Comparative analysis highlights the innovations, including non-iterative convergence of the total priority matrix and the ability to capture interdependencies between criteria. The integration of the FANP's network structure with the fuzzy DEMATEL's influence analysis transcends the capabilities of either method in isolation, marking a significant methodological advancement. By addressing challenges such as parameter selection and mathematical complexity, this research offers new perspectives for future research and application in multi-attribute decision-making (MADM). Full article
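For orientation, the crisp (non-fuzzy) analogue of the total-priority computation is the classical DEMATEL total-influence matrix: when the normalized direct-influence matrix X is an ordinary real matrix and I - X is invertible, the geometric series of influences sums to T = X(I - X)^(-1). The paper's Drazin-inverse construction generalizes this to fuzzy matrices; the influence values below are invented for illustration.

```python
import numpy as np

# Direct-influence matrix among three criteria (illustrative values).
D = np.array([[0.0, 3.0, 2.0],
              [1.0, 0.0, 4.0],
              [2.0, 1.0, 0.0]])

# Normalize so that the influence series X + X^2 + ... converges
# (spectral radius below 1).
X = D / max(D.sum(axis=0).max(), D.sum(axis=1).max())

# Total-influence (total priority) matrix: T = X (I - X)^(-1).
T = X @ np.linalg.inv(np.eye(3) - X)

# Row sums = influence exerted; column sums = influence received.
r, c = T.sum(axis=1), T.sum(axis=0)
```

The closed form replaces the iterative accumulation of indirect influences, which mirrors the "non-iterative convergence" the comparative analysis highlights.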
Show Figures

Figure 1: The structure of the proposed algorithm.
Figure 2: The total priority interdependent maps.
28 pages, 45195 KiB  
Article
Uncertainty-Aware Federated Reinforcement Learning for Optimizing Accuracy and Energy in Heterogeneous Industrial IoT
by A. S. M. Sharifuzzaman Sagar, Muhammad Zubair Islam, Amir Haider and Hyung-Seok Kim
Appl. Sci. 2024, 14(18), 8299; https://doi.org/10.3390/app14188299 - 14 Sep 2024
Viewed by 240
Abstract
The Internet of Things (IoT) has revolutionized various industries by allowing real-time data collection, analysis, and decision-making through interconnected devices. However, implementing Federated Learning (FL) in heterogeneous industrial IoT environments raises challenges such as maintaining model accuracy with non-Independent and Identically Distributed (non-IID) datasets and straggler IoT devices, ensuring computation and communication efficiency, and addressing weight aggregation issues. In this study, we propose an Uncertainty-Aware Federated Reinforcement Learning (UA-FedRL) method that dynamically selects the epochs of individual clients to effectively manage heterogeneous industrial IoT devices and improve accuracy, computation, and communication efficiency. Additionally, we introduce the Predictive Weighted Average Aggregation (PWA) method to tackle weight aggregation issues in heterogeneous industrial IoT scenarios by adjusting the weights of individual models based on their quality. Extensive simulations in complex IoT environments demonstrate the superior performance of UA-FedRL on both the MNIST and CIFAR-10 datasets compared to existing approaches in terms of accuracy, communication efficiency, and computation efficiency. The UA-FedRL algorithm attains an accuracy of 96.83% on the MNIST dataset and 62.75% on the CIFAR-10 dataset despite the presence of 90% straggler IoT devices, attesting to its robust performance and adaptability across datasets. Full article
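The aggregation idea can be sketched in a few lines: plain FedAvg weights each client's parameters by local dataset size, while a PWA-style rule reweights them by model quality. The paper does not publish its exact quality metric, so the inverse-validation-loss weighting below is an assumption for illustration only.

```python
import numpy as np

def fedavg(client_params, n_samples):
    """Standard FedAvg: average parameters weighted by dataset size."""
    w = np.asarray(n_samples, dtype=float)
    return np.tensordot(w / w.sum(), np.stack(client_params), axes=1)

def quality_weighted_avg(client_params, val_losses):
    """PWA-style aggregation (sketch): weight each client's parameters
    by inverse validation loss, so lower-quality models (e.g. from
    stragglers or heavily non-IID data) contribute less. The inverse-
    loss quality metric is an assumption, not the paper's exact rule."""
    w = 1.0 / np.asarray(val_losses, dtype=float)
    return np.tensordot(w / w.sum(), np.stack(client_params), axes=1)

# Two clients with flattened parameter vectors.
params = [np.array([1.0, 1.0]), np.array([3.0, 3.0])]
fa = fedavg(params, [100, 100])                 # plain average
qa = quality_weighted_avg(params, [0.5, 1.5])   # client 1 weighted 3x
```

With equal sample counts FedAvg returns the midpoint [2, 2], while the quality-weighted rule pulls the result toward the lower-loss client, [1.5, 1.5].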
Show Figures

Figure 1: A generic architecture of the FL framework for IoT scenarios.
Figure 2: Scenario of heterogeneous FL in an IoT network environment. This study focuses on the heterogeneity in device specifications and the non-Independent and Identically Distributed (non-IID) nature of datasets among individual devices.
Figure 3: The overall workflow of UA-FedRL for adaptive epoch selection of heterogeneous industrial IoT devices.
Figure 4: The overall architecture of the PWA, which employs weight quality measurement to compute the weighted average of all local models' weights.
Figure 5: Hardware specifications of the selected IoT devices used in this study.
Figure 6: Illustration of the communication between the server and client side on Mininet.
Figure 7: The accumulated rewards of the UA-FedRL for different gamma values when the learning rate was set to 0.1.
Figure 8: The accumulated rewards of the UA-FedRL for different gamma values when the learning rate was set to 0.5.
Figure 9: The accumulated rewards of the UA-FedRL for different gamma values when the learning rate was set to 0.9.
Figure 10: The accuracy comparison between UA-FedRL and different FL methods on the MNIST dataset.
Figure 11: The accuracy comparison between UA-FedRL, Fed_AVG, and Fed_Prox methods on the MNIST dataset with 90% straggler IoT devices.
Figure 12: The accuracy comparison between UA-FedRL and different FL methods on the CIFAR-10 dataset.
Figure 13: The accuracy comparison of UA-FedRL, Fed_AVG, and Fed_Prox methods with 90% straggler IoT devices on the CIFAR-10 dataset.
Figure 14: Comparative analysis of normalized communication cost across different federated learning methods on the MNIST and CIFAR-10 datasets.
Figure 15: Comparative analysis of normalized energy consumption across different FL methods on the MNIST and CIFAR-10 datasets.
Figure 16: The uncertainty estimation of the UA-FedRL taking each action in terms of reward.
18 pages, 7514 KiB  
Article
Enhancing Evapotranspiration Estimations through Multi-Source Product Fusion in the Yellow River Basin, China
by Runke Wang, Xiaoni You, Yaya Shi and Chengyong Wu
Water 2024, 16(18), 2603; https://doi.org/10.3390/w16182603 - 14 Sep 2024
Viewed by 200
Abstract
An accurate estimation of evapotranspiration (ET) is critical to understanding the water cycle in watersheds and promoting the sustainable utilization of water resources. Although various ET products cover the Yellow River Basin, each carries substantial uncertainty arising from its input data, parameterization scheme, and scale conversion, so fusing different products can reduce the uncertainty of any single one and yield more accurate ET data. Addressing this challenge, the uncertainties of three ET products, namely global land surface satellite (GLASS) ET, Penman–Monteith–Leuning (PML)-V2 ET, and reliability-affordable averaging (REA) ET, are calculated to derive the weight of each product, which drives the Bayesian Three-Cornered Hat (BTCH) algorithm to produce higher-quality fused ET data. The fused data are validated at the site and basin scales, and their accuracy is significantly improved over any single input product. On a daily scale, the root mean square errors (RMSEs) of the fused data at the two flux stations are 0.78 mm/day and 1.14 mm/day, and the mean absolute errors (MAEs) are 0.53 mm/day and 0.84 mm/day, both lower than those of the model input data; the correlation coefficients (R) are 0.90 and 0.83, respectively. At the basin scale, the RMSE and MAE of the annual average ET of the fused data are 11.77 mm/year and 14.95 mm/year, respectively, and the correlation coefficient is 0.84. These results show that the BTCH-fused ET data outperform any single input product. A spatiotemporal analysis of the fused data shows that ET increased over 85.64% of the area of the Yellow River Basin from 2001 to 2017, with greater fluctuations in the middle reaches than in the upstream and downstream regions. The BTCH algorithm offers valuable reference for regional ET estimation research, and the fused ET data can inform water resource management strategies in the YRB and deepen understanding of the regional water supply and demand balance. Full article
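The fusion principle can be sketched with inverse-uncertainty weighting: products with lower error variance receive larger weights, and the weighted combination has lower error than any single product. This is a simplified stand-in for BTCH, which additionally estimates the error (co)variances from the products themselves without reference data; the synthetic products below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
truth = rng.uniform(1.0, 5.0, 100_000)       # "true" daily ET, mm/day

# Three synthetic products with different error levels (illustrative
# stand-ins for GLASS, PML-V2, and REA; real errors are not white noise).
sig = np.array([0.5, 0.7, 1.0])
products = [truth + rng.normal(0.0, s, truth.size) for s in sig]

# Inverse-uncertainty weights: w_i proportional to 1 / sigma_i^2.
w = (1.0 / sig**2) / (1.0 / sig**2).sum()
fused = sum(wi * p for wi, p in zip(w, products))

def rmse(x):
    return float(np.sqrt(np.mean((x - truth) ** 2)))
```

With independent errors the fused variance is 1 / sum(1/sigma_i^2), which is strictly below the smallest individual variance, mirroring the improvement the abstract reports over every single input product.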
Show Figures

Figure 1: Location of the research area and distribution of major cities, hydrological stations, and flux observation stations.
Figure 2: Technical flowchart.
Figure 3: Uncertainty results of different evapotranspiration products: (a) GLASS ET, (b) PML-V2 ET, (c) REA ET, (d) uncertainty statistics.
Figure 4: Accuracy validation of ET products, fusion ET data, and flux tower data at Haibei shrub station from 2003 to 2010. (a) GLASS, (b) REA, (c) PML-V2, (d) BTCH (red: fitting line; dashed: best-fit 1:1 line).
Figure 5: Accuracy validation of ET products, fusion ET data, and flux tower data at Haibei wetland station in 2005. (a) GLASS, (b) REA, (c) PML-V2, (d) BTCH (red: fitting line; dashed: best-fit 1:1 line).
Figure 6: Energy balance ratio of observed data and fused ET data from the Haibei shrub station. (a) Measured data, (b) fused ET data.
Figure 7: Comparison of the average and observed values for ET from the different products in the YRB. (a) GLASS, (b) REA, (c) PML-V2, (d) BTCH.
Figure 8: Variation characteristics for the average values of different ET products in the YRB.
Figure 9: Interannual ET change rate (a) and M-K significance test (b) of BTCH fusion from 2001 to 2017.
19 pages, 2845 KiB  
Study Protocol
Risk Assessment of Underground Tunnel Engineering Based on Pythagorean Fuzzy Sets and Bayesian Networks
by Zhenhua Wang, Tiantian Jiang and Zhiyong Li
Buildings 2024, 14(9), 2897; https://doi.org/10.3390/buildings14092897 - 13 Sep 2024
Viewed by 292
Abstract
With the acceleration of urbanization, risk management in underground construction projects has become increasingly important. In risk assessment for such projects, the uncertainty of experts' subjective judgments poses a significant challenge to the accuracy of assessment outcomes. Taking a section of Nanchang Metro Line 2 as the research object, this paper addresses the subjectivity issues in the risk assessment of underground construction projects and enhances the scientific rigor and accuracy of the assessment. The study first conducts a comprehensive identification and analysis of risk factors in underground engineering through a literature review and expert consultation. On this basis, it introduces Pythagorean fuzzy set theory to improve the Delphi method and reduce the impact of subjectivity in expert assessments. Furthermore, it constructs a Bayesian network model, incorporates the risk factors into the network, and quantifies the construction risks through a probabilistic inference mechanism. The research identifies 12 key risk factors across four dimensions: geological and groundwater conditions, tunnel construction technical risks, construction management measures, and the surrounding environment. The Bayesian network assessment indicates that the effectiveness of engineering quality management and the state of safety management at the construction site are the two most influential factors. Based on these results, the paper further conducts a risk control analysis and proposes targeted risk management measures. Full article
(This article belongs to the Special Issue Advances in Life Cycle Management of Buildings)
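The Pythagorean fuzzy set machinery behind the improved Delphi method can be sketched briefly. A Pythagorean fuzzy number (PFN) pairs a membership degree mu with a non-membership degree nu under the relaxed constraint mu^2 + nu^2 <= 1, and expert judgments are ranked with the standard score and accuracy functions; the specific numbers below are invented for illustration.

```python
import math

class PFN:
    """Pythagorean fuzzy number (mu, nu) with mu^2 + nu^2 <= 1."""
    def __init__(self, mu, nu):
        assert mu**2 + nu**2 <= 1.0 + 1e-12
        self.mu, self.nu = mu, nu

    def score(self):                 # score function s = mu^2 - nu^2
        return self.mu**2 - self.nu**2

    def accuracy(self):              # accuracy function a = mu^2 + nu^2
        return self.mu**2 + self.nu**2

    def hesitancy(self):             # degree of indeterminacy
        return math.sqrt(1.0 - self.mu**2 - self.nu**2)

def rank(pfns):
    """Rank expert judgments by score, breaking ties with accuracy."""
    return sorted(pfns, key=lambda p: (p.score(), p.accuracy()),
                  reverse=True)

# (0.8, 0.4) is admissible here (0.64 + 0.16 <= 1) even though an
# ordinary intuitionistic fuzzy set would reject it (0.8 + 0.4 > 1).
a, b = PFN(0.8, 0.4), PFN(0.6, 0.3)
```

The relaxed constraint is what lets experts express stronger simultaneous support and opposition than intuitionistic fuzzy sets allow, which is the motivation for using PFNs in the Delphi rounds.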
Show Figures

Figure 1: Flowchart of the Delphi method survey.
Figure 2: Multi-indicator evaluation process for underground tunnel construction based on the PFIOWLAD operator.
Figure 3: Underground tunnel construction engineering risk assessment index system.
Figure 4: Underground tunnel construction risk assessment flowchart.
Figure 5: Construction plan of a section of Nanchang Metro Line 2.
Figure 6: Diagram of the operation code and the operation result.
Figure 7: Schematic diagram of risk factor probability data input.
Figure 8: The subway construction engineering risk causal chain Bayesian network model.
20 pages, 4165 KiB  
Article
Identifying Conservation Priority Areas of Hydrological Ecosystem Service Using Hot and Cold Spot Analysis at Watershed Scale
by Srishti Gwal, Dipaka Ranjan Sena, Prashant K. Srivastava and Sanjeev K. Srivastava
Remote Sens. 2024, 16(18), 3409; https://doi.org/10.3390/rs16183409 - 13 Sep 2024
Viewed by 253
Abstract
Hydrological Ecosystem Services (HES) are crucial components of environmental sustainability and provide indispensable benefits. The present study identifies critical hot and cold spot areas of HES in the Aglar watershed of the Indian Himalayan Region using six HES descriptors, namely water yield (WYLD), crop yield factor (CYF), sediment yield (SYLD), base flow (LATQ), surface runoff (SURFQ), and total water retention (TWR). The analysis was conducted using weightage-based approaches under two methods: (1) evaluating the six HES descriptors individually and (2) grouping them into broad ecosystem service categories. The study also assessed the pixel-level uncertainties that arise from the distinct methods used to identify hot and cold spots, and examined the associated synergies and trade-offs among HES descriptors. Method 1 classified 0.26% of the watershed as cold spots and 3.18% as hot spots, whereas method 2 classified 2.42% as cold spots and 2.36% as hot spots. Pixel-level uncertainty analysis showed that 0.57 km2 and 6.86 km2 of the watershed were consistently under cold and hot spots, respectively, using method 1, whereas method 2 identified 2.30 km2 and 6.97 km2 as consistent cold and hot spots, respectively. The spatial analysis of hot spots showed consistent patterns in certain parts of the watershed, primarily in the south to southwest, while cold spots were mainly found on the eastern side. When the HES descriptors were analyzed within broad ecosystem service categories, hot spots were concentrated in the southern part, and cold spots were scattered throughout the watershed, especially in agricultural and scrubland areas. The significant synergies between LATQ and WYLD and between sediment retention and WYLD, and the trade-offs between SURFQ and descriptors such as WYLD, LATQ, sediment retention, and TWR, were attributed to factors such as land use and topography affecting the water balance components of the watershed. The findings underscore the critical need for targeted conservation efforts to maintain ecologically sensitive regions at the watershed scale. Full article
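Hot and cold spot identification of this kind is commonly done with the Getis-Ord Gi* statistic, which z-scores each cell's neighborhood sum against the global mean. The sketch below is a minimal raster implementation with binary square-window weights and a toy grid; the window shape and the input data are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def gi_star(grid, radius=1):
    """Getis-Ord Gi* z-scores on a raster with binary square-window
    weights. Large positive values mark hot spots; large negative
    values mark cold spots."""
    x = grid.astype(float)
    n = x.size
    xbar, s = x.mean(), x.std()
    H, W = x.shape
    z = np.zeros_like(x)
    for i in range(H):
        for j in range(W):
            win = x[max(0, i - radius):i + radius + 1,
                    max(0, j - radius):j + radius + 1]
            wsum = win.size                     # sum of binary weights
            num = win.sum() - xbar * wsum
            den = s * np.sqrt((n * wsum - wsum**2) / (n - 1))
            z[i, j] = num / den
    return z

# Toy "water yield" raster with one high-yield cluster.
g = np.zeros((20, 20))
g[5:8, 5:8] = 10.0
z = gi_star(g)
```

Cells whose z-score exceeds a significance threshold (e.g. |z| > 1.96 at the 95% level) would be classified as hot or cold spots.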
Show Figures

Figure 1: Map showing the location of the study area. Inset (a) shows the Indian Himalayan Range (IHR) in the Indian subcontinent. Inset (b) highlights the IHR states in yellow with the precise location of the Aglar watershed (in red) in Uttarakhand. Inset (c) shows the land use types in the Aglar watershed.
Figure 2: PCA plots showing (a) the relative contribution of each individual HES variable and (b,c) the relative contribution of each variable under the regulating and supporting services categories, respectively.
Figure 3: Hot and cold spot areas in the Aglar watershed derived from four approaches along with their median under two distinct methods. (a) Method 1: HES descriptors as individual entities. (b) Method 2: HES descriptors under broad ES categories.
Figure 4: Pixel-level uncertainty in hot and cold spot maps based on mode values obtained from methods 1 and 2.
Figure 5: Pearson correlation coefficients between HES descriptors.
16 pages, 533 KiB  
Article
Regularizing Lifetime Drift Prediction in Semiconductor Electrical Parameters with Quantile Random Forest Regression
by Lukas Sommeregger and Jürgen Pilz
Technologies 2024, 12(9), 165; https://doi.org/10.3390/technologies12090165 - 13 Sep 2024
Viewed by 276
Abstract
Semiconductors play a crucial role in a wide range of applications and are integral to essential infrastructures, and their manufacturers must meet specific quality and lifetime targets. To estimate the lifetime of semiconductors, accelerated stress tests are conducted. This paper introduces a novel approach to modeling drift in discrete electrical parameters of stress-tested devices, incorporating a machine learning (ML) approach for arbitrary panel data sets of electrical parameters from accelerated stress tests. The proposed model involves an expert-in-the-loop MLOps decision process, allowing experts to choose between an interpretable model and a robust ML algorithm for regularization and fine-tuning. By employing regularization techniques, the model ensures that its accuracy is not compromised by outliers that would otherwise influence statistical models. The model takes interpretable, statistically calculated limits for lifetime drift and uncertainty as input data and predicts these limits for new lifetime stress test data of electrical parameters from the same technology. Its effectiveness is demonstrated using anonymized real data from Infineon Technologies. The model's output can help prioritize parameters by their level of significance as indicators of degradation over time, providing valuable insights for the analysis and improvement of electrical devices. The combination of explainable statistical algorithms and ML approaches enables the regularization of quality control limit calculations and the detection of lifetime drift in stress test parameters. This information can be used to enhance production quality by identifying significant parameters that indicate degradation and by detecting deviations in production processes. Full article
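The quantile-prediction step can be approximated compactly. Meinshausen's quantile regression forests derive quantiles from leaf-node sample weights; a lighter approximation, shown here, takes quantiles across the individual trees' predictions. The synthetic drift data and forest settings are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

# Synthetic "electrical parameter drift": a slow trend over stress-test
# hours with heteroscedastic measurement noise (invented for illustration).
t = rng.uniform(0.0, 1000.0, 500)[:, None]       # stress-test hours
y = 0.002 * t[:, 0] + rng.normal(0.0, 0.05 + 0.0005 * t[:, 0])

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(t, y)

def predict_quantiles(rf, X, qs=(0.1, 0.5, 0.9)):
    """Approximate quantile-forest prediction: quantiles over the
    per-tree predictions (a simplification of Meinshausen's quantile
    regression forest, which uses the leaf-node training samples)."""
    per_tree = np.stack([est.predict(X) for est in rf.estimators_])
    return np.quantile(per_tree, qs, axis=0)

q10, q50, q90 = predict_quantiles(rf, np.array([[500.0]]))
```

The outer quantiles play the role of conservative drift limits: a lower quantile for the lower test limit and an upper quantile for the upper one, which is the "smart quantile choice" discussed in the figures.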
Show Figures

Figure 1: Three different outcomes emerge from clustering highly similar trajectory data; different clusters are indicated by different colors. Statistical models may yield divergent results on similar data sets due to the non-deterministic nature of clustering algorithms. In this example, similar data lead to widely different clusterings.
Figure 2: An anonymized example showing a set of trajectories for an electrical parameter, with parameter values on the y-axis and time in hours on the x-axis. In this case, the values remain relatively consistent across the four readings and show low variability.
Figure 3: A flowchart of the data preparation pipeline used to generate training and validation data, including feature engineering, train/test splitting, and pre-processing.
Figure 4: A comparison of MAE, RMSE, and R^2 for the candidate models (lower is better for MAE and RMSE; higher is better for R^2). The random forest model shows the best results on real data. Standard boxplot with the median as a black dot and the box in blue.
Figure 5: A comparison of RMSE confidence intervals for the different models (lower is better). Again, random forest shows the best results.
Figure 6: Cross-validation results with varying parameters (lower is better). The extratrees split rule wins out as the number of predictors increases.
Figure 7: Quantile forest regression predictions of the lower test limit, RMSE vs. quantiles (lower is better). Some improvement is possible with a better quantile choice. The black line denotes the 0.5-quantile; dots are prediction results.
Figure 8: Quantile forest regression predictions of the upper test limit, RMSE vs. quantiles (lower is better). The median prediction can again be improved via a smart quantile choice. The black line denotes the 0.5-quantile; dots are prediction results.
Figure 9: Violin plots of the calculated statistical lower test limits (Stat. Predictions) and the quantiles of the random forest regression. Higher is more conservative; closer to the statistical predictions is better. Visually, a quantile slightly above the median captures the predictions best.
Figure 10: Violin plots of the calculated statistical upper test limits (Stat. Predictions) and the quantiles of the random forest regression. Lower is more conservative; closer to the statistical predictions is better. Visually, a quantile slightly below the median should capture the predictions best.
Figure 11: Visualizing regularization: quantile predictions of the lower test limit vs. actual statistical predictions on the validation set. The confidence bound tends to overlap the calculated limit.
Figure 12: Visualizing regularization: quantile predictions of the upper test limit vs. actual statistical predictions on the validation set. The confidence bound tends to overlap the calculated limit.
Figure A1: A flowchart showing a possible expert-in-the-loop decision process for automatically generating and proposing test limits.
Figure A2: A comparison of MAE, RMSE, and R^2 for the different candidate models (lower is better for MAE and RMSE; higher is better for R^2). Standard boxplot with a black dot for the median and a blue box for the quartiles.
15 pages, 3144 KiB  
Article
Multi-Fingerprints Indoor Localization for Variable Spatial Environments: A Naive Bayesian Approach
by Chengjie Hou and Zhizhong Zhang
Sensors 2024, 24(18), 5940; https://doi.org/10.3390/s24185940 - 13 Sep 2024
Viewed by 168
Abstract
Fingerprint-based indoor localization has been a hot research topic. However, current fingerprint-based indoor localization approaches still rely on a single fingerprint database, where the average level of data at reference points is used as the fingerprint representation. Under variable environmental conditions, signal variations caused by changes in environmental states introduce significant deviations between this average level and the actual fingerprint characteristics. This deviation leads to a mismatch between the constructed fingerprint database and real-world conditions, thereby degrading the effectiveness of fingerprint matching. Meanwhile, sharp noise caused by uncertainties such as personnel movement significantly interferes with both the creation of the fingerprint database and fingerprint matching in the online stage. Examination of the sampling data after denoising with Robust Principal Component Analysis (RPCA) revealed distinct multi-fingerprint characteristics with clear boundaries at certain access points. Based on these observations, the concept of constructing a fingerprint database from multiple fingerprints is introduced and its feasibility is explored. Additionally, a multi-fingerprint solution based on naive Bayes classification is proposed to accurately represent fingerprint characteristics under different environmental conditions. Based on the online-stage fingerprints, the corresponding state space is selected using the naive Bayes classifier, enabling the selection of an appropriate fingerprint database for matching. Through simulations and empirical evaluations, the proposed multi-fingerprint construction scheme consistently outperforms the traditional single-fingerprint database in terms of positioning accuracy across all tested localization algorithms. Full article
(This article belongs to the Section Navigation and Positioning)
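The state-space selection step described in the abstract — using a naive Bayes classifier on an online fingerprint to pick the most likely environmental state, and thus the matching fingerprint database — can be sketched as follows. The state names, per-AP statistics, and RSSI readings below are hypothetical illustrations, not values from the paper:

```python
import math

# Hypothetical per-state fingerprint statistics for two APs:
# each environmental state stores (mean, std) of RSSI per access point.
state_db = {
    "doors_closed": [(-52.0, 2.0), (-70.0, 3.0)],
    "doors_open":   [(-60.0, 2.5), (-63.0, 2.5)],
}

def log_gaussian(x, mu, sigma):
    """Log of a univariate normal density."""
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

def select_state(rssi, priors=None):
    """Pick the most likely environmental state for an online RSSI vector,
    assuming (naively) that APs are conditionally independent given the state."""
    best_state, best_score = None, -math.inf
    for state, stats in state_db.items():
        score = math.log(priors[state]) if priors else 0.0
        for x, (mu, sigma) in zip(rssi, stats):
            score += log_gaussian(x, mu, sigma)
        if score > best_score:
            best_state, best_score = state, score
    return best_state
```

For example, `select_state([-53.0, -69.0])` returns `"doors_closed"`, after which that state's database would be used for fingerprint matching.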
Show Figures

Figure 1
<p>Multi-state spaces indoor localization system architecture based on multi-fingerprints.</p>
Full article ">Figure 2
<p>(<b>a</b>,<b>b</b>) The display of 4000 sample data of two APs.</p>
Full article ">Figure 3
<p>Based on all the different door opening and closing situations, all possible permutations that affect the AP states are identified, and this information is used to construct a multi-fingerprints database that includes all possible scenarios.</p>
Full article ">Figure 4
<p>A total of 4000 samples were collected at a reference point for a single AP.</p>
Full article ">Figure 5
<p>Under the spatial layout of the experimental area, a commercial software, “Wireless Insite 3.4.4”, was utilized to simulate and compare the signal propagation of an AP in different spatial configurations. (<b>a</b>) Corresponds to the scenario when the doors of both laboratories are closed. (<b>b</b>) Corresponds to the scenario when the doors of both laboratories are open. The red points represent reference points while the green one is the AP.</p>
Full article ">Figure 6
<p>(<b>a</b>,<b>b</b>) Represent a comparison of the results before and after RPCA denoising for 4000 sampling data from two APs.</p>
Full article ">Figure 7
<p>The standard deviation of data for an RP.</p>
Full article ">Figure 8
<p>Comparison of AP4 sampling data before and after RPCA processing.</p>
Full article ">Figure 9
<p>CDF of the localization errors.</p>
Full article ">
14 pages, 1752 KiB  
Systematic Review
Wilson and Jungner Revisited: Are Screening Criteria Fit for the 21st Century?
by Elena Schnabel-Besson, Ulrike Mütze, Nicola Dikow, Friederike Hörster, Marina A. Morath, Karla Alex, Heiko Brennenstuhl, Sascha Settegast, Jürgen G. Okun, Christian P. Schaaf, Eva C. Winkler and Stefan Kölker
Int. J. Neonatal Screen. 2024, 10(3), 62; https://doi.org/10.3390/ijns10030062 - 13 Sep 2024
Viewed by 333
Abstract
Driven by technological innovations, newborn screening (NBS) panels have been expanded and the development of genomic NBS pilot programs is rapidly progressing. Decisions on disease selection for NBS are still based on the Wilson and Jungner (WJ) criteria published in 1968. Despite this uniform reference, interpretation of the WJ criteria and actual disease selection for NBS programs are highly variable. A systematic literature search [PubMed search “Wilson” AND “Jungner”; last search 16.07.22] was performed to evaluate the applicability of the WJ criteria for current and future NBS programs and the need for adaptation. At least two reviewers screened 105 publications (systematic literature search, N = 77; manual search, N = 28) for relevant content and, finally, 38 publications were evaluated. Limited by the study design of qualitative text analysis, no statistical evaluation was performed; instead, a structured collection of reported aspects of criticism and proposed improvements was collated. This revealed a set of general limitations of the WJ criteria, such as imprecise terminology, lack of measurability and objectivity, missing pediatric focus, and absent guidance on program management. Furthermore, it unraveled specific criticisms concerning clinical, diagnostic, therapeutic, and economic aspects. A major obstacle was found to be the incompletely understood natural history and phenotypic diversity of rare diseases prior to NBS implementation, resulting in uncertainty about case definition, risk stratification, and indications for treatment. This gap could be closed through the systematic collection and evaluation of real-world evidence on the quality, safety, and (cost-)effectiveness of NBS, as well as the long-term benefits experienced by screened individuals.
An integrated NBS public health program that is designed to continuously learn would fulfil these requirements, and a multi-dimensional framework for future NBS programs integrating medical, ethical, legal, and societal perspectives is overdue. Full article
Show Figures

Figure 1
<p>PRISMA diagram of the literature search [<a href="#B38-IJNS-10-00062" class="html-bibr">38</a>].</p>
Full article ">Figure 2
<p>Revising the screening criteria. <span class="html-italic">Principles and Practice of Screening for Disease</span>, as published by Wilson and Jungner in 1968 [<a href="#B32-IJNS-10-00062" class="html-bibr">32</a>], includes 10 specific criteria which have been the basis of all NBS programs worldwide for more than 50 years (<b>left</b>). In subsequent studies, these criteria have been commonly re-arranged in four different sub-categories (<b>middle</b>). The systematic literature review not only identified relevant shortcomings of single WJ criteria, but also highlighted their missing focus on complex aspects of program management and the insufficient systematic consideration of ethical, legal, and societal implications (ELSI), which should form the basis of all NBS programs (<b>right</b>). A multi-dimensional framework integrating all relevant perspectives would be an excellent opportunity to revise the original WJ criteria and to make them fit for the demands and further developments of NBS programs. Figure was created with draw.io (<a href="https://drawio-app.com/" target="_blank">https://drawio-app.com/</a>, accessed on 17 July 2024).</p>
Full article ">Figure 3
<p>Extension of the phenotypic spectrum of an NBS target disease after NBS implementation. Introduction to NBS expands the phenotypic spectrum towards attenuated disease variants. While severe and attenuated variants are usually easy to distinguish, the exact differentiation between healthy and attenuated forms can be challenging, requiring risk-stratified and individualized treatment indications.</p>
Full article ">
29 pages, 32138 KiB  
Article
Seismic Identification and Characterization of Deep Strike-Slip Faults in the Tarim Craton Basin
by Fei Tian, Wenhao Zheng, Aosai Zhao, Jingyue Liu, Yunchen Liu, Hui Zhou and Wenjing Cao
Appl. Sci. 2024, 14(18), 8235; https://doi.org/10.3390/app14188235 - 12 Sep 2024
Viewed by 322
Abstract
Hydrocarbon exploration has shown that deep carbonate reservoirs within a craton are influenced by deep strike-slip faults, which exhibit small displacements and are challenging to identify. Previous research has established a correlation between seismic attributes and deep geological information, wherein large-scale faults can cause abrupt waveform discontinuities. However, due to the inherent limitations of seismic datasets, such as low signal-to-noise ratios and resolutions, accurately characterizing complex strike-slip faults remains difficult, resulting in increased uncertainties in fault characterization and reservoir prediction. In this study, we integrate advanced techniques such as principal component analysis and structure-oriented filtering with a fault-centric imaging approach to refine the resolution of seismic data from the Tarim craton. Our detailed evaluation encompassed 12 distinct seismic attributes, culminating in the creation of a sophisticated model for identifying strike-slip faults. This model incorporates select seismic attributes and leverages fusion algorithms like K-means, ellipsoid growth, and wavelet transformations. Through the technical approach introduced in this study, we have achieved multi-scale characterization of complex strike-slip faults with throws of less than 10 m. This workflow has the potential to be extended to other complex reservoirs governed by strike-slip faults in cratonic basins, thus offering valuable insights for hydrocarbon exploration and reservoir characterization in similar geological settings. Full article
(This article belongs to the Special Issue Seismic Data Processing and Imaging)
Show Figures

Figure 1
<p>(<b>a</b>) The location of the Tarim Basin and its subdivisions. The location of the research area is marked by a red rectangle. The location of the seismic section is marked by a pink line. (<b>b</b>) The seismic section in the north-central Tarim Basin. This seismic section shows that the deep structure in Tarim is very complicated and controlled by faults.</p>
Full article ">Figure 2
<p>Geological structure of a strike-slip fault [33]. (<b>a</b>) Structural zones of strike-slip faults, including the principal displacement zone (PDZ), restraining bend, and horsetail splay. The PDZ is the zone or plane of dip-slip or strike-slip that accounts for the greatest proportion of accumulated strain. Subsidiary structures such as synthetic and antithetic faults and folds (e.g., fault splays, back thrusts, fracture zones, and en echelon folds) will be kinematically linked to the PDZ. (<b>b</b>) Outcrop of a strike-slip fault. (<b>c</b>) Strike-slip fault interpretation (red lines) based on the outcrop. The strike-slip displacement in the fault zone causes various structural deformations in the surrounding area.</p>
Full article ">Figure 3
<p>Quality improvement of the seismic data. (<b>a</b>) The original seismic profile. The burial depth map of the Tarim Ordovician strata is in the upper left corner, and the red line shows the location of the seismic section. The green arrow indicates the north direction. (<b>b</b>) The seismic profile after PCA+ structure-oriented filtering. This methodology involves calculating and analyzing the covariance matrix of seismic traces within a defined time window, facilitating a robust denoising process and improving the overall quality of seismic data for further analysis and interpretation. (<b>c</b>) The seismic profile of PCA+ structure-oriented filtering + fault-focused imaging. By maintaining the essential characteristics of the original seismic signal while improving its signal-to-noise ratio, this method effectively emphasizes the discontinuity along the in-phase axis.</p>
Full article ">Figure 4
<p>Attributes of seismic energy. Most areas are significantly disturbed by “non-fault” areas, and a small number of branch faults can be shown but are not obvious. (<b>a</b>) Map of the RMS amplitude attribute. It has a good correlation with the rock density and is often used in lithologic phase transition analysis. (<b>b</b>) Plane graph of the low-frequency energy attribute. (<b>c</b>) Map of the high-frequency attenuation attribute. Due to the seismic response characteristic of high-frequency absorption shown by the faults, characteristics similar to “low-frequency enhancements” appear, and the waveform characteristics show little variation with depth.</p>
Full article ">Figure 5
<p>Attributes of seismic curvature. The data points of the strike-slip faults are relatively clear, but there is little difference between them and the data points in the non-fault areas; that is, the boundary between the faults and non-faults is not clear. (<b>a</b>) Map of the maximum positive curvature attribute. The largest positive curvature in the normal curvature is called the maximum positive curvature. This curvature can amplify fault information and some small linear structures in the plane. (<b>b</b>) Map of the minimum negative curvature attribute. (<b>c</b>) Map of the dip attribute. The dip attribute reflects the change in the dip angle, and it is effective in depicting the dominant section with a large fault distance.</p>
Full article ">Figure 6
<p>Attributes of seismic correlation. Several major branch faults can be seen, but the whole branch is a network that is significantly affected by false faults, and it is difficult to distinguish true and false faults. (<b>a</b>) Map of coherent attribute. The coherence attribute is used to calculate the similarity between adjacent seismic tracks and analyze the transverse changes in strata and lithology in the same phase axis to achieve fault identification. (<b>b</b>) Map of the likelihood attribute. The likelihood attribute enhances the difference between fault and non-fault responses. The likelihood attributes of inclination and dip of each data sample point are scanned, and the maximum value is obtained when accurate inclination and dip are scanned. (<b>c</b>) Map of the ant tracking attribute. (<b>d</b>) Map of the AFE attribute. AFE is directional weighted coherence, which is obtained by further directional filtering based on sharpening.</p>
Full article ">Figure 7
<p>Attributes of seismic gradient. Many details in the trunk fault can be detected with minimal interference from non-breaks. (<b>a</b>) Map of the amplitude variance attribute. It describes the geological structure data mainly through the similarity attribute of adjacent seismic signals. (<b>b</b>) Map of the amplitude gradient attribute. By searching the disorder of the seismic amplitude gradient vector in each azimuth and dip angle in three-dimensional space, the most disordered surface is found to be the fault location.</p>
Full article ">Figure 8
<p>Fault identification comparison of preferred attributes. (<b>a</b>) Map of the ant track. (<b>b</b>) Fault interpretation of the ant tracking attribute. When applied to heterogeneous deep carbonate reservoirs in cratonic basins, the tracking attribute exhibits limitations, such as reduced recognition ability for carbonate faults and challenges in identifying micro-faults. (<b>c</b>) Map of the amplitude gradient attribute. (<b>d</b>) Fault interpretation (red lines) of the amplitude gradient attribute. The amplitude gradient attribute successfully reflects the trends and locations of these faults, whereas the ant tracking attribute often exhibits excessive disorder and interferes with the accurate determination of branch fault locations.</p>
Figure 8 Cont.">
Full article ">Figure 9
<p>Extraction of the amplitude gradient attribute fault confidence region. (<b>a</b>) The spatial range of the fault was divided based on the fault threshold. Because K-means can cluster similar data based on the distance between the data points, the data were classified into 2 clusters through K-means clustering: fault clusters and non-fault clusters. (<b>b</b>) The fault range of high probability was obtained by ellipsoid expansion. By setting the structural unit, the range of the extracted attribute points can be expanded according to geological theory to obtain the data body of the fault location.</p>
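The two-cluster K-means separation this caption describes can be sketched on a one-dimensional attribute. The data below are synthetic stand-ins for amplitude-gradient samples (a low-valued background population and a high-valued fault-like population); none of the values come from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic amplitude-gradient samples: low values = non-fault background,
# high values = fault-like response (illustrative distributions only).
values = np.concatenate([rng.normal(0.1, 0.02, 200),   # non-fault
                         rng.normal(0.8, 0.05, 40)])   # fault

def two_means(x, iters=20):
    """Plain 2-cluster k-means on a 1-D attribute; returns (centers, labels)."""
    centers = np.array([x.min(), x.max()], dtype=float)
    for _ in range(iters):
        labels = np.abs(x[:, None] - centers[None, :]).argmin(axis=1)
        for k in range(2):
            if np.any(labels == k):
                centers[k] = x[labels == k].mean()
    # Final assignment with the converged centers.
    labels = np.abs(x[:, None] - centers[None, :]).argmin(axis=1)
    return centers, labels

centers, labels = two_means(values)
fault_cluster = centers.argmax()   # cluster with the larger mean = fault cluster
```

The members of the fault cluster would then feed the ellipsoid-expansion step to grow a spatial fault-confidence region.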
Full article ">Figure 10
<p>Fault map based on the fusion of the amplitude gradient attribute (blue) and ant tracking attribute (black). Shallow parts of the fault are primarily formed by oblique structures, while deeper sections are predominantly influenced by compressional and torsional faults.</p>
Full article ">Figure 11
<p>Strike-slip fault segment interpretations. The fault section interpretation consists of 6 sections, each of which includes the original seismic dataset, amplitude gradient attribute, ant tracking attribute, fusion attribute, and fault interpretation (red lines): (<b>a</b>) the tensile section, located at the tail of the fault, contains a relay-type fault in the tensile environment; (<b>b</b>) the extrusion section, located in the transition region from the tail of the fault to the middle of the fault, is affected by the extrusion environment and has an obvious internal structure of the fault; (<b>c</b>) the extrusion section, located in the middle of the fault, has more intense extrusion action; (<b>d</b>) the main displacement section, located in the middle of the fault, has a few branch faults; (<b>e</b>) the main displacement section, located in the transition region from the middle of the fault to the tail, has obvious strike-slip and no branch faults; and (<b>f</b>) the tensile section, located in the tail of the fault, has a large branch fault.</p>
Figure 11 Cont.">
Full article ">
18 pages, 2838 KiB  
Review
New Approaches in Finite Control Set Model Predictive Control for Grid-Connected PV Inverters: State of the Art
by Shakil Mirza and Arif Hussain
Solar 2024, 4(3), 491-508; https://doi.org/10.3390/solar4030023 - 12 Sep 2024
Viewed by 188
Abstract
Grid-connected PV inverters require sophisticated control procedures for smooth integration with the modern electrical grid. The ability of FCS-MPC to manage the discrete character of power electronic devices is highly acknowledged, since it enables direct manipulation of switching states without requiring modulation techniques. This review discusses the latest approaches in FCS-MPC methods for PV-based grid-connected inverter systems and classifies these methods according to control objectives, such as active and reactive power control, harmonic suppression, and voltage regulation. The application of FCS-MPC is investigated with particular emphasis on its benefits, including quick response times, robustness to parameter changes, and the capacity to handle system constraints and nonlinearities without requiring modulators. Recent developments in robust and adaptive MPC strategies, which enhance system performance despite distorted grid conditions and parametric uncertainties, are emphasized. Finally, this review classifies FCS-MPC techniques based on their control goals, optimization parameters, and cost functions, identifies drawbacks in the existing control methods, and provides recommendations for future research in FCS-MPC for grid-connected PV-inverter systems. Full article
(This article belongs to the Topic Smart Solar Energy Systems)
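The defining FCS-MPC loop the abstract reviews — enumerate the inverter's finite switching states, predict the next-step behavior for each, and apply the state minimizing a cost function — can be sketched for a two-level inverter with an L filter. All circuit parameters and the simple current-tracking cost below are illustrative assumptions, not a specific scheme from the review:

```python
import numpy as np

# Illustrative parameters (assumed): DC link, filter R-L, sampling period.
Vdc, R, L, Ts = 400.0, 0.1, 5e-3, 50e-6

def clarke(a, b, c):
    """Amplitude-invariant abc -> alpha-beta transform."""
    return np.array([2 / 3 * (a - 0.5 * b - 0.5 * c), (b - c) / np.sqrt(3)])

def fcs_mpc_step(i_meas, i_ref, e_grid):
    """Evaluate all 8 switch combinations (sa, sb, sc) of a two-level
    inverter and return the one minimizing the predicted current error."""
    best_state, best_cost = None, np.inf
    for sa in (0, 1):
        for sb in (0, 1):
            for sc in (0, 1):
                v = clarke(Vdc * sa, Vdc * sb, Vdc * sc)      # inverter voltage
                # Forward-Euler prediction of L di/dt = v - R*i - e_grid
                i_pred = i_meas + Ts / L * (v - R * i_meas - e_grid)
                cost = np.linalg.norm(i_ref - i_pred)         # current-tracking cost
                if cost < best_cost:
                    best_state, best_cost = (sa, sb, sc), cost
    return best_state, best_cost
```

Because the candidate set is finite (8 states here), the optimization is an exhaustive search each sampling period, which is why no modulator is needed; richer cost functions add terms for switching frequency, harmonics, or voltage regulation.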
Show Figures

Figure 1
<p>Three-phase two-level grid-connected inverter.</p>
Full article ">Figure 2
<p>Output voltage vector of the inverter in SFR during the current (<span class="html-italic">k</span>) and next (<span class="html-italic">k</span> + 1) sampling time.</p>
Full article ">Figure 3
<p>Generalized FCS-MPC block diagram.</p>
Full article ">Figure 4
<p>Flow chart of FCS-MPC of the grid-connected inverter system.</p>
Full article ">Figure 5
<p>(<b>a</b>) Block diagram of the predictive power controller [<a href="#B25-solar-04-00023" class="html-bibr">25</a>]; (<b>b</b>) flowchart of MPC with increased switching states [<a href="#B23-solar-04-00023" class="html-bibr">23</a>].</p>
Full article ">Figure 6
<p>Block diagram of resonant MPC based on IMP.</p>
Full article ">Figure 7
<p>Block diagram of the resonant model predictive controller [<a href="#B65-solar-04-00023" class="html-bibr">65</a>].</p>
Full article ">Figure 8
<p>Performance of FCS-MPC: (<b>a</b>) improvement in harmonic suppression compared with the conventional method and (<b>b</b>) comparison of harmonic suppression during distorted grid conditions [<a href="#B23-solar-04-00023" class="html-bibr">23</a>,<a href="#B65-solar-04-00023" class="html-bibr">65</a>].</p>
Full article ">
16 pages, 2905 KiB  
Article
A Novel Lock-In Amplification-Based Frequency Component Extraction Method for Performance Analysis and Power Monitoring of Grid-Connected Systems
by Abdur Rehman, Taeho An and Woojin Choi
Energies 2024, 17(18), 4580; https://doi.org/10.3390/en17184580 - 12 Sep 2024
Viewed by 234
Abstract
Recently, the increasing concern for climate control has led to the widespread application of grid-connected inverter (GIC)-based renewable-energy systems. In addition, the increased usage of non-linear loads and electrification of the transport sector cause ineffective grid-frequency management and the introduction of harmonics. These grid conditions affect power quality and result in uncertainty and inaccuracy in monitoring and measurement. Incorrect measurement leads to overbilling/underbilling, ineffective demand and supply forecasts for the power system, and inefficient performance analysis. To address the outlined problem, a novel, three-phase frequency component extraction and power measurement method based on Digital Lock-in Amplifier (DLIA) and Digital Lock-in Amplifier–Frequency-Locked Loop (DLIA–FLL) is proposed to provide accurate measurements under the conditions of harmonics and frequency offset. A combined filter, with a lowpass filter and notch filter, is employed to improve computation speed for DLIA. A comparative study is performed to verify the effectiveness of the proposed power measurement approach, by comparing the proposed method to the windowed interpolated fast Fourier transform (WIFFT). The ZERA COM 3003 (a commercial high-accuracy power measurement instrument) is used as the reference instrument in the experiment. Full article
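The core lock-in operation the abstract describes — mixing the input with quadrature references at the target frequency, then lowpass filtering to recover the component's amplitude and phase — can be sketched as follows. Signal parameters are illustrative, and plain averaging over whole cycles stands in for the paper's combined lowpass/notch filter:

```python
import numpy as np

fs = 10_000.0                      # sampling rate [Hz], illustrative
f0 = 50.0                          # target grid frequency [Hz]
t = np.arange(0, 0.2, 1 / fs)      # exactly 10 full cycles of 50 Hz

# Test signal: 50 Hz fundamental plus a 5th harmonic and measurement noise.
sig = (2.0 * np.sin(2 * np.pi * f0 * t + 0.3)
       + 0.4 * np.sin(2 * np.pi * 5 * f0 * t)
       + 0.05 * np.random.default_rng(1).normal(size=t.size))

# Lock-in: mix with quadrature references, then average over whole cycles
# (the averaging plays the role of the lowpass/notch stage and rejects
# the harmonic, which is orthogonal over complete periods).
i_comp = 2 * np.mean(sig * np.sin(2 * np.pi * f0 * t))
q_comp = 2 * np.mean(sig * np.cos(2 * np.pi * f0 * t))

amplitude = np.hypot(i_comp, q_comp)   # recovered fundamental amplitude (~2.0)
phase = np.arctan2(q_comp, i_comp)     # recovered phase [rad] (~0.3)
```

In the paper's method, an FLL additionally locks the reference frequency onto the drifting grid frequency, so this demodulation stays accurate under frequency offsets.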
Show Figures

Figure 1
<p>Error curves for digital input electricity meters under the condition of ±0.5 Hz frequency offset error in real power.</p>
Full article ">Figure 2
<p>Block diagram of the proposed component extraction and power measurement method with the Digital Lock-in Amplifier Frequency-Locked Loop (DLIA–FLL).</p>
Full article ">Figure 3
<p>Block diagram for Digital Lock-in Amplifier (DLIA) with proposed combined filter (Lowpass Filter + Notch Filter).</p>
Full article ">Figure 4
<p>Bode plot for the combined filter consisting of notch filter and lowpass filter.</p>
Full article ">Figure 5
<p>Block diagram for proposed reference signal synchronization method with DLIA–FLL.</p>
Full article ">Figure 6
<p>Experimental setup used for power measurement experiment.</p>
Full article ">Figure 7
<p>Flowchart for the proposed power measurement algorithm implemented in MCU.</p>
Full article ">Figure 8
<p>Frequency-tracking performance achieved by the proposed system with DLIA–FLL.</p>
Full article ">Figure 9
<p>Three-phase voltage (<b>a</b>) and current (<b>b</b>) signals generated by the Chroma supply.</p>
Full article ">
19 pages, 470 KiB  
Article
Entrepreneurship and Corporate ESG Performance—A Case Study of China’s A-Share Listed Companies
by Hanjin Xie, Zilong Qin and Jun Li
Sustainability 2024, 16(18), 7964; https://doi.org/10.3390/su16187964 - 12 Sep 2024
Viewed by 456
Abstract
This paper examines the contemporary implications of entrepreneurship and utilizes panel data from Chinese A-share listed companies spanning 2011 to 2022. Based on the five aspects of Chinese entrepreneurship, namely “patriotism, courage to innovate, integrity and law-abiding, social responsibility, and international vision”, the findings suggest that fostering entrepreneurship enhances the environmental, social, and governance (ESG) performance of firms. Mechanism analysis indicates that green technology innovation, social performance enhancement, and governance capability optimization mediate this relationship. Furthermore, factors such as corporate market power, regional marketization processes, and advancements in artificial intelligence technology influence the link between entrepreneurship and ESG performance. Robust entrepreneurship equips firms to navigate environmental uncertainties, but entrepreneurship cannot improve corporate governance performance. This article elucidates the distinctive significance of entrepreneurship, expanding the institutional economics research perspective, offering practical insights for cultivating entrepreneurship and elucidating potential determinants of corporate ESG performance. This article also provides spiritual guidance for sustainable development. Full article
(This article belongs to the Special Issue Research on Entrepreneurship and Sustainable Economic Development)
16 pages, 328 KiB  
Article
Economic Policy Uncertainty, Managerial Ability, and Cost of Equity Capital: Evidence from a Developing Country
by Arafat Hamdy, Aref M. Eissa and Ahmed Diab
Economies 2024, 12(9), 244; https://doi.org/10.3390/economies12090244 - 11 Sep 2024
Viewed by 336
Abstract
This study investigates the relationship between economic policy uncertainty (EPU) and the cost of equity capital (CoEC). It also reveals the moderating role of managerial ability (MA) in the relationship between EPU and CoEC in Saudi Arabia. The study sample consists of listed non-financial firms in Tadawul from 2008 to 2019. We analyzed data using STATA, applying Pearson correlation analysis, two independent sample t-tests, OLS regression, and OLS with robust standard errors clustered by firm. Our study shows a positive effect of EPU on the CoEC. In addition, the results confirm that MA mitigates the positive effect of EPU on the CoEC. This is the first study to investigate the relationship between EPU and CoEC in Saudi Arabia, one of the largest emerging economies in the Middle East and Gulf countries. Our findings motivate decision-makers to strengthen their MA and establish a safe and stable investment environment to ensure better financing and investment decisions during uncertain times. Lending agencies, investors, and other stakeholders should consider the MA of corporations when making investment decisions. Full article
(This article belongs to the Special Issue Financial Market Volatility under Uncertainty)
Back to TopTop