Algorithms, Volume 15, Issue 6 (June 2022) – 44 articles

Cover Story (view full-size image): Safety and reliability are priorities for maximum profitability, ethical compliance, minimized downtime, and optimal utilization of equipment. Despite the utility of automatic regrinding equipment for micro drill bits, constant monitoring of the grinder extends the drill bit’s life. Interestingly, vibration monitoring offers a reliable solution for identifying the different stages of degradation. As part of a pre-processing algorithm, the spectral isolation technique ensures that only the most critical spectral segments of the inputs are retained, improving deep-learning-based diagnostic accuracy at a reduced computational cost. While open issues exist, the possibilities for further improvement present ripe opportunities for continued research in the domain. View this paper
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
24 pages, 577 KiB  
Article
Scale-Free Random SAT Instances
by Carlos Ansótegui , Maria Luisa Bonet and Jordi Levy
Algorithms 2022, 15(6), 219; https://doi.org/10.3390/a15060219 - 20 Jun 2022
Cited by 2 | Viewed by 2799
Abstract
We focus on the random generation of SAT instances that have properties similar to real-world instances. It is known that many industrial instances, even with a great number of variables, can be solved by a clever solver in a reasonable amount of time. This is not possible, in general, with classical randomly generated instances. We provide a different generation model of SAT instances, called scale-free random SAT instances. This is based on the use of a non-uniform probability distribution P(i) ∝ i^(−β) to select variable i, where β is a parameter of the model. This results in formulas where the number of occurrences k of variables follows a power-law distribution P(k) ∝ k^(−δ), where δ = 1 + 1/β. This property has been observed in most real-world SAT instances. For β = 0, our model extends classical random SAT instances. We prove the existence of a SAT–UNSAT phase transition phenomenon for scale-free random 2-SAT instances with β < 1/2 when the clause/variable ratio is m/n = (1 − 2β)/(1 − β)². We also prove that scale-free random k-SAT instances are unsatisfiable with high probability when the number of clauses is ω(n^((1−β)k)). The proof of this result suggests that, when β > 1 − 1/k, the unsatisfiability of most formulas may be due to small cores of clauses. Finally, we show how this model allows us to generate random instances similar to industrial instances, of interest for testing purposes. Full article
(This article belongs to the Special Issue Algorithms in Complex Networks)
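The generation model described in the abstract is straightforward to prototype: draw the k variables of each clause with probability proportional to i^(−β) and negate each literal uniformly at random. A minimal sketch (function and parameter names are illustrative, not taken from the paper):

```python
import random

def scale_free_ksat(n, m, k, beta, seed=0):
    """Generate a random k-SAT formula where variable i (1..n) is drawn
    with probability proportional to i**(-beta), as in the scale-free
    model; beta = 0 recovers the classical uniform model."""
    rng = random.Random(seed)
    weights = [i ** (-beta) for i in range(1, n + 1)]
    formula = []
    for _ in range(m):
        # draw k distinct variables under the power-law distribution
        clause_vars = set()
        while len(clause_vars) < k:
            clause_vars.update(rng.choices(range(1, n + 1), weights=weights,
                                           k=k - len(clause_vars)))
        # negate each literal with probability 1/2
        formula.append([v if rng.random() < 0.5 else -v
                        for v in sorted(clause_vars)])
    return formula

clauses = scale_free_ksat(n=100, m=250, k=3, beta=0.8)
print(len(clauses), len(clauses[0]))  # → 250 3
```

Low-index variables are drawn far more often than high-index ones, which is what produces the power-law occurrence counts the abstract describes.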
Show Figures

Figure 1: Estimated industrial function φ^ind(x) (in red) and power-law function φ(x; 0.82) = (1 − 0.82) x^(−0.82) (in blue), with normal axes (left) and double-logarithmic axes (right).
Figure 2: Comparison of the frequencies of variable occurrences obtained for the whole set of instances used in the SAT Race 2008, and for a scale-free random 3-SAT formula generated with β = 0.82, n = 10^7 and m = 2.5 × 10^7. In both cases, the x-axis represents the number of occurrences, and the y-axis the number of variables with this number of occurrences. Both axes are logarithmic. It also shows the line with slope α = 1/0.82 + 1 = 2.22, corresponding to the function f(x) = C x^(−2.22) in double-logarithmic axes.
Figure 3: Fraction of satisfiable formulas as a function of parameter β and clause/variable ratio m/n. The number of variables is n = 10^5 and the fraction is approximated by repeating the experiment for 10 formulas at every point. We also draw the theoretical threshold m/n = (1 − 2β)/(1 − β)².
Figure 4: Fraction of satisfiable formulas as a function of m/α, where α = n^(2(1−β)) / ((1 − β)² ζ(2β)) for β = 0.7, and distinct values of n between 2^10 and 2^17. Every point is computed by repeating the experiment for 100 formulas and checking how many of them are satisfiable.
Figure 5: Estimation of the number of clauses needed to make 50% of the generated formulas unsatisfiable, for distinct values of β and k = 3, as a function of the number of variables.
34 pages, 2044 KiB  
Review
A Review of an Artificial Intelligence Framework for Identifying the Most Effective Palm Oil Prediction
by Fatini Nadhirah Mohd Nain, Nurul Hashimah Ahamed Hassain Malim, Rosni Abdullah, Muhamad Farid Abdul Rahim, Mohd Azinuddin Ahmad Mokhtar and Nurul Syafika Mohamad Fauzi
Algorithms 2022, 15(6), 218; https://doi.org/10.3390/a15060218 - 20 Jun 2022
Cited by 6 | Viewed by 4617
Abstract
Machine Learning (ML) offers new precision technologies with intelligent algorithms and robust computation. This technology benefits various agricultural industries, such as the palm oil sector, one of the most sustainable industries worldwide. Hence, an in-depth analysis was conducted, derived from previous research on ML utilisation in the palm oil industry. The study provided a brief overview of widely used features and prediction algorithms and critically analysed the current state of ML-based palm oil prediction. This analysis is extended to the ML application in the palm oil industry and a comparison of related studies. The analysis was predicated on thoroughly examining the advantages and disadvantages of ML-based palm oil prediction and the proper identification of current and future agricultural industry challenges. Potential solutions for palm oil prediction were added to this list. Artificial intelligence and machine vision were used to develop intelligent systems, revolutionising the palm oil industry. Overall, this article provides a framework for future research in the palm oil agricultural industry by highlighting the importance of ML. Full article
Show Figures

Figure 1: The PRISMA flow diagram.
Figure 2: Types of articles.
Figure 3: Annual publication counts of the reviewed articles.
Figure 4: The trend of using categorical factors between 2015 and 2021.
Figure 5: Factor categories.
Figure 6: Widely used palm oil prediction algorithms.
Figure 7: Palm oil prediction identification flowchart.
21 pages, 2055 KiB  
Article
An Online Algorithm for Routing an Unmanned Aerial Vehicle for Road Network Exploration Operations after Disasters under Different Refueling Strategies
by Lorena Reyes-Rubiano, Jana Voegl and Patrick Hirsch
Algorithms 2022, 15(6), 217; https://doi.org/10.3390/a15060217 - 20 Jun 2022
Cited by 6 | Viewed by 2494
Abstract
This paper is dedicated to studying on-line routing decisions for exploring a disrupted road network in the context of humanitarian logistics using an unmanned aerial vehicle (UAV) with flying range limitations. The exploration aims to extract accurate information for assessing damage to infrastructure and road accessibility of victim locations in the aftermath of a disaster. We propose an algorithm to conduct routing decisions involving the aerial and road network simultaneously, assuming that no information about the state of the road network is available in the beginning. Our solution approach uses different strategies to deal with the detected disruptions and refueling decisions during the exploration process. The strategies differ mainly regarding where and when the UAV is refueled. We analyze the interplay of the type and level of disruption of the network with the number of possible refueling stations and the refueling strategy chosen. The aim is to find the best combination of the number of refueling stations and refueling strategy for different settings of the network type and disruption level. Full article
(This article belongs to the Special Issue Advanced Graph Algorithms)
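The interplay of exploration and refueling described above can be illustrated with a toy greedy policy: fly to the nearest unvisited target unless the remaining range could not cover that leg plus a safe retreat to a refueling station, in which case refuel first. This is a hypothetical simplification, not the authors' algorithm; all names and values are illustrative:

```python
import math

def dist(a, b):
    """Euclidean distance between two 2-D points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def explore(targets, depot, stations, rng_limit):
    """Greedy online routing sketch with a range-feasibility check:
    before each leg, keep enough range for a retreat to some station."""
    pos, remaining, route = depot, rng_limit, [depot]
    todo = list(targets)
    while todo:
        nxt = min(todo, key=lambda t: dist(pos, t))
        retreat = min(dist(nxt, s) for s in stations)   # safe escape leg
        if dist(pos, nxt) + retreat > remaining:
            station = min(stations, key=lambda s: dist(pos, s))
            if dist(station, nxt) + retreat > rng_limit:
                raise ValueError("target unreachable even with refueling")
            pos, remaining = station, rng_limit          # detour to refuel
            route.append(station)
            continue
        remaining -= dist(pos, nxt)
        pos = nxt
        todo.remove(nxt)
        route.append(nxt)
    return route

route = explore(targets=[(3.0, 0.0), (0.0, 4.0)], depot=(0.0, 0.0),
                stations=[(0.0, 0.0)], rng_limit=10.0)
print(route)  # → [(0.0, 0.0), (3.0, 0.0), (0.0, 0.0), (0.0, 4.0)]
```

Adding more stations shortens the forced detours, which is the trade-off the paper analyzes against disruption levels.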
Show Figures

Figure 1: Known road network.
Figure 2: Disrupted network.
Figure 3: Aerial network.
Figure 4: Example of the relationship between networks involved in the on-line UAV routing problem.
Figure 5: Example of the flying range limitation with only the DMC working as a refueling station.
Figure 6: Example of the flying range limitation with the DMC and victim location 1 working as refueling stations.
Figure 7: Example of the dynamic buffer.
Figure 8: Global performance of refueling strategies.
Figure 9: Performance of refueling strategies: normalized travel time of the exploration route for instance cluster A. The standardized travel time of 100% is equivalent to 21.56 h.
Figure 10: Performance of refueling strategies: normalized travel time of the exploration route for instance cluster B. The standardized travel time of 100% is equivalent to 57.52 h.
Figure 11: Performance of refueling strategies: normalized travel time of the exploration route for instance cluster C. The standardized travel time of 100% is equivalent to 38.23 h.
Figure 12: Cluster A: impact of the number of refueling stations on the performance of refueling strategies.
Figure 13: Cluster B: impact of the number of refueling stations on the performance of refueling strategies.
Figure 14: Cluster C: impact of the number of refueling stations on the performance of refueling strategies.
12 pages, 546 KiB  
Article
Pulsed Electromagnetic Field Transmission through a Small Rectangular Aperture: A Solution Based on the Cagniard–DeHoop Method of Moments
by Martin Štumpf
Algorithms 2022, 15(6), 216; https://doi.org/10.3390/a15060216 - 20 Jun 2022
Cited by 1 | Viewed by 2410
Abstract
Pulsed electromagnetic (EM) field transmission through a relatively small rectangular aperture is analyzed with the aid of the Cagniard–deHoop method of moments (CdH-MoM). The classic EM scattering problem is formulated using the EM reciprocity theorem of the time-convolution type. The resulting TD reciprocity relation is then, under the assumption of piecewise-linear, space–time magnetic-current distribution over the aperture, cast analytically into the form of discrete time-convolution equations. The latter equations are subsequently solved via a stable marching-on-in-time scheme. Illustrative examples are presented and validated using a 3D numerical EM tool. Full article
(This article belongs to the Special Issue Computational Methods and Optimization for Numerical Analysis)
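The marching-on-in-time scheme the abstract mentions solves discrete time-convolution equations step by step: each new sample follows from already-computed history. A scalar sketch of the generic recursion (the paper's actual scheme is the matrix analogue over the aperture's space-time expansion coefficients):

```python
def march_on_in_time(A, b):
    """Solve the discrete time-convolution system
        sum_{j=0..n} A[j] * x[n-j] = b[n],  n = 0..N-1,
    for scalar coefficients by time stepping: x[n] follows from the
    excitation b[n] minus the convolution of A with past samples.
    Coefficients A[j] beyond len(A)-1 are taken as zero."""
    x = []
    for n in range(len(b)):
        history = sum(A[j] * x[n - j] for j in range(1, min(n, len(A) - 1) + 1))
        x.append((b[n] - history) / A[0])
    return x

# impulse excitation: the response must convolve with A back to b
A = [2.0, 0.5, -0.25]
b = [1.0, 0.0, 0.0, 0.0]
x = march_on_in_time(A, b)
```

Stability of such schemes depends on the discretization of A; the paper's contribution is a formulation for which this stepping is stable.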
Show Figures

Figure 1: Rectangular aperture in a PEC plane.
Figure 2: Incident plane-wave pulse shape.
Figure 3: Electric-field pulse shapes as induced at the center of the aperture with relative dimensions 2Δ_x/(c_0 t_w) = 1/10 and 2Δ_y/(c_0 t_w) = 1/20: (a) E_x-field component; (b) E_y-field component.
Figure 4: Pulse shape of the E_y-field as induced at the center of the narrow aperture with relative dimensions 2Δ_x/(c_0 t_w) = 1/5 and 2Δ_y/(c_0 t_w) = 1/50.
Figure 5: Electric far-field amplitude behind the aperture.
21 pages, 3074 KiB  
Article
Domain Generalization Model of Deep Convolutional Networks Based on SAND-Mask
by Jigang Wang, Liang Chen and Rui Wang
Algorithms 2022, 15(6), 215; https://doi.org/10.3390/a15060215 - 18 Jun 2022
Viewed by 2063
Abstract
In the actual operation of a machine, because operating conditions are numerous and wide-ranging, data cannot be obtained for many of them. Moreover, differences in data distribution between operating conditions reduce the performance of fault diagnosis. Most current studies address only the generalization caused by a change of working conditions along a single dimension. In scenarios where several factors such as speed, load, and temperature change the working conditions together, problems arise such as a combinatorial explosion of working conditions and complex data distributions. Compared with previous research, this setting is more difficult to generalize to. To cope with this problem, this paper improves the generalization method SAND-Mask (Smoothed-AND (SAND)-masking) by using the total gradient variance of the samples in a batch, instead of the gradient variance of each sample, to calculate the parameter σ. The SAND-Mask method is extended to the fault diagnosis domain, and the DCNG (Deep Convolutional Network Generalization) model is proposed. Finally, multi-angle experiments were conducted on three publicly available bearing datasets, and diagnostic performances of more than 90%, 99%, and 70% were achieved on all transfer tasks. The results show that the DCNG model has better stability as well as diagnostic performance compared to other generalization methods. Full article
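The modification described above swaps a per-sample statistic for a batch-level one. A minimal sketch of that pooled statistic, assuming per-sample gradients are given as plain lists of floats (SAND-Mask's full parameter-masking rule is not reproduced here):

```python
import math

def batch_sigma(per_sample_grads):
    """Pool all per-sample gradient components in the batch and return
    the standard deviation of the pooled set -- the batch-level quantity
    used in place of each sample's own gradient variance (illustrative
    rendering of the modification, not SAND-Mask's exact formula)."""
    pooled = [g for sample in per_sample_grads for g in sample]
    mean = sum(pooled) / len(pooled)
    var = sum((g - mean) ** 2 for g in pooled) / len(pooled)
    return math.sqrt(var)

# two samples, two parameters each
sigma = batch_sigma([[0.1, -0.2], [0.3, 0.0]])
```

The pooled statistic is computed once per batch rather than once per sample, which is also cheaper.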
Show Figures

Figure 1: DCNG network.
Figure 2: SAND-Mask calculates σ in a batch.
Figure 3: The gradient variance of all samples in the batch replaces the gradient variance of each sample.
Figure 4: Generalization performance of DCNG on CWRU data sets.
Figure 5: Performance comparison of different generalization methods on CWRU data sets.
Figure 6: Performance comparison of different generalization methods on MFPT data sets.
Figure 7: Generalization performance of DCNG on KAT data sets.
Figure 8: Performance comparison of different generalization methods on KAT data sets.
Figure 9: The influence of different parameter settings on the DCNG model in the CWRU data set.
Figure 10: The influence of different parameter settings on the DCNG model in the MFPT data set.
Figure 11: The influence of different parameter settings on the DCNG model in the KAT data set.
Figure 12: DCNG and ungeneralized model performance on 24 tasks in the CWRU dataset.
Figure 13: DCNG and ungeneralized model performance on 15 tasks in the MFPT dataset.
Figure 14: DCNG and ungeneralized model performance on 24 tasks in the KAT dataset.
13 pages, 1958 KiB  
Article
Pendulum Search Algorithm: An Optimization Algorithm Based on Simple Harmonic Motion and Its Application for a Vaccine Distribution Problem
by Nor Azlina Ab. Aziz and Kamarulzaman Ab. Aziz
Algorithms 2022, 15(6), 214; https://doi.org/10.3390/a15060214 - 17 Jun 2022
Cited by 7 | Viewed by 3132
Abstract
The harmonic motion of a pendulum swinging about a pivot point is mimicked in this work. The amplitudes of the harmonic motion on both sides of the pivot are equal, damped, and decrease with time. This behavior is mimicked by the agents of the pendulum search algorithm (PSA) to move and look for an optimization solution within a search area. The high amplitude at the beginning encourages exploration and expands the search area, while the small amplitude towards the end encourages fine-tuning and exploitation. PSA is applied to a vaccine distribution problem. The extended SEIR model of Hong Kong's 2009 H1N1 influenza epidemic is adopted here. The results show that PSA is able to generate a good solution that minimizes the total infection better than several other methods. PSA is also tested using 13 multimodal functions from the CEC2014 benchmark functions. To optimize multimodal functions, an algorithm must be able to avoid premature convergence and escape from local optima traps; hence, these functions are chosen to validate the algorithm as a robust metaheuristic optimizer. PSA is found to provide low error values. PSA is then benchmarked against the state-of-the-art particle swarm optimization (PSO) and sine cosine algorithm (SCA). PSA is better than PSO and SCA on a greater number of test functions; these positive results show the potential of PSA. Full article
(This article belongs to the Special Issue Metaheuristic Algorithms and Applications)
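The damped-oscillation behavior the abstract describes can be sketched as agents stepping toward the best-known solution with an amplitude that decays over iterations. A plausible rendering of the idea only; the update formula below is illustrative, not a transcription of the authors' exact equations:

```python
import math, random

def psa_minimize(f, dim, n_agents=20, iters=200, lo=-5.0, hi=5.0, seed=1):
    """Pendulum-style metaheuristic sketch: each agent takes a step
    toward the best-known position scaled by a damped oscillating factor
        pend = 2 * exp(-t / iters) * cos(2 * pi * rand),
    so early swings are wide (exploration) and late ones small
    (exploitation)."""
    rng = random.Random(seed)
    agents = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_agents)]
    best = min(agents, key=f)[:]
    for t in range(1, iters + 1):
        for x in agents:
            for d in range(dim):
                pend = 2 * math.exp(-t / iters) * math.cos(2 * math.pi * rng.random())
                x[d] = min(hi, max(lo, x[d] + pend * (best[d] - x[d])))
        cand = min(agents, key=f)
        if f(cand) < f(best):
            best = cand[:]
    return best, f(best)

# minimize the 3-D sphere function
best, val = psa_minimize(lambda v: sum(z * z for z in v), dim=3)
```

Because `pend` can be negative, agents overshoot past the best point, mimicking the swing to the other side of the pivot.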
Show Figures

Figure 1: Pendulum harmonic motion.
Figure 2: Magnitude of oscillation pend_i^d(t).
Figure 3: Fitness error of the benchmark functions with different numbers of agents.
Figure 4: Convergence curves.
Figure 5: Infection dynamics for 5% vaccination coverage.
Figure 6: Infection dynamics for 10% vaccination coverage.
Figure 7: Infection dynamics for 20% vaccination coverage.
16 pages, 1801 KiB  
Article
A New Subject-Sensitive Hashing Algorithm Based on MultiRes-RCF for Blockchains of HRRS Images
by Kaimeng Ding, Shiping Chen, Jiming Yu, Yanan Liu and Jie Zhu
Algorithms 2022, 15(6), 213; https://doi.org/10.3390/a15060213 - 17 Jun 2022
Cited by 2 | Viewed by 2639
Abstract
To address the deficiency that blockchain technology is too sensitive to binary-level changes in high resolution remote sensing (HRRS) images, we propose a new subject-sensitive hashing algorithm specifically for HRRS image blockchains. To implement this subject-sensitive hashing algorithm, we designed and implemented a deep neural network model, MultiRes-RCF (richer convolutional features), for extracting features from HRRS images. A MultiRes-RCF network is an improved RCF network that borrows the MultiRes mechanism of MultiResU-Net. The subject-sensitive hashing algorithm based on MultiRes-RCF can detect subtle tampering of HRRS images while remaining robust to operations that do not change the content of the HRRS images. Experimental results show that our MultiRes-RCF-based subject-sensitive hashing algorithm has better tamper sensitivity than existing deep learning models such as RCF, AAU-net, and Attention U-net, meeting the needs of HRRS image blockchains. Full article
(This article belongs to the Special Issue Advances in Blockchain Architecture and Consensus)
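The robustness/sensitivity trade-off described above can be illustrated with a generic binary perceptual hash compared by Hamming distance: small feature drift (e.g. a format conversion) passes verification, while subject-level tampering fails. The feature extractor itself (the paper's MultiRes-RCF network) is outside this sketch; all values and thresholds are illustrative:

```python
def to_bits(features, threshold=0.0):
    """Binarize a real-valued feature vector into a bit hash."""
    return [1 if v > threshold else 0 for v in features]

def hamming(h1, h2):
    """Number of differing bits between two equal-length hashes."""
    return sum(a != b for a, b in zip(h1, h2))

def authenticate(h_registered, h_query, max_dist=1):
    """Verification passes if the query hash is within a small Hamming
    distance of the registered one: content-preserving operations cause
    little feature drift, tampering causes large drift."""
    return hamming(h_registered, h_query) <= max_dist

registered = to_bits([0.9, -0.2, 0.7, 0.1, -0.5])     # hash stored on-chain
converted  = to_bits([0.85, -0.15, 0.6, 0.12, -0.4])  # e.g. TIFF -> PNG drift
tampered   = to_bits([-0.9, 0.2, -0.7, -0.1, 0.5])    # subject-level edit
print(authenticate(registered, converted), authenticate(registered, tampered))
# → True False
```

A conventional cryptographic hash would fail both comparisons, which is exactly the over-sensitivity the paper sets out to fix.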
Show Figures

Figure 1: The structure of the improved blockchain for HRRS images.
Figure 2: The structure of MultiRes blocks.
Figure 3: The structure of our proposed MultiRes-RCF networks.
Figure 4: The structure of the proposed MultiRes-RCF networks (without encryption).
Figure 5: Loss and accuracy of MultiRes-RCF during the training process.
Figure 6: Examples of authentication for an HRRS image: (a) original TIFF format; (b) PNG format (converted from TIFF); (c) watermark embedding (32 bits embedded in a single band); (d) tampering example 1; (e) tampering example 2.
19 pages, 2519 KiB  
Article
A Cost-Efficient MCSA-Based Fault Diagnostic Framework for SCIM at Low-Load Conditions
by Chibuzo Nwabufo Okwuosa, Ugochukwu Ejike Akpudo and Jang-Wook Hur
Algorithms 2022, 15(6), 212; https://doi.org/10.3390/a15060212 - 16 Jun 2022
Cited by 11 | Viewed by 2998
Abstract
In industry, electric motors such as the squirrel cage induction motor (SCIM) generate motive power and are particularly popular due to their low acquisition cost, strength, and robustness. Along with these benefits, they have minimal maintenance costs and can run for extended periods before requiring repair and/or maintenance. Early fault detection in SCIMs, especially at low-load conditions, further helps minimize maintenance costs and mitigate abrupt equipment failure when loading is increased. Recent research on these devices is focused on fault/failure diagnostics with the aim of reducing downtime, minimizing costs, and increasing utility and productivity. Data-driven predictive maintenance offers a reliable avenue for intelligent monitoring whereby signals generated by the equipment are harnessed for fault detection and isolation (FDI). Particularly, motor current signature analysis (MCSA) provides a reliable avenue for extracting and/or exploiting discriminant information from signals for FDI and/or fault diagnosis. This study presents a fault diagnostic framework that exploits underlying spectral characteristics following MCSA and intelligent classification for fault diagnosis based on extracted spectral features. Results show that the extracted features reflect induction motor fault conditions with significant diagnostic performance (minimal false alarm rate) from intelligent models, out of which the random forest (RF) classifier was the most accurate, with an accuracy of 79.25%. Further assessment of the models showed that RF had the highest computational cost of 3.66 s, while NBC had the lowest at 0.003 s. Other significant empirical assessments were conducted, and the results support the validity of the proposed FDI technique. Full article
(This article belongs to the Special Issue Artificial Intelligence for Fault Detection and Diagnosis)
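The spectral side of MCSA can be illustrated by summarizing a current signal's spectrum as energies in a few frequency bands, from which a classifier then learns fault signatures. A self-contained sketch using a naive DFT; the band edges and test signal are illustrative, not the paper's actual spectral segments:

```python
import cmath, math

def spectrum(signal):
    """Naive DFT magnitude spectrum, fine for short illustrative signals."""
    N = len(signal)
    return [abs(sum(signal[n] * cmath.exp(-2j * math.pi * k * n / N)
                    for n in range(N))) / N
            for k in range(N // 2)]

def band_features(signal, fs, bands):
    """MCSA-style feature sketch: energy of the current spectrum inside
    each (lo, hi) frequency band in Hz, e.g. around the supply frequency
    where fault sidebands appear."""
    mags = spectrum(signal)
    df = fs / len(signal)              # frequency resolution per bin
    return [sum(m * m for i, m in enumerate(mags) if lo <= i * df < hi)
            for lo, hi in bands]

fs = 200                               # sampling rate in Hz
sig = [math.sin(2 * math.pi * 50 * n / fs) for n in range(200)]  # 50 Hz "supply"
f_lo, f_mid = band_features(sig, fs, bands=[(0, 25), (40, 60)])
```

Here all the energy lands in the 40–60 Hz band; feature vectors like this would feed the classifiers (RF, kNN, etc.) compared in the paper.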
Show Figures

Figure 1: Proposed diagnostic model.
Figure 2: A pictorial view of the experimental setup.
Figure 3: Fault conditions to replicate for the failure modes: (a) rotor misalignment, (b) broken rotor bar, and (c) inter-turn short circuit winding.
Figure 4: Current signals collected from the induction motors: (a) ARM-1, (b) BRB-2, (c) ISC-3, and (d) NOM-4.
Figure 5: Phase current signal spectra for different fault conditions: (a) FFT, (b) PSD, and (c) ACF.
Figure 6: Feature evaluation results: (a) correlation heatmap between features, (b) probability density plot of the features, and (c) LLE-assisted discriminative property assessment.
Figure 7: Global performance evaluation of the classifiers on the test data.
Figure 8: Confusion matrix on test data: (a) LR, (b) ABC, (c) NBC, (d) kNN, (e) GPC, (f) RF, (g) GBC, (h) DT, (i) MLP, (j) Linear SVM, (k) Gaussian SVM, and (l) QDA.
16 pages, 345 KiB  
Article
Optimizing Cybersecurity Investments over Time
by Alessandro Mazzoccoli and Maurizio Naldi
Algorithms 2022, 15(6), 211; https://doi.org/10.3390/a15060211 - 16 Jun 2022
Cited by 4 | Viewed by 2887
Abstract
In the context of growing vulnerabilities, cyber-risk management cannot rely on a one-off approach, instead calling for a continuous re-assessment of the risk and adaptation of risk management strategies. Under the mixed investment–insurance approach, where both risk mitigation and risk transfer are employed, the adaptation implies the re-computation of the optimal amount to invest in security over time. In this paper, we deal with the problem of computing the optimal balance between investment and insurance payments to achieve the minimum overall security expense when the vulnerability grows over time according to a logistic function, adopting a greedy approach, where strategy adaptation is carried out periodically at each investment epoch. We consider three liability degrees, from full liability to partial liability with deductibles. We find that insurance represents by far the dominant component in the mix and may be relied on as a single protection tool when the vulnerability is very low. Full article
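The ingredients named in the abstract (a Gordon–Loeb-style breach-probability function, logistic vulnerability growth, and a premium on the transferred risk) can be combined into a toy per-epoch optimization. A hedged sketch: the GL1 form is the standard Gordon–Loeb class-one function, but the premium loading, loss value, and search grid are illustrative assumptions, not the paper's setup:

```python
import math

def vulnerability(t, V=0.95, k=2.68):
    """Logistic vulnerability growth over time (parameter values echo
    the Figure 1 caption; the functional form is a standard logistic)."""
    return V / (1 + math.exp(-k * t))

def breach_prob(v, z, alpha=2.7e-5, beta=1.1):
    """Gordon-Loeb class-one (GL1) residual breach probability after
    investing z in security: S(z, v) = v / (alpha*z + 1)**beta."""
    return v / (alpha * z + 1) ** beta

def total_expense(z, v, loss, q, load=1.2):
    """Security investment plus a loaded premium on the residual risk,
    assuming full liability is transferred to the insurer."""
    return z + load * q * breach_prob(v, z) * loss

def best_investment(v, loss, q, grid=range(0, 200001, 1000)):
    """Greedy per-epoch choice: minimize the current total expense."""
    return min(grid, key=lambda z: total_expense(z, v, loss, q))

z_star = best_investment(v=vulnerability(1.0), loss=1_000_000, q=0.7)
```

Re-running `best_investment` at each epoch as `vulnerability(t)` grows reproduces the greedy adaptation strategy the abstract describes; when the vulnerability is very low, the minimizer sits at z = 0 and insurance alone is cheapest.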
Figure 1: Impact of investments on vulnerability (GL1 model with α = 2.7·10^−5, β = 1.1) (logistic growth model with V = 0.95, k = 2.68) (attack probability q = 0.7).
Figure 2: Optimal cumulative investment under full liability and GL1 model.
Figure 3: Optimal cumulative expense E under full liability and GL1 model.
Figure 4: Optimal cumulative investment under partial liability and GL1 model.
Figure 5: Optimal cumulative expense E under partial liability and GL1 model.
Figure 6: Optimal cumulative investment under partial liability with deductibles and GL1 model.
Figure 7: Optimal cumulative expense E under partial liability with deductibles and GL1 model.
Figure 8: Optimal cumulative investment under full liability and GL2 model.
Figure 9: Optimal cumulative expense E under full liability and GL2 model.
Figure 10: Optimal cumulative investment under partial liability and GL2 model.
Figure 11: Optimal cumulative expense E under partial liability and GL2 model.
Figure 12: Optimal cumulative investment under partial liability with deductibles and GL2 model.
Figure 13: Optimal cumulative expense E under partial liability with deductibles and GL2 model.
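As a rough illustration of the greedy adaptation described in the abstract above, the sketch below re-optimizes the investment/insurance mix at a single epoch. The logistic vulnerability parameters (V, k), the Gordon–Loeb-style breach function with (α, β), the loss size, and the premium loading are all hypothetical stand-ins, not the paper's exact GL1/GL2 models.

```python
import math

# Illustrative sketch (not the paper's exact GL1/GL2 models): at one
# investment epoch, pick the security investment z that minimizes
# total expense = investment + insurance premium, where the premium is
# taken proportional to the residual expected loss after mitigation.

def vulnerability(t, V=0.95, k=2.68):
    """Logistic growth of the vulnerability over time (hypothetical V, k)."""
    return V / (1.0 + math.exp(-k * t))

def residual_vulnerability(z, v, alpha=2.7e-5, beta=1.1):
    """Gordon-Loeb-style breach probability after investing z."""
    return v / (alpha * z + 1.0) ** beta

def total_expense(z, v, loss=1e6, q=0.7, loading=1.2):
    """Investment plus a premium proportional to the residual expected loss."""
    expected_loss = q * residual_vulnerability(z, v) * loss
    return z + loading * expected_loss

def greedy_epoch(t, grid=range(0, 200_001, 1000)):
    """Greedy re-optimization at epoch t: grid search over investment levels."""
    v = vulnerability(t)
    return min(grid, key=lambda z: total_expense(z, v))

if __name__ == "__main__":
    for t in (0.5, 1.0, 2.0):
        z = greedy_epoch(t)
        print(t, z, round(total_expense(z, vulnerability(t)), 2))
```

Repeating `greedy_epoch` at each investment epoch, with the vulnerability re-evaluated from the logistic curve, mirrors the periodic strategy adaptation the paper studies.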
28 pages, 1344 KiB  
Review
Overview of Distributed Machine Learning Techniques for 6G Networks
by Eugenio Muscinelli, Swapnil Sadashiv Shinde and Daniele Tarchi
Algorithms 2022, 15(6), 210; https://doi.org/10.3390/a15060210 - 15 Jun 2022
Cited by 29 | Viewed by 5782
Abstract
The main goal of this paper is to survey the influential research on distributed learning technologies playing a key role in the 6G world. Upcoming 6G technology is expected to create an intelligent, highly scalable, dynamic, and programmable wireless communication network able to serve many heterogeneous wireless devices. Various machine learning (ML) techniques are expected to be deployed over the intelligent 6G wireless network to provide solutions to highly complex networking problems. To do this, various 6G nodes and devices are expected to generate vast amounts of data through external sensors, and data analysis will be needed. With such massive and distributed data, and various innovations in computing hardware, distributed ML techniques are expected to play an important role in 6G. Though they have several advantages over centralized ML techniques, implementing distributed ML algorithms over resource-constrained wireless environments can be challenging. Therefore, it is important to select a proper ML algorithm based upon the characteristics of the wireless environment and the resource requirements of the learning process. In this work, we survey the recently introduced distributed ML techniques with their characteristics and possible benefits by focusing our attention on the most influential papers in the area. We finally give our perspective on the main challenges and advantages for telecommunication networks, along with the main scenarios that could eventuate.
(This article belongs to the Special Issue Algorithms for Communication Networks)
Figure 1: Main applications of distributed learning.
Figure 2: Reinforcement learning framework.
Figure 3: Federated learning framework.
Figure 4: Multi-agent reinforcement learning framework.
Figure 5: Main applications of distributed learning.
Figure 6: Proposed scheme for the joint FL and task-offloading processes optimization. Reprinted/adapted with permission from Ref. [71]. ©2022, IEEE.
Figure 7: The hierarchical FL platform in a Multi-EC VN architecture. Reprinted/adapted with permission from Ref. [78]. 2021, Attribution 4.0 International (CC BY 4.0).
Figure 8: Computation offloading for FL. Reprinted/adapted with permission from Ref. [78]. 2021, Attribution 4.0 International (CC BY 4.0).
Figure 9: Vehicular-communication-based collaborative MARL process.
Figure 10: Distributed FL process.
16 pages, 1314 KiB  
Article
Maximum Entropy Approach to Massive Graph Spectrum Learning with Applications
by Diego Granziol, Binxin Ru, Xiaowen Dong, Stefan Zohren, Michael Osborne and Stephen Roberts
Algorithms 2022, 15(6), 209; https://doi.org/10.3390/a15060209 - 15 Jun 2022
Viewed by 2194
Abstract
We propose an alternative maximum entropy approach to learning the spectra of massive graphs. In contrast to the state-of-the-art Lanczos algorithm for spectral density estimation and applications thereof, our approach does not require kernel smoothing. As the choice of kernel function and associated bandwidth heavily affect the resulting output, our approach mitigates these issues. Furthermore, we prove that kernel smoothing biases the moments of the spectral density. Our approach can be seen as an information-theoretically optimal approach to learning a smooth graph spectral density that fully respects moment information. The proposed method has a computational cost linear in the number of edges, and hence can be applied even to large networks with millions of nodes. We showcase the approach on problems of graph similarity learning and counting the number of clusters in a graph, where the proposed method outperforms existing iterative spectral approaches on both synthetic and real-world graphs.
Figure 1: (a) EGS semicircle fit for different moment number m. (b) KL divergence between semicircle density and EGS.
Figure 2: (a) EGS fit to randomly generated Erdős-Rényi graph (n = 5000, p = 0.001). The number of moments m used increases from 3 to 100 and the number of bins used for the eigenvalue histogram is n_b = 500. (b) EGS fit to randomly generated Barabási-Albert graph (n = 5000). The number of moments used for computing EGSs and the number of bins used for the eigenvalue histogram are m = 30, n_b = 50 (left) and m = 100, n_b = 500 (right).
Figure 3: Symmetric KL heatmap between 9 graphs from the SNAP dataset: (0) bio-human-gene1, (1) bio-human-gene2, (2) bio-mouse-gene, (3) ca-AstroPh, (4) ca-CondMat, (5) ca-GrQc, (6) ca-HepPh, (7) ca-HepTh, (8) roadNet-CA, (9) roadNet-PA, (10) roadNet-TX.
Figure 4: Log error of cluster number detection using EGS and Lanczos methods on large synthetic networks with (a) 201,600 nodes and 305 clusters and (b) 404,420 nodes and 1355 clusters, and on small-scale real-world networks (c) Email network of 1003 nodes and (d) NetScience network of 1589 nodes.
Figure 5: Eigenvalues of the Email dataset with clear spectral gap and λ* ≈ 0.005. The shaded area multiplied by the number of nodes n predicts the number of clusters.
Figure 6: Spectral density of three large-scale real-world networks estimated by EGS and Lanczos. (a) DBLP dataset, (b) Amazon dataset, (c) YouTube dataset.
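The cost-linear-in-edges claim above rests on estimating spectral moments tr(A^k)/n with stochastic trace estimation, each moment costing k sparse matrix-vector products. The sketch below illustrates that moment-estimation step with Hutchinson's estimator on a small dense toy graph (the maximum entropy density fit itself is not shown); the graph, probe count, and normalization are illustrative choices, not the paper's setup.

```python
import numpy as np

# Sketch of the moment-estimation step behind maximum-entropy spectral
# learning: the k-th spectral moment tr(A^k)/n of a (normalized) graph
# matrix is estimated with Hutchinson's stochastic trace estimator, so
# each moment costs k matrix-vector products, i.e. O(k * edges) when A
# is sparse. The maxent step (not shown) then fits a density p(x)
# proportional to exp(sum_k lam_k x^k) whose moments match these values.

rng = np.random.default_rng(0)

def spectral_moments(A, num_moments=4, num_probes=200):
    n = A.shape[0]
    moments = np.zeros(num_moments)
    for _ in range(num_probes):
        z = rng.choice([-1.0, 1.0], size=n)    # Rademacher probe vector
        v = z.copy()
        for k in range(num_moments):
            v = A @ v                          # one matvec per moment order
            moments[k] += z @ v
    return moments / (num_probes * n)          # estimates of tr(A^k)/n

# Toy example: adjacency matrix of a small random graph, rescaled so the
# spectrum lies in [-1, 1].
n = 60
adj = (rng.random((n, n)) < 0.1).astype(float)
adj = np.triu(adj, 1); adj = adj + adj.T
A = adj / max(1.0, np.abs(np.linalg.eigvalsh(adj)).max())

est = spectral_moments(A)
exact = [np.trace(np.linalg.matrix_power(A, k + 1)) / n for k in range(4)]
```

On a sparse representation the same loop never materializes A^k, which is what keeps the method practical for graphs with millions of nodes.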
25 pages, 3969 KiB  
Article
Eyes versus Eyebrows: A Comprehensive Evaluation Using the Multiscale Analysis and Curvature-Based Combination Methods in Partial Face Recognition
by Regina Lionnie, Catur Apriono and Dadang Gunawan
Algorithms 2022, 15(6), 208; https://doi.org/10.3390/a15060208 - 14 Jun 2022
Cited by 6 | Viewed by 2782
Abstract
This work aimed to find the most discriminative facial regions between the eyes and eyebrows for periocular biometric features in a partial face recognition system. We propose multiscale analysis methods combined with curvature-based methods. The goal of this combination was to capture the details of these features at finer scales and offer them in-depth characteristics using curvature. The eye and eyebrow images cropped from four 2D face image datasets were evaluated. The recognition performance was calculated using the nearest neighbor and support vector machine classifiers. Our proposed method successfully produced richer details at finer scales, yielding high recognition performance. The highest accuracy results were 76.04% and 98.61% for the limited dataset and 96.88% and 93.22% for the larger dataset for the eye and eyebrow images, respectively. Moreover, we compared the results between our proposed methods and other works, and we achieved similarly high accuracy results using only eye and eyebrow images.
(This article belongs to the Special Issue Mathematical Models and Their Applications III)
Figure 1: The overall flowchart of the proposed idea. Connectors A, B, and C detail the curvature-based, scale-space, and discrete wavelet transform processes.
Figure 2: Example of face images from datasets and cropped results for eye and eyebrow images: (a) EYB, (b) ABD, (c) PES, and (d) RFFMDS v1.0.
Figure 3: The accuracy results (%) from (a) 1-NN, (b) SVM-1, and (c) SVM-2. The results were evaluated for 12 octave O and level L variables from SS and 5 curvature-based scale spaces (SS, SS + K, SS + H, SS + X, and SS + N).
Figure 4: Combination results for (a) SS + K, (b) SS + H, (c) SS + X, and (d) SS + N with O = 1 and L = 1 on an eye image.
Figure 5: The accuracy results (%) from (a) 1-NN, (b) SVM-1, and (c) SVM-2. The results were evaluated for a total of 16 variables for wavelet coefficients (A, V, Hr, and D) with 2 decomposition levels and curvature-based methods (K, H, X, and N).
Figure 6: Combination results of (a) SS + K, (b) SS + H, (c) SS + X, and (d) SS + N with O = 1 and L = 1 for an eyebrow image.
Figure 7: Sym2 wavelet coefficients on an eyebrow image with (a) the first decomposition level and (b) the second decomposition level.
Figure 8: Sym2 wavelet from the first decomposition level combined with curvature-based method: (a) A-K, (b) A-H, (c) A-X, and (d) A-N.
Figure 9: Accuracy results (%) from EYB, ABD, PES, and RFFMDS v1.0 datasets for eye images and eyebrow images.
Figure 10: Examples of images according to the order of the datasets from controlled to uncontrolled conditions: (a) EYB dataset, (b) RFFMDS v1.0, (c) PES dataset, and (d) ABD.
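The curvature-based maps (Gaussian K and mean H, as in the figure labels above) can be computed by treating a grayscale image as a surface z = I(x, y). The sketch below is our reading of that standard construction, not the authors' code; the synthetic paraboloid check is our own.

```python
import numpy as np

# Sketch (our reading, not the authors' implementation): treat a
# grayscale image as a surface z = I(x, y) and compute the Gaussian (K)
# and mean (H) curvature maps from first and second derivatives, the
# kind of curvature features combined with multiscale analysis above.

def curvature_maps(img):
    Iy, Ix = np.gradient(img.astype(float))      # first derivatives
    Iyy, Iyx = np.gradient(Iy)                   # second derivatives
    Ixy, Ixx = np.gradient(Ix)
    g = 1.0 + Ix**2 + Iy**2                      # metric determinant term
    K = (Ixx * Iyy - Ixy**2) / g**2              # Gaussian curvature
    H = ((1 + Ix**2) * Iyy - 2 * Ix * Iy * Ixy
         + (1 + Iy**2) * Ixx) / (2 * g**1.5)     # mean curvature
    return K, H

# Sanity check on a synthetic paraboloid z = (x^2 + y^2) / 2, whose
# curvatures at the apex are K = 1 and H = 1.
ax = np.arange(-5, 6, dtype=float)
X, Y = np.meshgrid(ax, ax)
K, H = curvature_maps(0.5 * (X**2 + Y**2))
```

In the paper's pipeline these maps would be computed on each level of the scale space (or on wavelet subbands) rather than on the raw image.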
15 pages, 1543 KiB  
Article
Multi-View Graph Fusion for Semi-Supervised Learning: Application to Image-Based Face Beauty Prediction
by Fadi Dornaika and Abdelmalik Moujahid
Algorithms 2022, 15(6), 207; https://doi.org/10.3390/a15060207 - 14 Jun 2022
Cited by 4 | Viewed by 2486
Abstract
Facial Beauty Prediction (FBP) is an important visual recognition problem to evaluate the attractiveness of faces according to human perception. Most existing FBP methods are based on supervised solutions using geometric or deep features. Semi-supervised learning for FBP is an almost unexplored research area. In this work, we propose a graph-based semi-supervised method in which multiple graphs are constructed to find the appropriate graph representation of the face images (with and without scores). The proposed method combines both geometric and deep feature-based graphs to produce a high-level representation of face images instead of using a single face descriptor and also improves the discriminative ability of graph-based score propagation methods. In addition to the data graph, our proposed approach fuses an additional graph adaptively built on the predicted beauty values. Experimental results on the SCUT-FBP5500 facial beauty dataset demonstrate the superiority of the proposed algorithm compared to other state-of-the-art methods.
(This article belongs to the Special Issue Advanced Graph Algorithms)
Figure 1: The general flowchart of the multi-view graph fusion that integrates label graph for semi-supervised learning. It is used for multi-class classification problems. Given the training dataset, we construct different similarity graphs based on different descriptors. Then, from the available labelling data, we construct the label space information graph, which we call a correlation graph, and merge both the label space graph and the data space graph into a new graph that is used in the FME algorithm to predict the beauty value of the unknown face images.
Figure 2: The framework of the proposed method. Different similarity graphs corresponding to the different descriptors are constructed. The correlation graph is constructed in the score space. All graphs are merged into a single graph. The FME method is repeatedly used to process the resulting fused graph and to refine the face beauty prediction for the unseen images in each iteration.
Figure 3: Some face images from the SCUT-FBP5500 dataset.
Figure 4: The 81 facial points detected in a face image.
Figure 5: Three prediction examples using two semi-supervised methods. The upper row shows the ground truth scores.
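The core multi-view mechanism described above (one similarity graph per descriptor, fused into a single graph over which the few labeled scores are propagated) can be sketched in a few lines. This is an illustrative toy using plain label propagation, not the paper's FME solver or its adaptive score-space correlation graph; the six "faces", two "views", and the score values are all made up.

```python
import numpy as np

# Illustrative sketch of multi-view graph fusion for semi-supervised
# score propagation (toy data, not the paper's FME algorithm): build one
# similarity graph per feature view, average them into a fused graph,
# then diffuse the few known beauty scores to the unlabeled faces.

def similarity_graph(X, sigma=2.0):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma**2))
    np.fill_diagonal(W, 0.0)
    return W

def propagate_scores(W, scores, labeled, alpha=0.9, iters=200):
    S = W / W.sum(axis=1, keepdims=True)       # row-normalized transitions
    Y = np.zeros(len(scores)); Y[labeled] = scores[labeled]
    F = Y.copy()
    for _ in range(iters):
        F = alpha * (S @ F) + (1 - alpha) * Y  # diffuse, then re-anchor labels
    return F

# Two views of six faces: two tight clusters, one labeled face per cluster.
view1 = np.array([[0, 0], [0.5, 0], [0, 0.5],
                  [10, 10], [10.5, 10], [10, 10.5]], dtype=float)
view2 = view1 + 0.1                            # hypothetical second descriptor
W = 0.5 * similarity_graph(view1) + 0.5 * similarity_graph(view2)

scores = np.array([4.5, 0, 0, 1.5, 0, 0])      # known scores at faces 0 and 3
pred = propagate_scores(W, scores, labeled=[0, 3])
```

Unlabeled faces inherit scores from their own cluster because cross-cluster similarities are negligible, which is the behavior the fused graph is meant to sharpen.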
1 pages, 189 KiB  
Correction
Correction: Filion, G.J. Analytic Combinatorics for Computing Seeding Probabilities. Algorithms 2018, 11, 3
by Guillaume J. Filion
Algorithms 2022, 15(6), 206; https://doi.org/10.3390/a15060206 - 14 Jun 2022
Viewed by 1604
Abstract
The author wishes to make the following correction to this paper [...]
43 pages, 565 KiB  
Review
A Review: Machine Learning for Combinatorial Optimization Problems in Energy Areas
by Xinyi Yang, Ziyi Wang, Hengxi Zhang, Nan Ma, Ning Yang, Hualin Liu, Haifeng Zhang and Lei Yang
Algorithms 2022, 15(6), 205; https://doi.org/10.3390/a15060205 - 13 Jun 2022
Cited by 18 | Viewed by 9382
Abstract
Combinatorial optimization problems (COPs) are a class of NP-hard problems with great practical significance. Traditional approaches for COPs suffer from high computational time and reliance on expert knowledge, and machine learning (ML) methods, as powerful tools, have been used to overcome these problems. This review mainly investigates COPs in energy areas with a series of modern ML approaches, i.e., the interdisciplinary areas of COPs, ML and energy areas. Recent works on solving COPs using ML are first sorted by method, including supervised learning (SL), deep learning (DL), reinforcement learning (RL) and recently proposed game-theoretic methods, and then by problem, laying out the timeline of improvements for some fundamental COPs. Practical applications of ML methods in energy areas, including the petroleum supply chain, steel-making, electric power systems and wind power, are summarized for the first time, and challenges in this field are analyzed.
(This article belongs to the Special Issue Algorithms for Games AI)
Figure 1: The interdisciplinary areas of COP, ML and energy areas.
Figure 2: ML methods for COPs. Recent works are categorized firstly based on the ML approach and then the fundamental problems in COPs.
17 pages, 1416 KiB  
Article
Topic Modeling for Automatic Analysis of Natural Language: A Case Study in an Italian Customer Support Center
by Gabriele Papadia, Massimo Pacella and Vincenzo Giliberti
Algorithms 2022, 15(6), 204; https://doi.org/10.3390/a15060204 - 13 Jun 2022
Cited by 8 | Viewed by 3731
Abstract
This paper focuses on the automatic analysis of conversation transcriptions in the call center of a customer care service. The goal is to recognize topics related to problems and complaints discussed in several dialogues between customers and agents. Our study aims to implement a framework able to automatically cluster conversation transcriptions into cohesive and well-separated groups based on the content of the data. The framework can relieve the analyst of selecting proper values for the analysis and the clustering processes. To pursue this goal, we consider a probabilistic model based on latent Dirichlet allocation, which associates transcriptions with a mixture of topics in different proportions. A case study consisting of transcriptions in the Italian natural language, collected in a customer support center of an energy supplier, is considered in the paper. Performance comparison of different inference techniques is discussed using the case study. The experimental results demonstrate the approach's efficacy in clustering Italian conversation transcriptions. It also results in a practical tool to simplify the analytic process and off-load the parameter tuning from the end-user. According to recent works in the literature, this paper may be valuable for introducing latent Dirichlet allocation approaches in topic modeling for the Italian natural language.
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)
Figure 1: General schema of data processing and modeling.
Figure 2: Data structure in transcription 18 of 993.
Figure 3: Segment 8 of 26, id Agent, in transcription 18 of 993. The Italian for the text: "ok, can you kindly provide me the customer code ma'am, so I can check".
Figure 4: Words of the segment 8 of 26 in transcription 18 of 993. The Italian for the text: "ok, can you kindly provide me the customer code ma'am, so I can check".
Figure 5: The effects of the tf-idf correction on A(d_{1:10}, w_{1:10}).
Figure 6: "Perplexity" trend (vertical axis on the left, continuous lines), time elapsed trend (vertical axis on the right, dashed lines), topics ranging between 5 and 40 (horizontal axis), and for 4 LDA inference algorithms (cgs, red lines; avb, blue lines; cvb0, green lines; savb, magenta lines).
Figure 7: Word clouds of topic k = 3, k = 11, k = 14, and k = 18.
Figure 8: t-SNE representation of 2D clusters.
Figure 9: Topic probability for transcription from 1 to 10 in the set of 993.
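One of the inference techniques compared above is collapsed Gibbs sampling ("cgs" in the figure captions). The sketch below is a minimal collapsed Gibbs sampler for LDA on a four-word toy corpus, purely to show the mechanics; the corpus, hyperparameters, and iteration count are made up and bear no relation to the Italian call-center data.

```python
import numpy as np

# Minimal collapsed Gibbs sampler for LDA (a sketch of the "cgs"
# inference variant; toy corpus, not the call-center transcriptions).
# Each word token carries a topic assignment; we resample each
# assignment from its conditional given all the other assignments.

rng = np.random.default_rng(1)

def lda_gibbs(docs, n_topics, n_words, iters=200, alpha=0.1, beta=0.01):
    ndk = np.zeros((len(docs), n_topics))       # doc-topic counts
    nkw = np.zeros((n_topics, n_words))         # topic-word counts
    nk = np.zeros(n_topics)                     # topic totals
    z = [[rng.integers(n_topics) for _ in doc] for doc in docs]
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]; ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]
                ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
                p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + n_words * beta)
                k = rng.choice(n_topics, p=p / p.sum())
                z[d][i] = k; ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    phi = (nkw + beta) / (nkw + beta).sum(axis=1, keepdims=True)      # topic-word
    theta = (ndk + alpha) / (ndk + alpha).sum(axis=1, keepdims=True)  # doc-topic
    return phi, theta

# Toy vocabulary of 4 word ids; docs 0-1 use words {0,1}, docs 2-3 use {2,3}.
docs = [[0, 1, 0, 1, 0], [1, 0, 1, 0], [2, 3, 2, 3, 2], [3, 2, 3, 2]]
phi, theta = lda_gibbs(docs, n_topics=2, n_words=4)
```

The variational variants in the paper (avb, cvb0, savb) replace the sampling step with deterministic updates but estimate the same phi and theta distributions.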
17 pages, 349 KiB  
Article
An Algorithm for the Closed-Form Solution of Certain Classes of Volterra–Fredholm Integral Equations of Convolution Type
by Efthimios Providas
Algorithms 2022, 15(6), 203; https://doi.org/10.3390/a15060203 - 12 Jun 2022
Cited by 2 | Viewed by 2480
Abstract
In this paper, a direct operator method is presented for the exact closed-form solution of certain classes of linear and nonlinear integral Volterra–Fredholm equations of the second kind. The method is based on the existence of the inverse of the relevant linear Volterra operator. In the case of convolution kernels, the inverse is constructed using the Laplace transform method. For linear integral equations, results for existence and uniqueness are given. The solution of nonlinear integral equations depends on the existence and type of solutions of the corresponding nonlinear algebraic system. A complete algorithm for symbolic computations in a computer algebra system is also provided. The method finds many applications in science and engineering.
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
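For a convolution-kernel Volterra equation of the second kind, u(t) = f(t) + ∫₀ᵗ k(t − s) u(s) ds, the Laplace transform turns the convolution into a product, U(s) = F(s) + K(s)U(s), so U(s) = F(s)/(1 − K(s)). The sympy sketch below works a textbook instance of that step (k ≡ 1, f ≡ 1, with known solution e^t); it illustrates the transform mechanics only, not the paper's full symbolic algorithm.

```python
import sympy as sp

# Illustrative Laplace-transform step for a convolution-type Volterra
# equation of the second kind (textbook example, not the paper's general
# algorithm): u(t) = f(t) + int_0^t k(t - s) u(s) ds. Transforming turns
# the convolution into a product:
#   U(s) = F(s) + K(s) U(s)  =>  U(s) = F(s) / (1 - K(s)).

t, s = sp.symbols("t s", positive=True)

f = sp.Integer(1)          # forcing term f(t) = 1
k = sp.Integer(1)          # convolution kernel k(t) = 1

F = sp.laplace_transform(f, t, s, noconds=True)   # 1/s
K = sp.laplace_transform(k, t, s, noconds=True)   # 1/s
U = sp.simplify(F / (1 - K))                      # 1/(s - 1)

u = sp.inverse_laplace_transform(U, s, t)         # exp(t), times Heaviside(t)
u = u.subs(sp.Heaviside(t), 1)                    # keep the t > 0 branch
```

The same recipe with a nontrivial kernel (e.g. k(t) = e^{−t} or sin t) yields the closed-form inverse of the Volterra operator that the paper's algorithm then combines with the Fredholm part.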
23 pages, 1335 KiB  
Article
Constraint Preserving Mixers for the Quantum Approximate Optimization Algorithm
by Franz Georg Fuchs, Kjetil Olsen Lye, Halvor Møll Nilsen, Alexander Johannes Stasik and Giorgio Sartor
Algorithms 2022, 15(6), 202; https://doi.org/10.3390/a15060202 - 10 Jun 2022
Cited by 23 | Viewed by 3870
Abstract
The quantum approximate optimization algorithm/quantum alternating operator ansatz (QAOA) is a heuristic to find approximate solutions of combinatorial optimization problems. Most of the literature is limited to quadratic problems without constraints. However, many practically relevant optimization problems do have (hard) constraints that need to be fulfilled. In this article, we present a framework for constructing mixing operators that restrict the evolution to a subspace of the full Hilbert space given by these constraints. We generalize the "XY"-mixer designed to preserve the subspace of "one-hot" states to the general case of subspaces given by a number of computational basis states. We expose the underlying mathematical structure which reveals more of how mixers work and how one can minimize their cost in terms of the number of CX gates, particularly when Trotterization is taken into account. Our analysis also leads to valid Trotterizations for an "XY"-mixer with fewer CX gates than is known to date. In view of practical implementations, we also describe algorithms for efficient decomposition into basis gates. Several examples of more general cases are presented and analyzed.
(This article belongs to the Collection Feature Paper in Algorithms and Complexity Theory)
Figure 1: Illustration of properties of Hamiltonians constructed with Theorem 1.
Figure 2: Corollary 2 shows that adding a mixer with support outside Sp(B) is also a valid mixer for B.
Figure 3: Examples of the squared overlap between two states for the case |B| = 4. The squared overlap is independent of what the states in B = {|z_0⟩, |z_1⟩, |z_2⟩, |z_3⟩} are. The comparison for different T shows that there exists a β such that the overlap is nonzero, except for T_{2↔3} which, as expected, does not provide transitions between |z_0⟩ and |z_3⟩.
Figure 4: Examples of the structure of T_{Ham(d)}. The black color represents non-vanishing entries equal to one, representing pairs with the specified Hamming distance.
Figure 5: In the commutation graph (middle) of the terms of the mixer given in Equation (47), an edge occurs if the terms commute. From this, we can group terms into three (nodes connected by green edge) or two (nodes connected by red/blue edges) sets. Only the left/green grouping preserves the feasible subspace; the right one does not.
Figure 6: Valid (white) and invalid (black) transitions between pairs of states, as defined in Theorem 2 for Trotterized mixer Hamiltonians. The first row shows that for T_1 = T_{O(1),c} and T_2 = T_{E(1)}, the mixer U = e^(−iβT_1) e^(−iβT_2) does not provide transitions between all pairs of feasible states, although U = e^(−iβ(T_1 + T_2)) does.
Figure 7: Comparison of different Trotterization mixers restricted to "one-hot" states. All markers represent cases when the resulting mixer provides transitions for all pairs of feasible states; see also Figure 6. All versions can be implemented in linear depth. The most efficient Trotterizations are achieved by using sub-diagonal entries. The cost equals 4 times the number of (XX + YY)-terms.
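The defining property of the "XY"-mixer discussed above, that evolution stays inside the "one-hot" subspace while still mixing within it, can be checked numerically on a tiny register. The sketch below builds the ring XY mixer on 3 qubits with plain numpy (a small-scale sanity check, not the paper's CX-optimized Trotterized circuits; the value β = 0.5 is arbitrary).

```python
import numpy as np

# Numerical sanity check of the mixer idea (small-scale sketch, not the
# paper's CX-optimized circuits): the ring "XY" mixer
#   H = sum_{(i,j)} (X_i X_j + Y_i Y_j) / 2
# commutes with the total excitation number, so exp(-i beta H) keeps a
# "one-hot" start state inside the Hamming-weight-1 subspace while
# providing transitions between the one-hot basis states.

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def embed(op, pos, n):
    """Single-qubit operator acting on qubit `pos` of an n-qubit register."""
    mats = [I2] * n
    mats[pos] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

n = 3
ring = [(0, 1), (1, 2), (2, 0)]
H = sum((embed(X, i, n) @ embed(X, j, n) + embed(Y, i, n) @ embed(Y, j, n)) / 2
        for i, j in ring)

# U = exp(-i beta H) via eigendecomposition of the Hermitian H.
beta = 0.5
w, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * beta * w)) @ V.conj().T

state = np.zeros(2**n, dtype=complex)
state[0b100] = 1.0                     # one-hot start state
out = U @ state

onehot = {0b001, 0b010, 0b100}
leak = sum(abs(out[b])**2 for b in range(2**n) if b not in onehot)
```

Restricted to the one-hot subspace, H acts as a hopping matrix between the feasible states, which is exactly the structure the paper exploits when grouping commuting terms for cheap Trotterizations.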
20 pages, 342 KiB  
Article
Comparing the Reasoning Capabilities of Equilibrium Theories and Answer Set Programs
by Jorge Fandinno, David Pearce, Concepción Vidal and Stefan Woltran
Algorithms 2022, 15(6), 201; https://doi.org/10.3390/a15060201 - 8 Jun 2022
Cited by 2 | Viewed by 2115
Abstract
Answer Set Programming (ASP) is a well established logical approach in artificial intelligence that is widely used for knowledge representation and problem solving. Equilibrium logic extends answer set semantics to more general classes of programs and theories. When intertheory relations are studied in [...] Read more.
Answer Set Programming (ASP) is a well established logical approach in artificial intelligence that is widely used for knowledge representation and problem solving. Equilibrium logic extends answer set semantics to more general classes of programs and theories. When intertheory relations are studied in ASP, or in the more general form of equilibrium logic, they are usually understood in the form of comparisons of the answer sets or equilibrium models of theories or programs. This is the case for strong and uniform equivalence and their relativised and projective versions. However, there are many potential areas of application of ASP for which query answering is relevant and a comparison of programs in terms of what can be inferred from them may be important. We formulate and study some natural equivalence and entailment concepts for programs and theories that are couched in terms of inference and query answering. We show that, for the most part, these new intertheory relations coincide with their model-theoretic counterparts. We also extend some previous results on projective entailment for theories and for the new connective called fork. Full article
(This article belongs to the Special Issue Logic-Based Artificial Intelligence)
19 pages, 2550 KiB  
Article
MedicalSeg: A Medical GUI Application for Image Segmentation Management
by Christian Mata, Josep Munuera, Alain Lalande, Gilberto Ochoa-Ruiz and Raul Benitez
Algorithms 2022, 15(6), 200; https://doi.org/10.3390/a15060200 - 8 Jun 2022
Cited by 6 | Viewed by 3939
Abstract
In the field of medical imaging, the division of an image into meaningful structures using image segmentation is an essential step for pre-processing analysis. Many studies have been carried out to solve the general problem of the evaluation of image segmentation results. One of the main focuses in the computer vision field is based on artificial intelligence algorithms for segmentation and classification, including machine learning and deep learning approaches. The main drawback of supervised segmentation approaches is that a large dataset of ground truth validated by medical experts is required. In this sense, many research groups have developed their segmentation approaches according to their specific needs. However, a generalised application aimed at visualizing, assessing and comparing the results of different methods, thereby facilitating the generation of a ground-truth repository, is not found in the recent literature. In this paper, a new graphical user interface application (MedicalSeg) for the management of medical imaging based on pre-processing and segmentation is presented. The objective is twofold: first, to create a test platform for comparing segmentation approaches, and second, to generate segmented images to create ground truths that can then be used by future artificial intelligence tools. An experimental demonstration and a performance analysis are presented and discussed. Full article
(This article belongs to the Special Issue 1st Online Conference on Algorithms (IOCA2021))
Figure 1
<p>Example of the medical database used to test the MedicalSeg application: (<b>a</b>) mammography, (<b>b</b>) brain, (<b>c</b>) prostate, (<b>d</b>) retinal, (<b>e</b>) echography.</p>
Figure 2
<p>Design of the MedicalSeg GUI interface.</p>
Figure 3
<p>Example of the organization used to classify the results distributed in folders.</p>
Figure 4
<p>Example of the calculated <math display="inline"><semantics> <mrow> <mi>T</mi> <mi>m</mi> <mi>a</mi> <mi>x</mi> </mrow> </semantics></math> threshold value obtained from the maximum value in a histogram display.</p>
Figure 5
<p>Example of a digital mammography image with noise and tagged labels inside. A pre-processing step is required.</p>
Figure 6
<p>Example of an ultrasound image segmented using the Watershed algorithm.</p>
Figure 7
<p>Example of a mammography MRI image segmented using the GMM and Snake algorithms.</p>
Figure 8
<p>Example of a prostate MRI image segmented using the Canny automatic threshold and Otsu algorithms.</p>
Figure 9
<p>Example of a vascular cerebral TRANCE image segmented using different techniques (Intensity, Watershed, and GMM).</p>
Figure 10
<p>Example of a DSC comparison of the segmented images against a reference ground truth. The different colours represent true positive pixels (TP, in white), true negative pixels (TN, in black), false positives (FP, in green) and false negatives (FN, in magenta).</p>
29 pages, 3889 KiB  
Article
XAI in the Context of Predictive Process Monitoring: An Empirical Analysis Framework
by Ghada El-khawaga, Mervat Abu-Elkheir and Manfred Reichert
Algorithms 2022, 15(6), 199; https://doi.org/10.3390/a15060199 - 8 Jun 2022
Cited by 13 | Viewed by 3417
Abstract
Predictive Process Monitoring (PPM) has been integrated into process mining use cases as a value-adding task. PPM provides useful predictions on the future of the running business processes with respect to different perspectives, such as the upcoming activities to be executed next, the final execution outcome, and performance indicators. In the context of PPM, Machine Learning (ML) techniques are widely employed. In order to gain the trust of stakeholders regarding the reliability of PPM predictions, eXplainable Artificial Intelligence (XAI) methods have been increasingly used to compensate for the lack of transparency of most predictive models. Multiple XAI methods exist, providing explanations for almost all types of ML models. However, for the same data, as well as under the same preprocessing settings or the same ML models, the generated explanations often vary significantly. Such variations might jeopardize the consistency and robustness of the explanations and, subsequently, the utility of the corresponding model and pipeline settings. This paper introduces a framework that enables the analysis of the impact PPM-related settings and ML-model-related choices may have on the characteristics and expressiveness of the generated explanations. Our framework provides a means to examine explanations generated either for the whole reasoning process of an ML model, or for the predictions made on the future of a certain business process instance. Using well-defined experiments with different settings, we uncover how choices made throughout a PPM workflow affect, and can be reflected through, explanations. This framework further provides the means to compare how different characteristics of explainability methods can shape the resulting explanations and reflect on the underlying model reasoning process. Full article
(This article belongs to the Special Issue Process Mining and Its Applications)
Figure 1
<p>Explainability taxonomy in ML.</p>
Figure 2
<p>XAI Comparison Framework Components.</p>
Figure 3
<p>PFI scores over <span class="html-italic">Traffic_fines</span> event log.</p>
Figure 4
<p>SHAP Dependence plots of <span class="html-italic">BPIC2017_Refused</span> (single aggregation) event log.</p>
Figure 5
<p>ALE plot of <span class="html-italic">Traffic_fines</span> event log.</p>
Figure 6
<p>PFI scores over <span class="html-italic">Sepsis1</span> event log.</p>
Figure 7
<p>SHAP Dependence plots of <span class="html-italic">Sepsis1</span> event log.</p>
Figure 8
<p>ALE scores of <span class="html-italic">Sepsis1</span> event log.</p>
Figure 9
<p>PFI scores of <span class="html-italic">BPIC2017_Accepted</span> event log.</p>
Figure 10
<p>SHAP Dependence plots of <span class="html-italic">BPIC2017_Accepted</span> event log.</p>
Figure 11
<p>ALE scores of <span class="html-italic">BPIC2017_Accepted</span> event log.</p>
Figure 12
<p>LIME vs. SHAP explanations of LR prediction for one instance in <span class="html-italic">Traffic_fines</span> preprocessed with prefix index combination.</p>
Figure 13
<p>LIME vs. SHAP explanations of XGBoost prediction for one process instance in <span class="html-italic">BPIC2017_Cancelled</span> preprocessed with single aggregation combination.</p>
Figure A1
<p>Execution times (in seconds) of XAI methods on event logs preprocessed using prefix index combination.</p>
Figure A2
<p>Execution times (in seconds) of XAI methods on event logs preprocessed using single aggregation combination.</p>
22 pages, 22712 KiB  
Article
Improved JPS Path Optimization for Mobile Robots Based on Angle-Propagation Theta* Algorithm
by Yuan Luo, Jiakai Lu, Qiong Qin and Yanyu Liu
Algorithms 2022, 15(6), 198; https://doi.org/10.3390/a15060198 - 8 Jun 2022
Cited by 12 | Viewed by 4070
Abstract
The Jump Point Search (JPS) algorithm ignores the possibility of any-angle walking, so the paths found by the JPS algorithm under the discrete grid map still have a gap compared with the real paths. To address this problem, this paper improves the path optimization strategy of the JPS algorithm by combining the viewable angle of the Angle-Propagation Theta* (AP Theta*) algorithm, and it proposes the AP-JPS algorithm based on an any-angle pathfinding strategy. First, based on the JPS algorithm, this paper proposes a vision triangle judgment method to optimize the generated path by selecting the successor search point. Secondly, the idea of the node viewable angle in the AP Theta* algorithm is introduced to modify the line of sight (LOS) reachability detection between two nodes. Finally, the paths are optimized using a seventh-order polynomial based on minimum snap, so that the AP-JPS algorithm generates paths that better match the actual robot motion. The feasibility and effectiveness of this method are proved by simulation experiments and comparison with other algorithms. The results show that the path planning algorithm in this paper obtains paths with good smoothness in environments with different obstacle densities and different map sizes. In the algorithm comparison experiments, the AP-JPS algorithm reduces the path length by 1.61–4.68% and the total turning angle of the path by 58.71–84.67% compared with the JPS algorithm. The AP-JPS algorithm reduces the computing time by 98.59–99.22% compared with the AP-Theta* algorithm. Full article
Figure 1
<p>Non-smooth path for mobile robots.</p>
Figure 2
<p>Limitations of search angle.</p>
Figure 3
<p>JPS algorithm for natural neighbors and forced neighbors. (<b>a</b>,<b>b</b>) the pruning rules of the JPS algorithm; (<b>c</b>,<b>d</b>) the way to determine the forced neighbor nodes.</p>
Figure 4
<p>Unoptimized case based on line of sight.</p>
Figure 5
<p>Unoptimized case based on line of sight.</p>
Figure 6
<p>Visual angle of node x in AP Theta*.</p>
Figure 7
<p>The viewable angle range of node <math display="inline"><semantics> <mi>x</mi> </semantics></math>.</p>
Figure 8
<p>Example of the pathing process. (<b>a</b>) (e,3) is selected as the next search point, (<b>b</b>) (b,7) succeeds (e,3) as the next search point, (<b>c</b>) comparison of paths before and after optimization.</p>
Figure 9
<p>Comparison of the JPS algorithm, the AP-JPS algorithm and the polynomial-optimized AP-JPS algorithm under three map sizes. (<b>a</b>) 10% obstacles, (<b>b</b>) 30% obstacles, (<b>c</b>) 50% obstacles, (<b>d</b>) 10% obstacles, (<b>e</b>) 30% obstacles, (<b>f</b>) 50% obstacles, (<b>g</b>) 10% obstacles, (<b>h</b>) 30% obstacles, (<b>i</b>) 50% obstacles.</p>
Figure 10
<p>AP Theta* main flow chart.</p>
Figure 11
<p>Comparison of the total turning angle parameters of paths generated by the JPS and AP-JPS algorithms. (<b>a</b>) JPS algorithm path total turning angle, (<b>b</b>) AP-JPS algorithm path total turning angle, (<b>c</b>) total turning angle difference, (<b>d</b>) total turning angle change rate.</p>
Figure 12
<p>Comparison of the path length parameters generated by the JPS and AP-JPS algorithms. (<b>a</b>) JPS algorithm path length, (<b>b</b>) AP-JPS algorithm path length, (<b>c</b>) path length difference, (<b>d</b>) path length change rate.</p>
Figure 13
<p>Experimental procedure of algorithm comparison. (<b>a</b>,<b>b</b>) Shanghai, (<b>c</b>,<b>d</b>) New York, (<b>e</b>,<b>f</b>) Boston.</p>
Figure 14
<p>Data set experiment. (<b>a</b>) AR0310SR, (<b>b</b>) AR0513SR, (<b>c</b>) AR0709SR, (<b>d</b>) AR0709SR, (<b>e</b>) divideandconquer, (<b>f</b>) plunderisle, (<b>g</b>) moonglade, (<b>h</b>) harvestmoon, (<b>i</b>) BlastFurnace, (<b>j</b>) Crossroads, (<b>k</b>) Hellfire, (<b>l</b>) SapphireIsles.</p>
22 pages, 5237 KiB  
Article
BAG-DSM: A Method for Generating Alternatives for Hierarchical Multi-Attribute Decision Models Using Bayesian Optimization
by Martin Gjoreski, Vladimir Kuzmanovski and Marko Bohanec
Algorithms 2022, 15(6), 197; https://doi.org/10.3390/a15060197 - 7 Jun 2022
Cited by 3 | Viewed by 2480
Abstract
Multi-attribute decision analysis is an approach to decision support in which decision alternatives are evaluated by multi-criteria models. An advanced feature of decision support models is the possibility to search for new alternatives that satisfy certain conditions. This task is important for practical decision support; however, the related work on generating alternatives for qualitative multi-attribute decision models is quite scarce. In this paper, we introduce the Bayesian Alternative Generator for Decision Support Models (BAG-DSM), a method to address the problem of generating alternatives. More specifically, given a multi-attribute hierarchical model and an alternative representing the initial state, the goal is to generate alternatives that demand the least change in the provided alternative to obtain a desirable outcome. The brute-force approach has exponential time complexity and prohibitively long execution times, even for moderately sized models. BAG-DSM avoids these problems by using a Bayesian optimization approach adapted to qualitative DEX models. BAG-DSM was extensively evaluated and compared to a baseline method on 43 different DEX decision models with varying complexity, e.g., different depth and attribute importance. The comparison was performed with respect to: the time to obtain the first appropriate alternative, the number of generated alternatives, and the number of attribute changes required to reach the generated alternatives. BAG-DSM outperforms the baseline in all of the experiments by a large margin. Additionally, the evaluation confirms BAG-DSM’s suitability for the task, i.e., on average, it generates at least one appropriate alternative within two seconds. The relation between the depth of the multi-attribute hierarchical models—a parameter that increases the search space exponentially—and the time to obtain the first appropriate alternative was linear rather than exponential, which empirically confirms BAG-DSM’s scalability. Full article
(This article belongs to the Special Issue Algorithms in Decision Support Systems Vol. 2)
Figure 1
<p>Differences between a typical usage of decision support models (Step 1 and Step 2) and augmented usage with BAG-DSM (Step 1 and Step 2—with BAG-DSM).</p>
Figure 2
<p>Structure and value scales of a DEX decision model for the evaluation of vehicles [<a href="#B8-algorithms-15-00197" class="html-bibr">8</a>].</p>
Figure 3
<p>Decision rules for aggregating PRICE and TECH.CHAR. to CAR.</p>
Figure 4
<p>Example evaluation of two cars.</p>
Figure 5
<p>Structure of a DEX decision model for the assessment of primary productivity of agricultural fields [<a href="#B24-algorithms-15-00197" class="html-bibr">24</a>].</p>
Figure 6
<p>Examples of the structure and weight distributions of a benchmark DEX decision model. (<b>a</b>) The hierarchical structure of a benchmark DEX model with a depth of 3 levels and one linked attribute (Input_122); (<b>b</b>–<b>d</b>) examples of skewed (<b>b</b>), normal (<b>c</b>), and uniform (<b>d</b>) distributions of aggregation weights [<a href="#B26-algorithms-15-00197" class="html-bibr">26</a>].</p>
Figure 7
<p>Proposed BAG-DSM method for generating alternatives (<b>left</b>) vs. naïve approach (<b>right</b>).</p>
Figure 8
<p>Boxplots for the distances (shorter is better) between the current alternative and the optimal alternative found by BAG-DSM (Bayesian) vs. baseline for the Landmark model.</p>
Figure 9
<p>Boxplots for the size of the solution set (bigger is better) generated by BAG-DSM vs. baseline for the Landmark model.</p>
Figure 10
<p>Boxplots for the time (shorter is better) required to find the first optimal alternative by BAG-DSM vs. baseline for the Landmark model.</p>
Figure 11
<p>Paired boxplots for the distances (shorter is better) between the current alternative and the optimal alternative found by BAG-DSM (Bayesian) vs. baseline for decision support models with a depth of 3, 4, and 5. The horizontal lines connect two experimental runs with the same experimental setup.</p>
Figure 12
<p>Paired boxplots for the time (shorter is better) required to find the first optimal alternative by BAG-DSM vs. baseline for decision support models with a depth of 3, 4, and 5. The horizontal lines connect two experimental runs with the same experimental setup. The red lines represent the experimental setups in which the baseline performed better.</p>
Figure 13
<p>Paired boxplots for the size of the solution set (bigger is better) generated by BAG-DSM vs. baseline for decision support models with a depth of 3, 4, and 5. The horizontal lines connect two experimental runs with the same experimental setup. The red horizontal lines represent the experimental setups in which the baseline performed better.</p>
Figure 14
<p>Boxplots for the time in seconds (smaller is better) required to find the first optimal alternative for different benchmark models.</p>
Figure 15
<p>Mean optimization score for different decision models. The full line represents the average optimization score of the alternatives generated by the surrogate model. The dashed line represents the average optimization score of the random alternatives.</p>
Figure 16
<p>Mean absolute error for the surrogate model calculated as the absolute difference between the estimated optimization score and true optimization value. The shaded part represents one standard deviation in each direction.</p>
15 pages, 1894 KiB  
Article
Process Mining the Performance of a Real-Time Healthcare 4.0 Systems Using Conditional Survival Models
by Adele H. Marshall and Aleksandar Novakovic
Algorithms 2022, 15(6), 196; https://doi.org/10.3390/a15060196 - 7 Jun 2022
Viewed by 2211
Abstract
As the world moves into the exciting age of Healthcare 4.0, it is essential that patients and clinicians have confidence and reassurance that the real-time clinical decision support systems used throughout their care guarantee robustness and optimal quality of care. However, current systems involving autonomic behaviour, and those with no prior clinical feedback, have to date had little focus on demonstrating robustness in the use of data and final output, thus generating a lack of confidence. This paper addresses this challenge by introducing a new process mining approach based on a statistically robust methodology that relies on the utilisation of conditional survival models for the purpose of evaluating the performance of Healthcare 4.0 systems and the quality of the care provided. Its effectiveness is demonstrated by analysing the performance of a clinical decision support system operating in an intensive care setting, whose goal is to monitor ventilated patients in real time and to notify clinicians if a patient is predicted to be at risk of receiving injurious mechanical ventilation. Additionally, we demonstrate how the same metrics can be used for evaluating the patient’s quality of care. The proposed methodology can be used to analyse the performance of any Healthcare 4.0 system and the quality of care provided to the patient. Full article
(This article belongs to the Special Issue Process Mining and Its Applications)
Figure 1
<p>An architectural overview of the VILIAlert System [<a href="#B19-algorithms-15-00196" class="html-bibr">19</a>].</p>
Figure 2
<p>A schematic overview of five analysed post alert time windows (APA) covering the period of 6 h post-alert generation. (<b>a</b>) An illustration of the tabular dataset in its raw form, where 0 and 1 symbolise defective (DB) and non-defective (NDB) blocks, respectively. (<b>b</b>) An illustration of the lengths of their follow-up times, where each length corresponds to the time to occurrence of the event of interest, i.e., the first DB block during the APA. These lengths are used for the calculation of defect free and conditional defect free survival probabilities.</p>
Figure 3
<p>Kaplan–Meier estimates of overall defect free survival probability.</p>
Figure 4
<p>Kaplan–Meier estimates of gender stratified overall defect free survival probability.</p>
Figure 5
<p>An overview of standardised differences <math display="inline"><semantics> <mrow> <mrow> <mo>(</mo> <mrow> <msub> <mi>d</mi> <mrow> <mi>F</mi> <mi>M</mi> </mrow> </msub> </mrow> <mo>)</mo> </mrow> </mrow> </semantics></math> used for analysing the effect size that the gender covariate has on the estimated conditional defect free survival probabilities <math display="inline"><semantics> <mrow> <mi>C</mi> <mi>D</mi> <mi>F</mi> <mi>S</mi> <mo> </mo> <mrow> <mo>(</mo> <mrow> <msub> <mi>t</mi> <mi>y</mi> </msub> <mo>|</mo> <msub> <mi>t</mi> <mi>x</mi> </msub> </mrow> <mo>)</mo> </mrow> </mrow> </semantics></math>. Note: <math display="inline"><semantics> <mrow> <msub> <mi>t</mi> <mi>x</mi> </msub> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math> denotes baseline time, i.e., the time of alert generation.</p>
Figure 6
<p>An overview of conditional defect free survival probabilities (including their 95% confidence intervals) denoting that the analysed post alert time window will remain defect free until at least time <math display="inline"><semantics> <mrow> <msub> <mi>t</mi> <mi>y</mi> </msub> </mrow> </semantics></math>, given that it was previously defect free up to time <math display="inline"><semantics> <mrow> <msub> <mi>t</mi> <mi>x</mi> </msub> </mrow> </semantics></math>, where <math display="inline"><semantics> <mrow> <msub> <mi>t</mi> <mi>x</mi> </msub> <mo>&lt;</mo> <msub> <mi>t</mi> <mi>y</mi> </msub> </mrow> </semantics></math>. Note: <math display="inline"><semantics> <mrow> <msub> <mi>t</mi> <mi>x</mi> </msub> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math> denotes baseline time, i.e., the time of alert generation.</p>
18 pages, 300 KiB  
Article
Machine Learning and rs-fMRI to Identify Potential Brain Regions Associated with Autism Severity
by Igor D. Rodrigues, Emerson A. de Carvalho, Caio P. Santana and Guilherme S. Bastos
Algorithms 2022, 15(6), 195; https://doi.org/10.3390/a15060195 - 7 Jun 2022
Cited by 9 | Viewed by 3800
Abstract
Autism Spectrum Disorder (ASD) is a neurodevelopmental disorder characterized primarily by social impairments that manifest in different severity levels. In recent years, many studies have explored the use of machine learning (ML) and resting-state functional magnetic resonance images (rs-fMRI) to investigate the disorder. These approaches evaluate brain oxygen levels to indirectly measure brain activity and compare typical developmental subjects with ASD ones. However, none of these works have tried to classify the subjects into severity groups using ML exclusively applied to rs-fMRI data. Information on ASD severity is frequently available, since some tools used to support ASD diagnosis also include a severity measurement as one of their outcomes. This is the case for the Autism Diagnostic Observation Schedule (ADOS), which splits the diagnosis into three groups: ‘autism’, ‘autism spectrum’, and ‘non-ASD’. Therefore, this paper aims to use ML and fMRI to identify potential brain regions as biomarkers of ASD severity. We used the ADOS score as a severity measurement standard. The experiment used fMRI data of 202 subjects with an ASD diagnosis and their ADOS scores, available from the ABIDE I consortium, to determine the correct ASD sub-class for each one. Our results suggest a functional difference between the ASD sub-classes, reaching 73.8% accuracy on cingulum regions. This shows the feasibility of classifying and characterizing ASD using rs-fMRI data and indicates potential areas that could lead to severity biomarkers in further research. However, we highlight the need for more studies to confirm our findings. Full article
(This article belongs to the Special Issue Algorithms for Biomedical Image Analysis and Processing)
Figure 1
<p>Classification curve generated by SVM with two features.</p>
20 pages, 15507 KiB  
Article
Exploring the Efficiencies of Spectral Isolation for Intelligent Wear Monitoring of Micro Drill Bit Automatic Regrinding In-Line Systems
by Ugochukwu Ejike Akpudo and Jang-Wook Hur
Algorithms 2022, 15(6), 194; https://doi.org/10.3390/a15060194 - 6 Jun 2022
Cited by 2 | Viewed by 2724
Abstract
Despite the increasing digitalization of equipment diagnostic/condition monitoring systems, it remains a challenge to accurately harness discriminant information from multiple sensors with unique spectral (and transient) behaviors. High-precision systems such as the automatic regrinding in-line equipment provide intelligent regrinding of micro drill bits; however, immediate monitoring of the grinder during the grinding process has become necessary, because ignoring it directly affects the drill bit’s life and the equipment’s overall utility. Vibration signals from the frame and the high-speed grinding wheels reflect the different health stages of the grinding wheel and can be exploited for intelligent condition monitoring. The spectral isolation technique as a preprocessing tool ensures that only the critical spectral segments of the inputs are retained for improved diagnostic accuracy at reduced computational costs. This study explores artificial intelligence-based models for learning the discriminant spectral information stored in the vibration signals and considers the accuracy and cost implications of spectral isolation of the critical spectral segments of the signals for accurate equipment monitoring. Results from one-dimensional convolutional neural networks (1D-CNNs) and multi-layer perceptron (MLP) neural networks reveal that spectral isolation offers higher condition monitoring accuracy at reduced computational costs. Experimental results using different 1D-CNN and MLP architectures reveal 4.6% and 7.5% improved diagnostic accuracy for the 1D-CNNs and MLPs, respectively, at about 1.3% and 5.71% reduced computational costs. Full article
(This article belongs to the Special Issue Algorithms in Data Classification)
Figure 1
<p>CAD plan view of the G50150 micro drill bit ARIS highlighting the collet and regrinding unit.</p>
Figure 2
<p>Proposed monitoring framework.</p>
Figure 3
<p>A picture of the micro drill bit ARIS (model TGM-1011) with the sensor locations and data acquisition system.</p>
Figure 4
<p>Pictures of the grinding wheel at (<b>a</b>) healthy/normal state, (<b>b</b>) fairly used state (about 40–50% remaining useful life), and (<b>c</b>) faulty state (about 0–10% remaining useful life).</p>
Figure 5
<p>Raw vibration signals for (<b>a</b>) spindle at healthy/normal state, (<b>b</b>) spindle at fairly used state, (<b>c</b>) spindle at faulty state, (<b>d</b>) frame at healthy/normal state, (<b>e</b>) frame at fairly used state, and (<b>f</b>) frame at faulty state.</p>
Figure 6
<p>Spectral comparison of different grinder health states. (<b>a</b>) Spindle, (<b>b</b>) frame.</p>
Figure 7
<p>Spectral comparison of different grinder health states. (<b>a</b>) Spindle (critical segment), (<b>b</b>) frame (critical segment).</p>
Figure 8
<p>Raw vibration signals (in blue) and their corresponding spectral isolation outputs (in red) for (<b>a</b>) spindle at healthy/normal state, (<b>b</b>) spindle at fairly used state, (<b>c</b>) spindle at faulty state, (<b>d</b>) frame at healthy/normal state, (<b>e</b>) frame at fairly used state, and (<b>f</b>) frame at faulty state.</p>
Figure 9
<p>Training history for (<b>a</b>) <span class="html-italic">CNN</span>64, (<b>b</b>) <span class="html-italic">CNN</span>64_64, (<b>c</b>) <span class="html-italic">CNN</span>64_<span class="html-italic">Dense</span>100, (<b>d</b>) FCN, (<b>e</b>) DNN64, (<b>f</b>) DNN128_64, and (<b>g</b>) DNN100_150_50. For each subfigure, the eight images respectively represent the training histories for the eight different input/source data channels in the order: <math display="inline"><semantics> <mrow> <mi>S</mi> <msub> <mi>P</mi> <mi>R</mi> </msub> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>S</mi> <msub> <mi>P</mi> <mi>C</mi> </msub> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>F</mi> <msub> <mi>R</mi> <mi>R</mi> </msub> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>S</mi> <msub> <mi>P</mi> <mi>C</mi> </msub> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>S</mi> <mi>P</mi> <mo>_</mo> <mi>F</mi> <msub> <mi>R</mi> <mi>R</mi> </msub> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>S</mi> <mi>P</mi> <mo>_</mo> <mi>F</mi> <msub> <mi>R</mi> <mi>C</mi> </msub> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>S</mi> <msub> <mi>P</mi> <mi>R</mi> </msub> <mi>F</mi> <msub> <mi>R</mi> <mi>C</mi> </msub> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>S</mi> <msub> <mi>P</mi> <mi>C</mi> </msub> <mi>F</mi> <msub> <mi>R</mi> <mi>R</mi> </msub> </mrow> </semantics></math>.</p>
Full article ">Figure 10
<p>Worst and best Confusion matrices produced by the DL models (<b>a</b>) <math display="inline"><semantics> <mrow> <mi>C</mi> <mi>N</mi> <mi>N</mi> <mn>64</mn> </mrow> </semantics></math>, (<b>b</b>) <span class="html-italic">CNN</span>64_64, (<b>c</b>) <span class="html-italic">CNN</span>64_<span class="html-italic">Dense</span>100, (<b>d</b>) FCN, (<b>e</b>) DNN64, (<b>f</b>) DNN128_64, and (<b>g</b>) DNN100_150_50.</p>
Full article ">Figure 11
<p>Computational cost assessment of the DL models on the different input/source data channels. The green bars (Avg cost) are the mean value of the computational costs of the DL models per input/source data channel.</p>
Full article ">
22 pages, 848 KiB  
Review
A Survey on Network Optimization Techniques for Blockchain Systems
by Robert Antwi, James Dzisi Gadze, Eric Tutu Tchao, Axel Sikora, Henry Nunoo-Mensah, Andrew Selasi Agbemenu, Kwame Opunie-Boachie Obour Agyekum, Justice Owusu Agyemang, Dominik Welte and Eliel Keelson
Algorithms 2022, 15(6), 193; https://doi.org/10.3390/a15060193 - 4 Jun 2022
Cited by 22 | Viewed by 6949
Abstract
The growth of the Internet of Things (IoT) calls for secure solutions for industrial applications. The security of IoT can potentially be improved by blockchain. However, blockchain technology suffers from scalability issues, which hinder its integration with IoT. Solutions to blockchain’s scalability issues, such as minimizing the computational complexity of consensus algorithms or the blockchain’s storage requirements, have received attention. However, to realize the full potential of blockchain in IoT, the inefficiencies of its inter-peer communication must also be addressed. For example, blockchain uses a flooding technique to share blocks, resulting in duplicates and inefficient bandwidth usage. Moreover, blockchain peers use a random neighbor selection (RNS) technique to decide which other peers to exchange blockchain data with. As a result, the peer-to-peer (P2P) topology formation limits the effective achievable throughput. This paper provides a survey of the state-of-the-art network structures and communication mechanisms used in blockchain and establishes the need for network-based optimization. Additionally, it discusses the blockchain architecture and its layers, categorizes the existing literature into these layers, and surveys the state-of-the-art optimization frameworks, analyzing their effectiveness and ability to scale. Finally, this paper presents recommendations for future work. Full article
(This article belongs to the Special Issue Advances in Blockchain Architecture and Consensus)
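The bandwidth waste of flooding with random neighbor selection, as described in the abstract, can be seen in a toy simulation (purely illustrative; the topology model and parameters are hypothetical and not taken from the survey): every informed peer forwards a new block to a few uniformly random peers, and redundant deliveries are counted.

```python
import random

def gossip(num_peers, fanout, origin=0, seed=1):
    """Toy gossip dissemination with random neighbor selection (RNS).

    Each informed peer forwards the block to `fanout` uniformly random
    peers per round. Returns (peers_informed, duplicate_deliveries).
    """
    rng = random.Random(seed)
    informed = {origin}
    frontier = [origin]
    duplicates = 0
    while frontier:
        next_frontier = []
        for peer in frontier:
            for _ in range(fanout):
                target = rng.randrange(num_peers)
                if target in informed:
                    duplicates += 1  # redundant delivery: wasted bandwidth
                else:
                    informed.add(target)
                    next_frontier.append(target)
        frontier = next_frontier
    return len(informed), duplicates

reached, wasted = gossip(num_peers=100, fanout=3)
```

Even in this tiny network, many forwarded blocks land on peers that already hold them, which is the duplication problem that the network-layer optimization frameworks surveyed in the paper aim to reduce.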
Show Figures

Figure 1: Structure of the survey.
Figure 2: Survey methodology.
Figure 3: The blockchain architecture.
Figure 4: The gossip dissemination protocol.
Figure 5: Categorization of network optimization frameworks in blockchain.
Figure 6: Clustering of peers by Hao et al. [80].
Figure 7: SDN-inspired topology management by Deshpande et al. [83].
Figure 8: Semi-distributed topology management scheme used by Baniata et al. [84].
Figure 9: Clustering technique used by Jin et al. [96].
15 pages, 928 KiB  
Article
Constructing the Neighborhood Structure of VNS Based on Binomial Distribution for Solving QUBO Problems
by Dhidhi Pambudi and Masaki Kawamura
Algorithms 2022, 15(6), 192; https://doi.org/10.3390/a15060192 - 2 Jun 2022
Cited by 1 | Viewed by 2827
Abstract
The quadratic unconstrained binary optimization (QUBO) problem is an NP-hard combinatorial optimization problem. The variable neighborhood search (VNS) algorithm is one of the leading algorithms used to solve QUBO problems. As the change of neighborhood structure is the central concept in the VNS algorithm, the design of the neighborhood structure is crucial. This paper presents a modified VNS algorithm called “B-VNS”, which can be used to solve QUBO problems. A binomial trial is used to construct the neighborhood structure, with the aim of reducing computation time. The B-VNS and VNS algorithms were tested on standard QUBO problems from Glover and Beasley, on standard max-cut problems from Helmberg–Rendl, and on those proposed by Burer, Monteiro, and Zhang. Finally, Mann–Whitney tests at α=0.05 were conducted to statistically compare the performance of the two algorithms. Both algorithms provide good solutions, but the B-VNS algorithm runs substantially faster. Furthermore, the B-VNS algorithm performed best on all max-cut problems, regardless of problem size, and on QUBO problems with sizes below 500. The results suggest that using a binomial distribution to construct the neighborhood structure has potential for further development. Full article
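A binomial-trial neighborhood, as used by B-VNS, can be sketched as follows: instead of flipping exactly k bits of the current solution (as in basic VNS), each bit is flipped independently with probability p, so the Hamming distance to the neighbor is Binomial(n, p) with mean n·p. This is an illustrative sketch of the general idea, not the paper's exact algorithm:

```python
import numpy as np

def binomial_neighbor(x, p, rng):
    """Neighbor of binary vector x: flip each bit independently with
    probability p; the Hamming distance is Binomial(n, p), mean n * p."""
    flips = rng.random(x.size) < p
    return np.where(flips, 1 - x, x)

# A 1000-bit solution with p = 0.05 yields a neighbor ~50 flips away.
rng = np.random.default_rng(0)
x = np.zeros(1000, dtype=int)
neighbor = binomial_neighbor(x, 0.05, rng)
distance = int(np.abs(neighbor - x).sum())
```

Increasing p plays the role of increasing k in basic VNS: it widens the neighborhood when the search is stuck, while a single vectorized draw avoids enumerating fixed-distance neighborhoods.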
Show Figures

Figure 1: VNS algorithm. (a) The algorithm gradually expands its neighborhood structure according to k to escape a local optimum trap [32]. (b) The distance k is increased by one at a time to build a broader neighborhood structure.
Figure 2: Distribution of X′ related to p. A neighboring point X′ at distance n·p_c has a high probability of appearing.
Figure 3: Neighborhood structure. (a) The neighborhood structure of B-VNS widens according to probability p_c, while (b) the neighborhood structure of basic VNS widens according to the variable k = 1, 2, ..., k_max [32].
Figure 4: Local search for QUBO [37].
27 pages, 1576 KiB  
Article
Clustering Algorithm with a Greedy Agglomerative Heuristic and Special Distance Measures
by Guzel Shkaberina, Leonid Verenev, Elena Tovbis, Natalia Rezova and Lev Kazakovtsev
Algorithms 2022, 15(6), 191; https://doi.org/10.3390/a15060191 - 1 Jun 2022
Cited by 2 | Viewed by 3774
Abstract
Automatic grouping (clustering) involves dividing a set of objects into subsets (groups) so that the objects in one subset are more similar to each other than to objects in other subsets, according to some criterion. Kohonen neural networks are a class of artificial neural networks whose main element is a layer of adaptive linear adders operating on the “winner takes all” principle. One of the advantages of Kohonen networks is their ability to perform online clustering. Greedy agglomerative procedures in clustering iteratively improve the result within some neighborhood of a known solution, choosing as the next solution the option that yields the least increase in the objective function. Algorithms using greedy agglomerative heuristics demonstrate precise and stable results for the k-means model. In our study, we propose a greedy agglomerative heuristic algorithm based on a Kohonen neural network with varying distance measures to cluster industrial products. Computational experiments demonstrate the comparative efficiency and accuracy of the greedy agglomerative heuristic in grouping industrial products into homogeneous production batches. Full article
(This article belongs to the Collection Feature Papers in Algorithms)
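The “winner takes all” principle behind the Kohonen layer mentioned above can be sketched as a single online update step: the unit whose weight vector lies closest to the incoming sample is moved toward it, and all other units stay fixed. This is a generic textbook sketch (with a made-up learning rate and toy data), not the paper's full algorithm:

```python
import numpy as np

def kohonen_step(weights, x, lr):
    """One online 'winner takes all' update: the unit whose weight vector
    is closest to sample x moves toward x; all other units stay fixed."""
    winner = int(np.argmin(np.linalg.norm(weights - x, axis=1)))
    weights = weights.copy()
    weights[winner] += lr * (x - weights[winner])
    return winner, weights

# Two units; a sample near the first unit only pulls the first unit closer.
w = np.array([[0.0, 0.0], [10.0, 10.0]])
winner, w_new = kohonen_step(w, np.array([1.0, 1.0]), lr=0.5)
```

Because each sample triggers only one local update, the network clusters a stream of samples online; the paper's contribution layers a greedy agglomerative heuristic and alternative distance measures on top of this basic mechanism.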
Show Figures

Figure 1: Accuracy of device clustering for the SCL, SCL-GREEDY(2), and SCL-GREEDY(3) algorithms.
Figure 2: (a) Coefficient of variation and (b) span factor of the objective function value for the two-batch mixed lot; (c,d) the same for the three-batch mixed lot; (e,f) the same for the four-batch mixed lot.
Figure 3: Coefficient of variation of the objective function value for the two-batch mixed lot.
Figure 4: Coefficient of variation of the objective function value for the three-batch mixed lot.
Figure 5: Coefficient of variation of the objective function value for the four-batch mixed lot.
16 pages, 7382 KiB  
Article
A Missing Data Reconstruction Method Using an Accelerated Least-Squares Approximation with Randomized SVD
by Siriwan Intawichai and Saifon Chaturantabut
Algorithms 2022, 15(6), 190; https://doi.org/10.3390/a15060190 - 31 May 2022
Cited by 6 | Viewed by 2438
Abstract
An accelerated least-squares approach is introduced in this work by incorporating a greedy point selection method with randomized singular value decomposition (rSVD) to reduce the computational complexity of missing data reconstruction. The rSVD is used to speed up the computation of a low-dimensional basis that is required for the least-squares projection by employing randomness to generate a small matrix instead of a large matrix from high-dimensional data. A greedy point selection algorithm, based on the discrete empirical interpolation method, is then used to speed up the reconstruction process in the least-squares approximation. The accuracy and computational time reduction of the proposed method are demonstrated through three numerical experiments. The first two experiments consider standard testing images with missing pixels uniformly distributed on them, and the last numerical experiment considers a sequence of many incomplete two-dimensional miscible flow images. The proposed method is shown to accelerate the reconstruction process while maintaining roughly the same order of accuracy when compared to the standard least-squares approach. Full article
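The randomized SVD step described in the abstract follows a standard recipe: project the data matrix onto a small random subspace, orthonormalize, and take the SVD of the resulting small matrix. The sketch below shows this basic recipe only (the oversampling amount and test matrix are illustrative assumptions, and the greedy DEIM point selection is omitted):

```python
import numpy as np

def randomized_svd(A, k, oversample=10, seed=0):
    """Basic randomized SVD: sketch the range of A with a random Gaussian
    projection, orthonormalize it, then SVD the small projected matrix."""
    rng = np.random.default_rng(seed)
    omega = rng.standard_normal((A.shape[1], k + oversample))
    Q, _ = np.linalg.qr(A @ omega)              # orthonormal basis for the sketched range
    U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ U_small)[:, :k], s[:k], Vt[:k, :]

# A rank-5 matrix is recovered almost exactly by a k = 5 randomized SVD.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 100))
U, s, Vt = randomized_svd(A, k=5)
err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
```

The cost saving comes from never computing a full SVD of the large matrix: all expensive factorizations act on matrices with only k + oversample columns, which is what accelerates the basis computation for the least-squares reconstruction.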
Show Figures

Figure 1: (Numerical Test 1) The original Lena image and incomplete images with 30%, 50%, and 65% missing pixels distributed uniformly over the image.
Figure 2: (Numerical Test 1) Reconstructions of incomplete images with 30%, 50%, and 65% missing pixels by the standard POD-LS approach using k = 30 (top) and the proposed accelerated POD-LS approach using k = 30 and m = 50 (bottom).
Figure 3: (Numerical Test 1) Average relative error of the reconstruction for 30%, 50%, and 65% missing components using the standard LS method and the accelerated LS with basis from rSVD, denoted LS-DEIM for case (i) and LS-DEIM-b for case (ii) of Algorithm 4, for POD dimension k = 30 with DEIM dimensions m = 10, 20, 30, 40, 50, 60, 70, 80.
Figure 4: (Numerical Test 2) (a) Original image and (b) an incomplete image with 50% missing pixels.
Figure 5: (Numerical Test 2) Reconstructed images from the standard POD-LS with basis from SVD using dimensions k = 10, 50, and from the accelerated POD-LS with basis from rSVD using k = 10 with m = 10, 20, 50 and k = 50 with m = 50, 80, 150.
Figure 6: (Numerical Test 2) Average relative error of the reconstructed missing components using the standard POD-LS method and the accelerated POD-LS method, denoted LS-DEIM for case (i) and LS-DEIM-b for case (ii) of Algorithm 4, for POD dimensions k = 10, 30, 50 (shown in (a)–(c), respectively) with DEIM dimensions m = 10, 30, 50, ..., 210.
Figure 7: (Numerical Test 3) Snapshots of flow data at four different time instances.
Figure 8: (Numerical Test 3) Example of an incomplete image (a), with the reconstructions from the standard least-squares with SVD (b) and the accelerated least-squares with rSVD (c).
Figure 9: (Numerical Test 3) Average relative error of the reconstructed missing components using the standard LS method and the accelerated least-squares with basis from rSVD, denoted LS-DEIM for case (i) and LS-DEIM-b for case (ii) of Algorithm 4, for POD dimensions k = 10, 30, 50 with DEIM dimensions m = 10, 30, 50, 70, 90, 110, 130, 150, 170, 190.
Figure 10: (Numerical Test 3) Reconstruction error (a) and CPU time (b) of the incomplete image from the standard least-squares approach and from the accelerated least-squares approach, case (i) in Algorithm 4, using different dimensions of k with m = k.