
Algorithms, Volume 13, Issue 8 (August 2020) – 27 articles

Cover Story (view full-size image): The novelty in applying the Reed–Solomon algorithm to remote sensing lies in a different interpretation of the algorithm itself at the preparatory stage: data fusion. The rationale is to include all possible information from all acquired spectral bands, on the assumption that complete composite information in the form of one compound image will improve both the quality of visualization and some aspects of further quantitative and qualitative analyses. The concept arose from an empirical, heuristic combination of geographic information systems (GIS), map algebra, and two-dimensional cellular automata. The challenges relate to handling big quantitative datasets and the awareness that these numbers are, in fact, descriptors of a real-world multidimensional view. View this paper
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; PDF is the official format. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
27 pages, 5839 KiB  
Article
Methodology for Analyzing the Traditional Algorithms Performance of User Reviews Using Machine Learning Techniques
by Abdul Karim, Azhari Azhari, Samir Brahim Belhaouri, Ali Adil Qureshi and Maqsood Ahmad
Algorithms 2020, 13(8), 202; https://doi.org/10.3390/a13080202 - 18 Aug 2020
Cited by 4 | Viewed by 5062
Abstract
Android-based applications are widely used by almost everyone around the globe. With the Internet available almost everywhere at no charge, nearly half the globe is engaged with social networking, social media surfing, messaging, browsing and plugins. In the Google Play Store, one of the most popular Internet application stores, users are encouraged to download thousands of applications and various types of software. In this research study, we scraped thousands of user reviews and the ratings of different applications. We scraped reviews of 148 applications across 14 different categories; a total of 506,259 reviews were accumulated and assessed. Based on the semantics of the application reviews, each review was classified as negative, positive or neutral. Different machine-learning algorithms, such as logistic regression, random forest and naïve Bayes, were tuned and tested. We also evaluated the outcome of term frequency (TF) and inverse document frequency (IDF), measured parameters such as accuracy, precision, recall and F1 score (F1), and present the results as bar graphs. In conclusion, we compared the outcome of each algorithm and found that logistic regression is one of the best algorithms for review analysis of the Google Play Store from an accuracy perspective. Furthermore, we demonstrated that logistic regression is also better in terms of speed, recall and F1 score. These conclusions were reached after preprocessing the data sets. Full article
(This article belongs to the Special Issue Advanced Data Mining: Algorithms and Applications)
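The TF and TF/IDF weighting the abstract evaluates can be sketched in a few lines. This toy example is illustrative only: the three "reviews" and the whitespace tokenizer are invented, and the authors' full pipeline (scraping, preprocessing, classifiers) is not reproduced here.

```python
import math
from collections import Counter

def tf_idf(docs):
    """Toy TF-IDF: term frequency within a document times log inverse
    document frequency across the corpus."""
    tokenized = [doc.lower().split() for doc in docs]
    n = len(tokenized)
    df = Counter()                       # document frequency per term
    for tokens in tokenized:
        df.update(set(tokens))
    weights = []
    for tokens in tokenized:
        tf = Counter(tokens)
        weights.append({t: (tf[t] / len(tokens)) * math.log(n / df[t])
                        for t in tf})
    return weights

reviews = ["great app love it", "terrible app crashes", "great great app"]
w = tf_idf(reviews)   # "app" occurs in every review, so its weight is 0
```

Terms that appear in every review carry no discriminative information under IDF, which is why they vanish; feeding such weights into a classifier like logistic regression is the setup the paper compares.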
Show Figures

Figure 1. Methodology diagram of data collection and sample shot of dataset.
Figure 2. Flow of Google Play Store application reviews classification.
Figure 3. (a) Bar-chart visualization of the term frequency (TF) naïve Bayes multinomial algorithm after preprocessing; (b) bar-chart visualization of the term frequency/inverse document frequency (TF/IDF) naïve Bayes multinomial algorithm after preprocessing.
Figure 4. (a) Bar-chart visualization of the TF random forest algorithm after preprocessing; (b) bar-chart visualization of the TF/IDF random forest algorithm after preprocessing.
Figure 5. (a) Bar-chart visualization of the TF logistic regression algorithm after preprocessing; (b) bar-chart visualization of the TF/IDF logistic regression algorithm after preprocessing.
Figure 6. Pie-chart visualization comparing the machine-learning algorithms on TF-based data after preprocessing.
Figure 7. Pie-chart visualization comparing the machine-learning algorithms on TF/IDF-based data after preprocessing.
Figure 8. (a) Bar-chart visualization of the TF naïve Bayes multinomial algorithm without data preprocessing; (b) bar-chart visualization of the TF/IDF naïve Bayes multinomial algorithm without data preprocessing.
Figure 9. (a) Bar-chart visualization of the TF random forest algorithm without data preprocessing; (b) bar-chart visualization of the TF/IDF random forest algorithm without data preprocessing.
Figure 10. (a) Bar-chart visualization of the TF logistic regression algorithm without data preprocessing; (b) bar-chart visualization of the TF/IDF logistic regression algorithm without data preprocessing.
Figure 11. Pie-chart visualization comparing the machine-learning algorithms on TF-based data without preprocessing.
Figure 12. Pie-chart visualization comparing the machine-learning algorithms on TF/IDF-based data without preprocessing.
Figure 13. Sample screenshot of the original dataset that was scraped.
Figure 14. Sample screenshot of the cleaned dataset after preprocessing.
Figure 15. (a) Positive and (b) negative word dictionary using the word-cloud corpus.
Figure 16. Final sentiment-analysis results on Google Play reviews using the logistic regression algorithm.
27 pages, 1770 KiB  
Article
A NARX Model Reference Adaptive Control Scheme: Improved Disturbance Rejection Fractional-Order PID Control of an Experimental Magnetic Levitation System
by Hossein Alimohammadi, Baris Baykant Alagoz, Aleksei Tepljakov, Kristina Vassiljeva and Eduard Petlenkov
Algorithms 2020, 13(8), 201; https://doi.org/10.3390/a13080201 - 18 Aug 2020
Cited by 18 | Viewed by 4397
Abstract
Real control systems require robust control performance to deal with the unpredictable and changing operating conditions of real-world systems. Improving disturbance rejection should therefore be considered one of the essential objectives in practical control system design. This study presents a multi-loop Model Reference Adaptive Control (MRAC) scheme that leverages a nonlinear autoregressive neural network with external inputs (NARX) model as the reference model. The authors observed that the performance of multi-loop MRAC-fractional-order proportional integral derivative (FOPID) control with the MIT rule largely depends on the capability of the reference model to represent the leading closed-loop dynamics of the experimental magnetic levitation (ML) system. As such, the NARX model is used to represent the disturbance-free dynamical behavior of the PID control loop. Remarkably, the obtained reference model is independent of the tuning of the other control loops in the control system. The multi-loop MRAC-FOPID control structure detects the impact of disturbance incidents on the control performance of the closed-loop FOPID control system and adapts its response to reduce the negative effects of the additive input disturbance. This structure deploys two specialized control loops: an inner loop, the closed-loop FOPID control system responsible for stability and set-point control, and an outer loop, which involves a NARX reference model and the MIT rule to increase the adaptation ability of the system. Thus, the two-loop MRAC structure improves disturbance rejection performance without deteriorating the precise set-point control and stability characteristics of the FOPID control loop, which is an important benefit of this structure.
To demonstrate the disturbance rejection improvements of the proposed multi-loop MRAC-FOPID control with the NARX model, an experimental study is conducted for disturbance rejection control of a magnetic levitation test setup in the laboratory. Simulation and experimental results indicate an improvement of disturbance rejection performance. Full article
(This article belongs to the Special Issue Algorithms for PID Controller 2019)
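The MIT rule at the heart of such MRAC schemes can be illustrated on a toy first-order plant. This is not the paper's NARX/FOPID magnetic-levitation loop: the plant, reference model, gains and square-wave reference below are all invented for illustration, and the standard sensitivity approximation is used.

```python
# Toy MIT-rule gain adaptation: drive the plant output y toward the
# reference-model output ym by adapting a feedforward gain theta.
dt, gamma = 0.001, 0.5
a, b = 2.0, 4.0        # "unknown" plant:  dy/dt  = -a*y   + b*u
am, bm = 2.0, 2.0      # reference model:  dym/dt = -am*ym + bm*r
y = ym = 0.0
theta = 0.0            # adjustable gain; the ideal value is bm/b = 0.5
for step in range(20000):
    r = 1.0 if (step * dt) % 2.0 < 1.0 else -1.0   # square-wave reference
    u = theta * r                                   # control law
    y += dt * (-a * y + b * u)                      # Euler step, plant
    ym += dt * (-am * ym + bm * r)                  # Euler step, model
    e = y - ym
    # MIT rule: dtheta/dt = -gamma * e * de/dtheta, with de/dtheta ~ ym
    theta += dt * (-gamma * e * ym)
```

Over the 20-second run the adapted gain settles near its ideal value, and the model-following error shrinks accordingly; this gradient-descent-on-error mechanism is what the outer MRAC loop of the paper generalizes with a NARX reference model.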
Show Figures

Figure 1. Implementation of the retuning fractional-order proportional integral derivative (FOPID) system by using the PID control loop [9].
Figure 2. Block diagram of the multi-loop Model Reference Adaptive Control (MRAC)-FOPID system [9].
Figure 3. Magnetic Levitation System (MLS) setup in A-Lab.
Figure 4. Representation of the neural network (NN)-NARX structure. The first-layer activation function f1 is generally nonlinear, to model nonlinear relations in the data. The second-layer activation function f2 is linear and rescales the NARX output to the original output data. Thus, the network can yield satisfactory predictions of nonlinear system dynamics.
Figure 5. Neural network model with the ML system's filter.
Figure 6. Overview of the three parts: hardware, ANN model and mathematical model.
Figure 7. Input and output signals used to train the NARX model for closed-loop model identification.
Figure 8. Ball position comparison among the real system, the ANN model and the mathematical model.
Figure 9. A comparison of ANN model performance and mathematical model performance.
Figure 10. Block diagram of the multi-loop MRAC-FOPID control with a NARX reference model.
Figure 11. Complete experimental configuration for evaluating the multi-loop control structure. There are three control loops in total: the original PID control loop with reference input r''(t); the retuning loop with reference input r'(t), which replaces the dynamics of the original loop with those of the optimally tuned FOPID controller; and the MRAC loop, to which the original reference input r(t) is connected. By bypassing reference inputs in various ways, it is possible to achieve different simulation scenarios with the MRAC loop and retuning control loop enabled or disabled independently. This schematic diagram serves as the basis for both pure software simulations and real-time experiments in MATLAB/Simulink.
Figure 12. Disturbance responses of the multi-loop MRAC-FOPID control with the NARX reference model and of the FOPID control loop (with MRAC disabled).
Figure 13. Step disturbance responses of the multi-loop MRAC-FOPID control with the NARX reference model and of the FOPID control loop (with MRAC disabled).
Figure 14. Sinusoidal disturbance responses of the multi-loop MRAC-FOPID control with the NARX reference model and of the FOPID control loop (with MRAC disabled).
Figure 15. Step disturbance responses of the MRAC system with the original PID control loop.
Figure 16. Sine-wave disturbance responses of the MRAC system with the original PID control loop.
15 pages, 315 KiB  
Article
Adaptive Metrics for Adaptive Samples
by Nicholas J. Cavanna and Donald R. Sheehy
Algorithms 2020, 13(8), 200; https://doi.org/10.3390/a13080200 - 18 Aug 2020
Viewed by 2753
Abstract
We generalize the local-feature size definition of adaptive sampling used in surface reconstruction to relate it to an alternative metric on Euclidean space. In the new metric, adaptive samples become uniform samples, making it simpler both to give adaptive sampling versions of homological inference results and to prove topological guarantees using the critical points theory of distance functions. This ultimately leads to an algorithm for homology inference from samples whose spacing depends on their distance to a discrete representation of the complement space. Full article
(This article belongs to the Special Issue Topological Data Analysis)
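The flavor of the adaptive-sampling condition can be sketched as a simple check: every domain point should have a sample within a tolerance that scales with its distance to a discrete stand-in for the complement (a proxy for local feature size). This is a very loose illustration with invented points; the paper's metric, sampling conditions and homological guarantees are more delicate than this check.

```python
import math

def is_adaptive_sample(samples, domain, complement, eps):
    """Check that every domain point has a sample within eps times its
    distance to the (discrete) complement -- a toy adaptivity test."""
    def feature_size(x):
        return min(math.dist(x, l) for l in complement)
    return all(
        min(math.dist(x, s) for s in samples) <= eps * feature_size(x)
        for x in domain
    )

complement = [(0.0, 0.0)]                                   # one "obstacle"
samples = [(0.5, 0.0), (1.0, 0.0), (2.0, 0.0), (4.0, 0.0)]  # geometric spacing
domain = [(0.75, 0.0), (1.5, 0.0), (3.0, 0.0)]              # points to cover
```

Note that the geometrically spaced samples pass for a generous eps but fail for a tight one: far from the obstacle the allowed spacing grows, which is exactly the adaptivity the paper turns into uniformity under a modified metric.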
28 pages, 3005 KiB  
Article
Scalable Block Preconditioners for Linearized Navier-Stokes Equations at High Reynolds Number
by Filippo Zanetti and Luca Bergamaschi
Algorithms 2020, 13(8), 199; https://doi.org/10.3390/a13080199 - 16 Aug 2020
Cited by 4 | Viewed by 3669
Abstract
We review a number of preconditioners for the advection-diffusion operator and for the Schur complement matrix, which, in turn, constitute the building blocks for constraint and triangular preconditioners that accelerate the iterative solution of the discretized and linearized Navier-Stokes equations. Intensive numerical testing is performed on the driven-cavity problem with low values of the viscosity coefficient. We devise an efficient multigrid preconditioner for the advection-diffusion matrix which, combined with the commuted BFBt Schur complement approximation and inserted in a 2×2 block preconditioner, provides convergence of the Generalized Minimal Residual (GMRES) method in a number of iterations independent of the mesh size for the lowest values of the viscosity parameter. The low-rank acceleration of this preconditioner is also investigated, showing its great potential. Full article
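Such 2×2 block preconditioners act by block back-substitution: solve with the Schur-complement approximation first, then with the (1,1) block. The dense toy sketch below uses invented sizes, random matrices, and exact solves standing in for the paper's multigrid (1,1)-block solver and BFBt Schur approximation.

```python
import numpy as np

# Apply a block upper-triangular preconditioner P = [[A, B^T], [0, -S]]
# to a saddle-point residual (r_u, r_p).
rng = np.random.default_rng(0)
n, m = 8, 3
A = 4.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))  # (1,1) block
B = rng.standard_normal((m, n))                          # divergence block
S = B @ np.linalg.solve(A, B.T)                          # Schur complement

def apply_preconditioner(r_u, r_p):
    # Back-substitution: pressure block first, then the velocity block.
    z_p = np.linalg.solve(-S, r_p)
    z_u = np.linalg.solve(A, r_u - B.T @ z_p)
    return z_u, z_p

z_u, z_p = apply_preconditioner(np.ones(n), np.ones(m))
```

In practice both solves are replaced by cheap approximations (multigrid cycles, BFBt), and the preconditioner is handed to GMRES; the quality of those approximations governs the mesh- and viscosity-independence of the iteration counts discussed in the abstract.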
Show Figures

Figure 1. Linear and quadratic triangular elements, used in the Taylor-Hood approximation.
Figure 2. Ordering of the nodes for the smoothing.
Figure 3. Velocity magnitude for the Stokes problem.
Figure 4. Velocity direction for the Stokes problem.
Figure 5. Sparsity pattern of matrix M10.
Figure 6. Spectra of the BFBt-preconditioned Schur complement for different values of viscosity. (a) ν = 0.1; (b) ν = 0.01; (c) ν = 0.005.
Figure 7. Spectra of various multigrid schemes for the matrix M40 and ν = 0.005.
Figure 8. Left: spectra of the preconditioned Schur complement for the BFBt and BFBt-c preconditioners. Right: spectra of the preconditioned (1,1) block and of the Schur complement using BFBt-c. Plots refer to matrix M40 and ν = 0.005.
27 pages, 14281 KiB  
Article
Pavement Defect Segmentation in Orthoframes with a Pipeline of Three Convolutional Neural Networks
by Roland Lõuk, Andri Riid, René Pihlak and Aleksei Tepljakov
Algorithms 2020, 13(8), 198; https://doi.org/10.3390/a13080198 - 14 Aug 2020
Cited by 10 | Viewed by 3854
Abstract
In this manuscript, the issue of detecting and segmenting out pavement defects on highway roads is addressed. Specifically, computer vision (CV) methods based on deep learning with convolutional neural networks (ConvNets) are developed and applied to the problem. A novel neural network structure is considered, based on a pipeline of three ConvNets and endowed with the capacity for context awareness, which improves grid-based search for defects on orthoframes by considering the surrounding image content, an approach that essentially draws inspiration from how humans tend to solve the task of image segmentation. Methods for assessing the quality of segmentation are also discussed. The contribution further describes the complete procedure of working with pavement defects in an industrial setting, involving the work cycle of defect annotation, ConvNet training and validation. The ConvNet evaluation results provided in the paper hint at a successful implementation of the proposed technique. Full article
(This article belongs to the Special Issue Bio-Inspired Algorithms for Image Processing)
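The segmentation-quality assessment rests on intersection-over-union. A minimal sketch of the plain pixel-level IoU on binary masks (the masks below are invented; the paper's edge-tolerant and instance-level variants are not shown):

```python
def iou(pred, truth):
    """Pixel-level intersection-over-union for binary masks represented
    as sets of (row, col) pixels."""
    inter = len(pred & truth)
    union = len(pred | truth)
    return inter / union if union else 1.0

pred = {(0, 0), (0, 1), (1, 1)}   # hypothetical predicted defect mask
truth = {(0, 1), (1, 1), (1, 0)}  # hypothetical ground-truth mask
print(iou(pred, truth))  # 2 shared pixels, 4 in the union -> 0.5
```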
Show Figures

Figure 1. The orthoframes depicting the road.
Figure 2. Generation of the region-of-interest orthoframe mask.
Figure 3. Overview of the inference process for defect segmentation over an orthoframe.
Figure 4. Illustration of the Road Segmentation Network (RSN) architecture.
Figure 5. Road-area image pre-processing.
Figure 6. Extraction of a content-context sample from an orthoframe.
Figure 7. Architecture of the defect-detection convolutional neural network (ConvNet) used in this work.
Figure 8. Architecture of the defect-segmentation U-Net used in this work.
Figure 9. An illustration of various metrics. (a) The original image; (b) TP, FN and FP pixels depicted in green, blue and red, respectively; (c) three objects obtained by combining the predictions and ground-truth instances; (d) TP, FN and FP pixels with edge-tolerant intersection-over-union (IoU) computation.
Figure 10. Classification of segmented defects. As in Figure 9, green, blue and red depict TP, FN and FP pixels, respectively. (a) true positive; (b) false positive; (c) false negative; (d) false negative and false positive; (e) false positive; (f) false negative.
Figure 11. Simplified UML diagram of the class and package relationships making up the DATM annotation tool.
Figure 12. Graphical user interface of the annotation tool, written in Python 3 with the PyQt framework.
Figure 13. Active learning of road contours and pavement defects.
Figure 14. Three of the sampling strategies visualized on an orthoframe. Each green square is of size 336 × 336 and is added to the given dataset. For illustrative purposes, annotated defects are marked in blue and predicted false positives (only shown in (c)) are marked in red. For visual clarity, partial overlapping is not displayed, and the focused part of the road in conjunction with the road-area mask is highlighted. (a) Full sampling; (b) dilated defect-contour sampling; (c) defect-contour sampling with false positives.
Figure 15. Defect Segmentation Network (DSN) results for different encoder architectures over 40 epochs, trained on the 200-orthoframe dataset with 2-fold cross-validation.
Figure 16. Visualization of the context captured for different DSN patch sizes. For models with input sizes of 336 × 336 or 448 × 448, the output is cropped to the central 224 × 224 part. In each case, the orthoframe is still partitioned into patches of size 224 × 224, shown as yellow squares.
Figure 17. Orthoframes from the test set processed by the PDDS. RSN outputs multiplied by the region-of-interest mask are highlighted. Patches classified by the DDN as non-defective have white borders; patches classified as defective have red borders. For the masks produced by the DSN, the green, red and blue contours represent true-positive, false-positive and false-negative pixels, respectively. (a) IoU = 0.846, eIoU = 0.930, iIoU = 0.457, cPr = 0.600, cRc = 0.750; (b) IoU = 0.790, eIoU = 0.991, iIoU = 0.582, cPr = 1, cRc = 1; (c) IoU = 0.104, eIoU = 0.110, iIoU = 0.306, cPr = 0.143, cRc = 1; (d) IoU = 0.515, eIoU = 0.575, iIoU = 0.306, cPr = 0.750, cRc = 0.500.
16 pages, 297 KiB  
Article
A Brief Survey of Fixed-Parameter Parallelism
by Faisal N. Abu-Khzam and Karam Al Kontar
Algorithms 2020, 13(8), 197; https://doi.org/10.3390/a13080197 - 14 Aug 2020
Cited by 3 | Viewed by 2703
Abstract
This paper provides an overview of the field of parameterized parallel complexity by surveying previous work in addition to presenting a few new observations and exploring potential new directions. In particular, we present a general view of how known FPT techniques, such as bounded search trees, color coding, kernelization, and iterative compression, can be modified to produce fixed-parameter parallel algorithms. Full article
(This article belongs to the Special Issue 2020 Selected Papers from Algorithms Editorial Board Members)
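As a concrete instance of the bounded-search-tree technique the survey covers, the classic branching algorithm for k-Vertex Cover picks any uncovered edge (u, v): some endpoint must be in the cover, so it branches on both. The two branches are independent subproblems, which is precisely what makes such O(2^k)-size search trees natural to explore in parallel. A minimal sequential sketch (the graph is an invented toy example):

```python
def vertex_cover(edges, k):
    """Return a vertex cover of size <= k, or None if none exists."""
    if not edges:
        return set()          # everything covered
    if k == 0:
        return None           # budget exhausted but edges remain
    u, v = edges[0]           # branch on an arbitrary uncovered edge
    for w in (u, v):
        remaining = [e for e in edges if w not in e]
        sub = vertex_cover(remaining, k - 1)
        if sub is not None:
            return sub | {w}
    return None

edges = [(1, 2), (2, 3), (1, 3), (3, 4)]  # triangle plus a pendant edge
```

A fixed-parameter parallel version would hand the two recursive calls to separate workers; the tree has at most 2^k leaves regardless of the graph's size.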
Show Figures

Figure 1. FPPT = FPP ⊂ PNC ⊂ NC and FPPT = FPP ⊂ PNC ⊂ FPT.
21 pages, 1475 KiB  
Article
Adaptive Reconstruction of Imperfectly Observed Monotone Functions, with Applications to Uncertainty Quantification
by Luc Bonnet, Jean-Luc Akian, Éric Savin and T. J. Sullivan
Algorithms 2020, 13(8), 196; https://doi.org/10.3390/a13080196 - 13 Aug 2020
Viewed by 3485
Abstract
Motivated by the desire to numerically calculate rigorous upper and lower bounds on deviation probabilities over large classes of probability distributions, we present an adaptive algorithm for the reconstruction of increasing real-valued functions. While this problem is similar to the classical statistical problem of isotonic regression, the optimisation setting alters several characteristics of the problem and opens natural algorithmic possibilities. We present our algorithm, establish sufficient conditions for convergence of the reconstruction to the ground truth, and apply the method to synthetic test cases and a real-world example of uncertainty quantification for aerodynamic design. Full article
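The adaptive evaluation strategy can be sketched loosely: keep evaluating the increasing ground truth F at the midpoint of the consecutive pair of points whose "uncertainty rectangle" (x-gap times F-gap) is largest. This is illustrative only; the paper's algorithm, error model and stopping rule differ, and the cubic ground truth below is invented.

```python
def reconstruct(F, a, b, n_evals):
    """Greedily refine where the rectangle between consecutive evaluated
    points of the increasing function F is largest."""
    pts = [(a, F(a)), (b, F(b))]
    for _ in range(n_evals):
        i = max(range(len(pts) - 1),
                key=lambda j: (pts[j + 1][0] - pts[j][0])
                            * (pts[j + 1][1] - pts[j][1]))
        mid = 0.5 * (pts[i][0] + pts[i + 1][0])
        pts.insert(i + 1, (mid, F(mid)))
    return pts

pts = reconstruct(lambda x: x ** 3, 0.0, 1.0, 6)   # monotone toy truth
```

Because F is increasing, the rectangles bound the remaining area of uncertainty between the lower and upper piecewise-constant envelopes, so shrinking the largest rectangle first is a natural greedy rule.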
Show Figures

Figure 1. Possible ground-truth functions between two consecutive points x1 and x2, and our choice of piecewise-constant interpolant.
Figure 2. New area when a point is added at the middle of the biggest rectangle.
Figure 3. Evolution of F(n) and of the ∞- and 1-norms of the error F† − F(n) as functions of the iteration count, n, for a smooth ground truth F†.
Figure 4. Evolution of F(n) and of the ∞- and 1-norms of the error F† − F(n) as functions of the iteration count, n, for a discontinuous ground truth F†.
Figure 5. Evolution of F(n) and of the minimum of the quality and the total area as functions of the iteration count, n, for a discontinuous ground truth F† with E = 10^−4.
Figure 6. Evolution of F(n) and of the minimum of the quality and the total area as functions of the iteration count, n, for a discontinuous ground truth F† with E = 10^4.
Figure 7. Black lines: maximum and minimum deformation of the RAE2822 profile. Red: maximum deformation of the third bump alone. Blue: minimum deformation of the third bump alone. Image taken from Dumont et al. [12].
Figure 8. Picture depicting the lift Cl and the drag Cd of an airfoil.
Figure 9. Response surface in the (Ξ1, Ξ3) plane with (Ξ2 = −0.0025, Ξ4 = 0) (a) and (Ξ2 = 0.0025, Ξ4 = 0) (b). Images taken from Dumont et al. [12].
Figure 10. Evolution of P̄_A(x) as a function of the iteration count, n.
Figure 11. Evolution of the minimum of the quality and the total area as a function of the iteration count, n.
15 pages, 1632 KiB  
Article
Deep Learning-Enabled Semantic Inference of Individual Building Damage Magnitude from Satellite Images
by Bradley J. Wheeler and Hassan A. Karimi
Algorithms 2020, 13(8), 195; https://doi.org/10.3390/a13080195 - 13 Aug 2020
Cited by 24 | Viewed by 4018
Abstract
Natural disasters are phenomena that can occur in any part of the world. They can cause massive amounts of destruction and leave entire cities in great need of assistance. The ability to quickly and accurately deliver aid to impacted areas is crucial for saving not only time and money but, most importantly, lives. We present a deep learning-based computer vision model that semantically infers the magnitude of damage to individual buildings after natural disasters using pre- and post-disaster satellite images. This model helps alleviate a major bottleneck in disaster management decision support by automating the analysis of building damage post-disaster. In this paper, we show how our methods obtain better performance than existing models, especially for moderate to significant magnitudes of damage, along with ablation studies on the importance and impact of different training parameters in deep learning for satellite imagery. Our methods achieved an overall F1 score of 0.868. Full article
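The F1 score the abstract reports combines precision and recall per class. A minimal sketch of a per-class F1 computation (the label names below are invented stand-ins for the four damage levels):

```python
def f1_per_class(y_true, y_pred, label):
    """F1 = harmonic mean of precision and recall for one class."""
    tp = sum(t == p == label for t, p in zip(y_true, y_pred))
    fp = sum(p == label != t for t, p in zip(y_true, y_pred))
    fn = sum(t == label != p for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

y_true = ["none", "minor", "major", "none"]
y_pred = ["none", "major", "major", "minor"]
```

Averaging the per-class scores (unweighted or support-weighted) gives the single overall figure typically quoted for multi-class damage grading.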
Figure 1
<p>Pipeline for classifying damage magnitude to buildings from satellite images.</p>
Full article ">Figure 2
<p>From left to right: The before-disaster, after-disaster, and damage magnitude inference images from our pipeline.</p>
Full article ">Figure 3
<p>Training and testing cross-entropy loss of final ResNet model from <a href="#algorithms-13-00195-t004" class="html-table">Table 4</a>.</p>
Full article ">Figure 4
<p>ROC curves for all four damage magnitude levels.</p>
Full article ">
10 pages, 401 KiB  
Article
Graph Planarity by Replacing Cliques with Paths
by Patrizio Angelini, Peter Eades, Seok-Hee Hong, Karsten Klein, Stephen Kobourov, Giuseppe Liotta, Alfredo Navarra and Alessandra Tappini
Algorithms 2020, 13(8), 194; https://doi.org/10.3390/a13080194 - 13 Aug 2020
Cited by 7 | Viewed by 3749
Abstract
This paper introduces and studies the following beyond-planarity problem, which we call h-Clique2Path Planarity. Let G be a simple topological graph whose vertices are partitioned into subsets of size at most h, each inducing a clique. h-Clique2Path Planarity asks whether it is possible to obtain a planar subgraph of G by removing edges from each clique so that the subgraph induced by each subset is a path. We investigate the complexity of this problem in relation to k-planarity. In particular, we prove that h-Clique2Path Planarity is NP-complete even when h=4 and G is a simple 3-plane graph, while it can be solved in linear time when G is a simple 1-plane graph, for any value of h. Our results contribute to the growing fields of hybrid planarity and of graph drawing beyond planarity. Full article
(This article belongs to the Special Issue Graph Algorithms and Applications)
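To see what replacing a clique with a path involves: each clique on h vertices must keep exactly the edges of one of its Hamiltonian paths. For small h these candidate paths can be enumerated by brute force; the sketch below is purely illustrative and is not the paper's NP-hardness reduction or its linear-time algorithm for 1-plane graphs:

```python
from itertools import permutations

def spanning_paths(clique):
    """Enumerate the edge sets of all Hamiltonian paths of a clique.

    Each path keeps h-1 of the h(h-1)/2 clique edges; a planarity test
    would then be run with one such path chosen per clique.
    """
    seen = set()
    for perm in permutations(clique):
        # A path and its reverse use the same edges; the frozenset of
        # undirected edges canonicalizes them.
        edges = frozenset(frozenset(e) for e in zip(perm, perm[1:]))
        seen.add(edges)
    return seen

# A clique on 4 vertices (h = 4) has 4!/2 = 12 distinct spanning paths.
print(len(spanning_paths(["a", "b", "c", "d"])))  # → 12
```

The number of candidates per clique, h!/2, is constant for fixed h, but the choices interact across cliques, which is where the hardness for 3-plane graphs comes from.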
Figure 1
<p>(<b>a</b>) A non-planar graph <span class="html-italic">G</span>. Cliques are highlighted with bold edges. (<b>b</b>) A clique-planar drawing of <span class="html-italic">G</span>. (<b>c</b>) Replacing each clique by a path spanning its vertices. Note that, unlike in (<b>a</b>), in (<b>c</b>) the first and last vertices of each path have only one place to connect to edges, while the interior vertices have two: this is what makes the problem non-trivial.</p>
Full article ">Figure 2
<p>A drawing produced by the reduction in [<a href="#B15-algorithms-13-00194" class="html-bibr">15</a>]. The yellow region contains a variable gadget, the blue region contains a splitting gadget, the orange region contains a wire gadget, and the violet region contains an inverter gadget.</p>
Full article ">Figure 3
<p>(<b>a</b>) The variable gadget <math display="inline"><semantics> <msub> <mi>G</mi> <mi>x</mi> </msub> </semantics></math> for a variable <span class="html-italic">x</span> is represented in the left dotted box. The clause gadget for a clause <span class="html-italic">c</span> is represented in the right dotted box. The chain connecting <math display="inline"><semantics> <msub> <mi>G</mi> <mi>x</mi> </msub> </semantics></math> to <math display="inline"><semantics> <msub> <mi>G</mi> <mi>c</mi> </msub> </semantics></math> is represented with lighter colors. The removed edges are dashed red. (<b>b</b>) All variables are <tt>False</tt>. (<b>c</b>) At least two variables are <tt>True</tt>.</p>
Full article ">Figure 4
<p>All possible 1-plane graphs involving one or more cliques of type <math display="inline"><semantics> <msub> <mi>K</mi> <mn>3</mn> </msub> </semantics></math> and <math display="inline"><semantics> <msub> <mi>K</mi> <mn>4</mn> </msub> </semantics></math> admitting crossing edges. (<b>a</b>) and (<b>b</b>): two representations of a clique of type <math display="inline"><semantics> <msub> <mi>K</mi> <mn>4</mn> </msub> </semantics></math>; (<b>c</b>) and (<b>d</b>): two representations of two intersecting cliques of type <math display="inline"><semantics> <msub> <mi>K</mi> <mn>3</mn> </msub> </semantics></math>; (<b>e</b>) and (<b>f</b>): two representations of a clique of type <math display="inline"><semantics> <msub> <mi>K</mi> <mn>3</mn> </msub> </semantics></math> intersecting a clique of type <math display="inline"><semantics> <msub> <mi>K</mi> <mn>4</mn> </msub> </semantics></math>; (<b>g</b>): two intersecting cliques of type <math display="inline"><semantics> <msub> <mi>K</mi> <mn>4</mn> </msub> </semantics></math>.</p>
Full article ">
12 pages, 1660 KiB  
Article
Cross-Camera Erased Feature Learning for Unsupervised Person Re-Identification
by Shaojun Wu and Ling Gao
Algorithms 2020, 13(8), 193; https://doi.org/10.3390/a13080193 - 10 Aug 2020
Cited by 1 | Viewed by 2623
Abstract
Most supervised person re-identification methods show excellent performance, but labeled datasets are very expensive, which limits their application in practical scenarios. To solve the scalability problem, we propose a Cross-camera Erased Feature Learning (CEFL) framework for unsupervised person re-identification that learns discriminative features from image appearances without manual annotations, where both the cross-camera global image appearance and the local details are explored. Specifically, for the global appearance, in order to bridge the gap between images with the same identities under different cameras, we generate style-transferred images. The network is trained to classify the original images, the style-transferred images and the negative samples. To learn the partial details of the images, we generate erased images and train the network to pull similar erased images together and push dissimilar ones away. In addition, we jointly learn the discriminative global and local information to obtain a more robust model. Global and erased features are used together in feature learning, a successful conjunction adopted from BFENet. Extensive experiments show the superiority of CEFL in unsupervised person re-identification. Full article
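The "pull similar together, push dissimilar away" objective on erased features can be pictured with a generic triplet-style margin loss. This is a toy sketch of that family of losses, not CEFL's exact objective:

```python
import math

def euclid(u, v):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def pull_push_loss(anchor, positive, negative, margin=1.0):
    """Toy contrastive loss: zero once the positive (a similar erased
    feature) is at least `margin` closer to the anchor than the
    negative. A generic triplet margin loss, not CEFL's formulation."""
    return max(0.0, euclid(anchor, positive) - euclid(anchor, negative) + margin)

a = [0.0, 0.0]
print(pull_push_loss(a, positive=[0.1, 0.0], negative=[2.0, 0.0]))  # 0.0: already well separated
print(pull_push_loss(a, positive=[1.5, 0.0], negative=[1.0, 0.0]))  # 1.5: gradient would pull/push
```

Minimizing such a loss over many triplets is what drives similar erased images together and dissimilar ones apart in the embedding space.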
Figure 1
<p>The details of the remaining areas are still similar after erasing part of the original image.</p>
Full article ">Figure 2
<p>The structure of our unsupervised Cross-camera Erased Feature Learning (CEFL) network. In the three branches, from top to bottom, we use three different components and loss functions. Features are classified by minimizing intra-class gaps while maximizing inter-class gaps in cross-camera global feature learning. Similar erased feature vectors are pulled together and dissimilar features are pushed away in erased partial feature learning. Global features and erased features are used together in joint global and partial feature learning.</p>
Full article ">Figure 3
<p>Examples of images generated by Camstyle.</p>
Full article ">Figure 4
<p>Similar classes are drawn closer and dissimilar classes are pushed away.</p>
Full article ">Figure 5
<p>The effect of the parameter <math display="inline"><semantics> <mi>t</mi> </semantics></math> in the loss function <math display="inline"><semantics> <mrow> <msub> <mi>L</mi> <mi>G</mi> </msub> </mrow> </semantics></math>.</p>
Full article ">Figure 6
<p>The effect of the parameter <math display="inline"><semantics> <mi>k</mi> </semantics></math> in the loss function <math display="inline"><semantics> <mrow> <msub> <mi>L</mi> <mi>B</mi> </msub> </mrow> </semantics></math>.</p>
Full article ">
12 pages, 3728 KiB  
Article
Local-Topology-Based Scaling for Distance Preserving Dimension Reduction Method to Improve Classification of Biomedical Data-Sets
by Karaj Khosla, Indra Prakash Jha, Ajit Kumar and Vibhor Kumar
Algorithms 2020, 13(8), 192; https://doi.org/10.3390/a13080192 - 10 Aug 2020
Cited by 3 | Viewed by 3392
Abstract
Dimension reduction is often used in several analysis procedures for high-dimensional biomedical data-sets, such as classification or outlier detection. To improve the performance of such data-mining steps, preserving both distance information and local topology among data-points could be more useful than giving priority to visualization in low dimension. Therefore, we introduce topology-preserving distance scaling (TPDS) to augment a dimension reduction method meant to reproduce distance information in a higher dimension. Our approach involves distance inflation to preserve local topology and thus avoid collapse during distance preservation-based optimization. Applying TPDS to diverse biomedical data-sets revealed that, besides providing better visualization than typical distance-preserving methods, TPDS leads to better classification of data points in the reduced dimension. For data-sets with outliers, the TPDS approach also proves useful, helping even a purely distance-preserving method achieve better convergence. Full article
(This article belongs to the Special Issue Advanced Data Mining: Algorithms and Applications)
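The core idea is to inflate short pairwise distances so that local neighbourhoods do not collapse during distance-preserving optimization. Below is one plausible inflation rule as a hypothetical sketch; the exponent rule and the name `inflate_distances` are assumptions for illustration, not the paper's exact TPDS formula:

```python
def inflate_distances(dist, alpha=0.5):
    """Inflate small pairwise distances to protect local topology.

    dist:  symmetric matrix (list of lists) of pairwise distances.
    alpha: inflation exponent in (0, 1); d**alpha expands distances
           below 1 relatively more than larger ones.
    Illustrative rule only, not the exact TPDS scaling.
    """
    n = len(dist)
    return [[dist[i][j] ** alpha for j in range(n)] for i in range(n)]

d = [[0.0, 0.04], [0.04, 0.0]]
print(inflate_distances(d))  # the tiny distance 0.04 grows to 0.2
```

A concave map like this preserves the ordering of distances (so global structure survives) while spreading out near-identical points that a purely stress-minimizing embedding would otherwise merge.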
Figure 1
<p>Dimension reduction of Parkinson’s data set by Naranjo et al. [<a href="#B14-algorithms-13-00192" class="html-bibr">14</a>] using four different methods. The distance stress-based cost is represented here as MDS-cost.</p>
Full article ">Figure 2
<p>Dimension reduction of the mouse protein data set. Visualization of reduction to two dimensions. The distance stress-based cost is represented as MDS-cost. The neighborhood proximity of data-points belonging to the same class is better in the TPDS output than in that of t-SNE.</p>
Full article ">Figure 3
<p>Visualization of the mouse protein data set after reduction to three dimensions. The trisomic and normal mice data-points are shown in two different colors. TPDS shows a clear separation between trisomic and normal mice data-points.</p>
Full article ">Figure 4
<p>Dimension reduction of the SCADI data set. The adjusted Rand index (ARI) and normalized mutual information (NMI) were calculated after performing k-means clustering with k = 7.</p>
Full article ">Figure 5
<p>Dimension reduction of single-cell expression data. The 7 types of cells are shown in different colors. The adjusted Rand index (ARI) and normalized mutual information (NMI) were calculated after k-means clustering (k = 7). The outlier cells (shown in red) seem to have a different effect with each dimension reduction method. For demonstration purposes, the perplexity was set to 4 for t-SNE. The performance of TPDS is substantially better than that of Sammon mapping and non-metric MDS, even in terms of MDS-cost minimization.</p>
Full article ">
30 pages, 743 KiB  
Article
Faster Algorithms for Mining Shortest-Path Distances from Massive Time-Evolving Graphs
by Mattia D’Emidio
Algorithms 2020, 13(8), 191; https://doi.org/10.3390/a13080191 - 4 Aug 2020
Cited by 6 | Viewed by 3517
Abstract
Computing shortest-path distances is a fundamental primitive in the context of graph data mining, since this kind of information is essential in a broad range of prominent applications, including social network analysis, data routing, web search optimization, database design and route planning. Standard algorithms for shortest paths (e.g., Dijkstra’s) do not scale well with the graph size, as they can take more than a second, or incur huge memory overheads, to answer a single distance query on large-scale graph datasets. Hence, they are not suited to mining distances from big graphs, which are becoming the norm in most modern application contexts. Therefore, to achieve faster query answering, smarter and more scalable methods have been designed, the most effective of which are based on precomputing and querying a compact representation of the transitive closure of the input graph, called the 2-hop-cover labeling. To use such approaches in realistic time-evolving scenarios, when the managed graph undergoes topological modifications over time, specific dynamic algorithms, which carefully update the labeling as the graph evolves, have been introduced. In fact, recomputing the 2-hop-cover structure from scratch every time the graph changes is not an option, as it induces unsustainable time overheads. While the state-of-the-art dynamic algorithm for updating a 2-hop-cover labeling under incremental modifications (insertions of arcs/vertices, arc weight decreases) offers very fast update times, the only known solution for decremental modifications (deletions of arcs/vertices, arc weight increases) is still far from practical, as it requires up to tens of seconds of processing per update on several prominent classes of real-world inputs, as experimentation shows. In this paper, we introduce a new dynamic algorithm to update 2-hop-cover labelings against decremental changes.
We prove its correctness, formally analyze its worst-case performance, and assess its effectiveness through an experimental evaluation employing both real-world and synthetic inputs. Our results show that it improves, by up to several orders of magnitude, upon average update times of the only existing decremental algorithm, thus representing a step forward towards real-time distance mining in general, massive time-evolving graphs. Full article
(This article belongs to the Special Issue Algorithmic Aspects of Networks)
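A 2-hop-cover labeling answers a distance query by intersecting the out-label of the source with the in-label of the target: d(u, v) = min over common hubs h of d(u, h) + d(h, v). A minimal sketch of that standard query primitive follows; the tiny labeling is hand-made for a 3-vertex path, not taken from the paper's figures:

```python
import math

def query(L_out, L_in, u, v):
    """Distance via 2-hop-cover labels:
    d(u, v) = min over common hubs h of d(u, h) + d(h, v)."""
    hubs_u = dict(L_out[u])              # hub -> distance from u to hub
    best = math.inf
    for hub, d_hv in L_in[v]:            # hub -> distance from hub to v
        d_uh = hubs_u.get(hub)
        if d_uh is not None:
            best = min(best, d_uh + d_hv)
    return best

# Hand-made labeling for the directed path 0 -> 1 -> 2, hub vertex 1.
L_out = {0: [(0, 0), (1, 1)], 1: [(1, 0)], 2: [(2, 0)]}
L_in  = {0: [(0, 0)], 1: [(1, 0)], 2: [(1, 1), (2, 0)]}
print(query(L_out, L_in, 0, 2))  # 0 -> 1 -> 2 has length 2
```

The cover property guarantees that for every connected pair at least one shortest path passes through a common hub, which is exactly what the dynamic algorithms in the paper must maintain under deletions.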
Figure 1
<p>A sample graph and a corresponding 2-<span class="html-small-caps">hop</span>-<span class="html-small-caps">cover</span> labeling are shown. The vertex ordering is <math display="inline"><semantics> <mrow> <mo>{</mo> <mn>4</mn> <mo>,</mo> <mn>0</mn> <mo>,</mo> <mn>3</mn> <mo>,</mo> <mn>1</mn> <mo>,</mo> <mn>2</mn> <mo>,</mo> <mn>5</mn> <mo>,</mo> <mn>6</mn> <mo>}</mo> </mrow> </semantics></math>.</p>
Full article ">Figure 2
<p>Consider the graph of <a href="#algorithms-13-00191-f001" class="html-fig">Figure 1</a> (<b>left</b>) and the corresponding labeling (<b>right</b>). Assume arc <math display="inline"><semantics> <mrow> <mo>(</mo> <mn>4</mn> <mo>,</mo> <mn>0</mn> <mo>)</mo> </mrow> </semantics></math> is removed. If Algorithm <span class="html-small-caps">bidir–2hc</span> is executed, vertices <math display="inline"><semantics> <mrow> <mn>3</mn> <mo>,</mo> <mn>5</mn> <mo>,</mo> <mn>6</mn> </mrow> </semantics></math> have their label sets scanned twice, once during the detection of affected vertices and once during the removal phase, since the hub vertex for pairs <math display="inline"><semantics> <mrow> <mo>(</mo> <mn>3</mn> <mo>,</mo> <mn>0</mn> <mo>)</mo> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mo>(</mo> <mn>5</mn> <mo>,</mo> <mn>0</mn> <mo>)</mo> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mo>(</mo> <mn>6</mn> <mo>,</mo> <mn>0</mn> <mo>)</mo> </mrow> </semantics></math> is vertex 4.
However, no label entry is removed from <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="sans-serif">L</mi> <mrow> <mi mathvariant="sans-serif">i</mi> <mi mathvariant="sans-serif">n</mi> </mrow> </msub> <mrow> <mo>(</mo> <mn>3</mn> <mo>)</mo> </mrow> </mrow> </semantics></math> nor from <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="sans-serif">L</mi> <mrow> <mi mathvariant="sans-serif">o</mi> <mi mathvariant="sans-serif">u</mi> <mi mathvariant="sans-serif">t</mi> </mrow> </msub> <mrow> <mo>(</mo> <mn>3</mn> <mo>)</mo> </mrow> </mrow> </semantics></math> (the same holds for the label sets <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="sans-serif">L</mi> <mrow> <mi mathvariant="sans-serif">i</mi> <mi mathvariant="sans-serif">n</mi> </mrow> </msub> <mrow> <mo>(</mo> <mn>5</mn> <mo>)</mo> </mrow> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="sans-serif">L</mi> <mrow> <mi mathvariant="sans-serif">o</mi> <mi mathvariant="sans-serif">u</mi> <mi mathvariant="sans-serif">t</mi> </mrow> </msub> <mrow> <mo>(</mo> <mn>5</mn> <mo>)</mo> </mrow> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="sans-serif">L</mi> <mrow> <mi mathvariant="sans-serif">i</mi> <mi mathvariant="sans-serif">n</mi> </mrow> </msub> <mrow> <mo>(</mo> <mn>6</mn> <mo>)</mo> </mrow> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="sans-serif">L</mi> <mrow> <mi mathvariant="sans-serif">o</mi> <mi mathvariant="sans-serif">u</mi> <mi mathvariant="sans-serif">t</mi> </mrow> </msub> <mrow> <mo>(</mo> <mn>6</mn> <mo>)</mo> </mrow> </mrow> </semantics></math>). Our algorithm instead scans these label sets only once.</p>
Full article ">Figure 3
<p>(<b>a</b>): assume all arcs in this example have unitary weight and the vertex ordering is <math display="inline"><semantics> <mrow> <mo>{</mo> <msub> <mi mathvariant="sans-serif">x</mi> <mn>0</mn> </msub> <mo>,</mo> <msub> <mi mathvariant="sans-serif">x</mi> <mn>1</mn> </msub> <mo>,</mo> <mo>⋯</mo> <mo>,</mo> <msub> <mi mathvariant="sans-serif">x</mi> <mn>4</mn> </msub> <mo>}</mo> </mrow> </semantics></math>. Both <math display="inline"><semantics> <msub> <mi mathvariant="sans-serif">x</mi> <mn>4</mn> </msub> </semantics></math> and <math display="inline"><semantics> <msub> <mi mathvariant="sans-serif">x</mi> <mn>3</mn> </msub> </semantics></math> are in <math display="inline"><semantics> <mrow> <msubsup> <mi mathvariant="sans-serif">F</mi> <mrow> <mi mathvariant="sans-serif">G</mi> </mrow> <mi mathvariant="sans-serif">L</mi> </msubsup> <mrow> <mo>(</mo> <msub> <mi mathvariant="sans-serif">x</mi> <mn>1</mn> </msub> <mo>,</mo> <msub> <mi mathvariant="sans-serif">x</mi> <mn>0</mn> </msub> <mo>)</mo> </mrow> </mrow> </semantics></math> since the hub vertex for pairs <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="sans-serif">x</mi> <mn>4</mn> </msub> <mo>,</mo> <msub> <mi mathvariant="sans-serif">x</mi> <mn>0</mn> </msub> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="sans-serif">x</mi> <mn>3</mn> </msub> <mo>,</mo> <msub> <mi mathvariant="sans-serif">x</mi> <mn>0</mn> </msub> </mrow> </semantics></math>, in the labeling L induced by the above ordering, is vertex <math display="inline"><semantics> <msub> <mi mathvariant="sans-serif">x</mi> <mn>0</mn> </msub> </semantics></math>. 
However, if arc <math display="inline"><semantics> <mrow> <mo>(</mo> <msub> <mi mathvariant="sans-serif">x</mi> <mn>1</mn> </msub> <mo>,</mo> <msub> <mi mathvariant="sans-serif">x</mi> <mn>0</mn> </msub> <mo>)</mo> </mrow> </semantics></math> is removed then <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="sans-serif">d</mi> <mi mathvariant="sans-serif">G</mi> </msub> <mrow> <mo>(</mo> <msub> <mi mathvariant="sans-serif">x</mi> <mn>4</mn> </msub> <mo>,</mo> <msub> <mi mathvariant="sans-serif">x</mi> <mn>0</mn> </msub> <mo>)</mo> </mrow> </mrow> </semantics></math> changes while <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="sans-serif">d</mi> <mi mathvariant="sans-serif">G</mi> </msub> <mrow> <mo>(</mo> <msub> <mi mathvariant="sans-serif">x</mi> <mn>3</mn> </msub> <mo>,</mo> <msub> <mi mathvariant="sans-serif">x</mi> <mn>0</mn> </msub> <mo>)</mo> </mrow> </mrow> </semantics></math> does not, since <math display="inline"><semantics> <msub> <mi mathvariant="sans-serif">x</mi> <mn>3</mn> </msub> </semantics></math> has two shortest paths of the same weight to <math display="inline"><semantics> <msub> <mi mathvariant="sans-serif">x</mi> <mn>0</mn> </msub> </semantics></math>. (<b>b</b>): In the toy graph G shown here the only shortest path from x to h passes through arc <math display="inline"><semantics> <mrow> <mo>(</mo> <mi mathvariant="sans-serif">x</mi> <mo>,</mo> <mi mathvariant="sans-serif">y</mi> <mo>)</mo> </mrow> </semantics></math>. 
Therefore, if <math display="inline"><semantics> <mrow> <mo>(</mo> <mi mathvariant="sans-serif">x</mi> <mo>,</mo> <mi mathvariant="sans-serif">y</mi> <mo>)</mo> </mrow> </semantics></math> undergoes a decremental update, we have that <math display="inline"><semantics> <mrow> <mi mathvariant="sans-serif">x</mi> <mo>,</mo> <msub> <mi mathvariant="sans-serif">x</mi> <mn>1</mn> </msub> </mrow> </semantics></math>, and all vertices that are connected to <math display="inline"><semantics> <msub> <mi mathvariant="sans-serif">x</mi> <mn>1</mn> </msub> </semantics></math>, are in the forward cover graph (in blue). Symmetrically, <span class="html-italic">r</span> and h are in the backward cover graph (in red) while <math display="inline"><semantics> <msub> <mi mathvariant="sans-serif">k</mi> <mn>1</mn> </msub> </semantics></math> and <math display="inline"><semantics> <msub> <mi mathvariant="sans-serif">k</mi> <mn>2</mn> </msub> </semantics></math> are in neither of the two cover graphs.</p>
Full article ">Figure 4
<p>Consider again the graph of <a href="#algorithms-13-00191-f001" class="html-fig">Figure 1</a> and a corresponding 2-<span class="html-small-caps">hop</span>-<span class="html-small-caps">cover</span> labeling. Assume arc <math display="inline"><semantics> <mrow> <mo>(</mo> <mn>4</mn> <mo>,</mo> <mn>0</mn> <mo>)</mo> </mrow> </semantics></math> is removed. We have that vertex 3 is in <math display="inline"><semantics> <mrow> <msubsup> <mi mathvariant="sans-serif">F</mi> <mrow> <mi mathvariant="sans-serif">G</mi> </mrow> <mi mathvariant="sans-serif">L</mi> </msubsup> <mrow> <mo>(</mo> <mn>0</mn> <mo>,</mo> <mn>1</mn> <mo>)</mo> </mrow> </mrow> </semantics></math> and vertex 0 is in <math display="inline"><semantics> <mrow> <msubsup> <mi mathvariant="sans-serif">B</mi> <mrow> <mi mathvariant="sans-serif">G</mi> </mrow> <mi mathvariant="sans-serif">L</mi> </msubsup> <mrow> <mo>(</mo> <mn>0</mn> <mo>,</mo> <mn>1</mn> <mo>)</mo> </mrow> </mrow> </semantics></math>. However, there is no outdated label entry in <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="sans-serif">L</mi> <mrow> <mi mathvariant="sans-serif">o</mi> <mi mathvariant="sans-serif">u</mi> <mi mathvariant="sans-serif">t</mi> </mrow> </msub> <mrow> <mo>(</mo> <mn>3</mn> <mo>)</mo> </mrow> </mrow> </semantics></math> but the decremental operation on arc <math display="inline"><semantics> <mrow> <mo>(</mo> <mn>4</mn> <mo>,</mo> <mn>0</mn> <mo>)</mo> </mrow> </semantics></math> causes the cover property to be broken for pair <math display="inline"><semantics> <mrow> <mo>(</mo> <mn>3</mn> <mo>,</mo> <mn>0</mn> <mo>)</mo> </mrow> </semantics></math>.</p>
Full article ">Figure 5
<p>(<b>a</b>): an example of the contradiction reached in the proof of Property 3. Suppose vertex ordering is <math display="inline"><semantics> <mrow> <mo>{</mo> <msub> <mi mathvariant="sans-serif">x</mi> <mn>0</mn> </msub> <mo>,</mo> <msub> <mi mathvariant="sans-serif">x</mi> <mn>1</mn> </msub> <mo>,</mo> <msub> <mi mathvariant="sans-serif">x</mi> <mn>2</mn> </msub> <mo>,</mo> <mo>⋯</mo> <mo>,</mo> <msub> <mi mathvariant="sans-serif">x</mi> <mn>5</mn> </msub> <mo>}</mo> </mrow> </semantics></math> and all arcs weight 1 for the sake of the example. Therefore, in any minimal well-ordered 2-<span class="html-small-caps">hop</span>-<span class="html-small-caps">cover</span> labeling entry <math display="inline"><semantics> <mrow> <mo>(</mo> <msub> <mi mathvariant="sans-serif">x</mi> <mn>1</mn> </msub> <mo>,</mo> <msub> <mi mathvariant="sans-serif">δ</mi> <mrow> <msub> <mi mathvariant="sans-serif">x</mi> <mn>4</mn> </msub> <msub> <mi mathvariant="sans-serif">x</mi> <mn>1</mn> </msub> </mrow> </msub> <mo>)</mo> </mrow> </semantics></math> cannot belong to <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="sans-serif">L</mi> <mrow> <mi mathvariant="sans-serif">o</mi> <mi mathvariant="sans-serif">u</mi> <mi mathvariant="sans-serif">t</mi> </mrow> </msub> <mrow> <mo>(</mo> <msub> <mi mathvariant="sans-serif">x</mi> <mn>4</mn> </msub> <mo>)</mo> </mrow> </mrow> </semantics></math>. This allows Algorithm <span class="html-small-caps">clean</span> to avoid removing correct entries. (<b>b</b>): a case where a correct entry is preserved by Algorithm <span class="html-small-caps">clean</span>. 
Suppose the vertex ordering is <math display="inline"><semantics> <mrow> <mo>{</mo> <msub> <mi mathvariant="sans-serif">x</mi> <mn>0</mn> </msub> <mo>,</mo> <msub> <mi mathvariant="sans-serif">x</mi> <mn>1</mn> </msub> <mo>,</mo> <msub> <mi mathvariant="sans-serif">x</mi> <mn>2</mn> </msub> <mo>,</mo> <mo>⋯</mo> <mo>,</mo> <msub> <mi mathvariant="sans-serif">x</mi> <mn>7</mn> </msub> <mo>}</mo> </mrow> </semantics></math> and all arcs weight 1 for the sake of the example. Therefore in any minimal well-ordered 2-<span class="html-small-caps">hop</span>-<span class="html-small-caps">cover</span> labeling, <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="sans-serif">L</mi> <mrow> <mi mathvariant="sans-serif">o</mi> <mi mathvariant="sans-serif">u</mi> <mi mathvariant="sans-serif">t</mi> </mrow> </msub> <mrow> <mo>(</mo> <msub> <mi mathvariant="sans-serif">x</mi> <mn>4</mn> </msub> <mo>)</mo> </mrow> </mrow> </semantics></math> will contain <math display="inline"><semantics> <mrow> <mo>(</mo> <msub> <mi mathvariant="sans-serif">x</mi> <mn>0</mn> </msub> <mo>,</mo> <msub> <mi mathvariant="sans-serif">δ</mi> <mrow> <msub> <mi mathvariant="sans-serif">x</mi> <mn>4</mn> </msub> <msub> <mi mathvariant="sans-serif">x</mi> <mn>0</mn> </msub> </mrow> </msub> <mo>)</mo> </mrow> </semantics></math> with <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="sans-serif">δ</mi> <mrow> <msub> <mi mathvariant="sans-serif">x</mi> <mn>4</mn> </msub> <msub> <mi mathvariant="sans-serif">x</mi> <mn>0</mn> </msub> </mrow> </msub> <mo>=</mo> <mi mathvariant="sans-serif">d</mi> <mrow> <mo>(</mo> <msub> <mi mathvariant="sans-serif">x</mi> <mn>4</mn> </msub> <mo>,</mo> <msub> <mi mathvariant="sans-serif">x</mi> <mn>0</mn> </msub> <mo>)</mo> </mrow> <mo>=</mo> <mn>4</mn> </mrow> </semantics></math>. 
Clearly, <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="sans-serif">L</mi> <mrow> <mi mathvariant="sans-serif">o</mi> <mi mathvariant="sans-serif">u</mi> <mi mathvariant="sans-serif">t</mi> </mrow> </msub> <mrow> <mo>(</mo> <msub> <mi mathvariant="sans-serif">x</mi> <mn>1</mn> </msub> <mo>)</mo> </mrow> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="sans-serif">L</mi> <mrow> <mi mathvariant="sans-serif">o</mi> <mi mathvariant="sans-serif">u</mi> <mi mathvariant="sans-serif">t</mi> </mrow> </msub> <mrow> <mo>(</mo> <msub> <mi mathvariant="sans-serif">x</mi> <mn>2</mn> </msub> <mo>)</mo> </mrow> </mrow> </semantics></math> will similarly contain entries <math display="inline"><semantics> <mrow> <mo>(</mo> <msub> <mi mathvariant="sans-serif">x</mi> <mn>0</mn> </msub> <mo>,</mo> <mn>3</mn> <mo>)</mo> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mo>(</mo> <msub> <mi mathvariant="sans-serif">x</mi> <mn>0</mn> </msub> <mo>,</mo> <mn>3</mn> <mo>)</mo> </mrow> </semantics></math>, respectively.
If arc <math display="inline"><semantics> <mrow> <mo>(</mo> <msub> <mi mathvariant="sans-serif">x</mi> <mn>2</mn> </msub> <mo>,</mo> <msub> <mi mathvariant="sans-serif">x</mi> <mn>5</mn> </msub> <mo>)</mo> </mrow> </semantics></math> undergoes some decremental update, the (correct) entry in <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="sans-serif">L</mi> <mrow> <mi mathvariant="sans-serif">o</mi> <mi mathvariant="sans-serif">u</mi> <mi mathvariant="sans-serif">t</mi> </mrow> </msub> <mrow> <mo>(</mo> <msub> <mi mathvariant="sans-serif">x</mi> <mn>4</mn> </msub> <mo>)</mo> </mrow> </mrow> </semantics></math> is not removed thanks to the presence of the alternative shortest path through <math display="inline"><semantics> <msub> <mi mathvariant="sans-serif">x</mi> <mn>1</mn> </msub> </semantics></math> with the same weight as that through <math display="inline"><semantics> <msub> <mi mathvariant="sans-serif">x</mi> <mn>2</mn> </msub> </semantics></math>.</p>
Full article ">Figure 6
<p>(<b>a</b>): an example of the contradiction reached in the proof of Lemma 3. Suppose the vertex ordering is <math display="inline"><semantics> <mrow> <mo>{</mo> <msub> <mi mathvariant="sans-serif">x</mi> <mn>0</mn> </msub> <mo>,</mo> <msub> <mi mathvariant="sans-serif">x</mi> <mn>1</mn> </msub> <mo>,</mo> <msub> <mi mathvariant="sans-serif">x</mi> <mn>2</mn> </msub> <mo>,</mo> <mo>⋯</mo> <mo>,</mo> <msub> <mi mathvariant="sans-serif">x</mi> <mn>5</mn> </msub> <mo>}</mo> </mrow> </semantics></math> and all arcs have weight 1. Therefore, in any minimal well-ordered 2-<span class="html-small-caps">hop</span>-<span class="html-small-caps">cover</span> labeling, entry <math display="inline"><semantics> <mrow> <mo>(</mo> <msub> <mi mathvariant="sans-serif">x</mi> <mn>1</mn> </msub> <mo>,</mo> <msub> <mi mathvariant="sans-serif">δ</mi> <mrow> <msub> <mi mathvariant="sans-serif">x</mi> <mn>4</mn> </msub> <msub> <mi mathvariant="sans-serif">x</mi> <mn>1</mn> </msub> </mrow> </msub> <mo>)</mo> </mrow> </semantics></math> cannot belong to <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="sans-serif">L</mi> <mrow> <mi mathvariant="sans-serif">o</mi> <mi mathvariant="sans-serif">u</mi> <mi mathvariant="sans-serif">t</mi> </mrow> </msub> <mrow> <mo>(</mo> <msub> <mi mathvariant="sans-serif">x</mi> <mn>4</mn> </msub> <mo>)</mo> </mrow> </mrow> </semantics></math>, due to the pruning test succeeding because of <math display="inline"><semantics> <msub> <mi mathvariant="sans-serif">x</mi> <mn>0</mn> </msub> </semantics></math>. Also, in this case no correct entry is removed by Algorithm <span class="html-small-caps">clean</span>. (<b>b</b>): restore phase for vertex <span class="html-italic">r</span>.</p>
Full article ">Figure 7
<p>Running times of <span class="html-small-caps">fs–2hc</span>, <span class="html-small-caps">bidir–2hc</span> and <span class="html-small-caps">queue–2hc</span> on some of the considered inputs. The y-axis is log-scaled to magnify the differences.</p>
Full article ">
11 pages, 362 KiB  
Article
On a Nonsmooth Gauss–Newton Algorithms for Solving Nonlinear Complementarity Problems
by Marek J. Śmietański
Algorithms 2020, 13(8), 190; https://doi.org/10.3390/a13080190 - 4 Aug 2020
Cited by 3 | Viewed by 3260
Abstract
In this paper, we propose a new version of the generalized damped Gauss–Newton method for solving nonlinear complementarity problems, based on the transformation to a nonsmooth equation that is equivalent to some unconstrained optimization problem. The B-differential plays the role of the derivative. We present two types of algorithms (usual and inexact), which have superlinear and global convergence in semismooth cases. These results can be applied to efficiently find all solutions of a nonlinear complementarity problem under some mild assumptions. The results of the numerical tests are attached as a complement to the theoretical considerations. Full article
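An NCP asks for x ≥ 0 with F(x) ≥ 0 and x·F(x) = 0, and can be recast as a nonsmooth equation Φ(x) = 0 using, for example, the classical minimum function φ(a, b) = min(a, b). The one-dimensional damped-Newton sketch below uses that (assumed) reformulation with a simple B-differential element and residual-halving damping; it is a textbook illustration, not the paper's exact algorithm or its unconstrained-optimization transformation:

```python
def ncp_min_residual(x, F):
    """phi(a, b) = min(a, b) is zero iff a >= 0, b >= 0 and a*b = 0."""
    return min(x, F(x))

def damped_newton_ncp(F, dF, x0, tol=1e-10, max_iter=50):
    """1-D damped semismooth Newton on Phi(x) = min(x, F(x)).

    The chosen B-differential element is 1 where x < F(x) and dF(x)
    otherwise; the step is halved until the residual decreases.
    """
    x = x0
    for _ in range(max_iter):
        r = ncp_min_residual(x, F)
        if abs(r) < tol:
            break
        g = 1.0 if x < F(x) else dF(x)
        step, t = -r / g, 1.0
        # Damping: backtrack until the residual actually drops.
        while t > 1e-8 and abs(ncp_min_residual(x + t * step, F)) >= abs(r):
            t *= 0.5
        x += t * step
    return x

# NCP with F(x) = x**2 - 1: the complementary solution is x = 1.
print(round(damped_newton_ncp(lambda x: x * x - 1, lambda x: 2 * x, x0=0.5), 6))
```

At the solution, x = 1 ≥ 0, F(1) = 0 and x·F(x) = 0, so the min-function residual vanishes; the iteration converges superlinearly near such semismooth roots.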
Show Figures

Figure 1
<p>Number of iterations for various starting points (for Example 1).</p>
Full article ">Figure 2
<p>Number of iterations for various starting points (for Example 2.)</p>
Full article ">
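The transformation of a nonlinear complementarity problem (NCP) into a nonsmooth equation, as described in the abstract above, is commonly realised with the Fischer–Burmeister function φ(a, b) = √(a² + b²) − a − b, whose zeros characterise complementarity. The 1-D sketch below is only an illustration of that reformulation with a damped Newton-type step, not the paper's algorithm; the function names are illustrative, and the forward-difference slope merely stands in for an element of the B-differential.

```python
import math

def fb(a, b):
    # Fischer-Burmeister function: fb(a, b) = 0 iff a >= 0, b >= 0 and a*b = 0.
    return math.hypot(a, b) - a - b

def solve_ncp_1d(F, x0, tol=1e-10, max_iter=100):
    # Damped Newton-type iteration on the nonsmooth equation phi(x) = fb(x, F(x)) = 0.
    # A forward-difference slope stands in for an element of the B-differential.
    x = x0
    for _ in range(max_iter):
        phi = fb(x, F(x))
        if abs(phi) < tol:
            break
        h = 1e-7
        slope = (fb(x + h, F(x + h)) - phi) / h
        d = -phi / slope
        t = 1.0
        # Damping: halve the step until the residual decreases.
        while t > 1e-12 and abs(fb(x + t * d, F(x + t * d))) >= abs(phi):
            t *= 0.5
        x += t * d
    return x
```

For example, the NCP with F(x) = x − 1 has the complementary solution x = 1 (x > 0, F(x) = 0), and the iteration recovers it from the starting point x₀ = 2.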
20 pages, 1167 KiB  
Article
Node Placement Optimization of Wireless Sensor Networks Using Multi-Objective Adaptive Degressive Ary Number Encoded Genetic Algorithm
by Yijie Zhang and Mandan Liu
Algorithms 2020, 13(8), 189; https://doi.org/10.3390/a13080189 - 3 Aug 2020
Cited by 10 | Viewed by 4140
Abstract
The wireless sensor network (WSN) has the advantages of low cost, high monitoring accuracy, good fault tolerance, remote monitoring and convenient maintenance. It has been widely used in various fields. In the WSN, the placement of node sensors has a great impact on [...] Read more.
The wireless sensor network (WSN) has the advantages of low cost, high monitoring accuracy, good fault tolerance, remote monitoring and convenient maintenance. It has been widely used in various fields. In the WSN, the placement of node sensors has a great impact on its coverage, energy consumption and some other factors. In order to improve the convergence speed of a node placement optimization algorithm, the encoding method is improved in this paper. The degressive ary number encoding is further extended to a multi-objective optimization problem. Furthermore, the adaptive changing rule of ary number is proposed by analyzing the experimental results of the N-ary number encoded algorithm. Then a multi-objective optimization algorithm adopting the adaptive degressive ary number encoding method has been used in optimizing the node placement in WSN. The experiments show that the proposed adaptive degressive ary number encoded algorithm can improve both the optimization effect and search efficiency when solving the node placement problem. Full article
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)
Show Figures

Figure 1
<p>The examples of <span class="html-italic">N</span>-ary individuals.</p>
Full article ">Figure 2
<p>The optimal results for different values of <span class="html-italic">N</span> of node placement 1.</p>
Full article ">Figure 3
<p>The optimal results for different values of <span class="html-italic">N</span> of node placement 2.</p>
Full article ">Figure 4
<p>The flow chart of AD-ADENSGA.</p>
Full article ">Figure 5
<p>The optimal results for different values of <math display="inline"><semantics> <mrow> <mi>N</mi> <mo>(</mo> <mn>0</mn> <mo>)</mo> </mrow> </semantics></math> of node placement 3.</p>
Full article ">Figure 6
<p>The average ary number changing with the generation for different values of <math display="inline"><semantics> <mrow> <mi>N</mi> <mo>(</mo> <mn>0</mn> <mo>)</mo> </mrow> </semantics></math>.</p>
Full article ">Figure 7
<p>The optimal results for different values of <math display="inline"><semantics> <msub> <mi>k</mi> <mrow> <mi>m</mi> <mi>i</mi> <mi>n</mi> </mrow> </msub> </semantics></math> of node placement 4.</p>
Full article ">Figure 8
<p>The average ary number changing with the generation for different values of <math display="inline"><semantics> <msub> <mi>k</mi> <mrow> <mi>m</mi> <mi>i</mi> <mi>n</mi> </mrow> </msub> </semantics></math>.</p>
Full article ">Figure 9
<p>The average ary number changing with the generation for node placement 5.</p>
Full article ">Figure 10
<p>The optimal results of ADENSGA, D-ADENSGA and AD-ADENSGA for node placement 5.</p>
Full article ">Figure 11
<p>The optimal results of ADENSGA, D-ADENSGA and AD-ADENSGA for node placement 6.</p>
Full article ">Figure 12
<p>The optimal results of ADENSGA, D-ADENSGA and AD-ADENSGA for node placement 7.</p>
Full article ">
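The degressive ary number encoding above is specific to the paper, but one of the objectives it optimizes, coverage, can be made concrete. A minimal sketch under the assumption of a binary disc sensing model over a sampled grid (function name, area, radius and step defaults are all illustrative, not taken from the paper):

```python
import math

def coverage(sensors, area=(10, 10), radius=2.0, step=1.0):
    # Fraction of grid sample points covered by at least one sensor disc.
    # sensors: list of (x, y) node positions; area: rectangle (width, height).
    xs = [i * step for i in range(int(area[0] / step) + 1)]
    ys = [j * step for j in range(int(area[1] / step) + 1)]
    covered = total = 0
    for x in xs:
        for y in ys:
            total += 1
            if any(math.hypot(x - sx, y - sy) <= radius for sx, sy in sensors):
                covered += 1
    return covered / total
```

A multi-objective placement optimizer such as the one described would trade a coverage term like this against energy consumption and other criteria when evaluating candidate node placements.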
18 pages, 5832 KiB  
Article
Application of the Reed-Solomon Algorithm as a Remote Sensing Data Fusion Tool for Land Use Studies
by Piotr A. Werner
Algorithms 2020, 13(8), 188; https://doi.org/10.3390/a13080188 - 3 Aug 2020
Cited by 1 | Viewed by 3501
Abstract
The Reed-Solomon algorithm is well known in different fields of computer science. The novelty of this study lies in the different interpretation of the algorithm itself and its scope of application for remote sensing, especially at the preparatory stage, i.e., data fusion. A [...] Read more.
The Reed-Solomon algorithm is well known in different fields of computer science. The novelty of this study lies in the different interpretation of the algorithm itself and its scope of application for remote sensing, especially at the preparatory stage, i.e., data fusion. A short review of the attempts to use different data fusion approaches in geospatial technologies explains the possible usage of the algorithm. The rationale behind its application for data fusion is to include all possible information from all acquired spectral bands, assuming that complete composite information in the form of one compound image will improve both the quality of visualization and some aspects of further quantitative and qualitative analyses. The concept arose from an empirical, heuristic combination of geographic information systems (GIS), map algebra, and two-dimensional cellular automata. The challenges are related to handling big quantitative data sets and the awareness that these numbers are in fact descriptors of a real-world multidimensional view. An empirical case study makes it easier to understand the operationalization of the Reed-Solomon algorithm for land use studies. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
Show Figures

Graphical abstract
Full article ">Figure 1
<p>Testing scene of RS algorithm: (<b>a</b>) part of Landsat ETM+ satellite image (ID = LE71880232000128EDC00); Red-Green-Blue (RGB) composition of bands (4, 5, 3). (<b>b</b>) Landsat ETM+ whole satellite image and testing scene (red)—location in Poland.</p>
Full article ">Figure 2
<p>Visualization of the resulting image from the RS algorithm application exported as a 24-Bit depth geotiff image (greyscale).</p>
Full article ">Figure 3
<p>The flattened histogram of the resulting rescaled image from the RS algorithm application (linear scale, class 0: omitted due to no data value; SW/left bottom corner/part of testing scene).</p>
Full article ">Figure 4
<p>(<b>a</b>) Visualization of a single band, greyscale image of the transformed and rescaled outcome of the RS algorithm application on an original Landsat ETM+ satellite image, involving information from all nine spectral band components. (<b>b</b>) Visualization of single band, pseudo-color, 256 classes of transformed and rescaled image, outcome of the RS algorithm application, involving information from all nine spectral bands components.</p>
Full article ">Figure 5
<p>Visualization of a single band, pseudo color and manually clustered (of adjacent values) image of the transformed and rescaled outcome of the RS algorithm application on an original Landsat ETM+ satellite image, involving information from all nine spectral band components. The colors are similar to the CLC legend. Displayed as a layer on top of a topographic map.</p>
Full article ">Figure 6
<p>CORINE Land Cover map of testing scene [<a href="#B82-algorithms-13-00188" class="html-bibr">82</a>].</p>
Full article ">Figure 7
<p>Visualization of the resulting transformation image using k-mean clustering of 24 LU/LC classes. The colors are similar to the CLC legend. Displayed as a layer on top of a topographic map.</p>
Full article ">
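The paper's fusion tool is the Reed–Solomon code itself, and that machinery is not reproduced here. As a deliberately simplified stand-in (plain bit-packing, not Reed–Solomon), the sketch below only illustrates the underlying idea of folding all per-band values of a pixel into one compound number from which every band remains exactly recoverable; function names are illustrative.

```python
def fuse_pixel(bands):
    # Pack the 8-bit values of all spectral bands into one compound integer.
    value = 0
    for b in bands:
        value = (value << 8) | (b & 0xFF)
    return value

def unfuse_pixel(value, n_bands):
    # Recover the individual band values from the compound integer.
    out = []
    for _ in range(n_bands):
        out.append(value & 0xFF)
        value >>= 8
    return out[::-1]
```

Applied per pixel across nine Landsat ETM+ bands, such a compound value is lossless but very wide; the appeal of a coding-theoretic construction like Reed–Solomon is precisely that it combines the band symbols in a principled algebraic way rather than by simple concatenation.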
20 pages, 851 KiB  
Article
Constructing Reliable Computing Environments on Top of Amazon EC2 Spot Instances
by Altino M. Sampaio and Jorge G. Barbosa
Algorithms 2020, 13(8), 187; https://doi.org/10.3390/a13080187 - 31 Jul 2020
Cited by 4 | Viewed by 3588
Abstract
Cloud provider Amazon Elastic Compute Cloud (EC2) gives access to resources in the form of virtual servers, also known as instances. EC2 spot instances (SIs) offer spare computational capacity at steep discounts compared to reliable and fixed price on-demand instances. The drawback, however, [...] Read more.
Cloud provider Amazon Elastic Compute Cloud (EC2) gives access to resources in the form of virtual servers, also known as instances. EC2 spot instances (SIs) offer spare computational capacity at steep discounts compared to reliable, fixed-price on-demand instances. The drawback, however, is that the delay in acquiring spots can be incredibly high. Moreover, SIs may not always be available, as they can be reclaimed by EC2 at any given time with a two-minute interruption notice. In this paper, we propose a multi-workflow scheduling algorithm, allied with a container migration-based mechanism, to dynamically construct and readjust virtual clusters on top of non-reserved EC2 pricing model instances. Our solution leverages recent findings on the performance and behavior characteristics of EC2 spots. We conducted simulations by submitting real-life workflow applications, constrained by user-defined deadline and budget quality of service (QoS) parameters. The results indicate that our solution improves the rate of completed tasks by almost 20%, and the rate of completed workflows by at least 30%, compared with other state-of-the-art algorithms, in a worst-case scenario. Full article
Show Figures

Figure 1
<p>Cloud workflow scheduling onto reliable computing environments.</p>
Full article ">Figure 2
<p>The performance of the scheduling algorithm and container-based migration strategy in terms of: (<b>a</b>) Task efficiency <math display="inline"><semantics> <msub> <mi>E</mi> <mi>T</mi> </msub> </semantics></math>. (<b>b</b>) Workflow efficiency <math display="inline"><semantics> <msub> <mi>E</mi> <mi>W</mi> </msub> </semantics></math>. (<b>c</b>) Planning efficiency <math display="inline"><semantics> <msub> <mi>E</mi> <mi>P</mi> </msub> </semantics></math>. (<b>d</b>) <math display="inline"><semantics> <msub> <mi>E</mi> <mi>C</mi> </msub> </semantics></math>, the ratio of the completion rate of tasks (or task efficiency) to the product of average task runtime and monetary costs (higher is better).</p>
Full article ">Figure 3
<p>Characterization of the schedules regarding the instance pricing model.</p>
Full article ">
23 pages, 451 KiB  
Article
A Review on Recent Advancements in FOREX Currency Prediction
by Md. Saiful Islam, Emam Hossain, Abdur Rahman, Mohammad Shahadat Hossain and Karl Andersson
Algorithms 2020, 13(8), 186; https://doi.org/10.3390/a13080186 - 30 Jul 2020
Cited by 21 | Viewed by 13845
Abstract
In recent years, the foreign exchange (FOREX) market has attracted quite a lot of scrutiny from researchers all over the world. Due to its vulnerable characteristics, different types of research have been conducted to accomplish the task of predicting future FOREX currency prices [...] Read more.
In recent years, the foreign exchange (FOREX) market has attracted quite a lot of scrutiny from researchers all over the world. Due to its volatile characteristics, different types of research have been conducted to accomplish the task of predicting future FOREX currency prices accurately. In this research, we present a comprehensive review of the recent advancements in FOREX currency prediction approaches. Besides, we provide some information about the FOREX market and the cryptocurrency market. We wanted to analyze the most recent works in this field and therefore considered only those papers which were published from 2017 to 2019. We used a keyword-based searching technique to filter out popular and relevant research. Moreover, we applied a selection algorithm to determine which papers to include in this review. Based on our selection criteria, we have reviewed 39 research articles published by “Elsevier”, “Springer”, and “IEEE Xplore” that predicted future FOREX prices within the stipulated time. Our research shows that in recent years, researchers have been interested mostly in neural network models, pattern-based approaches, and optimization techniques. Our review also shows that many deep learning algorithms, such as the gated recurrent unit (GRU) and long short-term memory (LSTM), have been fully explored and show huge potential in time series prediction. Full article
Show Figures

Figure 1
<p>Organization of the research.</p>
Full article ">Figure 2
<p>Country-wise classification of studies on foreign exchange market prediction.</p>
Full article ">Figure 3
<p>Sources of reviewed research papers.</p>
Full article ">Figure 4
<p>Year-wise categorization of studies on the FOREX market.</p>
Full article ">Figure 5
<p>Distribution of papers according to the journals.</p>
Full article ">Figure 6
<p>Papers based on references.</p>
Full article ">Figure 7
<p>Methods based on citations.</p>
Full article ">Figure 8
<p>Popularity of the algorithms.</p>
Full article ">
29 pages, 692 KiB  
Article
Machine Learning-Guided Dual Heuristics and New Lower Bounds for the Refueling and Maintenance Planning Problem of Nuclear Power Plants
by Nicolas Dupin and El-Ghazali Talbi
Algorithms 2020, 13(8), 185; https://doi.org/10.3390/a13080185 - 30 Jul 2020
Cited by 6 | Viewed by 4171
Abstract
This paper studies the hybridization of Mixed Integer Programming (MIP) with dual heuristics and machine learning techniques, to provide dual bounds for a large scale optimization problem from an industrial application. The case study is the EURO/ROADEF Challenge 2010, to optimize the refueling [...] Read more.
This paper studies the hybridization of Mixed Integer Programming (MIP) with dual heuristics and machine learning techniques to provide dual bounds for a large-scale optimization problem from an industrial application. The case study is the EURO/ROADEF Challenge 2010, to optimize the refueling and maintenance planning of nuclear power plants. Several MIP relaxations are presented to provide dual bounds by computing smaller MIPs than the original problem. It is proven how to obtain dual bounds with scenario decomposition in the different 2-stage programming MILP formulations, with a selection of scenarios guided by machine learning techniques. Several sets of dual bounds are computable, significantly improving the former best dual bounds of the literature and justifying the quality of the best known primal solution. Full article
(This article belongs to the Special Issue Optimization Algorithms and Applications)
Show Figures

Figure 1
<p>Demand profiles for the 30 stochastic scenarios of instance A5.</p>
Full article ">Figure 2
<p>Comparison of dual bound convergence using MILP computation of <math display="inline"><semantics> <mrow> <mi>L</mi> <mi>B</mi> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>L</mi> <mi>B</mi> <mo>(</mo> <mn>5</mn> <mo>)</mo> </mrow> </semantics></math> formulations for instance B6 (relatively easy) and for difficult instance B7.</p>
Full article ">
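The paper's bounds come from MILP relaxations, but the scenario-decomposition principle they rely on can be seen in miniature: dropping the non-anticipativity constraint and solving each scenario separately yields a valid lower (dual) bound for a minimization problem. A toy sketch with illustrative names, not the paper's formulation:

```python
def wait_and_see_bound(scenarios, decisions, cost):
    # Relax non-anticipativity: optimise each scenario independently and
    # average the per-scenario optima. For a minimisation problem this is
    # a valid dual (lower) bound on the true stochastic optimum.
    return sum(min(cost(x, s) for x in decisions) for s in scenarios) / len(scenarios)

def stochastic_optimum(scenarios, decisions, cost):
    # Exact optimum of the expected cost over a small finite decision set,
    # where one decision x must serve every scenario.
    return min(sum(cost(x, s) for s in scenarios) / len(scenarios) for x in decisions)
```

For instance, with scenarios [1, 3], decisions [0, 1, 2, 3] and cost (x − s)², the per-scenario optima are both 0 (bound 0.0), while the single best here-and-now decision x = 2 costs 1.0 on average, so the bound holds strictly. Selecting which scenarios to decompose, as the paper does with machine learning, tightens bounds of this kind at lower cost.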
21 pages, 10437 KiB  
Article
Iterative Algorithm for Solving Scalar Fractional Differential Equations with Riemann–Liouville Derivative and Supremum
by Ravi Agarwal, Snezhana Hristova, Donal O’Regan and Kremena Stefanova
Algorithms 2020, 13(8), 184; https://doi.org/10.3390/a13080184 - 30 Jul 2020
Cited by 4 | Viewed by 3038
Abstract
The initial value problem for a special type of scalar nonlinear fractional differential equation with a Riemann–Liouville fractional derivative is studied. The main characteristic of the equation is the presence of the supremum of the unknown function over a previous time interval. This [...] Read more.
The initial value problem for a special type of scalar nonlinear fractional differential equation with a Riemann–Liouville fractional derivative is studied. The main characteristic of the equation is the presence of the supremum of the unknown function over a previous time interval. This type of equation is difficult to solve explicitly, and we need approximate methods to do so. In this paper, initially, mild lower and mild upper solutions are defined. Then, based on these definitions and the application of the monotone-iterative technique, we present an algorithm for constructing two types of successive approximations. Both sequences are monotonically convergent from above and from below, respectively, to the mild solutions of the given problem. The suggested iterative scheme is applied to particular problems to illustrate its application. Full article
Show Figures

Figure 1
<p>Graphs of the Mittag–Leffler function and its bounds.</p>
Full article ">Figure 2
<p>Graph of the function <math display="inline"><semantics> <mrow> <mi>β</mi> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> <mo>+</mo> <mn>0.2</mn> <msup> <mi>t</mi> <mrow> <mo>−</mo> <mn>0.5</mn> </mrow> </msup> <mo>+</mo> <mn>0.25</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 3
<p>Graph of the function <math display="inline"><semantics> <mrow> <mo>(</mo> <mi>β</mi> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> <mo>+</mo> <mn>0.2</mn> <msup> <mi>t</mi> <mrow> <mo>−</mo> <mn>0.5</mn> </mrow> </msup> <mo>+</mo> <mn>0.25</mn> <mo>)</mo> <mrow> <mo>(</mo> <mo>−</mo> <mn>0.0105</mn> <mo>)</mo> </mrow> <mo>+</mo> <mn>0.49</mn> <mi>β</mi> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>.</p>
Full article ">Figure 4
<p>Graphs of the function <math display="inline"><semantics> <mrow> <mi>β</mi> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </semantics></math> and its bound in (<a href="#FD8-algorithms-13-00184" class="html-disp-formula">8</a>).</p>
Full article ">Figure 5
<p>Graphs of the function <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </semantics></math> and its upper bound defined by (<a href="#FD9-algorithms-13-00184" class="html-disp-formula">9</a>).</p>
Full article ">Figure 6
<p>Graphs of the functions <math display="inline"><semantics> <mrow> <mi>β</mi> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </semantics></math>.</p>
Full article ">Figure 7
<p>Example 2. Graph of the function <math display="inline"><semantics> <mrow> <mi>β</mi> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </semantics></math>.</p>
Full article ">Figure 8
<p>Example 2. Graph of the function <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </semantics></math>.</p>
Full article ">Figure 9
<p>Example 2. Graphs of the bound <math display="inline"><semantics> <mrow> <mo>−</mo> <mn>0.1</mn> </mrow> </semantics></math> and the integral.</p>
Full article ">Figure 10
<p>Example 2. Graphs of the functions <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>(</mo> <mi>t</mi> <mo>)</mo> <mo>,</mo> <mi>β</mi> <mo>(</mo> <mi>t</mi> <mo>)</mo> <mo>,</mo> <mi>ϕ</mi> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </semantics></math>.</p>
Full article ">Figure 11
<p>Example 2. Graphs of the functions <math display="inline"><semantics> <mrow> <msup> <mi>α</mi> <mi>i</mi> </msup> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msup> <mi>β</mi> <mi>i</mi> </msup> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>i</mi> <mo>=</mo> <mn>0</mn> <mo>,</mo> <mn>1</mn> </mrow> </semantics></math>.</p>
Full article ">
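For reference, the Riemann–Liouville fractional derivative of order q ∈ (0, 1) that appears in problems of this kind is the standard construction

```latex
\left({}^{RL}D^{q}_{0+}\,u\right)(t)
  \;=\; \frac{1}{\Gamma(1-q)}\,\frac{d}{dt}\int_{0}^{t}\frac{u(s)}{(t-s)^{q}}\,ds,
  \qquad 0 < q < 1,
```

and, schematically, an equation "with supremum" has a right-hand side of the shape f(t, u(t), sup_{s ∈ [t−h, t]} u(s)). This is only the general setting; the exact problem class and the monotone-iterative construction are given in the paper itself.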
23 pages, 675 KiB  
Article
Influence Maximization with Priority in Online Social Networks
by Canh V. Pham, Dung K. T. Ha, Quang C. Vu, Anh N. Su and Huan X. Hoang
Algorithms 2020, 13(8), 183; https://doi.org/10.3390/a13080183 - 29 Jul 2020
Cited by 5 | Viewed by 3575
Abstract
The Influence Maximization (IM) problem, which finds a set of k nodes (called seedset) in a social network to initiate the influence spread so that the number of influenced nodes after propagation process is maximized, is an important problem in [...] Read more.
The Influence Maximization (IM) problem, which finds a set of k nodes (called a seed set) in a social network to initiate the influence spread so that the number of influenced nodes after the propagation process is maximized, is an important problem in information propagation and social network analysis. However, previous studies ignored the constraint of priority, which leads to inefficient seed collections. In real situations, companies or organizations often prioritize influencing potential users during their influence diffusion campaigns. Taking a new approach to these existing works, we propose in this paper a new problem called Influence Maximization with Priority (IMP), which finds a seed set of k nodes in a social network that influences the largest number of nodes, subject to the influence spread to a specific set of nodes U (called the priority set) being at least a given threshold T. We show that the problem is NP-hard under the well-known IC model. To find solutions, we propose two efficient algorithms with provable theoretical guarantees, called Integrated Greedy (IG) and Integrated Greedy Sampling (IGS). IG provides a (1 − (1 − 1/k)^t)-approximate solution, where t ≥ 1 is an outcome of the algorithm. The worst-case approximation ratio is obtained when t = 1 and is equal to 1/k. In addition, IGS is an efficient randomized approximation algorithm based on a sampling method that provides a (1 − (1 − 1/k)^t − ϵ)-approximate solution with probability at least 1 − δ, where ϵ > 0 and δ ∈ (0, 1) are input parameters of the problem. We conduct extensive experiments on various real networks to compare our IGS algorithm to state-of-the-art algorithms for the IM problem. The results indicate that our algorithm provides better solutions in terms of influence on the priority sets, approximately two to ten times higher than the threshold T, while its running time, memory usage and influence spread remain comparable to the others. Full article
(This article belongs to the Section Combinatorial Optimization, Graph, and Network Algorithms)
Show Figures

Figure 1
<p>A toy example shows the difference between the influence maximization and our proposed problem.</p>
Full article ">Figure 2
<p><span class="html-italic">Comparisons of Influence Spreading</span> with <math display="inline"><semantics> <mrow> <mi>k</mi> <mo>=</mo> <mn>100</mn> <mo>→</mo> <mn>500</mn> </mrow> </semantics></math>, T = 100 and U size = 200</p>
Full article ">Figure 3
<p>Comparisons about <span class="html-italic">Runtime (s)</span> with k varies from 150 to 200 between <math display="inline"><semantics> <mi mathvariant="sans-serif">IGS</mi> </semantics></math> and the others.</p>
Full article ">
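The IG and IGS algorithms themselves are not reproduced in the abstract. As a point of reference, the classical greedy baseline for plain IM under the IC model (without the priority constraint) can be sketched as below. This is an illustration only: simulate_ic and greedy_im are illustrative names, and the Monte Carlo greedy shown is the well-known (1 − 1/e − ε)-style baseline, not the authors' method.

```python
import random

def simulate_ic(graph, seeds, rng):
    # One Monte Carlo run of the Independent Cascade (IC) model.
    # graph: {node: [(neighbor, activation_probability), ...]}
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v, p in graph.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return active

def greedy_im(graph, k, runs=200, seed=0):
    # Greedily add the node with the largest estimated marginal spread.
    rng = random.Random(seed)
    seeds = set()
    for _ in range(k):
        best, best_gain = None, -1.0
        for v in graph:
            if v in seeds:
                continue
            spread = sum(len(simulate_ic(graph, seeds | {v}, rng))
                         for _ in range(runs)) / runs
            if spread > best_gain:
                best, best_gain = v, spread
        seeds.add(best)
    return seeds
```

The expense of re-simulating the cascade for every candidate node is what sampling-based methods such as the paper's IGS are designed to avoid.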
23 pages, 620 KiB  
Article
Trajectory Clustering and k-NN for Robust Privacy Preserving k-NN Query Processing in GeoSpark
by Elias Dritsas, Andreas Kanavos, Maria Trigka, Gerasimos Vonitsanos, Spyros Sioutas and Athanasios Tsakalidis
Algorithms 2020, 13(8), 182; https://doi.org/10.3390/a13080182 - 28 Jul 2020
Cited by 4 | Viewed by 4045
Abstract
Privacy Preserving and Anonymity have gained significant concern from the big data perspective. We have the view that the forthcoming frameworks and theories will establish several solutions for privacy protection. The k-anonymity is considered a key solution that has been widely employed [...] Read more.
Privacy Preserving and Anonymity have gained significant concern from the big data perspective. We take the view that forthcoming frameworks and theories will establish several solutions for privacy protection. The k-anonymity is considered a key solution that has been widely employed to prevent data re-identification and concerns us in the context of this work. Data modeling has also gained significant attention from the big data perspective. It is believed that advancing distributed environments will provide users with several solutions for efficient spatio-temporal data management. GeoSpark is utilized in the current work as it is a key solution that has been widely employed for spatial data. Specifically, it works on top of Apache Spark, the main framework leveraged by the research community and organizations for big data transformation, processing and visualization. To this end, we focused on trajectory data representation so as to be applicable to the GeoSpark environment, and a GeoSpark-based approach is designed for the efficient management of real spatio-temporal data. The next step is to gain a deeper understanding of the data through the application of k nearest neighbor (k-NN) queries, either using indexing methods or otherwise. The k-anonymity set computation, which is the main component for privacy preservation evaluation and the main issue of our previous works, is evaluated in the GeoSpark environment. More to the point, the focus here is on the time cost of k-anonymity set computation along with vulnerability measurement. The extracted results are presented in tables and figures for visual inspection. Full article
(This article belongs to the Special Issue Algorithmic Data Management)
Show Figures

Figure 1
<p>An Overview of Continuous Trajectory Point <span class="html-italic">k</span> Nearest Neighbor (<math display="inline"><semantics> <mrow> <mi>C</mi> <mi>T</mi> <mi>P</mi> <mi>k</mi> <mi>N</mi> <mi>N</mi> </mrow> </semantics></math>) Query.</p>
Full article ">Figure 2
<p>An Overview of Spatio-Temporal Data Partitioning and Indexing.</p>
Full article ">Figure 3
<p>An Overview of GeoSpark Layers.</p>
Full article ">Figure 4
<p>An Overview of 40 Trajectories through Zeppelin.</p>
Full article ">Figure 5
<p>Time Cost for <span class="html-italic">k</span>-Anonymity Set Computation with or without Indexing for <math display="inline"><semantics> <mrow> <mi>N</mi> <mo>=</mo> <mn>80</mn> <mo>,</mo> <mn>500</mn> <mo>,</mo> <mn>2000</mn> </mrow> </semantics></math> Mobile Objects.</p>
Full article ">Figure 6
<p>Time Cost for <span class="html-italic">k</span>-Anonymity Set Computation with or without Indexing for 3 Cases of Total Input Data in Executor.</p>
Full article ">Figure 7
<p>Time Cost for Mobile Objects <math display="inline"><semantics> <mrow> <mi>N</mi> <mo>=</mo> <mo>{</mo> <mn>500</mn> <mo>,</mo> <mn>2000</mn> <mo>,</mo> <mn>8000</mn> <mo>,</mo> <mn>32000</mn> <mo>}</mo> </mrow> </semantics></math> without Indexing for <math display="inline"><semantics> <mrow> <mi>k</mi> <mo>=</mo> <mn>8</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 8
<p>Spatial PointRDD Data Distribution for 4 Spatial Partition Techniques for 2000 Mobile Objects.</p>
Full article ">Figure 9
<p>(<b>a</b>) Euclidean Space and (<b>b</b>) Polar Space for <math display="inline"><semantics> <mrow> <mi>N</mi> <mo>=</mo> <mn>500</mn> </mrow> </semantics></math> Trajectories, <math display="inline"><semantics> <mrow> <mi>L</mi> <mo>=</mo> <mn>100</mn> </mrow> </semantics></math> Timestamps.</p>
Full article ">Figure 10
<p>Hough-<span class="html-italic">X</span> Space of <span class="html-italic">x</span> for (<b>a</b>) <math display="inline"><semantics> <mrow> <mi>k</mi> <mo>=</mo> <mn>5</mn> </mrow> </semantics></math> and (<b>b</b>) <math display="inline"><semantics> <mrow> <mi>k</mi> <mo>=</mo> <mn>10</mn> </mrow> </semantics></math> for <math display="inline"><semantics> <mrow> <mi>N</mi> <mo>=</mo> <mn>500</mn> </mrow> </semantics></math> Trajectories, <math display="inline"><semantics> <mrow> <mi>L</mi> <mo>=</mo> <mn>100</mn> </mrow> </semantics></math> Timestamps.</p>
Full article ">Figure 11
<p>Hough-<span class="html-italic">X</span> Space of <span class="html-italic">y</span> for (<b>a</b>) <math display="inline"><semantics> <mrow> <mi>k</mi> <mo>=</mo> <mn>5</mn> </mrow> </semantics></math> and (<b>b</b>) <math display="inline"><semantics> <mrow> <mi>k</mi> <mo>=</mo> <mn>10</mn> </mrow> </semantics></math> for <math display="inline"><semantics> <mrow> <mi>N</mi> <mo>=</mo> <mn>500</mn> </mrow> </semantics></math> Trajectories, <math display="inline"><semantics> <mrow> <mi>L</mi> <mo>=</mo> <mn>100</mn> </mrow> </semantics></math> Timestamps.</p>
Full article ">Figure 12
<p>Vulnerability Performance Comparison in Euclidean and Hough-<span class="html-italic">X</span> Space.</p>
Full article ">
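Stripped of the distributed machinery, the k-NN query at the core of this evaluation reduces to ranking points by distance to the query. A minimal single-machine sketch (illustrative only, not GeoSpark's API):

```python
import heapq
import math

def knn(points, query, k):
    # Brute-force k-nearest-neighbour query over 2-D points:
    # rank every point by Euclidean distance to the query and keep the k closest.
    return heapq.nsmallest(k, points, key=lambda p: math.dist(p, query))
```

GeoSpark accelerates exactly this query by partitioning the points spatially across the cluster and optionally building a local R-tree or quadtree per partition; the brute-force form above corresponds conceptually to the "without indexing" timings reported in the figures.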
26 pages, 3481 KiB  
Article
On the Optimal Calculation of the Rice Coding Parameter
by Fernando Solano Donado
Algorithms 2020, 13(8), 181; https://doi.org/10.3390/a13080181 - 27 Jul 2020
Cited by 4 | Viewed by 5026
Abstract
In this article, we design and evaluate several algorithms for the computation of the optimal Rice coding parameter. We conjecture that the optimal Rice coding parameter can be bounded and verify this conjecture through numerical experiments using real data. We also describe algorithms [...] Read more.
In this article, we design and evaluate several algorithms for the computation of the optimal Rice coding parameter. We conjecture that the optimal Rice coding parameter can be bounded and verify this conjecture through numerical experiments using real data. We also describe algorithms that partition the input sequence of data into sub-sequences, such that if each sub-sequence is coded with a different Rice parameter, the overall code length is minimised. An algorithm for finding the optimal partitioning solution for Rice codes is proposed, as well as fast heuristics, based on the understanding of the problem trade-offs. Full article
(This article belongs to the Special Issue Lossless Data Compression)
Show Figures

Figure 1
<p>Number of bits used to encode randomly generated data by using different Rice parameters <math display="inline"><semantics> <mrow> <mi>r</mi> <mo>∈</mo> <mo>[</mo> <mn>0</mn> <mo>,</mo> <mn>6</mn> <mo>]</mo> </mrow> </semantics></math> for the input data sequence presented in <a href="#algorithms-13-00181-t001" class="html-table">Table 1</a>. The total bit-length function <math display="inline"><semantics> <mrow> <mi>f</mi> <mo>(</mo> <mi>r</mi> <mo>)</mo> </mrow> </semantics></math> is only defined for integer values of <span class="html-italic">r</span> (marked with dots and squares in the figure). A continuous line was added between the points to improve visibility.</p>
Full article ">Figure 2
<p>Function <math display="inline"><semantics> <mrow> <mi>f</mi> <mo>(</mo> <mi>r</mi> <mo>)</mo> </mrow> </semantics></math> bounded by <math display="inline"><semantics> <mrow> <mi>g</mi> <mo>(</mo> <mi>r</mi> <mo>)</mo> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>h</mi> <mo>(</mo> <mi>r</mi> <mo>)</mo> </mrow> </semantics></math> for the input sequence of <a href="#sec1dot1-algorithms-13-00181" class="html-sec">Section 1.1</a>. The minimum value of each function is marked with a squared red dot. The total bit-length function <math display="inline"><semantics> <mrow> <mi>f</mi> <mo>(</mo> <mi>r</mi> <mo>)</mo> </mrow> </semantics></math> is only defined for integer values of <span class="html-italic">r</span>. A continuous line was added between the points to improve visibility.</p>
Full article ">Figure 3
<p>Estimation of the minimum bit-length after Rice coding.</p>
Full article ">Figure 4
<p>Estimation of the minimum bit-length after Rice coding when the first preprocessing method is used.</p>
Full article ">Figure 5
<p>Number of minimal bit-length solutions found by approximating the Rice parameter as given in (<a href="#FD20-algorithms-13-00181" class="html-disp-formula">20</a>). Labels “floor”, “ceiling”, and “ceiling + 1” should be understood as <math display="inline"><semantics> <mfenced open="&#x230A;" close="&#x230B;"> <mi>S</mi> </mfenced> </semantics></math>, <math display="inline"><semantics> <mfenced open="&#x2308;" close="&#x2309;"> <mi>S</mi> </mfenced> </semantics></math>, and <math display="inline"><semantics> <mrow> <mfenced open="&#x2308;" close="&#x2309;"> <mi>S</mi> </mfenced> <mo>+</mo> <mn>1</mn> </mrow> </semantics></math>, respectively. Each column represents the number of minimal bit-length solutions obtained by using the specified approximations, e.g., there were 144,216 minimal bit-length solutions found by approximating the parameter as <math display="inline"><semantics> <mfenced open="&#x230A;" close="&#x230B;"> <mi>S</mi> </mfenced> </semantics></math> and 189,589 minimal bit-length solutions found by approximating the parameter either as <math display="inline"><semantics> <mfenced open="&#x230A;" close="&#x230B;"> <mi>S</mi> </mfenced> </semantics></math> or <math display="inline"><semantics> <mfenced open="&#x2308;" close="&#x2309;"> <mi>S</mi> </mfenced> </semantics></math> (both are equal-cost solutions).</p>
Full article ">Figure 6
<p>Normalised compression factor for each data type as the batch size varies.</p>
Full article ">Figure 7
<p>Scaled sequential difference of 128 measurements of interval rain from the 63rd Street Weather Station in Chicago, USA, between 09:00:00 and 19:00:00 local time on 1 March 2017. The continuous (black) line represents the sequence of measurements over time. The shaded (coloured) boxes represent the way in which the given sequence of measurements can be partitioned into four sub-sequences in order to achieve shorter Rice codes.</p>
Full article ">Figure 8
<p>The directed acyclic bipartite graph for solving the multi-parameter Rice encoding problem. Each pair of <math display="inline"><semantics> <msub> <mi>u</mi> <mi>k</mi> </msub> </semantics></math> and <math display="inline"><semantics> <msub> <mi>v</mi> <mi>k</mi> </msub> </semantics></math> vertices represents the start and end of a sub-sequence in the basis (see Algorithm 2). The edges represent all possible ways of grouping the input data into larger sub-sequences. Edge costs are equal to the minimum bit-length of the sub-sequence <math display="inline"><semantics> <msubsup> <mi>w</mi> <mi>k</mi> <mrow> <mo>(</mo> <mi>i</mi> <mo>)</mo> </mrow> </msubsup> </semantics></math> (see Algorithm 3) or to the overhead bit-length incurred by adding an extra sub-sequence to the solution (<math display="inline"><semantics> <mi>ϵ</mi> </semantics></math>). The solution to the problem is given by the shortest path from <math display="inline"><semantics> <msub> <mi>v</mi> <mi>α</mi> </msub> </semantics></math> to <math display="inline"><semantics> <msub> <mi>u</mi> <mi>ω</mi> </msub> </semantics></math>.</p>
Full article ">Figure 9
<p>The directed acyclic bipartite graph for the previously provided example sequence, with weights (<span class="html-italic">w</span>) as given in <a href="#algorithms-13-00181-t004" class="html-table">Table 4</a>. When <math display="inline"><semantics> <mrow> <mi>ϵ</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math>, the shortest path has a total cost of 57. This shortest path is marked using continuous lines as arcs between the vertices. Only the weights of the edges on this shortest path and those equal to <math display="inline"><semantics> <mi>ϵ</mi> </semantics></math> are shown. When <math display="inline"><semantics> <mrow> <mi>ϵ</mi> <mo>=</mo> <mn>8</mn> </mrow> </semantics></math>, the shortest path solution corresponds to the path <math display="inline"><semantics> <mrow> <msub> <mi>v</mi> <mi>α</mi> </msub> <mo>→</mo> <msub> <mi>u</mi> <mn>1</mn> </msub> <mo>→</mo> <msub> <mi>v</mi> <mn>10</mn> </msub> <mo>→</mo> <msub> <mi>u</mi> <mi>ω</mi> </msub> </mrow> </semantics></math> (not shown) with cost <math display="inline"><semantics> <mrow> <mn>64</mn> <mo>+</mo> <mi>ϵ</mi> <mo>=</mo> <mn>72</mn> </mrow> </semantics></math>.</p>
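The shortest-path formulation above is equivalent to a simple dynamic program over prefixes: the cheapest encoding of the first i samples is the minimum, over all possible last-segment starts j, of the cheapest encoding of the first j samples plus the segment's minimum Rice bit-length plus the per-segment overhead ε. A sketch under the assumption that a segment's cost is its minimum single-parameter Rice bit-length; the example values are illustrative, not the paper's data:

```python
def segment_cost(values, r_max=16):
    # Minimum Rice bit-length of one sub-sequence over all parameters r,
    # where each value v costs (v >> r) + 1 + r bits.
    return min(sum((v >> r) + 1 + r for v in values) for r in range(r_max + 1))

def best_partition_cost(values, eps):
    """Dynamic-programming equivalent of the shortest path from v_alpha to
    u_omega: best[i] is the cheapest encoding of the first i samples,
    paying eps overhead bits per sub-sequence."""
    n = len(values)
    best = [0] + [float('inf')] * n
    for i in range(1, n + 1):
        for j in range(i):
            c = best[j] + segment_cost(values[j:i]) + eps
            if c < best[i]:
                best[i] = c
    return best[n]
```

With ε = 0 the optimum splits freely wherever a different parameter pays off; a larger ε discourages extra sub-sequences, exactly as in the example above where the ε = 8 solution uses fewer segments at higher coding cost.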
Full article ">Figure 10
<p>Number of cases where multi-parameter Rice coding yields shorter codes than the single-parameter Rice coding approach.</p>
Full article ">Figure 11
<p>Relative improvement of the compression factor when multi-parameter Rice coding is used per data type.</p>
Full article ">Figure 12
<p>Number of minimal bit-length solutions found by the multi-parameter heuristic, as the parameter <math display="inline"><semantics> <mo>Δ</mo> </semantics></math> varies. For <math display="inline"><semantics> <mrow> <mn>35</mn> <mo>%</mo> </mrow> </semantics></math> of the tested batches, the heuristic was not able to find the optimal solution.</p>
Full article ">Figure 13
<p>Ratio of optimal Rice coding considering partitioning versus the best heuristic partitioning solution found. The mean is calculated considering only those heuristic solutions for which the optimal could not be found, which account for <math display="inline"><semantics> <mrow> <mn>35</mn> <mo>%</mo> </mrow> </semantics></math> of the batches.</p>
Full article ">
15 pages, 1357 KiB  
Article
A Predictive Analysis on Emerging Technology Utilization in Industrialized Construction in the United States and China
by Bing Qi, Shuyu Qian and Aaron Costin
Algorithms 2020, 13(8), 180; https://doi.org/10.3390/a13080180 - 24 Jul 2020
Cited by 7 | Viewed by 3958
Abstract
Considering the increasing use of emerging technologies in industrialized construction in recent years, the primary objective of this article is to develop and validate predictive models to predict the emerging technology utilization level of industrialized construction industry practitioners. Our preliminary research results indicate [...] Read more.
Considering the increasing use of emerging technologies in industrialized construction in recent years, the primary objective of this article is to develop and validate predictive models to predict the emerging technology utilization level of industrialized construction industry practitioners. Our preliminary research results indicate that the company background and personal career profiles can significantly affect practitioners’ technology utilization level. Thus, our prediction model is based on four variables: company size, company type, working experience, and working position. The United States and China are selected as the case studies to validate the prediction model. First, a well-designed questionnaire survey is distributed to the industrialized construction industry practitioners from the two countries, yielding 81 and 99 valid responses, respectively. Then, ordinal logistic regression is used to develop a set of models to predict the practitioners’ utilization level of the four main technology types. Finally, an external test dataset consisting of 16 cases indicates that the prediction models have high accuracy. The results also reflect some differences in technology utilization status in the industrialized construction industry between the United States and China. The major contribution of this research is offering an efficient and accurate method to predict practitioners’ technology utilization level in industrialized construction. Significantly, the models are expected to have wide application in promoting emerging technologies in actual industrialized construction practice. Full article
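Ordinal logistic regression, as used in this study, models the cumulative probabilities of an ordered outcome (here, a technology utilization level) via the proportional-odds form P(Y ≤ j | x) = σ(θ_j − β·x). A minimal sketch of how class probabilities are obtained from such a model; the four predictors mirror the paper's variables, but the coefficients and thresholds below are hypothetical, not the fitted values:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def ordinal_probs(x, beta, cutpoints):
    """Proportional-odds (ordinal logistic) class probabilities.
    Cumulative probabilities are sigmoid(theta_j - beta . x); per-class
    probabilities are differences of consecutive cumulative values."""
    eta = sum(b * xi for b, xi in zip(beta, x))
    cum = [sigmoid(t - eta) for t in cutpoints] + [1.0]
    return [cum[0]] + [cum[j] - cum[j - 1] for j in range(1, len(cum))]

# Four illustrative predictors: company size, company type,
# working experience, working position (encoded numerically).
x = [1.0, 0.0, 2.0, 1.0]
beta = [0.4, -0.2, 0.3, 0.5]    # hypothetical coefficients
cutpoints = [0.5, 1.5, 2.5]     # hypothetical thresholds for 4 ordered levels
p = ordinal_probs(x, beta, cutpoints)
```

The predicted utilization level would then be the class with the largest probability; fitting θ and β is done by maximum likelihood in standard statistical packages.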
Show Figures

Figure 1

Figure 1
<p>Average technology utilization levels of the United States practitioners.</p>
Full article ">Figure 2
<p>Average technology utilization levels of the Chinese practitioners.</p>
Full article ">
16 pages, 451 KiB  
Article
Two-Component Bayesian Hierarchical Models for Cost-Benefit Analysis of Traffic Barrier Crash Count
by Mahdi Rezapour and Khaled Ksaibati
Algorithms 2020, 13(8), 179; https://doi.org/10.3390/a13080179 - 23 Jul 2020
Cited by 5 | Viewed by 2943
Abstract
Road departure crashes tend to be hazardous, especially in rural areas like Wyoming. Traffic barriers could be installed to mitigate the severity of those crashes. However, the severity of traffic barriers crashes still persists. Besides various drivers and environmental characteristics, the roadways and [...] Read more.
Road departure crashes tend to be hazardous, especially in rural areas like Wyoming. Traffic barriers can be installed to mitigate the severity of these crashes; however, severe barrier crashes still occur. Besides various driver and environmental characteristics, roadway and barrier geometric characteristics play a critical role in the severity of barrier crashes. The Wyoming Department of Transportation (WYDOT) has initiated a project to identify and optimize the heights of those barriers that are below the design standard, while prioritizing them based on monetary benefit. The aim is to first optimize the barriers that need immediate attention, given the limited budget, and then all other barriers below the design standard. To account for both the frequency and the severity of crashes, equivalent property damage only (EPDO) counts were considered. Data of this type, besides being over-dispersed, exhibit an excess of zeroes. Thus, a two-component model was employed to provide a flexible way of addressing this problem. Alongside this technique, a one-component hierarchical modeling approach was considered for comparison. This paper presents an empirical cost-benefit analysis based on Bayesian hierarchical machine learning techniques. After the best-performing model was identified by the deviance information criterion (DIC), its results were converted into an equation, and the equation was used in a machine learning pipeline. An automated method generated costs based on the barriers' current conditions, and then based on optimized barrier heights. The empirical analysis showed that cost-sensitive modeling and machine learning deployment can be an effective way to conduct cost-benefit analysis: measuring the costs of barrier enhancements and the benefits added over the years, and consequently prioritizing barriers under the limited available budget.
A comprehensive discussion of the two-component models, zero-inflated and hurdle, is included in the manuscript. Full article
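The difference between the two two-component models compared in the paper lies in where the zeros come from: a zero-inflated model mixes structural zeros with a count distribution that can itself produce zeros, while a hurdle model generates all zeros from the hurdle and positive counts from a zero-truncated distribution. A sketch of both probability mass functions, assuming an integer negative-binomial dispersion parameter for simplicity; the parameter values are illustrative, not estimates from the paper:

```python
from math import comb

def nb_pmf(y, size, p):
    # Negative-binomial pmf with integer dispersion `size` and success prob p.
    return comb(y + size - 1, y) * p**size * (1 - p)**y

def zinb_pmf(y, pi, size, p):
    """Zero-inflated NB: structural zeros with weight pi, plus an NB
    count component that can also produce zeros."""
    base = nb_pmf(y, size, p)
    return pi + (1 - pi) * base if y == 0 else (1 - pi) * base

def hurdle_nb_pmf(y, pi, size, p):
    """Hurdle NB: all zeros come from the hurdle; positive counts follow
    a zero-truncated NB."""
    if y == 0:
        return pi
    return (1 - pi) * nb_pmf(y, size, p) / (1 - nb_pmf(0, size, p))

p_zero_zinb = zinb_pmf(0, 0.3, 2, 0.5)      # zeros from both components
p_zero_hurdle = hurdle_nb_pmf(0, 0.3, 2, 0.5)  # zeros from the hurdle only
```

For EPDO-style crash counts, the excess of zeros noted in the abstract is exactly what the pi component absorbs; model comparison between the two forms is then done on fit criteria such as DIC.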
Show Figures

Figure 1

Figure 1
<p>Two-component model methodological approach, ZINB.</p>
Full article ">
16 pages, 2816 KiB  
Article
The Model Order Reduction Method as an Effective Way to Implement GPC Controller for Multidimensional Objects
by Sebastian Plamowski and Richard W Kephart
Algorithms 2020, 13(8), 178; https://doi.org/10.3390/a13080178 - 23 Jul 2020
Cited by 4 | Viewed by 3222
Abstract
The paper addresses issues associated with implementing GPC controllers in systems with multiple input signals. Depending on the method of identification, the resulting models may be of a high order and when applied to a control/regulation law, may result in numerical errors due [...] Read more.
The paper addresses issues associated with implementing GPC controllers in systems with multiple input signals. Depending on the method of identification, the resulting models may be of a high order and, when applied to a control/regulation law, may result in numerical errors due to the limitations of representing values in double-precision floating point numbers. This phenomenon is to be avoided, because even if the model is correct, the resulting numerical errors will lead to poor control performance. An effective way to identify, and at the same time eliminate, this unfavorable feature is to reduce the model order. A method of model order reduction is presented in this paper that effectively mitigates these issues. In this paper, the Generalized Predictive Control (GPC) algorithm is presented, followed by a discussion of the conditions that result in high order models. Examples are included where the discussed problem is demonstrated along with the subsequent results after the reduction. The obtained results and formulated conclusions are valuable for industry practitioners who implement predictive control in industry. Full article
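In its unconstrained form, the GPC law the paper builds on is a regularised least-squares problem: the future control increments minimise the predicted tracking error plus a move-suppression penalty, giving Δu = (GᵀG + λI)⁻¹Gᵀ(w − f), where G is the lower-triangular Toeplitz matrix of step-response coefficients, w the setpoint trajectory, and f the free response. A small self-contained sketch of this computation; the step response and tuning below are illustrative, not the paper's models or horizons:

```python
def dynamic_matrix(step, n, nu):
    # Lower-triangular Toeplitz matrix of step-response coefficients.
    return [[step[i - j] if j <= i else 0.0 for j in range(nu)]
            for i in range(n)]

def solve(a, b):
    # Gaussian elimination with partial pivoting for a small dense system.
    n = len(a)
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(m[r][c]))
        m[c], m[piv] = m[piv], m[c]
        for r in range(c + 1, n):
            fac = m[r][c] / m[c][c]
            for k in range(c, n + 1):
                m[r][k] -= fac * m[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][k] * x[k] for k in range(r + 1, n))) / m[r][r]
    return x

def gpc_increments(step, w, f, nu, lam):
    """Unconstrained GPC law: du = (G'G + lam*I)^-1 G'(w - f)."""
    n = len(w)
    G = dynamic_matrix(step, n, nu)
    e = [wi - fi for wi, fi in zip(w, f)]
    gtg = [[sum(G[i][r] * G[i][c] for i in range(n)) + (lam if r == c else 0.0)
            for c in range(nu)] for r in range(nu)]
    gte = [sum(G[i][r] * e[i] for i in range(n)) for r in range(nu)]
    return solve(gtg, gte)

step = [1 - 0.8**k for k in range(1, 21)]   # illustrative step response
du = gpc_increments(step, [1.0]*20, [0.0]*20, nu=3, lam=0.1)
```

The conditioning of GᵀG + λI is where high-order internal models bite: a nearly singular normal-equations matrix amplifies floating-point error, which is the motivation for the order reduction the paper proposes.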
(This article belongs to the Special Issue Model Predictive Control: Algorithms and Applications)
Show Figures

Figure 1

Figure 1
<p>MISO model as a set of SISO models.</p>
Full article ">Figure 2
<p>Step response for T<sub>1</sub> and T<sub>2</sub> transfer functions simulated for 900 time-steps.</p>
Full article ">Figure 3
<p>Step response of the 6th order model after reduction to a common denominator simulated for 900 time-steps.</p>
Full article ">Figure 4
<p>GPC with the 6th order internal model—closed loop simulation of 300 time-steps, dashed line is the set point, and the solid line is the CV.</p>
Full article ">Figure 5
<p>GPC with the 6th order internal model—closed loop simulation of 300 time-steps, trend for MVs signals.</p>
Full article ">Figure 6
<p>Step response for T<sub>1</sub>, T<sub>2</sub>, and T<sub>3</sub> transfer functions simulated for 900 time-steps.</p>
Full article ">Figure 7
<p>Step response of the 9th order model after reduction to common denominator simulated for 900 time-steps.</p>
Full article ">Figure 8
<p>GPC with the 9th order internal model—closed loop simulation of 300 time-steps; the dashed line is the set point and the solid line is the CV. Controller parameters: <span class="html-italic">control horizon</span> <math display="inline"><semantics> <mrow> <msub> <mi>N</mi> <mi>u</mi> </msub> <mo>=</mo> <mn>20</mn> </mrow> </semantics></math>, <span class="html-italic">prediction horizon</span> <math display="inline"><semantics> <mrow> <mi>N</mi> <mo>=</mo> <mn>400</mn> </mrow> </semantics></math>, and the <span class="html-italic">R</span> matrix = <span class="html-italic">identity matrix</span>.</p>
Full article ">Figure 9
<p>GPC with the 9th order internal model—closed loop simulation of 300 time-steps, trend for MVs signals. Controller parameters: <span class="html-italic">control horizon</span> <math display="inline"><semantics> <mrow> <msub> <mi>N</mi> <mi>u</mi> </msub> <mo>=</mo> <mn>20</mn> </mrow> </semantics></math>, <span class="html-italic">prediction horizon</span> <math display="inline"><semantics> <mrow> <mi>N</mi> <mo>=</mo> <mn>400</mn> </mrow> </semantics></math>, and the <span class="html-italic">R</span> matrix = <span class="html-italic">identity matrix</span>.</p>
Full article ">Figure 10
<p>Comparison of the original T<sub>1</sub> model and the model after order reduction (black line)—simulation of 900 time-steps.</p>
Full article ">Figure 11
<p>Comparison of the original T<sub>2</sub> model and the model after order reduction (black line)—simulation of 900 time-steps.</p>
Full article ">Figure 12
<p>Comparison of the original T<sub>3</sub> model and the model after order reduction (black line)—simulation of 900 time-steps.</p>
Full article ">Figure 13
<p>Comparison of the 6th and 9th order models—simulation of 900 time-steps (solid line—9th order model, dashed line—6th order model—model after order reduction).</p>
Full article ">Figure 14
<p>GPC with reduced internal model—closed loop simulation of 300 time-steps where the dashed line is the set point and the solid line is the CV. Controller parameters: <span class="html-italic">control horizon</span> <math display="inline"><semantics> <mrow> <msub> <mi>N</mi> <mi>u</mi> </msub> <mo>=</mo> <mn>20</mn> </mrow> </semantics></math>, <span class="html-italic">prediction horizon</span> <math display="inline"><semantics> <mrow> <mi>N</mi> <mo>=</mo> <mn>400</mn> </mrow> </semantics></math>, and the <span class="html-italic">R</span> matrix = <span class="html-italic">identity matrix</span>.</p>
Full article ">Figure 15
<p>GPC with reduced internal model closed loop simulation of 300 time-steps, trend for MVs signals (u<sub>1</sub>, u<sub>2</sub> and u<sub>3</sub>). Controller parameters: <span class="html-italic">control horizon</span> <math display="inline"><semantics> <mrow> <msub> <mi>N</mi> <mi>u</mi> </msub> <mo>=</mo> <mn>20</mn> </mrow> </semantics></math>, <span class="html-italic">prediction horizon</span> <math display="inline"><semantics> <mrow> <mi>N</mi> <mo>=</mo> <mn>400</mn> </mrow> </semantics></math>, and the <span class="html-italic">R</span> matrix = <span class="html-italic">identity matrix</span>.</p>
Full article ">
19 pages, 3050 KiB  
Article
Sphere Fitting with Applications to Machine Tracking
by Dror Epstein and Dan Feldman
Algorithms 2020, 13(8), 177; https://doi.org/10.3390/a13080177 - 22 Jul 2020
Cited by 8 | Viewed by 3802
Abstract
We suggest a provable and practical approximation algorithm for fitting a set P of n points in R^d to a sphere. Here, a sphere is represented by its center x ∈ R^d and radius r > 0. The goal is [...] Read more.
We suggest a provable and practical approximation algorithm for fitting a set P of n points in R^d to a sphere. Here, a sphere is represented by its center x ∈ R^d and radius r > 0. The goal is to minimize the sum ∑_{p ∈ P} | ‖p − x‖ − r | of distances to the points up to a multiplicative factor of 1 ± ε, for a given constant ε > 0, over every such r and x. Our main technical result is a data summarization of the input set, called a coreset, that approximates the above sum of distances on the original (big) set P for every sphere. Then, an accurate sphere can be extracted quickly via an inefficient exhaustive search from the small coreset. Most articles focus mainly on sphere identification (e.g., circles in 2D images) rather than finding the exact match (in the sense of extent measures), and do not provide approximation guarantees. We implement our algorithm and provide extensive experimental results on both synthetic and real-world data. We then integrate our algorithm into a mechanical pressure control system whose main bottleneck is tracking a falling ball. Full open source is also provided. Full article
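The objective above, the sum over p in P of |‖p − x‖ − r|, has a convenient structure: for a fixed center x it is minimised by taking r as the median of the point-to-center distances (the standard L1 location property). A small sketch of the cost and of this inner step; this illustrates the objective only, not the paper's coreset construction:

```python
import math
import statistics

def sphere_cost(points, center, radius):
    """Sum over p of | ||p - center|| - radius |, the fitting objective."""
    return sum(abs(math.dist(p, center) - radius) for p in points)

def best_radius(points, center):
    # For a fixed center, r -> sum |d_i - r| is minimised at the median
    # of the distances d_i.
    return statistics.median(math.dist(p, center) for p in points)

# Points on the unit circle: the optimal sphere has radius 1, cost 0.
pts = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0)]
r = best_radius(pts, (0.0, 0.0))
```

Searching over candidate centers (e.g., from a coreset, as in the paper) then reduces to repeatedly evaluating this cheap inner minimisation.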
(This article belongs to the Section Randomized, Online, and Approximation Algorithms)
Show Figures

Figure 1

Figure 1
<p>An RGB image that captured a flame of fire [<a href="#B12-algorithms-13-00177" class="html-bibr">12</a>]. The goal is to approximate the center of the flaming ring.</p>
Full article ">Figure 2
<p>Coreset size vs. approximation error <math display="inline"><semantics> <mi>ε</mi> </semantics></math>. Experiments on synthetic data points <span class="html-italic">P</span> of different sizes (in colors). The error (y-axis) decreases with the coreset size (x-axis).</p>
Full article ">Figure 3
<p>Running time. Experiments on synthetic data sets for several expected values of the error factor (<math display="inline"><semantics> <mi>ε</mi> </semantics></math>). The running time of the optimal solution on the coreset is compared against the “RANSAC” algorithm on the full data set and the “RANSAC” algorithm on the coreset (“Improved RANSAC”).</p>
Full article ">Figure 4
<p>The 2D experiments result. The following four graphs show comparison results between coreset sampling by our algorithm with uniform sampling and the same amount of sample data, compared to the RANSAC algorithm (on the full set and on the coreset) that runs at the same time. Each graph displays results for a differently-sized data set.</p>
Full article ">Figure 5
<p>The 3D experiments result. The following four graphs show comparison results between coreset sampling by our algorithm with uniform sampling and the same amount of sample data, compared to the RANSAC algorithm (on the full set and on the coreset) that runs at the same time. Each graph displays results for a differently-sized data set.</p>
Full article ">Figure 6
<p>The 4D experiments result. The following four graphs show comparison results between coreset sampling by our algorithm with uniform sampling with the same amount of sample data, compared to the RANSAC algorithm (on the full set and on the coreset) that runs at the same time. Each graph displays results for a differently-sized data set.</p>
Full article ">Figure 7
<p>(<b>Left</b>) The original <math display="inline"><semantics> <mrow> <mn>2</mn> <mi>D</mi> </mrow> </semantics></math> images. (<b>green</b>) A result for circle detection over real <math display="inline"><semantics> <mrow> <mn>2</mn> <mi>D</mi> </mrow> </semantics></math> images using our coreset algorithm. (<b>red</b>) A result for circle detection using the “Improved RANSAC” algorithm. (<b>yellow</b>) A result for circle detection using “OpenCV”, a widespread open-source library heuristic.</p>
Full article ">Figure 8
<p>Image of the experimental mechanical pressure control system.</p>
Full article ">Figure 9
<p>System illustration. (1) Cone for dropping the marble ball (2) marble (3) tested board (material) with pressure sensors below (4) laser device (5) laser beams (6) white sticker.</p>
Full article ">Figure 10
<p>Time step illustration (for <math display="inline"><semantics> <mrow> <msub> <mi>t</mi> <mn>0</mn> </msub> <mo>,</mo> <mo>…</mo> <mo>,</mo> <msub> <mi>t</mi> <mn>5</mn> </msub> </mrow> </semantics></math>) of the system state (<b>right</b>) and the corresponding depth vector <span class="html-italic">v</span> depending on the laser beam blocking (<b>left</b>).</p>
Full article ">Figure 10 Cont.
<p>Time step illustration (for <math display="inline"><semantics> <mrow> <msub> <mi>t</mi> <mn>0</mn> </msub> <mo>,</mo> <mo>…</mo> <mo>,</mo> <msub> <mi>t</mi> <mn>5</mn> </msub> </mrow> </semantics></math>) of the system state (<b>right</b>) and the corresponding depth vector <span class="html-italic">v</span> depending on the laser beam blocking (<b>left</b>).</p>
Full article ">Figure 11
<p>Circle fitting: The following images show examples for fitting circles (<b>red</b>) to a set of points (<b>blue</b>) with high accuracy as required by the system. Although the samples contain noise and distortions, using our system, we were able to fit an accurate circle to all the samples.</p>
Full article ">
27 pages, 2513 KiB  
Article
Towards Cognitive Recommender Systems
by Amin Beheshti, Shahpar Yakhchi, Salman Mousaeirad, Seyed Mohssen Ghafari, Srinivasa Reddy Goluguri and Mohammad Amin Edrisi
Algorithms 2020, 13(8), 176; https://doi.org/10.3390/a13080176 - 22 Jul 2020
Cited by 63 | Viewed by 10361
Abstract
Intelligence is the ability to learn from experience and use domain experts’ knowledge to adapt to new situations. In this context, an intelligent Recommender System should be able to learn from domain experts’ knowledge and experience, as it is vital to know the [...] Read more.
Intelligence is the ability to learn from experience and use domain experts’ knowledge to adapt to new situations. In this context, an intelligent Recommender System should be able to learn from domain experts’ knowledge and experience, as it is vital to know the domain in which the items will be recommended. Traditionally, Recommender Systems have been recognized as playlist generators for video/music services (e.g., Netflix and Spotify), e-commerce product recommenders (e.g., Amazon and eBay), or social content recommenders (e.g., Facebook and Twitter). However, Recommender Systems in modern enterprises are highly data-/knowledge-driven and may rely on users’ cognitive aspects such as personality, behavior, and attitude. In this paper, we survey and summarize previously published studies on Recommender Systems to help readers understand our method’s contributions to the field in this context. We discuss the current limitations of the state-of-the-art approaches in Recommender Systems and the need for our new approach: a vision and a general framework for a new type of data-driven, knowledge-driven, and cognition-driven Recommender Systems, namely, Cognitive Recommender Systems. Cognitive Recommender Systems will be the new type of intelligent Recommender Systems that understand the user’s preferences, detect changes in user preferences over time, predict the user’s unknown favorites, and explore adaptive mechanisms to enable intelligent actions within compound and changing environments. We present a motivating scenario in banking and argue that existing Recommender Systems: (i) do not use domain experts’ knowledge to adapt to new situations; (ii) may not be able to predict the ratings or preferences a customer would give to a product (e.g., loan, deposit, or trust service); and (iii) do not support data capture and analytics around customers’ cognitive activities and use it to provide intelligent and time-aware recommendations. Full article
(This article belongs to the Special Issue Algorithms for Personalization Techniques and Recommender Systems)
Show Figures

Figure 1

Figure 1
<p>Three main dimensions of a Cognitive Recommender System: (i) knowledge-driven [<a href="#B15-algorithms-13-00176" class="html-bibr">15</a>], which enables mimicking the knowledge of domain experts using crowdsourcing techniques; (ii) data-driven [<a href="#B14-algorithms-13-00176" class="html-bibr">14</a>], which enables leveraging Artificial Intelligence and Machine Learning technologies to understand the Big Data generated on Open, Private and Social platforms/systems to improve the accuracy of recommendations; and (iii) cognition-driven [<a href="#B17-algorithms-13-00176" class="html-bibr">17</a>], which enables understanding the end-users’ personality and analyzing their behaviour and attitude over time.</p>
Full article ">Figure 2
<p>An overview of Recommender Systems’ related work.</p>
Full article ">Figure 3
<p>A General Framework for Cognitive Recommender Systems.</p>
Full article ">Figure 4
<p>Knowledge Lake Architecture [<a href="#B98-algorithms-13-00176" class="html-bibr">98</a>].</p>
Full article ">Figure 5
<p>Leveraging crowdsourcing to deal with biases and the cold-start problem.</p>
Full article ">Figure 6
<p>Recommender Systems: users’ dimensions in a banking scenario.</p>
Full article ">Figure 7
<p>A snapshot of a user’s personality graph.</p>
Full article ">Figure 8
<p>A Sample of Amazon Dataset.</p>
Full article ">Figure 9
<p>A Performance analysis on the Amazon dataset.</p>
Full article ">