
Computers, Volume 10, Issue 11 (November 2021) – 21 articles

Cover Story: Lecture-style history teaching is often dull for students; virtual and augmented reality and serious games can help. This article presents a playful virtual reality experience set in Ancient Rome that reproduces the buildings and civil constructions of the period as accurately as possible, allowing the player to create Roman cities in a simple way. Once the cities are built, the user can visit them, entering the buildings and interacting with the objects and characters that appear. Moreover, to learn more about each building, users can visualize it through marker-based augmented reality.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
13 pages, 1048 KiB  
Article
Teaching an Algorithm How to Catalog a Book
by Ernesto William De Luca, Francesca Fallucchi and Roberto Morelato
Computers 2021, 10(11), 155; https://doi.org/10.3390/computers10110155 - 18 Nov 2021
Cited by 2 | Viewed by 4287
Abstract
This paper presents a study of a strategy for automated cataloging within an OPAC or for online bibliographic catalogs generally. The aim of the analysis is to offer a set of results, when searching in library catalogs, that goes beyond the expected one-to-one term correspondence. The goal is to understand how ontological structures can affect query search results. This analysis can also be applied to search functions outside the library context, but in that case cataloging relies on predefined rules and uncontrolled dictionary terms, which means that the results are meaningful in terms of knowledge organization. The approach was tested on an Edisco database, and we measured the system's ability to detect whether a new incoming record belonged to a specific set of textbooks.
(This article belongs to the Special Issue Artificial Intelligence for Digital Humanities (AI4DH))
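The core idea above — catalog search results that go beyond one-to-one term correspondence by following an ontological structure — can be illustrated with a toy sketch. Everything here (the subject headings, the records, and the `expand`/`search` helpers) is invented for the example and is not the paper's system:

```python
# Toy illustration: expanding a catalog query with narrower terms from a
# small subject ontology, so a search for "dictionary" also surfaces
# records cataloged under related headings.

ONTOLOGY = {
    # subject -> narrower subjects (invented example data)
    "reference works": ["dictionary", "encyclopedia"],
    "dictionary": ["bilingual dictionary", "learner's dictionary"],
}

RECORDS = [
    {"id": 1, "subject": "dictionary"},
    {"id": 2, "subject": "bilingual dictionary"},
    {"id": 3, "subject": "encyclopedia"},
]

def expand(term):
    """Return the term plus all narrower terms, transitively."""
    seen, stack = set(), [term]
    while stack:
        t = stack.pop()
        if t not in seen:
            seen.add(t)
            stack.extend(ONTOLOGY.get(t, []))
    return seen

def search(term):
    terms = expand(term)
    return [r["id"] for r in RECORDS if r["subject"] in terms]

print(search("dictionary"))  # -> [1, 2] (record 3 is not a dictionary)
```

A production catalog would draw the narrower/broader relations from a controlled vocabulary rather than a hard-coded dictionary.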
Figure 1. Metadata used for record description. The DDC field is missing.
Figure 2. Schematic approach for evaluating a new record.
Figure 3. Records returned when searching the term "dictionary".
Figure 4. The list of the first 20 of 40 records related to the 4 authors in Figure 3.
Figure 5. Structure of the classifier.
Figure 6. Confusion matrix for the combined results of the two datasets.
Figure 7. ROC curve for the Edisco class.
26 pages, 872 KiB  
Article
Symbolic AI for XAI: Evaluating LFIT Inductive Programming for Explaining Biases in Machine Learning
by Alfonso Ortega, Julian Fierrez, Aythami Morales, Zilong Wang, Marina de la Cruz, César Luis Alonso and Tony Ribeiro
Computers 2021, 10(11), 154; https://doi.org/10.3390/computers10110154 - 17 Nov 2021
Cited by 11 | Viewed by 5241
Abstract
Machine learning methods are growing in relevance for biometrics and personal information processing in domains such as forensics, e-health, recruitment, and e-learning. In these domains, white-box (human-readable) explanations of systems built on machine learning methods become crucial. Inductive logic programming (ILP) is a subfield of symbolic AI aimed at automatically learning declarative theories about data processing. Learning from interpretation transition (LFIT) is an ILP technique that can learn a propositional logic theory equivalent to a given black-box system (under certain conditions). The present work takes a first step toward a general methodology for incorporating accurate declarative explanations into classic machine learning by checking the viability of LFIT in a specific AI application scenario: fair recruitment based on an automatic tool, generated with machine learning methods, for ranking Curricula Vitae that incorporates soft biometric information (gender and ethnicity). We show the expressiveness of LFIT for this specific problem and propose a scheme applicable to other domains. To check its ability to cope with other domains regardless of the machine learning paradigm used, we also ran a preliminary test of the expressiveness of LFIT on a real dataset of adult incomes taken from the US census, in which we consider income level as a function of the remaining attributes, to verify whether LFIT can provide a logical theory to support and explain the extent to which higher incomes are biased by gender and ethnicity.
(This article belongs to the Special Issue Explainable Artificial Intelligence for Biometrics 2021)
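The kind of explanation LFIT produces can be illustrated at toy scale. The sketch below is ours, not the paper's PRIDE implementation: it queries a black-box Boolean "recruitment" function on every input assignment and reads off an exactly equivalent propositional theory, one rule per positive assignment (LFIT implementations also generalize and minimize the rules):

```python
# Toy LFIT-style explanation: enumerate all inputs of a black-box Boolean
# classifier and emit a propositional theory equivalent to it.
from itertools import product

def black_box(gender, experience, degree):
    # Stand-in for an opaque model; deliberately biased on `gender`.
    return experience and (degree or gender)

VARS = ["gender", "experience", "degree"]

def induce_rules(f):
    rules = []
    for bits in product([False, True], repeat=len(VARS)):
        if f(*bits):
            # One rule body per positive assignment, as a conjunction.
            body = [v if b else f"not {v}" for v, b in zip(VARS, bits)]
            rules.append(" and ".join(body))
    return rules

theory = induce_rules(black_box)
for r in theory:
    print("positive <-", r)
```

Inspecting the induced bodies shows that `gender` can substitute for `degree` in this toy black box, which is exactly the kind of bias the paper aims to make readable.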
Figure 1. Architecture of the proposed approach for generating an explanation of a given black-box classifier (1) using PRIDE (2) with a toy example (3). Note that the resulting explanations generated by PRIDE are in propositional logic.
Figure 2. Experimental framework: PRIDE is fed with all the data available (train + test) to increase the accuracy of the equivalence. In our experiments we consider the classifier (see [8] for details) as a black box performing regression from input resume attributes to output labels (recruitment scores labelled by human resources experts). LFIT obtains a digital twin of the neural network, providing explainability (as human-readable white-box rules) for the neural network classifier.
Figure 3. Structure of the experimental tests. There are 4 datasets for analysing gender (named g) and ethnicity (e) bias separately. Apart from gender and ethnicity, there are 12 other input attributes (named i1 to i12). There is a pair of (biased and unbiased) datasets for each of gender and ethnicity. We studied the input attributes by increasing complexity, starting with i1 and i2 and adding one at a time; thus, for each pair we considered 11 different scenarios (named s1 to s11). The figure shows their structure (s_i is included in all s_j for which i < j).
Figure 4. Normalised frequency of attributes when studying ethnic biases.
Figure 5. Normalised frequency of attributes when studying gender biases.
Figure 6. Percentage of the absolute increment (comparing scores with and without bias for ethnicity) of each attribute for scenarios s1 to s6 (AIP_{us1-6, ebs1-6}). The graphs link the points corresponding to all the input attributes considered in each scenario.
Figure 7. AIP_{us7-11, ebs7-11}.
Figure 8. AIP_{us1-6, ebs1-6}.
Figure 9. AIP_{us7-11, gbs7-11}.
Figure 10. Normalised percentage of frequency in scenario s11 of each attribute: g, i1 to i11 (NP_{s11}). No bias (blue), gender-biased scores (red).
Figure 11. Normalised percentage of frequency in scenario s11 of each attribute: g, i1 to i11 (NP_{s11}). No bias (blue), gender-biased scores (red).
Figure 12. The normalised frequency of different ethnicity and income.
Figure 13. The normalised frequency of different sex and income.
24 pages, 1234 KiB  
Article
Enhancing Robots Navigation in Internet of Things Indoor Systems
by Yahya Tashtoush, Israa Haj-Mahmoud, Omar Darwish, Majdi Maabreh, Belal Alsinglawi, Mahmoud Elkhodr and Nasser Alsaedi
Computers 2021, 10(11), 153; https://doi.org/10.3390/computers10110153 - 15 Nov 2021
Cited by 2 | Viewed by 3300
Abstract
In this study, an effective local minima detection and definition algorithm is introduced for a mobile robot navigating through unknown static environments. Furthermore, five approaches are presented and compared with the popular wall-following approach for pulling the robot out of the local minima enclosure, namely Random Virtual Target, Reflected Virtual Target, Global Path Backtracking, Half Path Backtracking, and Local Path Backtracking. The proposed approaches mainly depend on temporarily changing the target location to avoid the attraction-force effect of the original target on the robot. Moreover, to avoid getting trapped in the same location, a virtual obstacle is placed to cover the local minima enclosure. To include the most common shapes of deadlock situations, the proposed approaches were evaluated in four different environments: V-shaped, double U-shaped, C-shaped, and cluttered environments. The results reveal that the robot, using any of the proposed approaches, requires a shorter path to reach the destination, ranging from 59 to 73 m on average, as opposed to the wall-following strategy, which requires an average of 732 m. On average, a robot with constant speed using the reflected virtual target approach takes 103 s, whereas the same robot using the wall-following approach takes 907 s to complete the tasks. Using a fuzzy-speed robot, the duration for the wall-following approach is greatly reduced to 507 s, while the reflected virtual target may need only up to 20% of that time. More results and detailed comparisons are provided in the subsequent sections.
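Of the five proposed approaches, the reflected virtual target is the simplest to sketch. The toy code below is illustrative only (the coordinates and helper names are ours): on deadlock detection, the real target is reflected through the robot's position, so the attractive force temporarily pulls the robot away from the trap:

```python
# Illustrative sketch of the "reflected virtual target" idea: when a
# deadlock (local minimum) is detected, the real target is temporarily
# replaced by its point reflection across the robot's position.

def reflect_target(robot, target):
    """Reflect `target` through `robot` (both are (x, y) tuples)."""
    rx, ry = robot
    tx, ty = target
    return (2 * rx - tx, 2 * ry - ty)

def next_waypoint(robot, target, deadlocked):
    # In a deadlock, steer toward the virtual (reflected) target instead.
    return reflect_target(robot, target) if deadlocked else target

print(next_waypoint((5, 5), (8, 9), deadlocked=True))  # -> (2, 1)
```

Once the robot leaves the enclosure, the original target is restored and, per the paper, a virtual obstacle is placed over the enclosure so the robot is not re-trapped.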
Figure 1. Channel identified by the readings of three sensors.
Figure 2. The problem of partially occupied cells.
Figure 3. The difference between the effectiveness in cell perception and point perception.
Figure 4. Define_Deadlock method.
Figure 5. Random virtual target selection.
Figure 6. Virtual target by reflection of the real target.
Figure 7. Three approaches to determine the backtracking stop point.
Figure 8. The virtual path in the backtracking method.
Figure 9. Deadlock detection and definition performance.
Figure 10. C-shaped obstacle test case.
Figure 11. Double U-shaped test case.
Figure 12. V-shaped test case.
Figure 13. Cluttered environment test case.
Figure 14. Performance of the proposed approaches to overcome the local minima compared to the wall-following approach (measured by the number of steps).
Figure 15. The difference in speed rates when the speed is controlled by the proposed fuzzy speed controller.
Figure 16. The wall-following approach reaches very low speed rates when the robot follows the walls.
Figure 17. Wall-following performance in the C-shaped obstacle test case.
26 pages, 105467 KiB  
Article
Two-Bit Embedding Histogram-Prediction-Error Based Reversible Data Hiding for Medical Images with Smooth Area
by Ching-Yu Yang and Ja-Ling Wu
Computers 2021, 10(11), 152; https://doi.org/10.3390/computers10110152 - 12 Nov 2021
Cited by 5 | Viewed by 2906
Abstract
During medical treatment, personal privacy is involved and must be protected. Healthcare institutions have to keep medical images and health information secret unless they have permission from the data owner to disclose them. Reversible data hiding (RDH) is a technique that embeds metadata into an image such that the image can be recovered without any distortion after the hidden data have been extracted. This work aims to develop a fully reversible two-bit-embedding RDH algorithm with a large hiding capacity for medical images. Medical images can be partitioned into regions of interest (ROI) and regions of noninterest (RONI). The ROI is informative, with semantic meanings essential for clinical applications and diagnosis, and cannot tolerate subtle changes. Therefore, we utilize histogram shifting and prediction error to embed metadata into the RONI. In addition, our embedding algorithm minimizes the side effects on the ROI as much as possible. To verify the effectiveness of the proposed approach, we benchmarked three types of medical images in DICOM format, namely X-ray photography (X-ray), computed tomography (CT), and magnetic resonance imaging (MRI). Experimental results show that most of the hidden data are embedded in the RONI, and the approach achieves high capacity while leaving little visible distortion in the ROIs.
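The histogram-shifting principle the scheme builds on can be shown in a few lines. This is a minimal single-bit HS sketch on a 1-D pixel list, ours rather than the paper's method (which adds prediction errors, two-bit embedding, and ROI/RONI handling on DICOM images):

```python
# Minimal histogram-shifting (HS) reversible embedding: pixels between the
# peak and zero histogram bins are shifted up by one to free the bin
# peak+1; peak-valued pixels then carry one bit each (stay = 0, +1 = 1).

def hs_embed(pixels, bits, peak, zero):
    """Assumes peak < zero and no pixel originally equals `zero`."""
    out, it = [], iter(bits)
    for p in pixels:
        if peak < p < zero:
            out.append(p + 1)            # shift to free the bin peak+1
        elif p == peak:
            out.append(p + next(it, 0))  # +0 embeds a 0, +1 embeds a 1
        else:
            out.append(p)
    return out

def hs_extract(pixels, peak, zero):
    bits, out = [], []
    for p in pixels:
        if p == peak:
            bits.append(0); out.append(peak)
        elif p == peak + 1:
            bits.append(1); out.append(peak)
        elif peak + 1 < p <= zero:
            out.append(p - 1)            # undo the shift
        else:
            out.append(p)
    return bits, out

cover = [3, 5, 5, 6, 5, 7]
marked = hs_embed(cover, [1, 0, 1], peak=5, zero=9)
print(marked)                       # -> [3, 6, 5, 7, 6, 8]
bits, restored = hs_extract(marked, peak=5, zero=9)
print(bits, restored == cover)      # -> [1, 0, 1] True
```

Reversibility holds because shifting and embedding never collide: the shifted range and the two embedding values occupy disjoint bins.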
Figure 1. Overview of the application scenario with data hiding.
Figure 2. Frameworks of (a) cryptography and (b) steganography.
Figure 3. Histograms of the Lena image: (a) original and (b) after HS.
Figure 4. The four neighboring pixels, where P denotes the target embedding pixel.
Figure 5. Effect of transforming an image's original histogram to its PEH counterpart.
Figure 6. 'X' represents an even set, and 'O' represents an odd set.
Figure 7. Example of two-bit embedding.
Figure 8. Example of the proposed two-bit extraction and recovery.
Figure 9. Definition of the proposed local complexity function: (a) original four-pixel version and (b) extended version of (a).
Figure 10. Framework of the proposed encoder.
Figure 11. Framework of the proposed decoder.
Figure 12. Sampled images in the tested benchmark databases: (a) Breast-MRI-NACT-Pilot (breast), (b) ACRIN-DSC-MR-Brain (brain), (c) NIH (chest), (d) Lung-PET-CT-Dx (lung), (e) Prostate-MRI (prostate), and (f) other grayscale standard images.
24 pages, 1311 KiB  
Article
Solution of the Optimal Reactive Power Flow Problem Using a Discrete-Continuous CBGA Implemented in the DigSILENT Programming Language
by David Lionel Bernal-Romero, Oscar Danilo Montoya and Andres Arias-Londoño
Computers 2021, 10(11), 151; https://doi.org/10.3390/computers10110151 - 12 Nov 2021
Cited by 9 | Viewed by 3125
Abstract
The problem of the optimal reactive power flow in transmission systems is addressed in this research from the point of view of combinatorial optimization. A discrete-continuous version of the Chu & Beasley genetic algorithm (CBGA) is proposed to model continuous variables, such as voltage outputs in generators and reactive power injection in capacitor banks, as well as binary variables, such as tap positions in transformers. The minimization of the total power losses is considered as the objective performance indicator. The main contribution of this research is the implementation of the CBGA in the DigSILENT Programming Language (DPL), which exploits the advantages of the power flow tool at a low computational effort. The solution of the optimal reactive power flow problem in power systems is a key task, since the efficiency and secure operation of the whole electrical system depend on the adequate distribution of reactive power among generators, transformers, shunt compensators, and transmission lines. To provide an efficient optimization tool for academics and power system operators, this paper selects the DigSILENT software, since it is widely used for power systems by industry and researchers. Numerical results on three IEEE test feeders composed of 6, 14, and 39 buses demonstrate the efficiency of the proposed CBGA in the DPL environment of DigSILENT in reducing the total grid power losses (between 21.17% and 37.62% of the benchmark case), considering four simulation scenarios regarding voltage regulation bounds and slack voltage outputs. In addition, the total processing times for the IEEE 6-, 14-, and 39-bus systems were 32.33 s, 49.45 s, and 138.88 s, which confirms the low computational effort of optimization methods implemented directly in the DPL environment.
(This article belongs to the Special Issue Computing, Electrical and Industrial Systems 2021)
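The discrete-continuous CBGA loop can be sketched generically. The code below is a hedged illustration, not the authors' DPL implementation: the power-flow evaluation is replaced by a toy loss function, and an individual mixes one continuous gene (a generator voltage) with one discrete gene (a tap position):

```python
# Sketch of a steady-state Chu & Beasley GA with mixed encoding. The
# hallmark CBGA step is the replacement rule: a child enters the
# population only if it beats the worst individual and is not a duplicate.
import random

random.seed(1)
TAPS = [-2, -1, 0, 1, 2]

def losses(ind):
    v, tap = ind
    return (v - 1.02) ** 2 + 0.01 * tap ** 2   # stand-in for a power flow

def offspring(a, b):
    v = (a[0] + b[0]) / 2                      # recombine continuous gene
    tap = random.choice([a[1], b[1]])          # recombine discrete gene
    if random.random() < 0.3:                  # mutation
        v += random.uniform(-0.01, 0.01)
        tap = random.choice(TAPS)
    return (min(max(v, 0.9), 1.1), tap)

pop = [(random.uniform(0.9, 1.1), random.choice(TAPS)) for _ in range(10)]
for _ in range(300):
    a, b = random.sample(pop, 2)
    child = offspring(a, b)
    worst = max(pop, key=losses)
    if losses(child) < losses(worst) and child not in pop:
        pop[pop.index(worst)] = child          # CBGA replacement rule

best = min(pop, key=losses)
print(best)
```

In the paper, `losses` would instead invoke DigSILENT's power-flow tool from the DPL script, which is where the reported speed advantage comes from.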
Figure 1. General flow diagram for the implementation of a CBGA in optimization problems: (A) initial population, (B) selection operator, (C) recombination operator, (D) mutation operator, (E) mutated individuals, (F) selection of the winner individual, and (G) inclusion of the winner individual in the current population.
Figure 2. Example of a DPL script in the DigSILENT software to evaluate the Newton-Raphson power flow and report the total grid power losses.
Figure 3. Electrical configuration of the IEEE 6-bus system.
Figure 4. Electrical configuration of the IEEE 14-bus system.
Figure 5. Electrical configuration of the IEEE 39-bus system.
Figure 6. Power losses for all the simulation scenarios in the IEEE 6-bus system.
Figure 7. Voltage profile performance in the IEEE 6-bus system for all the simulation scenarios.
Figure 8. Power losses for all the simulation scenarios in the IEEE 14-bus system.
Figure 9. Voltage profile performance in the IEEE 14-bus system for all the simulation scenarios.
Figure 10. Power losses with the genetic algorithm in the IEEE 39-bus system.
Figure 11. Voltage profile performance in the IEEE 39-bus system for all the simulation scenarios.
27 pages, 4012 KiB  
Article
Machine Learning Cybersecurity Adoption in Small and Medium Enterprises in Developed Countries
by Nisha Rawindaran, Ambikesh Jayal and Edmond Prakash
Computers 2021, 10(11), 150; https://doi.org/10.3390/computers10110150 - 10 Nov 2021
Cited by 35 | Viewed by 10812
Abstract
In many developed countries, the use of artificial intelligence (AI) and machine learning (ML) has become important in shaping how data is managed and secured in the small and medium enterprise (SME) sector. SMEs in these developed countries have created their own cyber regimes around AI and ML, and this knowledge is tested daily in how their businesses run and identify threats and attacks, depending on the support structure of the individual country. Following recent changes to the UK General Data Protection Regulation (GDPR), Brexit, and ISO standards requirements, machine learning cybersecurity (MLCS) adoption in the UK SME market has become prevalent and a good example to lean on amongst other developed nations. Whilst MLCS has been successfully applied in many applications, including network intrusion detection systems (NIDs) worldwide, there is still a gap in the rate of adoption of MLCS techniques by UK SMEs. Other developed countries, such as Spain and Australia, also fall into this category; similarities and differences in their MLCS adoption are discussed, as are applications of MLCS within these SME industries. The paper investigates, using quantitative and qualitative methods, the challenges of adopting MLCS in the SME ecosystem and how operations are managed to promote business growth. Much like security guards and policing in the real world, the virtual world now calls on MLCS techniques to be embedded, like covert secret-service operations, to protect the data distributed by the millions into cyberspace. This paper uses existing global research from multiple disciplines to identify gaps and opportunities in UK SME small-business cyber security, highlights barriers and reasons for the low adoption rate of MLCS in SMEs, and compares success stories of larger companies implementing MLCS.
The methodology uses structured quantitative and qualitative survey questionnaires distributed, using stratified methods, across an extensive participation pool of SME management and technical and non-technical professionals. Based on the analysis and findings, this study reveals that SMEs have appropriate cybersecurity packages in place but are not fully aware of their potential. Secondary data collection was run in parallel to better understand how these barriers and challenges emerged and why the rate of adoption of MLCS was so low. The paper concludes that government policies and processes, coupled with collaboration, could help minimize cyber threats by keeping SMEs ahead of hackers and malicious actors. These aspirations can be reached by ensuring that those involved are well trained and understand the importance of communication when applying appropriate safety processes and procedures. The paper also highlights funding gaps that could be closed to raise cyber security awareness through grants, subsidies, and financial assistance under various public sector policies and training. Lastly, SMEs' limited understanding of the risks and impacts of cybercrime can lead to conflicting messages between cross-company IT and cybersecurity rules; finding the right balance between this risk and impact, versus productivity impact and costs, could help UK SMEs get over these hurdles in the quest to promote the usage of MLCS. The UK and Welsh governments can use the research conducted in this paper to inform and adapt their policies, helping UK SMEs become more secure from cyber-attacks, and to compare themselves with other developed countries on the same path.
(This article belongs to the Special Issue Sensors and Smart Cities 2023)
Figure 1. Graphical representation of the literature review.
Figure 2. Stratified flowchart of the methodology used.
Figure 3. Industry participants who took part in the survey representing UK SMEs.
Figure 4. Education level of participants in %.
Figure 5. Position in the SME company.
Figure 6. Two components reflecting age and identification of participants.
Figure 7. Percentage of SMEs having cyber security software packages.
Figure 8. Cyber security packages used in SMEs.
Figure 9. Awareness of ML in cyber security software packages to detect cyber-attacks.
Figure 10. ML algorithms that are supported and known to SMEs in CS packages.
Figure 11. ML algorithms that are used and known to SMEs in CS packages.
Figure 12. SME awareness of price for machine learning in cyber security packages.
Figure A1. Google Scholar search.
Figure A2. Advanced search with words fully in the title or anywhere in the article.
Figure A3. Keyword search results after filtration.
17 pages, 3388 KiB  
Article
Requirements Elicitation for an Assistance System for Complexity Management in Product Development of SMEs during COVID-19: A Case Study
by Jan-Phillip Herrmann, Sebastian Imort, Christoph Trojanowski and Andreas Deuter
Computers 2021, 10(11), 149; https://doi.org/10.3390/computers10110149 - 10 Nov 2021
Cited by 5 | Viewed by 3498
Abstract
Technological progress, upcoming cyber-physical systems, and limited resources confront small and medium-sized enterprises (SMEs) with the challenge of complexity management in product development projects spanning the entire product lifecycle. SMEs require a solution for documenting and analyzing the functional relationships between multiple domains, such as products, software, and processes. The German research project FuPEP "Funktionsorientiertes Komplexitätsmanagement in allen Phasen der Produktentstehung" (function-oriented complexity management in all phases of product development) aims to address this issue by developing an assistance system that supports product developers by visualizing functional relationships. This paper presents the methodology and results of the requirements elicitation for the assistance system with two SMEs. Having conducted the elicitation during a global pandemic, we discuss the application of specific techniques in light of COVID-19. We model problems and their effects regarding complexity management in product development in a system dynamics model. The most important requirements and use cases elicited are presented, and the requirements elicitation methodology and results are discussed. Additionally, we present a multilayer software architecture design for the assistance system. Our case study suggests a relationship between fear of a missing project focus among project participants and the restriction of requirements elicitation techniques to those possible via web-conferencing tools.
(This article belongs to the Special Issue Feature Paper in Computers)
Show Figures

Graphical abstract

Graphical abstract
Full article ">Figure 1
<p>SDM of problems related to complexity management in Company A and Company B.</p>
Full article ">Figure 2
<p>Effects of unknown relationships on variables related to time, cost, and quality.</p>
Full article ">Figure 3
<p>Use case prioritization regarding priority and severity.</p>
Full article ">Figure 4
<p>Extract from the AAS Meta Model oriented at [<a href="#B43-computers-10-00149" class="html-bibr">43</a>].</p>
Full article ">Figure 5
<p>Software Architecture.</p>
Full article ">
17 pages, 2832 KiB  
Article
Automatic Detection of Traffic Accidents from Video Using Deep Learning Techniques
by Sergio Robles-Serrano, German Sanchez-Torres and John Branch-Bedoya
Computers 2021, 10(11), 148; https://doi.org/10.3390/computers10110148 - 9 Nov 2021
Cited by 32 | Viewed by 13380
Abstract
According to worldwide statistics, traffic accidents are the cause of a high percentage of violent deaths. The time taken to send the medical response to the accident site is largely affected by the human factor and correlates with survival probability. Because of this, and given the wide use of video surveillance and intelligent traffic systems, an automated traffic-accident detection approach has become desirable for computer vision researchers. Nowadays, Deep Learning (DL)-based approaches have shown high performance in computer vision tasks that involve complex feature relationships. Therefore, this work develops an automated DL-based method capable of detecting traffic accidents in video. The proposed method assumes that traffic accident events are described by visual features evolving over time; the model architecture therefore comprises a visual feature extraction phase followed by temporal pattern identification. The visual and temporal features are learned in the training phase through convolutional and recurrent layers, using a built-from-scratch dataset and public datasets. An accuracy of 98% is achieved in detecting accidents in public traffic-accident datasets, showing a high detection capacity independent of the road structure.
(This article belongs to the Special Issue Machine Learning for Traffic Modeling and Prediction)
Figure 1">
Figure 1: Architecture for video analysis: visual feature extractor based on the InceptionV4 architecture (top) and temporal feature extractor (bottom).
Figure 2: Examples of frames from videos in the datasets: (a) frames from the positive class (accidents), (b) frames from the negative class (no accident).
Figure 3: Visual feature extractor experiment.
Figure 4: Experimenting with the temporal feature extractor.
Figure 5: Behavior of the model's accuracy by epoch on the training and validation sets.
21 pages, 1417 KiB  
Article
On the Optimization of Self-Organization and Self-Management Hardware Resource Allocation for Heterogeneous Clouds
by Konstantinos M. Giannoutakis, Christos K. Filelis-Papadopoulos, George A. Gravvanis and Dimitrios Tzovaras
Computers 2021, 10(11), 147; https://doi.org/10.3390/computers10110147 - 9 Nov 2021
Viewed by 2347
Abstract
In recent years, there has been a tendency to migrate from traditional homogeneous clouds and centralized provisioning of resources to heterogeneous clouds with specialized hardware, governed in a distributed and autonomous manner. The recently proposed CloudLightning architecture introduced a dynamic way to provision heterogeneous cloud resources by shifting the selection of underlying resources from the end-user to the system in an efficient way. In this work, an optimized Suitability Index and assessment function are proposed, along with their theoretical analysis, to improve the computational efficiency, energy consumption, service delivery and scalability of the distributed orchestration. The effectiveness of the proposed scheme is evaluated through simulation, comparing the optimized methods with the original approach and with traditional centralized resource management on real and synthetic High Performance Computing applications. Finally, numerical results are presented and discussed regarding the improvements over the defined evaluation criteria. Full article
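The abstract does not reproduce the Suitability Index itself, but its general shape (a per-server assessment that ranks heterogeneous resources for placement) can be pictured as a weighted score over normalized free capacity. The resource names, weights, and numbers below are invented for illustration and are not the paper's actual formula.

```python
def suitability_index(free, total, weights):
    """Hypothetical stand-in for a suitability score: weighted sum of
    normalized free capacity per resource type. Higher means a better
    placement candidate. Not the paper's actual assessment function."""
    return sum(weights[r] * free[r] / total[r] for r in weights)

WEIGHTS = {"cpu": 0.5, "mem": 0.3, "acc": 0.2}
CAPACITY = {"cpu": 16, "mem": 64, "acc": 2}

# A half-free server vs. a heavily loaded one (illustrative numbers).
idle = suitability_index({"cpu": 8, "mem": 32, "acc": 1}, CAPACITY, WEIGHTS)
loaded = suitability_index({"cpu": 2, "mem": 8, "acc": 0}, CAPACITY, WEIGHTS)
```

A self-organizing orchestrator would then route incoming tasks toward the cells and servers with the highest such score.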
(This article belongs to the Special Issue Real-Time Systems in Emerging IoT-Embedded Applications)
Figure 1: Warehouse scale computer abstract architecture [30].
Figure 2: Abstract architecture of the CloudLightning system [31,32].
Figure 3: Scalability of the Improved SOSM over SOSM when increasing the number of cells and incoming tasks: (a) Average Processor Utilization over active servers, (b) Average Processor Utilization, (c) Average Memory Utilization over active servers, (d) Average Memory Utilization, (e) Average Network Utilization, (f) Average Storage Utilization over active servers, (g) Average Storage Utilization, (h) Average Accelerator Utilization over active servers, (i) Average Accelerator Utilization.
Figure 4: Distribution of task characteristics for the synthetic applications: (a) distribution of tasks with respect to their application type, (b) distribution of the number of MIs required by tasks, (c) distribution of the number of vCPUs required by tasks, (d) distribution of the required Memory, (e) distribution of the required Storage, (f) distribution of the required Network.
Figure 5: Scalability of the three allocation mechanisms when increasing the number of cells and incoming tasks, for the synthetic inputs: (a) Average Processor Utilization over active servers, (b) Average Processor Utilization, (c) Average Memory Utilization over active servers, (d) Average Memory Utilization, (e) Average Network Utilization, (f) Average Storage Utilization over active servers, (g) Average Storage Utilization, (h) Average Accelerator Utilization over active servers, (i) Average Accelerator Utilization.
Figure 6: Total energy consumption in MWh for traditional centralized and SOSM clouds, for the synthetic inputs.
Figure 7: Number of accepted and rejected tasks for traditional centralized and SOSM clouds, for the synthetic inputs: (a) total number of accepted tasks, (b) total number of rejected tasks.
19 pages, 26345 KiB  
Article
Learning History Using Virtual and Augmented Reality
by Inmaculada Remolar, Cristina Rebollo and Jon A. Fernández-Moyano
Computers 2021, 10(11), 146; https://doi.org/10.3390/computers10110146 - 8 Nov 2021
Cited by 29 | Viewed by 8322
Abstract
Lecture-style history classes are usually quite boring for students, and keeping their attention requires great effort from teachers. Virtual and Augmented Reality have clear potential in education and can solve this problem. Serious games that use immersive technologies allow students to visit and interact with environments set in different historical periods. With this in mind, this article presents a playful virtual reality experience set in Ancient Rome that allows the user to learn concepts from that age. The virtual experience reproduces the different buildings and civil constructions of the time as accurately as possible, making it possible for the player to create Roman cities in a simple way. Once a city is built, the user can visit it, entering the buildings and interacting with the objects and characters that appear. Moreover, to learn more about every building, users can visualize them in Augmented Reality using marker-based techniques. Information related to every building has been included, such as its main uses, characteristics, or even representative images. To evaluate the effectiveness of the developed experience, several experiments were carried out with Secondary School students as the sample. Initially, the game's quality and playability were evaluated and, subsequently, the motivation produced by the virtual learning experience in history. The results support, on the one hand, the game's playability and attractiveness and, on the other, the students' increased interest in studying history, as well as better retention of the different concepts treated in a playful experience. Full article
(This article belongs to the Special Issue Xtended or Mixed Reality (AR+VR) for Education)
Figure 1: Housing in Ancient Rome. (a) Domus, (b) Villa, (c) Insulae.
Figure 2: Model of the forum, with details of a temple located in it, and a representation of a Roman arch. (a) Forum, (b) Detail of the temple's interior located in the Forum, (c) Triumphal Arch.
Figure 3: Leisure buildings in Ancient Rome. (a) Theatre, (b) Amphitheatre, (c) Roman Circus.
Figure 4: Characteristic constructions of the Roman Age. (a) Basic structure of an aqueduct, (b) Doors, (c) Basic structure of a wall.
Figure 5: Characteristic aqueduct and wall of the Roman Age. (a) Aqueduct, (b) Walls.
Figure 6: City inhabitants. (a) Legionnaires, (b) Patricians, (c) Citizens.
Figure 7: Representation of game elements with AR associated with some AR markers. (a) Domus, (b) Patricians.
Figure 8: Initial user interface.
Figure 9: Example of the user interface for city building. (a) Toolbar visible when the user selects the group of constructions for entertainment, (b) Construction mode view.
Figure 10: City tour mode view.
Figure 11: Interaction and movement options.
Figure 12: Visiting the city. (a) Characters walking around the city, (b) Lecterns with historical information.
Figure 13: Different waypoints created in the forum.
Figure 14: Results of the data obtained in the first experiment in graphical form. (a) Content, (b) Gameplay.
Figure 15: Correct answers from students in each group.
25 pages, 512 KiB  
Article
In-Depth Analysis of Ransom Note Files
by Yassine Lemmou, Jean-Louis Lanet and El Mamoun Souidi
Computers 2021, 10(11), 145; https://doi.org/10.3390/computers10110145 - 8 Nov 2021
Cited by 3 | Viewed by 4885
Abstract
During recent years, many papers have been published on ransomware, but to the best of our knowledge, no previous academic studies have examined ransom note files. In this paper, we present the results of an in-depth study of the filenames and content of ransom files. We propose a prototype to identify ransom files. We then explore how the filenames and the content of these files can minimize the risk of encryption by certain ransomware or increase the effectiveness of some ransomware detection tools. To achieve these objectives, two approaches are discussed in this paper. The first uses Latent Semantic Analysis (LSA) to check similarities between the contents of files. The second uses Machine Learning models to classify filenames into two classes: ransom filenames and benign filenames. Full article
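The LSA approach starts from term-vector similarity between document contents; the sketch below shows only that first step (cosine similarity over raw term frequencies), omitting the TF-IDF weighting and SVD projection that full LSA adds. The example ransom-note and benign texts are invented.

```python
import math
from collections import Counter

def cosine_sim(doc_a, doc_b):
    """Cosine similarity between term-frequency vectors of two texts."""
    a, b = Counter(doc_a.lower().split()), Counter(doc_b.lower().split())
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Two invented ransom notes share vocabulary; a benign README does not.
note1 = "your files have been encrypted pay bitcoin to recover your files"
note2 = "all files encrypted send bitcoin payment to recover files"
readme = "this project builds the library run make to compile"
```

In the paper's setting, projecting these vectors into a low-dimensional LSA space (via truncated SVD) would additionally capture similarity between notes that use different but related wording.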
Figure 1">
Figure 1: Number of Ransomware Families for Each Extension.
Figure 2: Term Frequency of the Most Used Terms in the Filenames.
Figure 3: Variance Explained by Number of Singular Vectors.
Figure 4: Variance Explained by the Number of Singular Vectors (k).
Figure 5: Part of the Doc–Doc Similarities in the Six-dimensional LSA Space.
15 pages, 4946 KiB  
Article
Design of CAN Bus Communication Interfaces for Forestry Machines
by Geoffrey Spencer, Frutuoso Mateus, Pedro Torres, Rogério Dionísio and Ricardo Martins
Computers 2021, 10(11), 144; https://doi.org/10.3390/computers10110144 - 8 Nov 2021
Cited by 14 | Viewed by 4815
Abstract
This paper presents the initial development of new hardware devices targeted at CAN (Controller Area Network) bus communications in forest machines. CAN bus is a protocol widely used for communications in the automotive area. It is also applied in industrial vehicles and machines due to its robustness, simplicity, and operating flexibility, which makes it well suited to forestry machinery producers who need to couple their equipment to a carrier machine and underlines the importance of standardizing communications between tools and machines. One problem producers sometimes face is a lack of flexibility in commercial hardware modules, for example in interfaces for sensors and actuators that must guarantee scalability as new functionalities are required. The hardware device presented in this work is designed to overcome these limitations and to provide the flexibility to standardize communications while allowing scalability in the development of new products and features. The work is being developed within the scope of the research project “SMARTCUT—Remote Diagnosis, Maintenance and Simulators for Operation Training and Maintenance of Forest Machines”, which aims to incorporate innovative technologies into forest machines produced by CUTPLANT S.A. It consists of an experimental system based on the PIC18F26K83 microcontroller forming a CAN node that transmits and receives digital and analog messages via CAN bus, tested and validated through communication between different nodes. The main contribution of the paper is the development of new CAN bus electronic control units designed to enable remote communication between sensors and actuators and the main controller of forest machines. Full article
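As a concrete illustration of what a CAN node exchanges, the sketch below packs a standard (11-bit identifier) classic CAN frame into the 16-byte layout used by Linux SocketCAN (32-bit ID, 8-bit DLC, 3 padding bytes, 8 data bytes). The ID and payload are illustrative, not values from the SMARTCUT modules.

```python
import struct

# Linux SocketCAN `struct can_frame` wire layout:
# u32 can_id, u8 dlc, 3 pad bytes, 8 data bytes = 16 bytes total.
CAN_FRAME_FMT = "<IB3x8s"

def pack_can_frame(can_id, data):
    """Pack a standard-ID classic CAN frame (payload up to 8 bytes)."""
    if can_id > 0x7FF:
        raise ValueError("standard CAN IDs are 11 bits")
    if len(data) > 8:
        raise ValueError("classic CAN payload is at most 8 bytes")
    return struct.pack(CAN_FRAME_FMT, can_id, len(data), data.ljust(8, b"\x00"))

def unpack_can_frame(frame):
    """Return (can_id, payload) from a packed 16-byte frame."""
    can_id, dlc, data = struct.unpack(CAN_FRAME_FMT, frame)
    return can_id, data[:dlc]

# Illustrative sensor reading sent with ID 0x123.
frame = pack_can_frame(0x123, b"\x01\x02")
```

An extended CAN frame would use a 29-bit identifier instead, with a flag bit set in the 32-bit ID field.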
Figure 1">
Figure 1: Harvester machine in operation equipped with a processor head.
Figure 2: CAN bus modules implementation overview.
Figure 3: CAN bus network architecture.
Figure 4: Standard CAN.
Figure 5: Extended CAN.
Figure 6: CAN node established for the study.
Figure 7: ECU CAN bus “IPCB SMARTCUT V00.2021”.
Figure 8: Reverse polarity protection circuit.
Figure 9: Short-circuit protection implemented.
Figure 10: ECU CAN bus “IPCB SMARTCUT V02.2021”.
Figure 11: Illustration of the experimental setup.
Figure 12: Experimental setup.
Figure 13: (a) Programming CAN transmission flowchart, (b) Programming CAN reception flowchart.
Figure 14: Received data from inductive proximity sensor 1 (CAN Analyzer).
Figure 15: Received data from potentiometer (CAN Analyzer).
Figure 16: Received data from inductive proximity sensor 2 (CAN Analyzer).
23 pages, 4076 KiB  
Article
Estimating Interpersonal Distance and Crowd Density with a Single-Edge Camera
by Alem Fitwi, Yu Chen, Han Sun and Robert Harrod
Computers 2021, 10(11), 143; https://doi.org/10.3390/computers10110143 - 5 Nov 2021
Cited by 11 | Viewed by 3639
Abstract
For public safety and physical security, more than a billion closed-circuit television (CCTV) cameras are currently in use around the world. The proliferation of artificial intelligence (AI) and machine/deep learning (M/DL) technologies has enabled significant applications, including crowd surveillance. State-of-the-art distance and area estimation algorithms need either multiple cameras or a reference object as ground truth; obtaining an estimate from a single camera without a scale reference remains an open question. In this paper, we propose a novel solution called E-SEC, which estimates the interpersonal distance between a pair of dynamic human objects, the area occupied by a dynamic crowd, and its density using a single edge camera. The E-SEC framework comprises edge CCTV cameras responsible for capturing a crowd on video frames, leveraging a customized YOLOv3 model for human detection. E-SEC contributes an interpersonal distance estimation algorithm, vital for monitoring the social distancing of a crowd, and an area estimation algorithm for dynamically determining the area occupied by a crowd of changing size and position. A unified output module generates the crowd size, interpersonal distances, social distancing violations, area, and density for every frame. Experimental results validate the accuracy and efficiency of E-SEC on a range of different video datasets. Full article
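The single-camera idea can be illustrated with plain pinhole-camera geometry: if an average real human width is assumed, depth follows from the detected bounding-box width as Z = f·W/w, and a pixel gap between two detections can be back-projected at their mean depth. The focal length and the 0.45 m average width below are assumptions for illustration; E-SEC's actual calibration and geometry are more involved.

```python
import math

def depth_from_width(w_px, real_w=0.45, f_px=800.0):
    """Pinhole depth estimate Z = f * W / w from a bounding-box width.
    real_w (assumed average human width in meters) and f_px (focal
    length in pixels) are illustrative assumptions."""
    return f_px * real_w / w_px

def interpersonal_distance(w1_px, w2_px, dx_px, real_w=0.45, f_px=800.0):
    """Distance between two detected people: depth difference plus the
    pixel gap dx_px back-projected at the mean depth."""
    z1 = depth_from_width(w1_px, real_w, f_px)
    z2 = depth_from_width(w2_px, real_w, f_px)
    lateral = dx_px * (z1 + z2) / 2 / f_px
    return math.sqrt((z1 - z2) ** 2 + lateral ** 2)
```

For example, two people whose boxes are both 80 px wide sit at the same estimated depth (4.5 m under these assumptions), so a 400 px gap between them back-projects to 2.25 m, a social-distancing violation under a 2 m threshold would not be flagged.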
(This article belongs to the Special Issue Feature Paper in Computers)
Figure 1: Cloud-based Architecture of Video Surveillance System Comprising Cloud Servers, Surveillance Operation Center, Communication Channel, and Edge-cameras.
Figure 2: Unified E-SEC Model for Human Detection and Estimation of Interpersonal Distance, Number of Social Distance (SD) Violations, Area, and Crowd Density.
Figure 3: Geometrical relationship between actual and virtual human dimensions.
Figure 4: Rectangular estimation of an area occupied by a crowd.
Figure 5: A camera configured to see areas forward of the point directly below it up to a distance of 10.5 m.
Figure 6: A camera set up to see areas forward of a mark 2 m from the point directly below it up to infinity.
Figure 7: Experimental setup for obtaining the relationship between widths of people and corresponding interpersonal distances.
Figure 8: File structure for experimental analysis and testing of the methods proposed in this paper.
Figure 9: Experimental analysis: a pair of people at least 2 m apart from each other at a distance of (a) 15 m, (b) 13 m, (c) 11 m, (d) 9 m, (e) 7 m, and (f) 5 m from the camera perched on a 3 m tall pole.
Figure 10: Number of people violating social distancing: 0; total number of people in the frame: 2; estimated area: 1.61 m²; density: 1.24.
Figure 11: Number of people violating social distancing: 10; total number of people in the frame: 13; estimated area: 92.01 m²; density: 0.14.
Figure 12: Number of people violating social distancing: 0; total number of people in the frame: 3; estimated area: 10.62 m²; density: 0.28.
Figure 13: Number of people violating social distancing: 4; total number of people in the frame: 7; estimated area: 47.83 m²; density: 0.15.
14 pages, 2664 KiB  
Article
An Architecture for Distributed Electronic Documents Storage in Decentralized Blockchain B2B Applications
by Obadah Hammoud, Ivan Tarkhanov and Artyom Kosmarski
Computers 2021, 10(11), 142; https://doi.org/10.3390/computers10110142 - 4 Nov 2021
Cited by 6 | Viewed by 2508
Abstract
This paper investigates the problem of distributed storage of electronic documents (both metadata and files) in decentralized blockchain-based b2b systems (DApps). The need to reduce the cost of implementing such systems and the insufficiently explored issue of storing big data in DLT are considered. An approach for building such systems is proposed that optimizes the size of the required storage (by using Erasure coding) while providing secure data storage in the geographically distributed systems of a company or a consortium of companies. The novelty of this solution is that we are the first to combine enterprise DLT with distributed file storage in which the availability of files is controlled. The results of our experiment demonstrate that the speed of the described DApp is comparable to known b2c torrent projects, and justify the choice of Hyperledger Fabric and Ethereum Enterprise. The test results also show that public blockchain networks are not suitable for creating such a b2b system. The proposed system solves the main challenges of distributed data storage by grouping data into clusters managed by a load balancer, while preventing data tampering by means of a blockchain network. The proposed DApp storage methodology scales easily in terms of distributed file storage, can be deployed on cloud computing technologies, and minimizes the required storage space. We compare this approach with known methods of file storage in distributed systems, including central storage, torrents, IPFS, and Storj. The reliability of the approach is calculated and compared to traditional solutions based on full backup. Full article
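Erasure coding is named but not detailed in the abstract; the simplest member of that family, a single XOR parity chunk over k data chunks (tolerating the loss of any one chunk), can be sketched as follows. A production system would typically use a stronger code such as Reed-Solomon, which tolerates multiple losses for a configurable overhead.

```python
def xor_bytes(a, b):
    """Byte-wise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def encode(chunks):
    """Store k data chunks plus one XOR parity chunk.
    Any single lost chunk can then be rebuilt from the rest."""
    parity = chunks[0]
    for c in chunks[1:]:
        parity = xor_bytes(parity, c)
    return chunks + [parity]

def recover(stored, lost_index):
    """Rebuild the chunk at lost_index by XORing all survivors."""
    survivors = [c for i, c in enumerate(stored) if i != lost_index]
    rebuilt = survivors[0]
    for c in survivors[1:]:
        rebuilt = xor_bytes(rebuilt, c)
    return rebuilt

# Three equal-sized data chunks -> four stored chunks (25% overhead
# here, versus 100% or more for full replication).
chunks = [b"abcd", b"efgh", b"ijkl"]
stored = encode(chunks)
```

This illustrates the storage-size argument in the abstract: parity-style coding buys redundancy for a fraction of the space a full backup requires.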
(This article belongs to the Special Issue Integration of Cloud Computing and IoT)
Figure 1: Conceptual architecture of the electronic document exchange system.
Figure 2: Requesting file algorithm.
Figure 3: Proposed and full-backup systems reliability schemas.
Figure 4: File upload/download delay in Hyperledger and Ethereum Enterprise (Hyperledger Besu).
Figure 5: File upload/download delay in Ethereum Ropsten.
11 pages, 1989 KiB  
Article
Employee Attrition Prediction Using Deep Neural Networks
by Salah Al-Darraji, Dhafer G. Honi, Francesca Fallucchi, Ayad I. Abdulsada, Romeo Giuliano and Husam A. Abdulmalik
Computers 2021, 10(11), 141; https://doi.org/10.3390/computers10110141 - 3 Nov 2021
Cited by 27 | Viewed by 8943
Abstract
Decision-making plays an essential role in management and may represent the most important component of the planning process. Employee attrition is a well-known problem that requires the right decisions from the administration in order to retain highly qualified employees. Artificial intelligence is widely used as an efficient tool for predicting such problems. The proposed work uses deep learning techniques, along with several preprocessing steps, to improve the prediction of employee attrition. Several factors lead to employee attrition; these factors are analyzed to reveal their intercorrelations and to identify the dominant ones. Our work was tested using the imbalanced IBM analytics dataset, which contains 35 features for 1470 employees. To obtain realistic results, we derived a balanced version from the original dataset. Finally, cross-validation was used to evaluate our work precisely. Extensive experiments demonstrate the practical value of our approach: the prediction accuracy is about 91% using the original dataset and about 94% using the synthetic balanced dataset. Full article
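The balanced "synthetic" dataset mentioned above can be derived in several ways; the sketch below shows the simplest, random oversampling of the minority class until the classes are equal. The field name and tiny dataset are invented for illustration; SMOTE-style interpolation would be a common alternative.

```python
import random

def oversample(records, label_key="Attrition", seed=7):
    """Randomly duplicate minority-class records until classes are equal.
    A stand-in for the balancing step; the label field name is assumed."""
    random.seed(seed)
    pos = [r for r in records if r[label_key]]
    neg = [r for r in records if not r[label_key]]
    minority, majority = (pos, neg) if len(pos) < len(neg) else (neg, pos)
    extra = [random.choice(minority)
             for _ in range(len(majority) - len(minority))]
    return records + extra

# Toy imbalanced data: 3 leavers vs. 9 stayers.
data = [{"Attrition": True}] * 3 + [{"Attrition": False}] * 9
balanced = oversample(data)
```

Balancing matters here because a classifier trained on the raw 84%/16% split can reach high accuracy by rarely predicting attrition at all.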
(This article belongs to the Special Issue Feature Paper in Computers)
Figure 1: Correlation heat-map.
Figure 2: Relation among several features: (a) JobLevel vs. MonthlyIncome, (b) PerformanceRating vs. SalaryHike, (c) TotalWorkingYears vs. JobLevel, (d) TotalWorkingYears vs. MonthlyIncome, (e) YearsAtCompany vs. YearsInCurrentRole, (f) YearsAtCompany vs. YearsWithCurrentManager, (g) TotalWorkingYears vs. Age, (h) YearsAtCompany vs. YearsSinceLastPromotion, (i) YearsAtCompany vs. TotalWorkingYears.
Figure 3: Feature importance.
Figure 4: Imbalanced and balanced datasets. (a) Original imbalanced dataset, (b) Synthetic balanced dataset.
Figure 5: The proposed network architecture.
Figure 6: Activation functions. (a) softplus function, (b) sigmoid function.
12 pages, 2175 KiB  
Article
A Cognitive Diagnostic Module Based on the Repair Theory for a Personalized User Experience in E-Learning Software
by Akrivi Krouska, Christos Troussas and Cleo Sgouropoulou
Computers 2021, 10(11), 140; https://doi.org/10.3390/computers10110140 - 29 Oct 2021
Cited by 12 | Viewed by 2752
Abstract
This paper presents a novel cognitive diagnostic module incorporated in e-learning software for tutoring the markup language HTML. The system is responsible for detecting learners' cognitive bugs and delivering personalized guidance. The novelty of this approach is that it is based on Repair theory and incorporates additional features, such as student negligence and test completion times, into its diagnostic mechanism; it also employs a recommender module that suggests optimal learning paths to students, based on their misconceptions, using descriptive test feedback and adaptive learning content. Following Repair theory, the diagnostic mechanism uses a library of error-correction rules to explain the cause of the errors the student makes during assessment. This library covers common errors, thereby creating a hypothesis space, and the test items are expanded so that they belong to that space. Both the system and the cognitive diagnostic tool were evaluated with promising results, showing that they offer a personalized experience to learners. Full article
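The library of error-correction rules can be pictured as a mapping from a (test item, observed wrong answer) pair to a named misconception. The HTML items, answers, and feedback strings below are invented examples, not the system's actual rule base.

```python
# Hedged sketch of a Repair-theory style diagnostic lookup: each known
# buggy answer on a test item maps to a likely misconception. All rules
# and texts here are illustrative assumptions.

BUG_LIBRARY = {
    ("close_tag_item", "<li>text<li>"): "Forgets closing tags for list items",
    ("attr_quote_item", "<a href=page.html>"): "Omits quotes around attribute values",
}

def diagnose(item_id, answer):
    """Return the misconception explaining a wrong answer, or a fallback
    when the answer matches no rule in the hypothesis space."""
    return BUG_LIBRARY.get((item_id, answer), "No known bug rule: flag for review")
```

A recommender module could then select remedial content keyed on the misconception string rather than on the raw wrong answer.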
(This article belongs to the Special Issue Present and Future of E-Learning Technologies)
Figure 1: Entity-Relationship model.
Figure 2: Feedback to Student A on the third test.
Figure 3: Feedback to Student B on the third test.
Figure 4: Evaluation results.
17 pages, 8197 KiB  
Article
Brain Tumor Segmentation of MRI Images Using Processed Image Driven U-Net Architecture
by Anuja Arora, Ambikesh Jayal, Mayank Gupta, Prakhar Mittal and Suresh Chandra Satapathy
Computers 2021, 10(11), 139; https://doi.org/10.3390/computers10110139 - 28 Oct 2021
Cited by 30 | Viewed by 9990
Abstract
Brain tumor segmentation seeks to separate healthy tissue from tumorous regions, an essential step in diagnosis and treatment planning that maximizes the likelihood of successful treatment. Magnetic resonance imaging (MRI) provides detailed information about brain tumor anatomy, making it an important tool for effective diagnosis and a prerequisite for replacing the existing manual detection process, in which patients rely on the skills and expertise of a human expert. To address this problem, a brain tumor segmentation and detection system is proposed and tested on the BraTS 2018 dataset. This dataset contains four MRI modalities for each patient (T1, T2, T1Gd, and FLAIR), together with a segmented image and the ground truth of the tumor segmentation, i.e., the class label. A fully automatic methodology for segmenting gliomas in pre-operative MRI scans is developed using a U-Net-based deep learning model. The input image data are first transformed and processed through several techniques: subset division, narrow object region, category brain slicing, the watershed algorithm, and feature scaling. All these steps are applied before the data enter the U-Net deep learning model, which performs pixel-label segmentation of the tumor region. The algorithm reached high accuracy on the BraTS 2018 training, validation, and testing datasets; the proposed model achieved dice coefficients of 0.9815, 0.9844, 0.9804, and 0.9954 on the testing dataset for the HGG-1, HGG-2, HGG-3, and LGG-1 sets, respectively. Full article
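The dice coefficient used to report the results above has a simple closed form, 2|A∩B|/(|A|+|B|), comparing the predicted mask against the ground truth. A minimal sketch over flat binary masks:

```python
def dice(pred, truth):
    """Dice coefficient for binary masks given as flat 0/1 lists:
    2 * |A intersect B| / (|A| + |B|). Returns 1.0 for two empty masks."""
    inter = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0

# Toy 6-pixel masks: 2 overlapping tumor pixels, 3 predicted, 2 true.
pred = [0, 1, 1, 1, 0, 0]
truth = [0, 1, 1, 0, 0, 0]
```

For real MRI volumes the same formula is applied per class over every voxel; scores near 0.98, as reported, mean the predicted and ground-truth regions almost coincide.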
(This article belongs to the Special Issue Advances of Machine and Deep Learning in the Health Domain)
Full article ">Figure 1
<p>A sample set of T1, T1-GD, T2, and T2 Flair images (Source: BraTS 2018 Dataset).</p>
Full article ">Figure 2
<p>Deep learning-based tumor detection architecture.</p>
Full article ">Figure 3
<p>Watershed algorithm outcome. (<b>a</b>) Original image; (<b>b</b>) Segmented image.</p>
Full article ">Figure 4
<p>U-Net architecture (example for 32 × 32 pixels in the lowest resolution) (Figure taken from [<a href="#B41-computers-10-00139" class="html-bibr">41</a>]).</p>
Full article ">Figure 5
<p>Performance Evaluation Score and Loss Curve of the U-net Model for Brain Tumor Segmentation. (<b>a</b>) Dice coefficient and loss curve of HGG-1 Subset; (<b>b</b>) Dice coefficient and loss curve of HGG-2 Subset; (<b>c</b>) Dice coefficient and loss curve of HGG-3 Subset; (<b>d</b>) Dice coefficient and loss curve of LGG-1 Subset.</p>
Full article ">
16 pages, 2938 KiB  
Article
Evaluating GraphQL and REST API Services Performance in a Massive and Intensive Accessible Information System
by Armin Lawi, Benny L. E. Panggabean and Takaichi Yoshida
Computers 2021, 10(11), 138; https://doi.org/10.3390/computers10110138 - 27 Oct 2021
Cited by 18 | Viewed by 9084
Abstract
Currently, most middleware application developers have two choices when designing or implementing Application Programming Interface (API) services: they can either stick with Representational State Transfer (REST) or explore the emerging GraphQL technology. Although REST is widely regarded as the standard method for API development, GraphQL is believed to be revolutionary in overcoming the main drawbacks of REST, especially data-fetching issues. Nevertheless, doubts remain, as no investigation has produced convincing results in evaluating the performance of the two services. This paper proposes a new research methodology to evaluate the performance of REST and GraphQL API services, with two main novelties. The first is that the two services are evaluated on the real, ongoing operation of a management information system, where massive and intensive query transactions take place on a complex database with many relationships. The second is that fair and independent performance evaluation results are obtained by distributing client requests and synchronizing the service responses on two virtually separated parallel execution paths, one for each API service. The performance evaluation was investigated using basic Quality of Service (QoS) measures: response time, throughput, CPU load, and memory usage. We use the term efficiency when comparing the evaluation results to capture differences in these performance measures. A statistical hypothesis test using the two-tailed paired t-test, together with a boxplot visualization, confirms the significance of the comparison results. The results show that REST is still faster, by up to 50.50% in response time and 37.16% in throughput, while GraphQL is more efficient in resource utilization, by 37.26% for CPU load and 39.74% for memory utilization. Therefore, GraphQL is the right choice when data requirements change frequently and resource utilization is the most important consideration; REST is preferable when certain data are frequently accessed by multiple requests. Full article
Show Figures

Figure 1: Illustration of the evaluated system architecture of the SIM-LP2M.
Figure 2: Illustration of the difference between REST and GraphQL architectures.
Figure 3: An illustration of the data model for testing.
Figure 4: An example of a request–response cycle fragment of the REST implementation.
Figure 5: An example of a request–response cycle fragment of a query implementation in GraphQL.
Figure 6: Average response time with 100 requests in each testing trial.
Figure 7: Experimental results of throughput (the number of handled requests).
Figure 8: Percentage result of CPU load.
Figure 9: Experimental results for memory utilization.
Figure 10: Boxplot of evaluation results for the performance measurement on REST and GraphQL services: (A) response time (ms), (B) throughput (handled requests), (C) CPU load (%), and (D) memory utilization (MB).
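The data-fetching difference at the heart of this comparison can be sketched in a few lines: a REST endpoint returns the server-defined shape of a resource, while a GraphQL query names exactly the fields it needs, avoiding over-fetching. The record and function names below are invented for illustration and are not from the paper's SIM-LP2M system:

```python
# A hypothetical resource as a server might store it.
record = {"id": 7, "title": "Community Service Project",
          "budget": 12500, "lead": "A. Researcher", "status": "ongoing"}

def rest_get(resource):
    # REST style: the server decides the payload shape, so a client
    # needing two fields still receives the whole resource.
    return dict(resource)

def graphql_select(resource, fields):
    # GraphQL style: the client's query lists the fields it wants,
    # and only those are returned.
    return {f: resource[f] for f in fields if f in resource}

print(len(rest_get(record)))                    # full payload: 5 fields
print(graphql_select(record, ["id", "title"]))  # only the requested fields
```

This field selection is one reason GraphQL can trade raw speed for lower resource usage, consistent with the CPU and memory results reported above.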
15 pages, 288 KiB  
Article
Affecting Young Children’s Knowledge, Attitudes, and Behaviors for Ultraviolet Radiation Protection through the Internet of Things: A Quasi-Experimental Study
by Sotiroula Theodosi and Iolie Nicolaidou
Computers 2021, 10(11), 137; https://doi.org/10.3390/computers10110137 - 25 Oct 2021
Cited by 9 | Viewed by 3445
Abstract
Prolonged exposure to ultraviolet (UV) radiation is linked to skin cancer. Children are more vulnerable to UV harmful effects compared to adults. Children’s active involvement in using Internet of Things (IoT) devices to collect and analyze real-time UV radiation data is suggested to [...] Read more.
Prolonged exposure to ultraviolet (UV) radiation is linked to skin cancer. Children are more vulnerable to UV harmful effects compared to adults. Children’s active involvement in using Internet of Things (IoT) devices to collect and analyze real-time UV radiation data is suggested to increase their awareness of UV protection. This quasi-experimental pre-test post-test control group study implemented light sensors in a STEM inquiry-based learning environment focusing on UV radiation and protection in primary education. This exploratory, small-scale study investigated the effect of a STEM environment implementing IoT devices on 6th graders’ knowledge, attitudes, and behaviors about UV radiation and protection. Participants were 31 primary school students. Experimental group participants (n = 15) attended four eighty-minute inquiry-based lessons on UV radiation and protection and used sensors to measure and analyze UV radiation in their school. Data sources included questionnaires on UV knowledge, attitudes, and behaviors administered pre- and post-intervention. Statistically significant learning gains were found only for the experimental group (t(14) = −3.64, p = 0.003). A statistically significant positive behavioral change was reported for experimental group participants six weeks post-intervention. The study adds empirical evidence suggesting the value of real-time data-driven approaches implementing IoT devices to positively influence students’ knowledge and behaviors related to socio-scientific problems affecting their health. Full article
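The learning-gain result above comes from a two-tailed paired t-test on pre/post questionnaire scores. As a sketch of that computation (with invented toy scores, not the study's data; `paired_t` is a hypothetical helper built on the standard formula t = mean(d) / (s_d / sqrt(n))):

```python
import math
from statistics import mean, stdev

def paired_t(pre, post):
    """t statistic for paired samples (pre minus post); df = n - 1."""
    diffs = [a - b for a, b in zip(pre, post)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / math.sqrt(n)), n - 1

# Toy pre/post knowledge scores for six pupils (illustrative only).
pre = [4, 5, 3, 6, 4, 5]
post = [6, 7, 5, 7, 6, 8]
t, df = paired_t(pre, post)
print(df, round(t, 2))
```

A negative t, as in the study's t(14) = −3.64, indicates post-test scores exceeded pre-test scores; the p-value would then be looked up against the t distribution with df degrees of freedom.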
18 pages, 2561 KiB  
Article
B-MFO: A Binary Moth-Flame Optimization for Feature Selection from Medical Datasets
by Mohammad H. Nadimi-Shahraki, Mahdis Banaie-Dezfouli, Hoda Zamani, Shokooh Taghian and Seyedali Mirjalili
Computers 2021, 10(11), 136; https://doi.org/10.3390/computers10110136 - 25 Oct 2021
Cited by 105 | Viewed by 4935
Abstract
Advancements in medical technology have created numerous large datasets including many features. Usually, all captured features are not necessary, and there are redundant and irrelevant features, which reduce the performance of algorithms. To tackle this challenge, many metaheuristic algorithms are used to select [...] Read more.
Advancements in medical technology have created numerous large datasets with many features. Usually, not all captured features are necessary: there are redundant and irrelevant features that reduce the performance of algorithms. To tackle this challenge, many metaheuristic algorithms are used to select effective features. However, most of them are not effective and scalable enough to select effective features from both small and large medical datasets. Therefore, in this paper, a binary moth-flame optimization (B-MFO) is proposed to select effective features from small and large medical datasets. Three categories of B-MFO were developed using S-shaped, V-shaped, and U-shaped transfer functions to convert the canonical MFO from continuous to binary. These categories of B-MFO were evaluated on seven medical datasets, and the results were compared with four well-known binary metaheuristic optimization algorithms: BPSO, bGWO, BDA, and BSSA. In addition, the convergence behavior of B-MFO and the comparative algorithms was assessed, and the results were statistically analyzed using the Friedman test. The experimental results demonstrate a superior performance of B-MFO in solving the feature selection problem for different medical datasets compared to the other algorithms. Full article
(This article belongs to the Special Issue Advances of Machine and Deep Learning in the Health Domain)
Show Figures

Graphical abstract
Figure 1: S-shaped, V-shaped, and U-shaped transfer functions.
Figure 2: The flowchart of B-MFO.
Figure 3: Average accuracy obtained by B-MFO and comparative algorithms on large datasets.
Figure 4: Average number of features selected by B-MFO and comparative algorithms on large datasets.
Figure 5: The convergence curves of the winning versions of B-MFO and comparative algorithms.
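A transfer function of the kind named above converts a continuous search-agent position into a binary feature mask by mapping each component to a bit probability. A minimal sketch, assuming the classic S1 sigmoid from the S-shaped family (the paper's exact variants may differ; `binarize` is an illustrative helper, not the authors' code):

```python
import math
import random

def s_shaped(x):
    """S1 transfer function: maps a real value to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def binarize(position, rng=random.random):
    """Select each feature (bit = 1) with probability s_shaped(x_i)."""
    return [1 if rng() < s_shaped(x) else 0 for x in position]

random.seed(0)
# Four continuous position components -> a 4-bit feature-selection mask.
print(binarize([-2.0, 0.0, 3.5, -0.5]))
```

Large positive components are thus very likely to keep a feature and large negative ones to drop it, which is how the continuous moth-flame search is steered through binary feature space.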
8 pages, 205 KiB  
Editorial
Blockchain and Recordkeeping: Editorial
by Victoria L. Lemieux
Computers 2021, 10(11), 135; https://doi.org/10.3390/computers10110135 - 20 Oct 2021
Cited by 8 | Viewed by 4078
Abstract
Distributed ledger technologies (DLT), including blockchains, combine the use of cryptography and distributed networks to achieve a novel form of records creation and keeping designed for tamper-resistance and immutability. Over the past several years, these capabilities have made DLTs, including blockchains, increasingly popular [...] Read more.
Distributed ledger technologies (DLT), including blockchains, combine the use of cryptography and distributed networks to achieve a novel form of records creation and keeping designed for tamper-resistance and immutability. Over the past several years, these capabilities have made DLTs, including blockchains, increasingly popular as a general-purpose technology used for recordkeeping in a variety of sectors and industry domains, yet many open challenges and issues, both theoretical and applied, remain. This editorial introduces the Special Issue of Computers focusing on exploring the frontiers of blockchain/distributed ledger technology and recordkeeping. Full article
(This article belongs to the Special Issue Blockchain Technology and Recordkeeping)