
Next Issue: Volume 11, November
Previous Issue: Volume 11, September

Information, Volume 11, Issue 10 (October 2020) – 36 articles

Cover Story: We investigated the ability of deep learning neural networks to provide a mapping from the features of a parallel distributed discrete-event simulation (PDDES) system (software and hardware) to a time synchronization scheme, in order to optimize speed-up performance. Deep belief networks (DBNs) were used, which, due to their multiple layers, with feature detectors at the lower layers and a supervised scheme at the higher layers, can provide nonlinear mapping. The mapping mechanism works by considering simulation constructs and hardware and software intricacies such as simulation objects, concurrency, iterations, routines, and messaging rates, with importance levels based on a cognitive approach. The result is the selection of a synchronization scheme (breathing time buckets, breathing time warp, or time warp) that optimizes the speed-up. View this paper.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
18 pages, 1952 KiB  
Article
A Neural-Network-Based Approach to Chinese–Uyghur Organization Name Translation
by Aishan Wumaier, Cuiyun Xu, Zaokere Kadeer, Wenqi Liu, Yingbo Wang, Xireaili Haierla, Maihemuti Maimaiti, ShengWei Tian and Alimu Saimaiti
Information 2020, 11(10), 492; https://doi.org/10.3390/info11100492 - 21 Oct 2020
Cited by 1 | Viewed by 3087
Abstract
The recognition and translation of organization names (ONs) is challenging due to the complex structures and high variability involved. ONs consist not only of common generic words but also names, rare words, abbreviations and business and industry jargon. ONs are a sub-class of named entity (NE) phrases, which convey key information in text. As such, the correct translation of ONs is critical for machine translation and cross-lingual information retrieval. The existing Chinese–Uyghur neural machine translation systems have performed poorly when applied to ON translation tasks. As there are no publicly available Chinese–Uyghur ON translation corpora, an ON translation corpus is developed here, which includes 191,641 ON translation pairs. A word segmentation approach involving characterization, tagged characterization, byte pair encoding (BPE) and syllabification is proposed here for ON translation tasks. A recurrent neural network (RNN) attention framework and transformer are adapted here for ON translation tasks with different sequence granularities. The experimental results indicate that the transformer model not only outperforms the RNN attention model but also benefits from the proposed word segmentation approach. In addition, a Chinese–Uyghur ON translation system is developed here to automatically generate new translation pairs. This work significantly improves Chinese–Uyghur ON translation and can be applied to improve Chinese–Uyghur machine translation and cross-lingual information retrieval. It can also easily be extended to other agglutinative languages. Full article
(This article belongs to the Section Artificial Intelligence)
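The abstract lists byte pair encoding (BPE) among the segmentation granularities explored for organization-name translation. As a rough illustration of that idea only, and not the authors' implementation, the following minimal Python sketch learns BPE merges over a toy corpus of character-split organization-name fragments; the toy strings and the number of merges are arbitrary assumptions.

```python
from collections import Counter

def get_pair_stats(vocab):
    """Count adjacent symbol pairs over a {symbol-sequence: frequency} vocabulary."""
    pairs = Counter()
    for word, freq in vocab.items():
        symbols = word.split()
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs

def merge_pair(pair, vocab):
    """Replace every occurrence of the pair with its concatenation."""
    merged = " ".join(pair)
    joined = "".join(pair)
    return {word.replace(merged, joined): freq for word, freq in vocab.items()}

# Toy "corpus": organization-name fragments split into characters (hypothetical data).
vocab = Counter({
    "新 疆 大 学": 5,
    "新 疆 医 科 大 学": 3,
    "乌 鲁 木 齐 市 人 民 医 院": 2,
})

merges = []
for _ in range(6):                      # the number of merges is an arbitrary choice
    stats = get_pair_stats(vocab)
    if not stats:
        break
    best = max(stats, key=stats.get)    # most frequent adjacent pair
    vocab = merge_pair(best, vocab)
    merges.append(best)

print(merges)       # learned merge operations, e.g. ('大', '学') -> '大学'
print(list(vocab))  # segmented organization-name fragments
```

In practice such learned merges are applied to both sides of the translation corpus before feeding the sequences to the RNN attention or transformer models mentioned in the abstract.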
Figures:
Figure 1: A comparison of Chinese and Uyghur organization name lengths with varying granularities.
Figure 2: Transformer architecture. This figure illustrates the character-level Chinese input as the source language and the syllable-level Uyghur input as the target language. The input Chinese sequence is “在新疆大学” (CPA: “zài xīn jiāng dà xué,” meaning “at Xinjiang University”) and “shin jang da shö” is the target predicted syllable sequence. The gray column of the target input vectors indicates a masked vector, which has not yet been predicted.
Figure 3: A flow chart for the proposed Chinese–Uyghur ON translation pair generation system.
Figure 4: Diagram of the top 15 organization name suffixes, representing business areas and the organization categories.
Figure 5: Comparison of the transformer-based Chinese–Uyghur ON translation model with varying segment granularities.
Figure 6: Comparison of the transformer-based Uyghur–Chinese ON translation model with varying segment granularities.
20 pages, 2992 KiB  
Article
Data-Driven Critical Tract Variable Determination for European Portuguese
by Samuel Silva, Nuno Almeida, Conceição Cunha, Arun Joseph, Jens Frahm and António Teixeira
Information 2020, 11(10), 491; https://doi.org/10.3390/info11100491 - 21 Oct 2020
Cited by 3 | Viewed by 2630
Abstract
Technologies, such as real-time magnetic resonance (RT-MRI), can provide valuable information to evolve our understanding of the static and dynamic aspects of speech by contributing to the determination of which articulators are essential (critical) in producing specific sounds and how (gestures). While a visual analysis and comparison of imaging data or vocal tract profiles can already provide relevant findings, the sheer amount of available data demands and can strongly profit from unsupervised data-driven approaches. Recent work, in this regard, has asserted the possibility of determining critical articulators from RT-MRI data by considering a representation of vocal tract configurations based on landmarks placed on the tongue, lips, and velum, yielding meaningful results for European Portuguese (EP). Advancing this previous work to obtain a characterization of EP sounds grounded on Articulatory Phonology, important to explore critical gestures and advance, for example, articulatory speech synthesis, entails the consideration of a novel set of tract variables. To this end, this article explores critical variable determination considering a vocal tract representation aligned with Articulatory Phonology and the Task Dynamics framework. The overall results, obtained considering data for three EP speakers, show the applicability of this approach and are consistent with existing descriptions of EP sounds. Full article
(This article belongs to the Special Issue Selected Papers from PROPOR 2020)
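Figure 6 of this article reports 1D correlations among the components of the tract variables. A minimal NumPy sketch of how such a correlation matrix can be computed is given below; the variable names follow the caption (LIPSa, LIPSp, TTCd, ...), but the random data and the array shape are purely illustrative assumptions.

```python
import numpy as np

# Columns follow the tract-variable components named in Figure 6 (illustrative only).
names = ["LIPSa", "LIPSp", "TTCd", "TTCl", "TBCd", "TBCl", "Vp", "Vt"]

# Stand-in data: one row per analysed frame, one column per component.
rng = np.random.default_rng(0)
data = rng.normal(size=(500, len(names)))  # real values would come from the RT-MRI contours

# Pearson correlation among columns (rowvar=False treats columns as variables).
corr = np.corrcoef(data, rowvar=False)

for i, a in enumerate(names):
    for j, b in enumerate(names):
        if j > i and abs(corr[i, j]) > 0.5:  # arbitrary reporting threshold
            print(f"{a} ~ {b}: r = {corr[i, j]:+.2f}")
```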
Figures:
Figure 1: Overall steps of the method to determine the critical articulators from real-time MRI (RT-MRI) images of the vocal tract. After MRI acquisition and audio annotation, the data are uploaded to our speech studies platform, under development [35], and their processing and analysis are carried out, resulting in a list of critical tract variables per phone. Refer to the text for additional details.
Figure 2: Illustrative examples of midsagittal real-time MRI images of the vocal tract, for different speakers and sounds.
Figure 3: Midsagittal real-time MRI image sequence of speaker 8545 articulating /p/ as in the nonsense word [pɐnetɐ]. The images have been automatically identified considering the corresponding time interval annotated based on the audio recorded during the acquisition. Note the closed lips throughout, and their opening in the last frame to produce the following /ɐ/.
Figure 4: Illustrative examples of the automatically segmented vocal tract contours represented over the corresponding midsagittal real-time MRI images for three speakers uttering /p/, on the top row, and /n/, on the bottom row.
Figure 5: Illustrative vocal tract representation depicting the main aspects of the considered tract variables: tongue tip constriction (TTC, defined by degree and location); tongue body constriction (TBC, defined by degree and location), computed considering both the pharyngeal wall and hard palate; velum (V, defined by the extent of the velopharyngeal and orovelar passages); and lips (LIPS, defined by aperture and protrusion). The point p_ref is used as a reference for computing constriction angular locations. Please refer to the text for further details.
Figure 6: Correlation among the different components of the considered tract variables (1D correlation) for the three speakers 8458, 8460 and 8545, and for the speaker gathering the normalized data. Tract variables for 1D correlation: LIPSa: lip aperture; LIPSp: lip protrusion; TTCd: tongue tip constriction distance; TTCl: tongue tip constriction location; TBCd: tongue body constriction distance; TBCl: tongue body constriction location; Vp: velar port distance; Vt: orovelar port distance.
Figure 7: Correlation matrices for previous results [25] considering Articulatory Phonology aligned tract variables for two of the speakers also considered in this work (8458 and 8460). In this previous work, we considered fewer data samples per speaker and represented the velum by the x and y coordinates of a landmark positioned at its back. Please refer to Figure 6 for the corresponding matrices obtained in the current work. Tract variables for 1D correlation (previous work): LIPSa: lip aperture; LIPSp: lip protrusion; TTCd: tongue tip constriction distance; TTCl: tongue tip constriction location; TBCd: tongue body constriction distance; TBCl: tongue body constriction location; Vx: velar landmark x; Vy: velar landmark y.
13 pages, 850 KiB  
Article
Factors Affecting Decision-Making Processes in Virtual Teams in the UAE
by Vida Davidaviciene, Khaled Al Majzoub and Ieva Meidute-Kavaliauskiene
Information 2020, 11(10), 490; https://doi.org/10.3390/info11100490 - 21 Oct 2020
Cited by 7 | Viewed by 6597
Abstract
Organizational reliance on virtual teams (VTs) is increasing tremendously due to the significant benefits they offer, such as efficiently reaching objectives and increasing organizational performance. However, VTs face a lot of challenges that, if overlooked, will prevent them from yielding the required benefits. One of the major issues that hinders the effectiveness of VTs is the decision-making process. There is a lack of scientific research that attempts to understand the factors affecting decision making processes in VTs. Studies in this area have only been done in the United States and Europe. However, such research has not been conducted in the Middle East, where specific scientific solutions are still required to improve the performance of VTs. Therefore, this study is conducted in the Middle East, namely in the United Arab Emirates, to gain scientific knowledge on this region’s specificity. An online questionnaire (Google forms) was used to obtain the necessary data. Hypotheses were developed to test the influence of ICT (Information and communications technologies), language, information sharing, and trust on the decision-making processes, and the effect of decision making on team performance. Structural equational model (SEM) methodology was used to test our proposed model. The results showed that factors such as trust, ICT, and information sharing have a direct effect on decision-making processes, while language has no effect on decision making, and decision-making processes have a direct effect on the performance of the VTs. Full article
(This article belongs to the Special Issue Artificial Intelligence and Decision Support Systems)
Figures:
Figure 1: Factors Affecting VT Decision Making and Performance.
Figure 2: Structural Equational model for Decision Making.
17 pages, 1614 KiB  
Article
A GARCH Model with Artificial Neural Networks
by Wing Ki Liu and Mike K. P. So
Information 2020, 11(10), 489; https://doi.org/10.3390/info11100489 - 20 Oct 2020
Cited by 14 | Viewed by 7143
Abstract
In this paper, we incorporate a GARCH model into an artificial neural network (ANN) for financial volatility modeling and estimate the parameters in TensorFlow. Our goal was to better predict stock volatility. We evaluate the performance of the models using the mean absolute errors of powers of the out-of-sample returns between 2 March 2018 and 28 February 2020. Our results show that our modeling procedure with an ANN can outperform the standard GARCH(1,1) model with standardized Student’s t distribution. Our variable importance analysis shows that Net Debt/EBITDA is among the six most important predictor variables in all of the neural network models we have examined. The main contribution of this paper is that we propose a Long Short-Term Memory (LSTM) model with a GARCH framework, because LSTM can systematically take into consideration potential nonlinearity in the volatility structure at different time points. One advantage of our research is that the proposed models are easy to implement, because they can be run in TensorFlow, a Python package that enables fast and automatic optimization. Another advantage is that the proposed models enable variable importance analysis.
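The paper embeds a GARCH structure in neural networks trained in TensorFlow. As background only, here is a minimal NumPy/SciPy sketch of fitting a plain GARCH(1,1) by Gaussian maximum likelihood; the paper's actual models use an LSTM and a Student's t distribution, which this sketch does not reproduce, and the simulated return series is an assumption.

```python
import numpy as np
from scipy.optimize import minimize

def garch11_nll(params, r):
    """Gaussian negative log-likelihood of GARCH(1,1): sigma2_t = w + a*r_{t-1}^2 + b*sigma2_{t-1}."""
    w, a, b = params
    if w <= 0 or a < 0 or b < 0 or a + b >= 1:
        return np.inf                        # crude positivity/stationarity guard
    sigma2 = np.empty_like(r)
    sigma2[0] = r.var()                      # initialise with the sample variance
    for t in range(1, len(r)):
        sigma2[t] = w + a * r[t - 1] ** 2 + b * sigma2[t - 1]
    return 0.5 * np.sum(np.log(2 * np.pi * sigma2) + r ** 2 / sigma2)

# Stand-in daily returns; in the paper these would be observed stock returns.
rng = np.random.default_rng(1)
returns = 0.01 * rng.standard_normal(1000)

res = minimize(garch11_nll, x0=[1e-5, 0.05, 0.90], args=(returns,), method="Nelder-Mead")
w, a, b = res.x
print("omega, alpha, beta:", w, a, b)
print("long-run variance:", w / (1 - a - b))
```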
Figures:
Figure 1: Architecture with Long Short-Term Memory (LSTM) cells [21].
Figure 2: Heat map of the correlation matrix of the data after normalization. The colors indicate the magnitudes of the correlations.
Figure 3: Architectures to be tested. The first dimension in each layer indicates the batch size. The implementation of the neural network does not require the input of the batch size; therefore, the first dimension for each layer is a question mark. For example, the first layers of the models are input layers and the input matrices are of shape (20, 17), so the input and output for the input layer are (?, 20, 17). As the first layers are the same and the outputs of the final layers are of dimension one for all models, the architectures are named according to the output units of the layers excluding the first and final layers. For Figure 3c, after removing the input layer and the final layer, the LSTM layer with 32 output units is followed by a dense layer with 16 output units and a dense layer with 8 output units; therefore, it is named LSTM(32)+dense(16-8).
Figure 4: Negative log-likelihood over training. To observe whether overfitting occurs during training, we plotted the negative log-likelihood over training. Except for LSTM(32)+dense(64-32-16-8), the negative log-likelihood for testing instances decreases over training.
Figure 5: Actual r_t and the predicted standard deviation. To visualize the model performances, we plotted the actual r_t and the predicted standard deviation in the same graphs. Except for LSTM(32)+dense(32), when the actual r_t fluctuates more rapidly, the predicted standard deviation increases.
Figure 6: Actual r_t^2 and the predicted variance. To visualize the model performances, we plotted the actual r_t^2 and the predicted variance in the same graphs. Except for LSTM(32)+dense(32), the actual r_t^2 and the predicted variance are very close.
Figure 7: Variable importance. To visualize variable importance, we plotted the variable importance for each model. The higher the variable importance is, the more important the variable is. For all models, Net Debt/EBITDA is very important.
16 pages, 2496 KiB  
Article
A Two-Stage Particle Swarm Optimization Algorithm for Wireless Sensor Nodes Localization in Concave Regions
by Yinghui Meng, Qianying Zhi, Qiuwen Zhang and Ni Yao
Information 2020, 11(10), 488; https://doi.org/10.3390/info11100488 - 20 Oct 2020
Cited by 3 | Viewed by 2135
Abstract
At present, range-free localization algorithms are the mainstream node localization methods and have achieved considerable success. However, there are few algorithms that can be used in concave regions, and the existing ones suffer from defects such as hop-distance error and excessive time complexity. To solve these problems, this paper proposes a two-stage PSO (Particle Swarm Optimization) algorithm for wireless sensor node localization in “concave regions”. In the first stage, it proposes a distance measurement method based on similar path search and intersection ratio, and completes the initial localization of unknown nodes based on maximum likelihood estimation. In the second stage, the improved PSO algorithm is used to optimize the initial localization results of the previous stage. The experimental results show that the localization error of this algorithm stays within 10% and the execution time remains at about 20 s as the communication radius and beacon node ratio change. Therefore, the algorithm can obtain high localization accuracy in wireless sensor networks with “concave regions” while requiring little computing power and energy consumption from the nodes, which can greatly extend the service life of sensor nodes.
(This article belongs to the Special Issue Wireless IoT Network Protocols)
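The second stage of the proposed algorithm refines initial node positions with particle swarm optimization. The following is a generic PSO loop in Python that minimizes the squared error between candidate-to-beacon distances and measured distances; the beacon coordinates, measured distances and PSO hyper-parameters are all illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical beacon positions and (noisy) measured distances to one unknown node.
beacons = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 7.0])
measured = np.linalg.norm(beacons - true_pos, axis=1) + rng.normal(0, 0.1, len(beacons))

def fitness(p):
    """Sum of squared differences between candidate-to-beacon distances and measurements."""
    return np.sum((np.linalg.norm(beacons - p, axis=1) - measured) ** 2)

n_particles, n_iter = 30, 100
w, c1, c2 = 0.7, 1.5, 1.5                 # inertia and acceleration coefficients (typical values)

pos = rng.uniform(0, 10, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, 1)), rng.random((n_particles, 1))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([fitness(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("estimated position:", gbest, "true position:", true_pos)
```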
Figures:
Figure 1: Comparison diagram of concave and convex areas. (a,b) are concave regions; (c,d) are convex regions.
Figure 2: Schematic diagram of hop error.
Figure 3: Schematic diagram of calculating the distance between neighbor nodes.
Figure 4: Schematic diagram of the multi-hop shortest path between nodes.
Figure 5: Schematic diagram of similar path search.
Figure 6: Schematic diagram of distance calculation.
Figure 7: Relationship between beacon node ratio and localization results.
Figure 8: Relationship between beacon node ratio and execution time of algorithm.
Figure 9: Relationship between node communication radius and localization results.
Figure 10: Relationship between communication radius and execution time of algorithm.
19 pages, 1778 KiB  
Article
Models for Internet of Things Environments—A Survey
by Ana Cristina Franco da Silva and Pascal Hirmer
Information 2020, 11(10), 487; https://doi.org/10.3390/info11100487 - 20 Oct 2020
Cited by 9 | Viewed by 3898
Abstract
Today, the Internet of Things (IoT) is an emerging topic in research and industry. Famous examples of IoT applications are smart homes, smart cities, and smart factories. Through highly interconnected devices, equipped with sensors and actuators, context-aware approaches can be developed to enable, e.g., monitoring and self-organization. To achieve context-awareness, a large number of environment models have been developed for the IoT that contain information about the devices of an environment, their attached sensors and actuators, as well as their interconnection. However, these models differ greatly in their content, the format being used, for example ontologies or relational models, and the domain to which they are applied. In this article, we present a comparative survey of models for IoT environments. In doing so, we describe and compare the selected models based on a thorough literature review. The result is a comparative overview of existing state-of-the-art IoT environment models.
(This article belongs to the Special Issue Data Processing in the Internet of Things)
Figures:
Figure 1: Layers of the Internet of Things.
Figure 2: Example of an ontology-based model for IoT environments based on [11].
Figure 3: Research methodology.
Figure 4: IoT-Lite ontology (based on [23]).
Figure 5: IoT models mapped on layers of the Internet of Things.
16 pages, 2650 KiB  
Article
Joint Sentiment Part Topic Regression Model for Multimodal Analysis
by Mengyao Li, Yonghua Zhu, Wenjing Gao, Meng Cao and Shaoxiu Wang
Information 2020, 11(10), 486; https://doi.org/10.3390/info11100486 - 19 Oct 2020
Cited by 1 | Viewed by 2369
Abstract
The development of multimodal media compensates for the lack of information expression in a single modality and thus gradually becomes the main carrier of sentiment. In this situation, automatic assessment for sentiment information in multimodal contents is of increasing importance for many applications. To achieve this, we propose a joint sentiment part topic regression model (JSP) based on latent Dirichlet allocation (LDA), with a sentiment part, which effectively utilizes the complementary information between the modalities and strengthens the relationship between the sentiment layer and multimodal content. Specifically, a linear regression module is developed to share implicit variables between image–text pairs, so that one modality can predict the other. Moreover, a sentiment label layer is added to model the relationship between sentiment distribution parameters and multimodal contents. Experimental results on several datasets verify the feasibility of our proposed approach for multimodal sentiment analysis. Full article
(This article belongs to the Section Artificial Intelligence)
Figures:
Figure 1: Bayesian network of latent Dirichlet allocation (LDA) [47].
Figure 2: The structure of the proposed method.
Figure 3: An example of the assigned predicted values.
Figure 4: The diagram of the proposed method.
Figure 5: The processing of the input data.
Figure 6: (a) Comparison experiment on Flickr; (b) comparison experiment on Twitter.
Figure 7: The impact of η on perplexity.
Figure 8: The impact of γ on perplexity.
32 pages, 11582 KiB  
Article
The Effects of Facial Expressions on Face Biometric System’s Reliability
by Hind A. Alrubaish and Rachid Zagrouba
Information 2020, 11(10), 485; https://doi.org/10.3390/info11100485 - 17 Oct 2020
Cited by 5 | Viewed by 5819
Abstract
The human mood has a temporary effect on the face shape due to the movement of its muscles. Happiness, sadness, fear, anger, and other emotional conditions may affect the face biometric system’s reliability. Most of the current studies on facial expressions are concerned with the accuracy of classifying subjects based on their expressions. This study investigated the effect of facial expressions on the reliability of a face biometric system to find out which facial expression puts the biometric system at greater risk. Moreover, it identified a set of facial features that have the lowest facial deformation caused by facial expressions, to be generalized during the recognition process regardless of which facial expression is presented. To achieve the goal of this study, an analysis of 22 facial features between the normal face and the six universal facial expressions was carried out. The results show that face biometric systems are affected by facial expressions: the disgust expression achieved the most dissimilar score, while the sad expression achieved the lowest dissimilar score. Additionally, the study identified the top five and top ten facial features that have the lowest facial deformations on the face shape across all facial expressions. Besides that, the relativity score showed less variance across the sample when using the top facial features. The results of this study minimize the false rejection rate in the face biometric system and consequently make it possible to raise the system’s acceptance threshold to maximize the intrusion detection rate without affecting user convenience.
(This article belongs to the Special Issue Emotions Detection through Facial Recognitions)
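The study compares 22 geometric facial features derived from landmark points between a neutral face and each expression. As a hedged illustration of that kind of measurement, the sketch below computes a few Euclidean distances from a 68-point landmark array and their relative change; the landmark indices follow the common 68-point annotation scheme and the example arrays are random stand-ins, not the paper's exact definitions or data.

```python
import numpy as np

def dist(landmarks, i, j):
    """Euclidean distance between landmark i and landmark j (landmarks: (68, 2) array)."""
    return float(np.linalg.norm(landmarks[i] - landmarks[j]))

def example_features(lm):
    """A few illustrative geometric features; index choices follow the usual 68-point layout."""
    return {
        "left_eye_width": dist(lm, 36, 39),   # outer to inner corner of one eye
        "right_eye_width": dist(lm, 42, 45),
        "mouth_width": dist(lm, 48, 54),
        "nose_width": dist(lm, 31, 35),
        "eye_distance": dist(lm, 39, 42),     # inner corners of both eyes
    }

def relative_change(neutral, expression):
    """Relative deformation of each feature with respect to the neutral face."""
    return {k: abs(expression[k] - neutral[k]) / neutral[k] for k in neutral}

# Stand-in landmarks for a neutral face and a 'happy' face (random for demonstration).
rng = np.random.default_rng(3)
neutral_lm = rng.uniform(0, 100, size=(68, 2))
happy_lm = neutral_lm + rng.normal(0, 1.0, size=(68, 2))

print(relative_change(example_features(neutral_lm), example_features(happy_lm)))
```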
Figures:
Figure 1: FGNet annotation [12].
Figure 2: FGNet annotation [12].
Figure 3: 5-point features [13].
Figure 4: 68-point features [13].
Figure 5: Selected features by Banerrjee [15]: (a) distance between eyes, (b) distance between ears, (c) distance between the nose and forehead, (d) width of the leap, in addition to the following angles whose sum is 180°: (e) angles between eyes and nose, (f) angles between ears and mouth. The following were used to measure the distance between the face objects: Euclidean distance (EU), city block metric, Minkowski distance, Chebyshev distance and cosine distance.
Figure 6: Vector distances in [16].
Figure 7: Triangles in [16].
Figure 8: Selected features in [17].
Figure 9: Selected features in [18].
Figure 10: Subject 1 in IMPA-FACES3D [37] shows the following expressions: (a) neutral, (b) happy, (c) sadness, (d) surprise, (e) anger, (f) disgust, (g) fear [40].
Figure 11: Template image for the face’s landmark detection using 68 points for a frontal view.
Figure 12: Face alignment.
Figure 13: Illustration of the 22 facial features: (a) left eye width; (b) right eye width; (c) left eye position; (d) right eye position; (e) mouth width; (f) mouth position; (g) nose width; (h) nose position; (i) chin width; (j) chin position; (k) forehead width; (l) forehead position; (m) distance between eyes; (n) distance between left eye and nose; (o) distance between right eye and nose; (p) distance between left eye and mouth; (q) distance between right eye and mouth; (r) distance between left eye and eyebrow; (s) distance between right eye and eyebrow; (t) distance between nose and forehead; (u) distance between left ear and mouth; (v) distance between right ear and mouth.
Figure 14: The means of RSS between happy expression and neutral mode of facial features for 36 subjects.
Figure 15: The means of RSS between sad expression and neutral mode of facial features for 36 subjects.
Figure 16: The means of RSS between surprise expression and neutral mode of facial features for 36 subjects.
Figure 17: The means of RSS between anger expression and neutral mode of facial features for 36 subjects.
Figure 18: The means of RSS between disgust expression and neutral mode of facial features for 36 subjects.
Figure 19: The means of RSS between fear expression and neutral mode of 22 facial features for 36 subjects.
Figure 20: The similarity score (SS) for all six expressions using all features, top five, top ten, and worst ten.
Figure 21: SS means plot of the 6 FE of 36 subjects in comparison to the neutral mode.
Figure 22: The mean of RSS of 22 facial features for 36 subjects on all expressions.
Figure 23: The SS with respect to all expressions using all 22 features, top five, top ten, and worst ten.
Figure 24: SS for top five features with respect to each expression vs. all expressions.
Figure 25: SS for top ten features with respect to each expression vs. all expressions.
Figure 26: SS for worst ten features with respect to each expression vs. all expressions.
Figure 27: Top ten vs. top five features.
19 pages, 383 KiB  
Article
Benchmarking Natural Language Inference and Semantic Textual Similarity for Portuguese
by Pedro Fialho, Luísa Coheur and Paulo Quaresma
Information 2020, 11(10), 484; https://doi.org/10.3390/info11100484 - 15 Oct 2020
Cited by 5 | Viewed by 2885
Abstract
Two sentences can be related in many different ways. Distinct tasks in natural language processing aim to identify different semantic relations between sentences. We developed several models for natural language inference and semantic textual similarity for the Portuguese language. We took advantage of pre-trained models (BERT); additionally, we studied the roles of lexical features. We tested our models in several datasets—ASSIN, SICK-BR and ASSIN2—and the best results were usually achieved with ptBERT-Large, trained in a Brazilian corpus and tuned in the latter datasets. Besides obtaining state-of-the-art results, this is, to the best of our knowledge, the most all-inclusive study about natural language inference and semantic textual similarity for the Portuguese language. Full article
(This article belongs to the Special Issue Selected Papers from PROPOR 2020)
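The models in this paper fine-tune pre-trained Portuguese BERT models for natural language inference and semantic textual similarity (STS). A minimal Hugging Face Transformers sketch for scoring a sentence pair as an STS regression is shown below; the checkpoint name is a commonly used Brazilian Portuguese BERT and is an assumption, the example sentences are invented, and the freshly initialized regression head would still need fine-tuning on ASSIN/SICK-BR-style data before its outputs mean anything.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed checkpoint: a publicly available Brazilian Portuguese BERT (BERTimbau).
checkpoint = "neuralmind/bert-base-portuguese-cased"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# num_labels=1 turns the classification head into a single-value regression head (untrained here).
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=1)

sentence_a = "Um homem está tocando violão."   # illustrative sentence pair
sentence_b = "Uma pessoa toca um instrumento."

inputs = tokenizer(sentence_a, sentence_b, truncation=True, return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()

print("raw similarity score (meaningless before fine-tuning):", score)
```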
Figures:
Figure 1: Top 100 examples of ASSIN2 with greater distance between predicted and true values of the STS task, where such distance is greater than 0.5.
Figure 2: Top 100 examples of ASSIN-PTBR with greater distance between the prediction and true values of the STS task, where such distance is greater than 0.5.
31 pages, 1775 KiB  
Article
TechTeach—An Innovative Method to Increase the Students Engagement at Classrooms
by Filipe Portela
Information 2020, 11(10), 483; https://doi.org/10.3390/info11100483 - 14 Oct 2020
Cited by 18 | Viewed by 7099
Abstract
Higher education is changing, and a new normal is coming. Students are even more demanding, and professors need to follow the evolution of technology and try to increase student engagement in the classrooms (presential or virtual). Higher education students recognise that the introduction of new tools and learning methods can improve the teaching quality and increase the motivation to learn. Regarding a question about which type of classes students preferred, ninety-one point ninety-nine per cent (91.99%) of the students wanted interactive classes over traditional. Having this concern in mind over the past years, a professor explored a set of methods, strategies and tools and designed a new and innovative paradigm using gamification. This approach is denominated TechTeach and explores a set of trending concepts and interactive tools to teach computer science subjects. It was designed to run in a B-learning environment. The paradigm uses flipped classrooms, bring your own device (BYOD), gamification, training of soft-skills and quizzes and surveys to increase the student’s engagement and provide the best learning environment to students. Currently, COVID-19 is bringing about new challenges, and TechTeach was improved in order to be more suitable for this new way of teaching (from 0% to 100% online classes). This article details this method and shows how it can be applied in a real environment. A case study was used to prove the functionality and relevance of this approach, and the achieved results are motivating. During the semester, more than a hundred students experienced this new way of teaching and assessment. In the end, more than eighty-one per cent (81%) of the students gave a positive grade to the approach, and more than ninety-five per cent (95.65%) of the students approved the use of the concept of BYOD in the classroom. With TechTeach, the classroom is not a boring place anymore; it is a place to learn and enjoy regardless of being physical or not. Full article
(This article belongs to the Special Issue Computer Programming Education)
Figures:
Figure 1: Main difference between traditional and flipped methods. Retrieved from [23].
Figure 2: Main concepts of the method.
Figure 3: Week plan at Web Programming.
Figure 4: Practical classes strategy.
Figure 5: Positive aspects of the CUnit.
Figure 6: Improvements needed at the CUnit.
Figure 7: Students’ suggestions/opinions.
10 pages, 253 KiB  
Article
The Effects of Social Media on Sporting Event Satisfaction and Word of Mouth Communication: An Empirical Study of a Mega Sports Event
by Juan Du, Mei-Yen Chen and Yu-Feng Wu
Information 2020, 11(10), 482; https://doi.org/10.3390/info11100482 - 14 Oct 2020
Cited by 4 | Viewed by 5572
Abstract
This study examines the impact of word of mouth (WOM) communication through social media and how it affects satisfaction with the Summer Universiade in Taipei. This study hopes to understand the usage characteristics of social media among university students and the implementation of social media and their effectiveness as a marketing strategy for sport organization. The hypotheses were verified using a survey of 572 university students from four universities that hosted competitions for the Summer Universiade Games. Data were analyzed using t test, Pearson’s correlation analysis and two-way ANOVA analysis. The results indicated that WOM has impacted satisfaction via social media, and the level of understanding of sporting events was significantly affected by WOM communication and overall satisfaction. Moreover, gender showed no significant differences in WOM communication and overall satisfaction with sporting events. However, male participants had significantly higher value in WOM dissemination than female respondents. In addition, the spectators’ understanding of the sporting event on WOM communication and overall satisfaction was not affected by the continued use of social media. Suggestions are provided, including sufficient sports marketing and service quality from the organizers, in order to maintain good sports events and enhance spectators’ feelings. Full article
(This article belongs to the Special Issue Data Analytics and Consumer Behavior)
10 pages, 587 KiB  
Article
Perspectives of Platform Operators, Content Producers, and Information Receivers toward Health and Fitness Apps
by Ching Li, Chia-Wen Lee, Tzu-Chun Huang and Wei-Shiang Lai
Information 2020, 11(10), 481; https://doi.org/10.3390/info11100481 - 14 Oct 2020
Cited by 1 | Viewed by 2597
Abstract
The interactive mechanism among platform operators, content producers, and information receivers is increasingly complex in human–computer symbiosis. The purpose of this study is to identify the interactive value among platform operators, content producers, and information receivers with regard to information in health and fitness apps by adopting an advanced Analytic Hierarchy Process (AHP) method drawing on the professional perspectives of app users and operators, key opinion leaders, scholars, and officers. The AHP method was used to allocate weightings to the evaluation criteria based on the judgments of twelve panelists from the three groups of platform operators, content producers, and information receivers. After focus group interviews were conducted, four dimensions and twelve sub-dimensions of the initial health and fitness apps were obtained as follows: (1) Content category: monitoring, exercise, journaling, and sleeping; (2) User reviews: functionality, interactivity, and criticism; (3) Content updates: new features, correctness, and new languages; (4) Platform terms: privacy, accuracy, ownership, and right of use. The study integrated the panelists’ opinions toward health and fitness apps and analyzed the weight of each indicator according to its importance using Power Choice V2.5. The results revealed that the dimensions of health and fitness apps were weighted in the order of content category, user reviews, platform terms, and content updates, and that the top six sub-dimensions by weight were monitoring, exercise, functionality, interactivity, privacy, and accuracy. Content producers suggested increasing the popularity of their products by adding new features, whereas information receivers preferred that problems be corrected. Content producers and information receivers graded platform terms as less essential, whereas platform operators rated them higher. This study can assist the health and fitness industry in its overall strategic operative process by identifying how effectiveness in procedures, the estimation process, and cost reduction can enhance competitiveness and further improve user experience and satisfaction.
(This article belongs to the Special Issue Selected Papers from IIKII 2020 Conferences)
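The study derives indicator weights with the Analytic Hierarchy Process (AHP). The sketch below shows the standard principal-eigenvector weight computation and Saaty consistency ratio for one pairwise comparison matrix; the matrix values are made up for illustration, and the software named in the paper (Power Choice V2.5) is not used here.

```python
import numpy as np

# Hypothetical pairwise comparison matrix for four dimensions
# (content category, user reviews, content updates, platform terms).
A = np.array([
    [1.0, 3.0, 5.0, 4.0],
    [1/3, 1.0, 3.0, 2.0],
    [1/5, 1/3, 1.0, 1/2],
    [1/4, 1/2, 2.0, 1.0],
])

# Principal eigenvector -> normalized priority weights.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

# Consistency check (Saaty): CI = (lambda_max - n) / (n - 1), CR = CI / RI.
n = A.shape[0]
lambda_max = eigvals.real[k]
CI = (lambda_max - n) / (n - 1)
RI = 0.90                      # random index for n = 4 from Saaty's table
CR = CI / RI

print("weights:", np.round(weights, 3))
print("consistency ratio:", round(CR, 3))   # CR < 0.1 is conventionally acceptable
```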
Figures:
Figure 1: Priority Assessment in the Platform Ecosystem.
Figure A1: Processing of Power Choice v2.5 of the Analytic Hierarchy Process (AHP) questionnaire.
24 pages, 10731 KiB  
Review
A Systematic Review of the Multi-Resolution Modeling (MRM) for Integration of Live, Virtual, and Constructive Systems
by Kyungeun Lee, Gene Lee and Luis Rabelo
Information 2020, 11(10), 480; https://doi.org/10.3390/info11100480 - 14 Oct 2020
Cited by 8 | Viewed by 3955
Abstract
Multi-Resolution Modeling (MRM) is a modeling technology that creates a model that expresses the same phenomenon at more than two different resolutions. Since the advent of distributed simulation systems, the MRM study began in the military field, where the modeling and simulation (M&S) was most actively developed and was recognized as an essential area in the integrated system of live, virtual and constructive (LVC) simulations. Models of the various resolutions had already been built based on the characteristics and training purposes of each weapon system, and the interoperability of these models was a primary task in the M&S community. In this study, we report the results from a systematic review of the MRM to address two questions: (1) What research has been done towards the MRM for integrating LVC system? (2) What are the research and technology challenges for the MRM implementation in the future? In total, 22 papers have been identified and studied in this review by following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. The structures of the significant 20 MRM implementation experiments in those papers are analyzed based on the relationship between the MRM and integrating the LVC system being implemented in the military. We explored the various issues related to the MRM. Then, we discussed the direction in which the MRM should move forward, comparing civilian modeling techniques with those being used in the military. Full article
(This article belongs to the Special Issue Distributed Simulation 2020)
Figures:
Figure 1: Concept of Multi-Resolution Modeling.
Figure 2: Functional view of an HLA federation.
Figure 3: PRISMA flow diagram.
Figure 4: Comparing the experiments for each force.
Figure 5: Integrated Eagle/BDS-D Network Configuration (adapted from [33,34]).
Figure 6: Eagle II architecture block diagram (adapted from [35]).
Figure 7: The architecture for the BBS/SIMNET integration in Europe (adapted from [36]).
Figure 8: System architecture of SOFNET and JCM (adapted from [37]).
Figure 9: CLCGF simulation engine components and interfaces (adapted from [38,39]).
Figure 10: ADU structure (adapted from [40]).
Figure 11: Eagle/ITEMS network configuration and implementation (adapted from [41]).
Figure 12: DMIF test/evaluation conceptual architecture (adapted from [42]).
Figure 13: The federation of the TYR, FBSIM, and ARTEVA (adapted from [43]).
Figure 14: Battlespace federation of the ARES and ModSAF (adapted from [44]).
Figure 15: JTLS-JCATS federation (adapted from [45]).
Figure 16: ACTF-MRM architecture (adapted from [46]).
Figure 17: Federation diagram (adapted from [47]).
Figure 18: NATO training federation (adapted from [48]).
Figure 19: Disaggregation and aggregation implementation.
Figure 20: Interoperability through HLA (adapted from [50]).
Figure 21: Terrain coherency of COTS simulation tools (adapted from [10]).
Figure 22: Interconnection view of the LVC system (adapted from [22]).
Figure 23: MRM federation configuration.
Figure 24: Overall hardware structure (adapted from [51]).
Figure 25: (a) The LRM screenshots of VBS4, (b) the HRM screenshots of VBS4 [64].
13 pages, 9787 KiB  
Article
Attentional Colorization Networks with Adaptive Group-Instance Normalization
by Yuzhen Gao, Youdong Ding, Fei Wang and Huan Liang
Information 2020, 11(10), 479; https://doi.org/10.3390/info11100479 - 13 Oct 2020
Cited by 2 | Viewed by 2567
Abstract
We propose a novel end-to-end image colorization framework which integrates an attention mechanism and a learnable adaptive normalization function. In contrast to previous colorization methods that directly generate the whole image, we believe that the color of the significant area determines the quality of the colorized image. The attention mechanism uses the attention map obtained by the auxiliary classifier to guide our framework to produce more subtle content and visually pleasing color in salient visual regions. Furthermore, we apply the Adaptive Group-Instance Normalization (AGIN) function to allow our framework to generate vivid colorized images flexibly, under the view that colorization is a particular style transfer task. Experiments show that our model is superior to previous state-of-the-art models in coloring foreground objects.
(This article belongs to the Section Artificial Intelligence)
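The paper's Adaptive Group-Instance Normalization (AGIN) is described as a learnable normalization used in a style-transfer-like colorization setting, but its exact formulation is not given in this listing. The PyTorch sketch below is therefore only one plausible interpretation: instance-normalized and group-normalized activations blended by a learnable gate, with affine parameters supplied externally as in adaptive normalization layers. Treat every detail as an assumption rather than the authors' definition.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveGroupInstanceNorm(nn.Module):
    """Blend of instance norm and group norm with a learnable mixing gate rho.

    gamma and beta are passed in at call time, mimicking 'adaptive' normalization
    where the affine parameters come from another sub-network. This is an
    illustrative reconstruction, not the paper's exact AGIN."""

    def __init__(self, num_channels: int, num_groups: int = 8, eps: float = 1e-5):
        super().__init__()
        self.num_groups = num_groups
        self.eps = eps
        self.rho = nn.Parameter(torch.full((1, num_channels, 1, 1), 0.5))

    def forward(self, x, gamma, beta):
        x_in = F.instance_norm(x, eps=self.eps)                  # per-sample, per-channel stats
        x_gn = F.group_norm(x, self.num_groups, eps=self.eps)    # per-sample, per-group stats
        rho = self.rho.clamp(0.0, 1.0)
        out = rho * x_gn + (1.0 - rho) * x_in
        return gamma * out + beta                                # externally supplied affine

# Usage sketch: gamma/beta would normally come from a small network over encoder features.
x = torch.randn(2, 64, 32, 32)
gamma = torch.ones(2, 64, 1, 1)
beta = torch.zeros(2, 64, 1, 1)
agin = AdaptiveGroupInstanceNorm(num_channels=64)
print(agin(x, gamma, beta).shape)   # torch.Size([2, 64, 32, 32])
```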
Figures:
Figure 1: The architecture of our framework; the details are covered in Section 3.1.
Figure 2: Colorized results and their visualization of the attention maps: (a) Ground truth, (b) Targets, (c) Attention maps, (d) Our results.
Figure 3: Comparison of different colorization methods: (a) Ground truth, (b) Targets, (c) Zhang et al. [22], (d) Larsson et al. [20], (e) Iizuka et al. [21], (f) Ours.
Figure 4: Comparison of different image translation methods: (a) Ground truth, (b) Targets, (c) CycleGAN, (d) Pix2Pix, (e) Ours.
Figure 5: Colorized results on the CAM ablation experiment: (a) Ground truth, (b) Targets, (c) Attention maps, (d) Our results without CAM, (e) Our results with CAM.
Figure 6: Colorized results on the AGIN ablation experiment: (a) Ground truth, (b) Targets, (c) Results using GN only, (d) Results using IN only, (e) Results using AGIN.
Figure 7: Failure cases: (a) Ground truth, (b) Targets, (c) Our results.
12 pages, 694 KiB  
Article
Examining the Effects of eWOM, Trust Inclination, and Information Adoption on Purchase Intentions in an Accelerated Digital Marketing Context
by Muddasar Ghani Khwaja, Saqib Mahmood and Umer Zaman
Information 2020, 11(10), 478; https://doi.org/10.3390/info11100478 - 13 Oct 2020
Cited by 48 | Viewed by 12703
Abstract
The study focuses on the canvas of online information transmission that has expanded exponentially. Especially due to social media networks, consumers have been exposed to significant amounts of disinformation, misinformation and actual information. Electronic word-of-mouth (eWOM) on social media networks has been facilitating swift information spread. Henceforth, it has become increasingly problematic for consumers to adopt authentic information and differentiate between marketers-generated content and user-generated content. The study aims to unfold the factors that lead to the information adoption that consequently motivates consumers to purchase products and services. The research study provides a comprehensive framework to re-configure factors that lead to consumers’ purchase intentions in the digital economy. Respondents of the study were those individuals who have been buying products online. The theoretically knitted causal relationships were estimated using a structural equation modelling (SEM) technique. The results indicate that trust inclination and information adoption sequentially mediate relationships between information quality, information usefulness, perceived risk and argument quality with purchase intentions. Full article
Figures:
Figure 1: Conceptual Framework.
Figure 2: Structural Path Analysis on Analysis of Moment Structures (AMOS).
24 pages, 3031 KiB  
Article
Research on Power Demand Side Information Quality Indicators and Evaluation Based on Grounded Theory Approach
by Yiping Zhu and Zan Zhou
Information 2020, 11(10), 477; https://doi.org/10.3390/info11100477 - 12 Oct 2020
Cited by 1 | Viewed by 2835
Abstract
High-quality power demand side information is necessary for scientific decision-making of power grid construction projects. Literature research shows that the current demand side management (DSM) information quality theories and methods need to be improved, and the information quality indicators and evaluation work are essential. In this paper, based on the grounded theory, about 250 copies of relevant literatures and interview records are reviewed. Through open coding, spindle coding, and selective coding, 105 initial concepts are finally extracted to 35 categories and 10 main categories. On this basis, four information dimensions including load extraction, monitoring, management, and government planning are summarized. An index system containing 34 indicators for DSM information quality evaluation on the power demand side is constructed. Finally, using matter-element extension evaluation method, a case study in China is performed to verify the feasibility and scientificity of the indexes. The results show that DSM information quality evaluation indexes are effective, and the evaluation method is also applicable. The establishment of DSM information quality indicators and the evaluation methods in this paper can provide a reference for similar information quality evaluation work in power systems. Full article
(This article belongs to the Section Information Theory and Methodology)
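As a rough illustration of how scores on an indicator system can be rolled up into dimension-level quality ratings, the sketch below uses a plain weighted average over invented indicators, weights and scores; it is not the paper's matter-element extension evaluation method.

```python
# Simplified roll-up of indicator scores into dimension-level and overall DSM
# information quality ratings. This is a plain weighted average, not the
# paper's matter-element extension method; the indicators, weights and
# scores below are hypothetical.
indicators = {
    "load extraction":     {"completeness": (0.6, 82), "timeliness": (0.4, 75)},
    "monitoring":          {"accuracy": (0.5, 68), "coverage": (0.5, 80)},
    "management":          {"consistency": (1.0, 71)},
    "government planning": {"traceability": (1.0, 77)},
}

def dimension_score(weighted_scores):
    """Weighted average of (weight, score) pairs for one dimension."""
    total_w = sum(w for w, _ in weighted_scores.values())
    return sum(w * s for w, s in weighted_scores.values()) / total_w

per_dimension = {dim: dimension_score(scores) for dim, scores in indicators.items()}
print(per_dimension)
print("overall quality:", sum(per_dimension.values()) / len(per_dimension))
```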
Show Figures
Figure 1: Main content of demand side management (DSM) information.
Figure 2: Grounded theory approach research process.
Figure 3: Coding process.
Figure 4: Process of open coding.
Figure 5: Relational structure model of the influencing factors of power demand side information quality.
17 pages, 8463 KiB  
Article
Underwater Fish Body Length Estimation Based on Binocular Image Processing
by Ruoshi Cheng, Caixia Zhang, Qingyang Xu, Guocheng Liu, Yong Song, Xianfeng Yuan and Jie Sun
Information 2020, 11(10), 476; https://doi.org/10.3390/info11100476 - 12 Oct 2020
Cited by 10 | Viewed by 3962
Abstract
Recently, underwater information analysis technology has developed rapidly, which benefits underwater resource exploration, underwater aquaculture, and related fields. Dangerous and laborious manual work is being replaced by deep learning-based computer vision technology, which has gradually become the mainstream. A binocular-camera-based [...] Read more.
Recently, underwater information analysis technology has developed rapidly, which benefits underwater resource exploration, underwater aquaculture, and related fields. Dangerous and laborious manual work is being replaced by deep learning-based computer vision technology, which has gradually become the mainstream. A binocular-camera-based visual analysis method can not only collect seabed images but also reconstruct 3D scene information. The parallax of the binocular image pair is used to calculate the depth of the underwater object. A refined binocular analysis method for underwater creature body length estimation is constructed. A fully convolutional network (FCN) segments the underwater object in the image to obtain its position, and a fish body direction estimation algorithm is proposed based on the segmented image. The semi-global block matching (SGBM) algorithm computes the depth of the object region, and the body length is estimated from the left and right views of the object. The combination of FCN and SGBM gives the algorithm advantages in both time and accuracy when analyzing objects of interest. Experimental results show that this method effectively reduces unnecessary information and improves efficiency and accuracy compared to the original SGBM algorithm. Full article
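A minimal sketch of the stereo-depth step described in the abstract, assuming OpenCV, a rectified image pair, and placeholder calibration values (focal length, baseline, file names); the FCN segmentation that restricts the computation to the fish region is not shown.

```python
# Compute an SGBM disparity map for a rectified stereo pair and convert
# disparity to depth. Focal length, baseline and file names are placeholder
# assumptions; segmentation of the fish region is omitted.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

sgbm = cv2.StereoSGBM_create(
    minDisparity=0, numDisparities=64, blockSize=5,
    P1=8 * 5 * 5, P2=32 * 5 * 5, uniquenessRatio=10,
    speckleWindowSize=100, speckleRange=2,
)
disparity = sgbm.compute(left, right).astype(np.float32) / 16.0  # pixels

focal_px = 700.0    # assumed focal length in pixels (from calibration)
baseline_m = 0.06   # assumed baseline between the two cameras in metres
valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = focal_px * baseline_m / disparity[valid]

# Body length could then be estimated from the 3D coordinates of the head
# and tail points found in the segmented fish region.
print("median depth of valid pixels (m):", float(np.median(depth_m[valid])))
```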
Show Figures
Figure 1: Flowchart of object body length analysis.
Figure 2: Relationship between coordinate systems.
Figure 3: Transposed convolution.
Figure 4: Fully convolutional network.
Figure 5: Depth prediction with a stereo camera.
Figure 6: Length prediction.
Figure 7: Estimation of fish orientation.
Figure 8: Long side definition.
Figure 9: Head point of the object.
Figure 10: Chessboard image.
Figure 11: Calibration results.
Figure 12: Results of the fish4knowledge project data set on model M1.
Figure 13: Results of the self-made data set on model M1.
Figure 14: Results on model M2.
Figure 15: Loss diagram.
Figure 16: Accuracy, cross-entropy, and loss diagram.
Figure 17: Results of filters.
Figure 18: Length of the object in the image.
Figure 19: Box plot of results.
Figure 20: Time for the semi-global block matching (SGBM) method and for the combination of SGBM and the fully convolutional network (FCN).
16 pages, 4659 KiB  
Article
Traffic Sign Detection Method Based on Improved SSD
by Shuai You, Qiang Bi, Yimu Ji, Shangdong Liu, Yujian Feng and Fei Wu
Information 2020, 11(10), 475; https://doi.org/10.3390/info11100475 - 9 Oct 2020
Cited by 22 | Viewed by 4954
Abstract
Due to changes in illumination, adverse weather conditions, and interference from signs that resemble real traffic signs, traffic signs can be falsely detected. To improve the detection of small targets, the baseline SSD (single shot multibox detector) adopts [...] Read more.
Due to changes in illumination, adverse weather conditions, and interference from signs that resemble real traffic signs, traffic signs can be falsely detected. To improve the detection of small targets, the baseline SSD (single shot multibox detector) adopts a multi-scale feature detection method, which improves detection to some extent but requires a large amount of computation. To this end, we propose a lightweight SSD network algorithm. The method replaces some of the 3 × 3 convolution kernels in the baseline network with 1 × 1 convolution kernels and deletes some convolutional layers to reduce the computational load of the baseline SSD network. A color detection algorithm based on the phase difference method and a connected component calculation are then used to further filter the detection results, and finally a data enhancement strategy based on image appearance transformations is used to improve the balance of the dataset. The experimental results show that the proposed method is 3% more accurate than the baseline SSD network and, more importantly, the detection speed increases by a factor of 1.2. Full article
(This article belongs to the Special Issue Artificial Intelligence and Decision Support Systems)
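The core of the lightweight design is the substitution of 1 × 1 for 3 × 3 convolution kernels. The toy comparison below, assuming PyTorch and arbitrary channel sizes rather than the paper's exact configuration, shows the roughly nine-fold per-layer parameter reduction such a substitution yields.

```python
# Illustration of the parameter savings from replacing a 3x3 convolution with
# a 1x1 convolution, as in the lightweight SSD described above. The channel
# sizes are arbitrary examples, not the paper's exact configuration.
import torch.nn as nn

def n_params(module):
    return sum(p.numel() for p in module.parameters())

conv3x3 = nn.Conv2d(in_channels=512, out_channels=512, kernel_size=3, padding=1)
conv1x1 = nn.Conv2d(in_channels=512, out_channels=512, kernel_size=1)

print("3x3 conv parameters:", n_params(conv3x3))  # 512*512*9 + 512
print("1x1 conv parameters:", n_params(conv1x1))  # 512*512*1 + 512
print("reduction factor: %.1fx" % (n_params(conv3x3) / n_params(conv1x1)))
```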
Show Figures
Figure 1: Flow chart of lightweight SSD-based traffic sign detection.
Figure 2: Baseline SSD network structure.
Figure 3: Lightweight SSD network structure.
Figure 4: The signs in the yellow, red, and blue boxes are warning, prohibition, and mandatory signs, respectively.
Figure 5: RGB detection results for three types of traffic signs.
Figure 6: Flow chart of color detection based on the phase difference method.
Figure 7: Color detection effect diagram.
Figure 8: Statistical graph of connected components in binary images.
Figure 9: Flow chart of connected component detection based on the two-pass algorithm.
Figure 10: Data enhancement strategy.
Figure 11: Change curve of the iterative loss value of the lightweight SSD network.
11 pages, 697 KiB  
Article
Online At-Risk Student Identification using RNN-GRU Joint Neural Networks
by Yanbai He, Rui Chen, Xinya Li, Chuanyan Hao, Sijiang Liu, Gangyao Zhang and Bo Jiang
Information 2020, 11(10), 474; https://doi.org/10.3390/info11100474 - 9 Oct 2020
Cited by 53 | Viewed by 5245
Abstract
Although online learning platforms are gradually becoming commonplace in modern society, learners’ high dropout rates and poor academic performance require more attention within the virtual learning environment (VLE). This study aims to predict students’ performance in a specific course while it is continuously [...] Read more.
Although online learning platforms are gradually becoming commonplace in modern society, learners’ high dropout rates and poor academic performance require more attention within the virtual learning environment (VLE). This study aims to predict students’ performance in a specific course while it is continuously running, using static personal biographical information and sequential behavioral data from the VLE. To achieve this goal, a novel recurrent neural network (RNN)-gated recurrent unit (GRU) joint neural network is proposed to fit both static and sequential data, and a data completion mechanism is adopted to fill in missing stream data. To incorporate the sequential relationships in the learning data, three kinds of time-series deep neural network algorithms (simple RNN, GRU, and LSTM) are first considered as baseline models, and their performance in identifying at-risk students is compared. Experimental results on the Open University Learning Analytics Dataset (OULAD) show that simpler methods such as GRU and simple RNN outperform the relatively complex LSTM model. The results also reveal that different models peak at different times, which motivates the proposed joint model; it achieves over 80% prediction accuracy for at-risk students at the end of the semester. Full article
(This article belongs to the Special Issue Artificial Intelligence Applications for Education)
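A minimal sketch of a joint static-plus-sequential model in the spirit of the abstract, assuming PyTorch; the feature dimensions, layer sizes and two-class output are assumptions rather than the paper's exact architecture.

```python
# A GRU branch over weekly VLE click sequences plus a dense branch over static
# demographic features, concatenated for binary at-risk classification.
import torch
import torch.nn as nn

class JointAtRiskModel(nn.Module):
    def __init__(self, seq_features=10, static_features=8, hidden=32):
        super().__init__()
        self.gru = nn.GRU(seq_features, hidden, batch_first=True)
        self.static_net = nn.Sequential(nn.Linear(static_features, hidden), nn.ReLU())
        self.head = nn.Linear(2 * hidden, 2)  # pass / at-risk

    def forward(self, weekly_clicks, demographics):
        _, h = self.gru(weekly_clicks)          # h: (1, batch, hidden)
        joint = torch.cat([h[-1], self.static_net(demographics)], dim=1)
        return self.head(joint)

model = JointAtRiskModel()
logits = model(torch.randn(4, 12, 10), torch.randn(4, 8))  # 4 students, 12 weeks
print(logits.shape)  # torch.Size([4, 2])
```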
Show Figures
Figure 1: Distribution of virtual learning environment (VLE) click sum (total clicks on the VLE per student) and weighted assessment score for (a) students who failed and (b) students who passed the course.
Figure 2: Overview of the proposed approach.
Figure 3: Averaged prediction results over all courses for all weeks: (a) averaged testing accuracy of all models across weeks; (b) averaged loss value of all models across 250 epochs.
Figure 4: Averaged week-wise testing metrics: (a) precision and (b) recall of the compared models across weeks.
Figure 5: Comparison between models across different data sources.
19 pages, 4051 KiB  
Article
Decision-Making Process Regarding the Use of Mobile Phones in Romania Taking into Consideration Sustainability and Circular Economy
by Cristian Bogdan Onete, Sandra Diana Chița, Vanesa Madalina Vargas and Sonia Budz
Information 2020, 11(10), 473; https://doi.org/10.3390/info11100473 - 7 Oct 2020
Cited by 6 | Viewed by 5019
Abstract
Nowadays, the use of smartphones has become essential for daily activities with either a personal or professional purpose. Large amounts of resources are necessary for both the production and the use of these devices, which means that solutions in terms of [...] Read more.
Nowadays, the use of smartphones has become essential for daily activities with either a personal or professional purpose. Large amounts of resources are necessary for both the production and the use of these devices, which means that solutions in terms of sustainability are needed. The purpose of this research is to highlight the concept of sustainability in relation to smartphones, as well as to underline the possibilities that exist for consumers. The study examines the habits of young consumers in Romania, the reasons behind mobile phone replacement, and the factors that influence the purchase decision. The methodology combines quantitative and qualitative market research. An analysis was performed to gain a deep understanding of trends in mobile phone ownership and preferred brands. The study also provides a general view of the neglect, among the young population of Romania, of the dangers to which the environment is exposed because of purchasing habits that go against sustainability. To this end, important results were obtained through the analysis of data from self-administered questionnaires and interviews. The results show that people usually use only one mobile phone at a time and change it once every two years for sustainability and financial reasons; the same applies when choosing a certain brand. The reasons behind the purchase of a new phone and the decisions regarding the old one are based on sound principles of the circular economy and sustainability. Preferences in terms of technology and design, as well as the decision process, are correlated with income. Full article
(This article belongs to the Special Issue Green Marketing)
Show Figures
Graphical abstract
Figure 1: Number of owned mobile phones (%).
Figure 2: How often respondents change their mobile phones (%).
Figure 3: Preferred mobile phones (%).
Figure 4: Opinions about purchasing second-hand phones (%) (1 = total disagreement; 5 = total agreement).
Figure 5: Buying a phone in the next 12 months (%).
Figure 6: Most often utilized device for accessing social media networks (%).
Figure 7: How many times respondents charge their mobile phones within 24 h (%).
(All figures: authors' own research.)
14 pages, 1201 KiB  
Review
A Systematic Review of Indicators for Evaluating the Effectiveness of Digital Public Services
by Glauco Vitor Pedrosa, Ricardo A. D. Kosloski, Vitor G. de Menezes, Gabriela Y. Iwama, Wander C. M. P. da Silva and Rejane M. da C. Figueiredo
Information 2020, 11(10), 472; https://doi.org/10.3390/info11100472 - 6 Oct 2020
Cited by 13 | Viewed by 7572
Abstract
Effectiveness is a key feature of good governance, as the public sector must make the best use of resources to meet the needs of the population. Several indicators can be analyzed to evaluate the effectiveness of a service. This study analyzes theoretical [...] Read more.
Effectiveness is a key feature of good governance, as the public sector must make the best use of resources to meet the needs of the population. Several indicators can be analyzed to evaluate the effectiveness of a service. This study analyzes theoretical references and presents a systematic review of indicators for assessing the effectiveness of digital public services from the perspective of the user. First, a literature review was carried out to identify the indicators most commonly employed to evaluate effectiveness in the public sector; then, the perceptions of academics and professionals regarding digital government were assessed to analyze the relevance of these indicators. As a result, two groups of indicators were found: technical factors based on service quality, and the usefulness of the service. This work enriches the discussion on how to create an effective model for evaluating the effectiveness of public services, so as to guarantee quality standards and meet the expectations of users. Full article
(This article belongs to the Section Review)
Show Figures
Figure 1: Evolution of the number of papers retrieved from the SCOPUS database with the search string in Table 1.
Figure 2: Top ten authors with the highest number of documents retrieved.
Figure 3: Classification of the retrieved documents by (a) type of publication and (b) area.
Figure 4: Co-occurrence of words in the retrieved documents.
Figure 5: Co-citation of publications in the retrieved documents.
Figure 6: Technology Acceptance Model (TAM).
Figure 7: Average scores obtained for each indicator.
17 pages, 1613 KiB  
Article
Improving Cybersafety Maturity of South African Schools
by Elmarie Kritzinger
Information 2020, 11(10), 471; https://doi.org/10.3390/info11100471 - 4 Oct 2020
Cited by 8 | Viewed by 3211
Abstract
This research investigated the current maturity levels of cybersafety in South African schools. The maturity level indicates whether schools are prepared to assist the relevant role players (teachers and learners) in establishing a cybersafety culture within the school environment. The research study measured the [...] Read more.
This research investigated the current maturity levels of cybersafety in South African schools. The maturity level indicates whether schools are prepared to assist the relevant role players (teachers and learners) in establishing a cybersafety culture within the school environment. The research study measured the cybersafety maturity of 24 South African schools by evaluating the four main elements needed to improve cybersafety within schools: (1) leadership and policies, (2) infrastructure, (3) education, and (4) standards and inspection. The study used a UK-approved measurement tool (360safe) to measure the cybersafety maturity of schools within South Africa, using five levels of compliance (Level 1: full compliance, to Level 5: no compliance). The data analysis clearly indicated that all the schools that participated in the study had a significantly low level of cybersafety maturity and compliance. Schools are starting to adopt technology as part of their educational and social approach to prepare learners for the future, but there is a clear lack of supporting cybersafety awareness, policies, practices and procedures within South African schools. The research proposes a step-by-step, ten-phase cybersafety plan to empower schools to create and grow their own cybersafety culture. Full article
(This article belongs to the Special Issue Cyber Resilience)
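A small illustration of how per-strand compliance levels (1 = full compliance, 5 = no compliance) might be averaged into element-level maturity scores; the strand values are invented, and 360safe itself is a questionnaire-based tool rather than a software API.

```python
# Average invented per-strand compliance levels into element-level scores
# (1 = full compliance, 5 = no compliance). Values are illustrative only.
school_scores = {
    "A: leadership and policies":  [4, 5, 4],
    "B: infrastructure":           [3, 4],
    "C: education":                [5, 5, 4, 5, 4],
    "D: standards and inspection": [5],
}

for element, strands in school_scores.items():
    avg = sum(strands) / len(strands)
    print(f"{element}: average compliance level {avg:.1f}")
```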
Show Figures
Figure 1: Strands 1, 2 and 3 of Element A.
Figure 2: Strands 1 and 2 of Element B.
Figure 3: Strands 1 to 5 of Element C.
Figure 4: Strand 1 of Element D.
Figure 5: Overview of the averages of all four elements.
Figure 6: Ten-phase approach to cybersafety awareness.
Figure 7: Flow diagram of cybersafety implementation.
18 pages, 4806 KiB  
Article
The BioVisualSpeech Corpus of Words with Sibilants for Speech Therapy Games Development
by Sofia Cavaco, Isabel Guimarães, Mariana Ascensão, Alberto Abad, Ivo Anjos, Francisco Oliveira, Sofia Martins, Nuno Marques, Maxine Eskenazi, João Magalhães and Margarida Grilo
Information 2020, 11(10), 470; https://doi.org/10.3390/info11100470 - 2 Oct 2020
Cited by 5 | Viewed by 3690
Abstract
In order to develop computer tools for speech therapy that reliably classify speech productions, there is a need for speech production corpora that characterize the target population in terms of age, gender, and native language. Apart from including correct speech productions, such corpora [...] Read more.
In order to develop computer tools for speech therapy that reliably classify speech productions, there is a need for speech production corpora that characterize the target population in terms of age, gender, and native language. Apart from including correct speech productions, such corpora should also include samples from people with speech sound disorders, and the annotation of the data should include information on the correctness of the speech productions. Following these criteria, we collected a corpus that can be used to develop computer tools for the speech and language therapy of Portuguese children with sigmatism. The proposed corpus contains European Portuguese children’s productions of words with sibilant consonants, recorded from 356 children from 5 to 9 years of age. Two characteristics of this corpus are particularly relevant to speech and language therapy and computer science research: (1) the corpus includes data from children with speech sound disorders; and (2) the productions were annotated according to the criteria of speech and language pathologists and include information about speech production errors. These features are relevant for the development and assessment of speech processing tools for the speech therapy of Portuguese children. In addition, as an illustration of how to use the corpus, we present three speech therapy games that use a convolutional neural network sibilants classifier trained with data from this corpus, together with a word recognition module trained on additional children's data and calibrated and evaluated with the collected corpus. Full article
(This article belongs to the Special Issue Selected Papers from PROPOR 2020)
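A sketch of a 1D convolutional four-class sibilant classifier of the kind mentioned in the abstract, assuming PyTorch; the input length, channel counts and use of raw one-dimensional frames are assumptions, and the paper's exact features and topology may differ.

```python
# A small 1D CNN that maps a fixed-length audio feature frame to one of
# four sibilant classes. Architecture details are illustrative assumptions.
import torch
import torch.nn as nn

class SibilantCNN(nn.Module):
    def __init__(self, n_classes=4, input_len=256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.classifier = nn.Linear(32 * (input_len // 4), n_classes)

    def forward(self, x):                 # x: (batch, 1, input_len)
        z = self.features(x)
        return self.classifier(z.flatten(1))

model = SibilantCNN()
print(model(torch.randn(8, 1, 256)).shape)  # torch.Size([8, 4])
```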
Show Figures
Figure 1: Main places of articulation in the vocal tract, adapted from Reference [5].
Figure 2: Equipment used for the recordings: a digital audio tape (DAT), a microphone, and acoustic foam.
Figure 3: Stimulus used to suggest the word "mochila".
Figure 4: The BioVisualSpeech games for sigmatism: (a) a child playing the isolated sibilants game; (b) a child playing the pairs game with the help of an adult.
Figure 5: The BioVisualSpeech isolated sibilants game: the scenario for the [ʒ] sibilant in (a) plain mode and (b) vocal tract feedback mode.
Figure 6: The BioVisualSpeech pairs game: (a) the game starts with all cards facing down; (b) the matching-cards level ("Find the cards with the same image and name that image"); (c) the matching-sounds level ("Find the cards with the same sound and say that sound").
Figure 7: The BioVisualSpeech word naming game: (a) fairy scenario, with the host providing a textual hint; (b) basketball shootout scenario, with a follow-up hint after a synonym answer; (c) penalty shootout scenario, with a correct answer; (d) reward screen.
Figure 8: Representation of the 1D convolutional neural network (CNN) four-class sibilants classifier.
17 pages, 6122 KiB  
Review
Multiple Resolution Modeling: A Particular Case of Distributed Simulation
by Mario Marin, Gene Lee and Jaeho Kim
Information 2020, 11(10), 469; https://doi.org/10.3390/info11100469 - 2 Oct 2020
Viewed by 2765
Abstract
Multiple resolution modeling (MRM) is the future of distributed simulation. This article describes the different definitions and notions related to MRM. MRM is a relatively new research area, and there is a demand for simulator integration from a modeling-complexity point of view. This [...] Read more.
Multiple resolution modeling (MRM) is the future of distributed simulation. This article describes the different definitions and notions related to MRM. MRM is a relatively new research area, and there is a demand for simulator integration from a modeling-complexity point of view. The article also analyzes in detail a taxonomy based on the researchers' experience. Finally, an example that uses the high-level architecture (HLA) is explained to illustrate the above definitions and, in particular, to examine the problems common to these distributed simulation configurations. The steps required to build an MRM distributed simulation system are introduced, and the conclusions describe the lessons learned from this unique form of distributed simulation. Full article
(This article belongs to the Special Issue Distributed Simulation 2020)
Show Figures
Figure 1: Different types of simulation models in the military.
Figure 2: An MRM system to model the decision levels of the military should be hierarchical (modified and adapted from Department of the Army (2008)).
Figure 3: Different degrees of resolution and their relationship with aggregation (modified and adapted from Mullen et al. [9]).
Figure 4: First group.
Figure 5: Second group.
Figure 6: Third group.
Figure 7: Decide simulations (Step 1).
Figure 8: Federation architecture (Step 2).
Figure 9: Design regulator (Step 3).
Figure 10: Arrangement of computers in UCF SIL.
Figure 11: Operation environment for MRM (low-resolution simulation).
Figure 12: Operation environment for MRM (high-resolution simulation).
Figure 13: MRM federation configuration.
Figure 14: How to set the geographical trigger (with a polygon) in MASA Sword.
Figure 15: How to set a time trigger in MASA Sword.
Figure 16: Time-based MRM schematic implementation scenario.
20 pages, 10949 KiB  
Article
Prevention of Unintended Appearance in Photos Based on Human Behavior Analysis
by Yuhi Kaihoko, Phan Xuan Tan and Eiji Kamioka
Information 2020, 11(10), 468; https://doi.org/10.3390/info11100468 - 2 Oct 2020
Cited by 1 | Viewed by 2403
Abstract
Nowadays, with smartphones, people can easily take photos, post them to social networks, and use them for various purposes. This leads to a social problem: unintended appearance in photos may threaten the facial privacy of the people photographed. Some solutions to [...] Read more.
Nowadays, with smartphones, people can easily take photos, post them to social networks, and use them for various purposes. This leads to a social problem: unintended appearance in photos may threaten the facial privacy of the people photographed. Some solutions to protect facial privacy in photos have already been proposed, but most rely on techniques that de-identify photos and can be applied only by photographers, leaving the photographed person with no choice. To deal with this, we propose an approach that allows a photographed person to proactively detect whether someone is intentionally or unintentionally trying to take pictures of them, so that they can react appropriately to protect their facial privacy. In this approach, the photographed person is assumed to use a wearable camera to record the surrounding environment in real time. The skeleton information of likely photographers captured in the monitoring video is extracted and used to calculate a dynamic programming (DP) score, which is compared with a threshold to recognize photo-taking behavior. Experimental results demonstrate that the proposed approach recognizes photo-taking behavior with a high accuracy of 92.5%. Full article
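A minimal dynamic-programming (DTW-style) matching sketch of the scoring idea described above: a reference arm-motion sequence is aligned with an observed one and the accumulated, length-normalized score is compared with a threshold. The path weighting, feature choice and threshold value are assumptions, not the paper's exact scheme.

```python
# DP alignment of a 1-D reference motion sequence against an observed one,
# thresholded to flag photo-taking behavior. Weights and threshold are
# illustrative assumptions.
import numpy as np

def dp_score(reference, observed):
    """Accumulated DP distance between two 1-D feature sequences."""
    n, m = len(reference), len(observed)
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(reference[i - 1] - observed[j - 1])
            acc[i, j] = cost + min(acc[i - 1, j], acc[i, j - 1], acc[i - 1, j - 1])
    return acc[n, m] / (n + m)  # length-normalized score

reference = np.sin(np.linspace(0, np.pi, 60))        # stand-in for a recorded
observed = np.sin(np.linspace(0, np.pi, 75)) + 0.05  # photo-taking pattern

THRESHOLD = 0.1  # assumed decision threshold (the paper tunes it via FRR/FAR)
score = dp_score(reference, observed)
print("DP score:", round(score, 4),
      "-> photo-taking" if score < THRESHOLD else "-> other behavior")
```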
Show Figures
Figure 1: A scenario of photo-taking behavior detection and its notification.
Figure 2: "BODY_25" human skeleton estimation model.
Figure 3: Focusing parts in the proposed approach.
Figure 4: Calculation of the arm's length and the angle of the bending arm: (a) arm length from the distance between two key points; (b) bending-arm angle from three key points using the inner product.
Figure 5: Definition of the DP path: weighted scores for accumulating distances in the i-j coordinate space.
Figure 6: Ideal FRR-FAR curves and the EER crossing point.
Figure 7: Experimental environment where the photographed person records the photographer performing either photo-taking or net-surfing behavior with a smartphone.
Figure 8: Visual skeleton information of photo-taking behavior extracted from OpenPose: (a) initial arm position; (b) before taking a photo; (c) during photo-taking; (d) after photo-taking.
Figure 9: Visual skeleton information of net-surfing behavior: (a) initial arm position; (b) before net-surfing; (c) during net-surfing; (d) after net-surfing.
Figure 10: Arm lengths and bending-arm angles of subject 1 when taking a photo: (a) right upper and lower arm lengths; (b) left upper and lower arm lengths; (c) right and left bending-arm angles.
Figure 11: Arm lengths and bending-arm angles of subject 1 when net-surfing: (a) right upper and lower arm lengths; (b) left upper and lower arm lengths; (c) right and left bending-arm angles.
Figure 12: Example of a non-detection frame.
Figure 13: Example 1 of misdetection generated by OpenPose (taken from P1): (a) left upper and lower arm lengths; (b) right and left bending-arm angles; (c) example frames (135th, 139th, and 142nd).
Figure 14: Example 2 of misdetection generated by OpenPose (taken from P5): (a) misdetection frame (white line shows the expected detection result); (b) right arm lengths and right/left bending-arm angles.
Figure 15: Sample P1 data after applying the LPF.
Figure 16: Average DP score for each behavior obtained from the results in Table 3 (reference data: P1).
Figure 17: Examples of FRR-FAR curves with reference data (a) P1 and (b) P6.
Figure 18: EER distribution obtained from all FRR-FAR curves by cross-validation for the right upper arm (f_c = 40 Hz).
21 pages, 4783 KiB  
Article
Design of Distributed Discrete-Event Simulation Systems Using Deep Belief Networks
by Edwin Cortes, Luis Rabelo, Alfonso T. Sarmiento and Edgar Gutierrez
Information 2020, 11(10), 467; https://doi.org/10.3390/info11100467 - 1 Oct 2020
Cited by 4 | Viewed by 3430
Abstract
In this research study, we investigate the ability of deep learning neural networks to provide a mapping between the features of a parallel distributed discrete-event simulation (PDDES) system (software and hardware) and a time synchronization scheme, in order to optimize speedup performance. We use deep belief [...] Read more.
In this research study, we investigate the ability of deep learning neural networks to provide a mapping between the features of a parallel distributed discrete-event simulation (PDDES) system (software and hardware) and a time synchronization scheme, in order to optimize speedup performance. We use deep belief networks (DBNs), which, due to their multiple layers with feature detectors at the lower layers and a supervised scheme at the higher layers, can provide nonlinear mappings. The mapping mechanism works by considering simulation constructs, hardware, and software intricacies such as simulation objects, concurrency, iterations, routines, and messaging rates, with an importance level based on a cognitive approach. The result of the mapping is a synchronization scheme, such as breathing time buckets, breathing time warp, or time warp, that optimizes speedup. The simulation-optimization technique outlined in this research study is unique, and this new methodology could be incorporated into current parallel and distributed simulation modeling systems to enhance performance. Full article
(This article belongs to the Special Issue Distributed Simulation 2020)
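A sketch of the mapping stage only, assuming scikit-learn: a feed-forward network with three hidden layers of 50 units maps 21 PDDES features to one of three synchronization schemes, mirroring the topology mentioned in Figures 14 and 15 below. A plain MLP is used as a stand-in; the unsupervised DBN pre-training is not reproduced, and the data are synthetic.

```python
# Map 21 simulation/hardware features to one of three synchronization
# schemes (0 = BTB, 1 = BTW, 2 = TW) with a 50-50-50 hidden-layer MLP.
# The MLP stands in for the DBN; the training data here are synthetic.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.random((300, 21))            # 21 PDDES features per configuration
y = rng.integers(0, 3, size=300)     # synthetic scheme labels

clf = MLPClassifier(hidden_layer_sizes=(50, 50, 50), max_iter=500, random_state=0)
clf.fit(X, y)
print("predicted scheme for a new configuration:", clf.predict(X[:1])[0])
```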
Show Figures
Figure 1: Fixed time buckets allow events to be scheduled and processed asynchronously using the concept of a global lookahead.
Figure 2: The implementation of rollback produced by straggler messages and antimessages in time warp (TW).
Figure 3: The event horizon for a single node and the insertion of events on the list.
Figure 4: Example of the breathing time warp (BTW) event-processing cycle with a TW phase, a breathing time buckets (BTB) phase, computation of global virtual time (GVT), and the corresponding commitment of events on five nodes.
Figure 5: Example of handwritten digits from the MNIST handwritten digits database.
Figure 6: Detection by comparison of signals using nominal patterns as the basis to contrast with off-nominal patterns.
Figure 7: Simulation scenario (case study) using two classes of simulation objects (SOs), radars and aircraft, with their respective events and trajectories.
Figure 8: Unified modeling language (UML) schematic of the development with two types of simulation objects (Aircraft and Radar) and two events (Scan and TestUpdateAttribute).
Figure 9: Example of a theater of operations as defined by the rectangle with vertices A-D.
Figure 10: Different methods in the C programming language adapted to WarpIV to program the case study of Figure 7.
Figure 11: Examples of node configurations with cores and distributed computing elements for the experiments.
Figure 12: Speedup chart for the different time synchronization schemes (BTW, BTB, and TW) across the distributed configurations for the case study.
Figure 13: Calculation of the cognitive weights for a program.
Figure 14: Root mean square error and cross-entropy error: training curve for the DBN with 21 inputs, three hidden layers of 50 neurons each, and 3 output neurons.
Figure 15: Testing performance of the DBNs built with 21 inputs, three hidden layers of 50 neurons each, and 3 output neurons.
15 pages, 1304 KiB  
Review
A Systematic Review of the Application of Maturity Models in Universities
by Esteban Tocto-Cano, Sandro Paz Collado, Javier Linkolk López-Gonzales and Josué E. Turpo-Chaparro
Information 2020, 11(10), 466; https://doi.org/10.3390/info11100466 - 1 Oct 2020
Cited by 18 | Viewed by 6994
Abstract
A maturity model is a widely used tool in software engineering that has been extended to domains such as education, health, energy, finance, government, and general use. It is valuable for the evaluation and continuous improvement of business processes or certain aspects of [...] Read more.
A maturity model is a widely used tool in software engineering that has been extended to domains such as education, health, energy, finance, government, and general use. It is valuable for the evaluation and continuous improvement of business processes or certain aspects of organizations, as it represents a more organized and systematic way of doing business. In this paper, we focus only on higher education. We present an approach for detecting gaps in the existing maturity models for universities, as these models do not address all dimensions in their entirety. To identify these models and their validity, and to classify the models applied in universities, we carried out a systematic literature review of 27,289 articles on maturity models published in peer-reviewed journals between 2007 and 2020. Applying inclusion and exclusion criteria, we identified 23 articles describing maturity models applied in universities and grouped them into nine categories with specific purposes. We conclude that the maturity models used in universities are moving towards agility, supported by the semantic web. Full article
(This article belongs to the Section Information Applications)
Show Figures
Figure 1: Methodological protocol.
Figure 2: Inclusion criteria: articles, conference papers, reviews and conference reviews, all in English.
Figure 3: Exclusion criteria.
Figure 4: Maturity model selection procedure: (a) Stage 1, execution of the search string in the selected databases; (b) Stage 2, pre-candidates meeting conditions 1-i, 2-i and 4-i exported to Excel for refinement; (c) Stage 3, articles analyzed against conditions 2-e and 3-e; (d) Stage 4, articles analyzed against conditions 3-i and 1-e, with a review of the abstracts and conclusions of all Stage 3 articles.
Figure 5: Frequency of publications.
Figure 6: Maturity scale, adapted from Silva and Cabral (2010).
19 pages, 4298 KiB  
Article
Technological Aspects of Blockchain Application for Vehicle-to-Network
by Vasiliy Elagin, Anastasia Spirkina, Mikhail Buinevich and Andrei Vladyko
Information 2020, 11(10), 465; https://doi.org/10.3390/info11100465 - 30 Sep 2020
Cited by 37 | Viewed by 4450
Abstract
Over the past decade, wireless communication technologies have developed significantly for intelligent applications in road transport. This paper provides an overview of telecommunications-based intelligent transport systems with a focus on ensuring system safety and resilience. In vehicle-to-everything communication, these problems are extremely acute due [...] Read more.
Over the past decade, wireless communication technologies have developed significantly for intelligent applications in road transport. This paper provides an overview of telecommunications-based intelligent transport systems with a focus on ensuring system safety and resilience. In vehicle-to-everything communication, these problems are extremely acute due to the specifics of how transport networks operate, which requires special protection mechanisms. In this regard, blockchain was chosen as a system platform to support the need of transport systems for secure information exchange. This paper describes the technological aspects of implementing blockchain technology in vehicle-to-network (V2N) communication, presents the features of this technology, and explains how its components interact. The authors considered various network characteristics and identified the parameters that have the greatest impact on the operation of a V2N network when a blockchain is implemented. An experiment was carried out that yielded numerical characteristics of the resource allocation on the devices involved in organizing V2N communication, and conclusions were drawn from the results of the study. Full article
(This article belongs to the Special Issue Vehicle-To-Everything (V2X) Communication)
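As background for how a blockchain secures the exchanged information, the following is a minimal hash-chained block sketch; consensus, full/light node roles and networking are out of scope, and the message contents are invented examples.

```python
# Minimal hash-chained blocks: each block stores the hash of its predecessor,
# so tampering with any earlier block invalidates the chain. Message contents
# are invented V2N examples.
import hashlib
import json
import time

def make_block(messages, prev_hash):
    block = {"timestamp": time.time(), "messages": messages, "prev_hash": prev_hash}
    payload = {k: block[k] for k in ("timestamp", "messages", "prev_hash")}
    block["hash"] = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()).hexdigest()
    return block

genesis = make_block(["genesis"], "0" * 64)
block1 = make_block(["vehicle 17: speed 62 km/h", "RSU 3: lane closed ahead"],
                    genesis["hash"])

def chain_is_valid(chain):
    return all(chain[i]["prev_hash"] == chain[i - 1]["hash"]
               for i in range(1, len(chain)))

print("chain valid:", chain_is_valid([genesis, block1]))
```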
Show Figures
Figure 1: Data exchange scenario between full and light nodes [36].
Figure 2: Model of the vehicle-to-network network.
Figure 3: Vehicle-to-everything (V2X) network architecture after blockchain implementation.
Figure 4: Blockchain algorithm.
Figures 5-8: Intensity of channel loading between nodes 1 and 5 (experiments 1-4), before the blockchain works (left) and during blockchain operation (right).
17 pages, 2621 KiB  
Article
Successive Collaborative SLAM: Towards Reliable Inertial Pedestrian Navigation
by Susanna Kaiser
Information 2020, 11(10), 464; https://doi.org/10.3390/info11100464 - 30 Sep 2020
Cited by 5 | Viewed by 2488
Abstract
In emergency scenarios, such as a terrorist attack or a building on fire, it is desirable to track first responders in order to coordinate the operation. Pedestrian tracking methods based solely on inertial measurement units in indoor environments are candidates for such operations [...] Read more.
In emergency scenarios, such as a terrorist attack or a building on fire, it is desirable to track first responders in order to coordinate the operation. Pedestrian tracking methods based solely on inertial measurement units in indoor environments are candidates for such operations since they do not depend on pre-installed infrastructure. Collaborative simultaneous localization and mapping (collaborative SLAM), in which the maps learned by several users can be combined to aid indoor positioning, is a very powerful indoor navigation method. In this paper, maps are estimated from several similar trajectories (multiple users) or from one user wearing multiple sensors, and are combined successively to obtain a precise map and positioning. To reduce complexity, the trajectories are divided into small portions (sliding window technique) that are partly successively applied to the collaborative SLAM algorithm. We investigate successive combinations of the map portions of several pedestrians and analyze the resulting position accuracy. The results depend on several parameters, e.g., the number of users or sensors, the sensor drifts, the amount of revisited area, the number of iterations, and the window size, and we discuss the choice of these parameters. The results show that the mean position error can be reduced to ≈0.5 m when applying partly successive collaborative SLAM. Full article
(This article belongs to the Special Issue Indoor Navigation in Smart Cities)
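A tiny sketch of the sliding-window idea from the abstract: a trajectory is cut into small overlapping portions that would then be fed successively to the collaborative SLAM back end. The window size and overlap are arbitrary here, and the map-combination step itself is not shown.

```python
# Cut a trajectory (list of step poses) into overlapping portions that can be
# processed one window at a time. Window size and overlap are illustrative.
def sliding_windows(trajectory, window_size=3, overlap=1):
    """Yield consecutive portions of a trajectory with the given overlap."""
    step = window_size - overlap
    for start in range(0, max(len(trajectory) - overlap, 1), step):
        yield trajectory[start:start + window_size]

trajectory = [(0.0, 0.0), (0.7, 0.1), (1.4, 0.1), (2.0, 0.4), (2.6, 0.9), (3.1, 1.5)]
for portion in sliding_windows(trajectory, window_size=3, overlap=1):
    print(portion)
```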
Show Figures
Figure 1: Schematic illustration of partly successive FeetSLAM.
Figure 2: Six Xsens sensors mounted on the foot of the pedestrian: (a) top view and (b) front view of the six sensors.
Figure 3: Building layout of the ground floor of the office building, with the ground truth points (GTPs) marked in different colors; several starting points (marked in red) emulate a group of pedestrians or emergency forces entering the building, and the dotted arrows/lines represent the starting/walking direction.
Figure 4: Error performance of partly successive FeetSLAM for one walk with six foot-mounted sensors (window size 3, 3 iterations); the results are always below one metre, with an average of 0.34 m over all six data sets.
Figure 5: Resulting tracks after partly successive FeetSLAM for one walk with six foot-mounted sensors (window size 3, 3 iterations).
12 pages, 252 KiB  
Article
Information Sharing Strategies in the Social Media Era: The Perspective of Financial Performance and CSR in the Food Industry
by Magdalena Mądra-Sawicka and Joanna Paliszkiewicz
Information 2020, 11(10), 463; https://doi.org/10.3390/info11100463 - 29 Sep 2020
Cited by 10 | Viewed by 4569
Abstract
This paper aims to identify financial measures that are related to Corporate Social Responsibility (CSR) involvement. The study concerns the food industry, in which clients as well as stakeholders increasingly appreciate socially responsible companies, which could be a crucial factor for future [...] Read more.
This paper aims to identify financial measures that are related to Corporate Social Responsibility (CSR) involvement. The study concerns the food industry, in which clients as well as stakeholders increasingly appreciate socially responsible companies, which could be a crucial factor for future growth strategy. An analysis was made of a sample of 448 food companies from 50 countries over 2009–2020. As financial measures for the CSR assessment, we used profitability ratios, the dividend payout ratio, the price-to-earnings ratio and market capitalization. The results confirmed that CSR reporting was a crucial factor differentiating companies in terms of profitability, OE, market capitalization, and share price. CSR practices that are implemented and published in reports become an important signal to investors that a company is in a good financial situation and is able to invest in CSR without reducing its performance. Full article
Previous Issue
Next Issue