Search Results (72)

Search Parameters:
Keywords = TinyML

19 pages, 1547 KiB  
Review
Advancements in TinyML: Applications, Limitations, and Impact on IoT Devices
by Abdussalam Elhanashi, Pierpaolo Dini, Sergio Saponara and Qinghe Zheng
Electronics 2024, 13(17), 3562; https://doi.org/10.3390/electronics13173562 - 8 Sep 2024
Viewed by 557
Abstract
Artificial Intelligence (AI) and Machine Learning (ML) have experienced rapid growth in both industry and academia. However, the current ML and AI models demand significant computing and processing power to achieve desired accuracy and results, often restricting their use to high-capability devices. With advancements in embedded system technology and the substantial development in the Internet of Things (IoT) industry, there is a growing desire to integrate ML techniques into resource-constrained embedded systems for ubiquitous intelligence. This aspiration has led to the emergence of TinyML, a specialized approach that enables the deployment of ML models on resource-constrained, power-efficient, and low-cost devices. Despite its potential, the implementation of ML on such devices presents challenges, including optimization, processing capacity, reliability, and maintenance. This article delves into the TinyML model, exploring its background, the tools that support it, and its applications in advanced technologies. By understanding these aspects, we can better appreciate how TinyML is transforming the landscape of AI and ML in embedded and IoT systems.
(This article belongs to the Special Issue Applied Machine Learning in Intelligent Systems)
Show Figures

Figure 1. Comprehensive taxonomy categorizing the primary applications of TinyML. It encompasses various domains where TinyML is making significant impacts, highlighting specific use cases and their respective advantages. From healthcare and environmental monitoring to industrial automation and consumer electronics, this taxonomy provides a clear overview of how TinyML is revolutionizing different sectors with its ability to perform complex Machine Learning tasks on ultra-low-power devices.
Figure 2. Tiny Machine Learning (TinyML) aims to create new applications by integrating Machine Learning models into embedded systems.
Figure 3. A comprehensive framework illustrating the integration of IoT applications with cloud computing, edge computing, and TinyML. This framework highlights the synergistic interaction between these technologies, showcasing how data are processed and managed at different levels from edge devices to centralized cloud systems. It emphasizes the role of TinyML in enabling on-device intelligence and real-time analytics, enhancing the overall efficiency and responsiveness of IoT solutions.
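The model-size optimization this review identifies as central to TinyML can be pictured with a minimal sketch of post-training 8-bit quantization. This is our own illustration of the general technique, not code from the article; the function names are invented for the example:

```python
def quantize_int8(weights):
    """Map float weights to int8 with a single symmetric scale factor."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    return [max(-128, min(127, round(w / scale))) for w in weights], scale

def dequantize(q, scale):
    """Recover approximate float weights for accuracy checks."""
    return [v * scale for v in q]

weights = [0.52, -1.27, 0.003, 0.98, -0.4]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# Storing int8 instead of float32 quarters the memory footprint,
# at the cost of a rounding error bounded by half the scale step.
max_err = max(abs(w - r) for w, r in zip(weights, restored))
```

Real TinyML toolchains perform this per-tensor or per-channel and calibrate activations as well, but the storage/accuracy trade-off is the same one sketched here.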
33 pages, 14331 KiB  
Article
A Virtual Machine Platform Providing Machine Learning as a Programmable and Distributed Service for IoT and Edge On-Device Computing: Architecture, Transformation, and Evaluation of Integer Discretization
by Stefan Bosse
Algorithms 2024, 17(8), 356; https://doi.org/10.3390/a17080356 - 15 Aug 2024
Viewed by 479
Abstract
Data-driven models used for predictive classification and regression tasks are commonly computed using floating-point arithmetic and powerful computers. We address constraints in distributed sensor networks like the IoT, edge, and material-integrated computing, providing only low-resource embedded computers with sensor data that are acquired and processed locally. Sensor networks are characterized by strong heterogeneous systems. This work introduces and evaluates a virtual machine architecture that provides ML as a service layer (MLaaS) on the node level and addresses very low-resource distributed embedded computers (with less than 20 kB of RAM). The VM provides a unified ML instruction set architecture that can be programmed to implement decision trees, ANN, and CNN model architectures using scaled integer arithmetic only. Models are trained primarily offline using floating-point arithmetic, finally converted by an iterative scaling and transformation process, demonstrated in this work by two tests based on simulated and synthetic data. This paper is an extended version of the FedCSIS 2023 conference paper providing new algorithms and ML applications, including ANN/CNN-based regression and classification tasks studying the effects of discretization on classification and regression accuracy.
(This article belongs to the Section Combinatorial Optimization, Graph, and Network Algorithms)
Show Figures

Figure 1. ARM Cortex M0-based sensor node (STM32L031) implementing the REXA VM for material-integrated GUW sensing with NFC for energy transfer and bidirectional communication with only 8 kB of RAM and 32 kB of ROM.
Figure 2. Principle REXA VM network architecture using different wired and wireless communication technologies.
Figure 3. Basic REXA-VM architecture with integrated JIT compiler, stacks, and byte-code processor [11,12].
Figure 4. (Left) Incremental growing code segment (single-tasking), persistent code cannot be removed. (Right) Dynamically partitioned code segments using code frames and linking code frames due to fragmentation.
Figure 5. Exploding output values for negative x-values (e^(−x) term) and positive x-values (e^(x) term) of the exponential function.
Figure 6. Relative discretization error of integer-scaled LUT-based approximation of the log10 function for different Δx values (1, 2, 4) and LUT sizes of 90, 45, and 23, respectively.
Figure 7. Relative discretization error of integer-scaled LUT-interpolated approximation of the sigmoid function using the discretized log10 LUT-interpolation function for different LUT resolutions and sigmoid segment ranges R. The small error plots show only positive x values.
Figure 8. Relative discretization error of integer-scaled LUT-interpolated approximation of the tanh function using the discretized log10 LUT-interpolation function.
Figure 9. Phase 1 transformation (CNN). (Top) Transformation of 3-dim tensors into multiple vectors for convolutional and pooling layers and flattening of multiple vectors from the last convolutional or pooling layer into one vector for the input of a fully connected neuronal layer. (Bottom) Convolutional and pooling operations factorized into sequential and accumulated vector operations.
Figure 10. Scaling architectures for (Top) functional nodes, i.e., neurons; (Bottom) convolution or pooling operation.
Figure 11. Accumulative scaled convolution or multi-vector input (flattening) neural network operation based on a product–sum calculation. Each accumulative iteration uses a different input scaling s_d normalization with respect to the output scaling s.
Figure 12. The ML model transformation pipeline creating an intermediate USM and then creating a sequence of MLISA vector operations.
Figure 13. GUW signal simulation using a 2-dim viscoelastic wave propagation model. (Left) Simulation set-up. (Right) Some example signals with and without damage (blue areas show damage features).
Figure 14. Down-sampled GUW signal from simulation and low-pass-filtered rectified (envelope approximation) signal as input for the CNN (damage at position x = 100, y = 100).
Figure 15. Foo/FooFP model analysis of the GUW regression CNN model. The classification error was always zero.
Figure 16. (Top) Analysis of the ANN FP and DS models comparing RMSE and E_max values for different configurations of the activation function approximation, including an FPU replacement. (Bottom) Selected prediction results are shown with discontinuities in the top plot using ActDS configuration 5 and without using the FPU replacement for the tanh function.
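The scaled-integer-only arithmetic the paper's VM relies on can be sketched in a few lines: floats are mapped to fixed-point integers (scale a power of two), so a neuron's product-sum needs only integer multiply, add, and shift. This is our own simplification of the general fixed-point technique, not the REXA VM's actual instruction set:

```python
def to_scaled_int(values, frac_bits=8):
    """Represent floats as integers scaled by 2**frac_bits (Q-format)."""
    s = 1 << frac_bits
    return [round(v * s) for v in values]

def int_dot(xq, wq, frac_bits=8):
    """Integer product-sum; the shift renormalizes the doubled scaling."""
    acc = sum(x * w for x, w in zip(xq, wq))
    return acc >> frac_bits  # result is again in Q-format with frac_bits

x = [0.5, -0.25, 1.0]
w = [0.125, 0.5, -0.75]
xq = to_scaled_int(x)   # [128, -64, 256]
wq = to_scaled_int(w)   # [32, 128, -192]
yq = int_dot(xq, wq)
y = yq / 256            # back to float for comparison
```

On an MCU without an FPU, this replaces every floating-point multiply-accumulate with integer operations; the iterative conversion the abstract describes is essentially the search for per-layer scales that keep such accumulators within integer range.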
18 pages, 3199 KiB  
Article
Optimizing Convolutional Neural Networks for Image Classification on Resource-Constrained Microcontroller Units
by Susanne Brockmann and Tim Schlippe
Computers 2024, 13(7), 173; https://doi.org/10.3390/computers13070173 - 15 Jul 2024
Viewed by 859
Abstract
Running machine learning algorithms for image classification locally on small, cheap, and low-power microcontroller units (MCUs) has advantages in terms of bandwidth, inference time, energy, reliability, and privacy for different applications. Therefore, TinyML focuses on deploying neural networks on MCUs with random access memory sizes between 2 KB and 512 KB and read-only memory storage capacities between 32 KB and 2 MB. Models designed for high-end devices are usually ported to MCUs using model scaling factors provided by the model architecture’s designers. However, our analysis shows that this naive approach of substantially scaling down convolutional neural networks (CNNs) for image classification using such default scaling factors results in suboptimal performance. Consequently, in this paper we present a systematic strategy for efficiently scaling down CNN model architectures to run on MCUs. Moreover, we present our CNN Analyzer, a dashboard-based tool for determining optimal CNN model architecture scaling factors for the downscaling strategy by gaining layer-wise insights into the model architecture scaling factors that drive model size, peak memory, and inference time. Using our strategy, we were able to introduce additional new model architecture scaling factors for MobileNet v1, MobileNet v2, MobileNet v3, and ShuffleNet v2 and to optimize these model architectures. Our best model variation outperforms the MobileNet v1 version provided in the MLPerf Tiny Benchmark on the Visual Wake Words image classification task, reducing the model size by 20.5% while increasing the accuracy by 4.0%.
(This article belongs to the Special Issue Intelligent Edge: When AI Meets Edge Computing)
Show Figures

Figure 1. Our strategy for optimizing CNNs on MCUs.
Figure 2. Model scorecard from our CNN Analyzer.
Figure 3. MobileNet v1 model size in KB for different α and l.
Figure 4. MobileNet v1 test accuracy for different α and l.
Figure 5. MobileNet v1: Benchmark vs. Optimization with pl and ll.
Figure 6. MobileNet v1 optimization with β.
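The leverage of a width-multiplier scaling factor such as MobileNet's α can be seen from a simple parameter count for one depthwise-separable convolution block. This is a back-of-the-envelope sketch of the standard formula, not the authors' CNN Analyzer:

```python
def dw_separable_params(c_in, c_out, k=3, alpha=1.0):
    """Parameter count of a depthwise-separable conv after width scaling."""
    c_in = max(1, int(c_in * alpha))
    c_out = max(1, int(c_out * alpha))
    depthwise = k * k * c_in   # one k x k filter per input channel
    pointwise = c_in * c_out   # 1 x 1 conv mixing channels
    return depthwise + pointwise

full = dw_separable_params(128, 256)               # alpha = 1.0
quarter = dw_separable_params(128, 256, alpha=0.25)
# The pointwise term scales roughly with alpha**2, so alpha = 0.25
# shrinks this block by more than an order of magnitude.
```

Because the dominant pointwise term is quadratic in α while accuracy degrades more gradually, sweeping α (and input resolution, layer count, etc.) layer-wise, as the paper does, can beat the designers' default factors.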
19 pages, 3691 KiB  
Article
Enhancing Security in Connected and Autonomous Vehicles: A Pairing Approach and Machine Learning Integration
by Usman Ahmad, Mu Han and Shahid Mahmood
Appl. Sci. 2024, 14(13), 5648; https://doi.org/10.3390/app14135648 - 28 Jun 2024
Cited by 1 | Viewed by 768
Abstract
The automotive sector faces escalating security risks due to advances in wireless communication technology. Expanding on our previous research using a sensor pairing technique and machine learning models to evaluate IoT sensor data reliability, this study broadens its scope to address security concerns in Connected and Autonomous Vehicles (CAVs). The objectives of this research include identifying and mitigating specific security vulnerabilities related to CAVs, thereby establishing a comprehensive understanding of the risks these vehicles face. Additionally, our study introduces two innovative pairing approaches. The first approach focuses on pairing Electronic Control Units (ECUs) within individual vehicles, while the second extends to pairing entire vehicles, termed as vehicle pairing. Rigorous preprocessing of the dataset was carried out to ensure its readiness for subsequent model training. Leveraging Support Vector Machine (SVM) and TinyML methods for data validation and attack detection, we have been able to achieve an impressive accuracy rate of 97.2%. The proposed security approach notably contributes to the security of CAVs against potential cyber threats. The experimental setup demonstrates the practical application and effectiveness of TinyML in embedded systems within CAVs. Importantly, our proposed solution ensures that these security enhancements do not impose additional memory or network loads on the ECUs. This is accomplished by delegating the intensive cross-validation to the central module or Roadside Units (RSUs). This novel approach not only contributes to mitigating various security loopholes, but paves the way for scalable, efficient solutions for resource-constrained automotive systems.
(This article belongs to the Special Issue Progress and Research in Cybersecurity and Data Privacy)
Show Figures

Figure 1. The attack surface for launching security attacks on ECUs.
Figure 2. Flow of the proposed ECU pairing authentication model.
Figure 3. Abstract depiction of proposed ECU pairing model.
Figure 4. Flow of the proposed vehicle pairing model.
Figure 5. Abstract depiction of proposed vehicle pairing model.
Figure 6. SVM and SNN learning results comparison.
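The pairing idea above, cross-checking two sources that should agree and treating divergence as a potential attack, can be sketched in a few lines. The threshold and features here are our own illustration of the principle, not the paper's trained SVM classifier:

```python
def pairwise_residuals(a, b):
    """Absolute disagreement between two paired sensor streams."""
    return [abs(x - y) for x, y in zip(a, b)]

def flag_anomalies(a, b, threshold=2.0):
    """Mark samples where paired readings diverge beyond the threshold."""
    return [r > threshold for r in pairwise_residuals(a, b)]

ecu_a = [10.1, 10.3, 10.2, 10.4, 10.2]
ecu_b = [10.0, 10.2, 14.9, 10.3, 10.1]  # third sample spoofed
alerts = flag_anomalies(ecu_a, ecu_b)
# only the tampered sample crosses the threshold
```

In the paper's design, a learned model replaces the fixed threshold, and this cross-validation is delegated to the central module or RSU so the ECUs themselves carry no extra load.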
21 pages, 8221 KiB  
Article
Improving Short-Term Prediction of Ocean Fog Using Numerical Weather Forecasts and Geostationary Satellite-Derived Ocean Fog Data Based on AutoML
by Seongmun Sim, Jungho Im, Sihun Jung and Daehyeon Han
Remote Sens. 2024, 16(13), 2348; https://doi.org/10.3390/rs16132348 - 27 Jun 2024
Viewed by 692
Abstract
Ocean fog, a meteorological phenomenon characterized by reduced visibility due to tiny water droplets or ice particles, poses significant safety risks for maritime activities and coastal regions. Accurate prediction of ocean fog is crucial but challenging due to its complex formation mechanisms and variability. This study proposes an advanced ocean fog prediction model for the Yellow Sea region, leveraging satellite-based detection and high-performance data-driven methods. We used Himawari-8 satellite data to obtain a lot of spatiotemporal ocean fog references and employed AutoML to integrate numerical weather prediction (NWP) outputs and sea surface temperature (SST)-related variables. The model demonstrated superior performance compared to traditional NWP-based methods, achieving high performance in both quantitative—probability of detection of 81.6%, false alarm ratio of 24.4%, f1 score of 75%, and proportion correct of 79.8%—and qualitative evaluations for 1 to 6 h lead times. Key contributing variables included relative humidity, accumulated shortwave radiation, and atmospheric pressure, indicating the importance of integrating diverse data sources. The study emphasizes the potential of using satellite-derived data to improve ocean fog prediction, while also addressing the challenges of overfitting and the need for more comprehensive reference data.
(This article belongs to the Special Issue Artificial Intelligence for Ocean Remote Sensing)
Show Figures

Figure 1. Study area, indicated by the blue box, with the location of automated surface observing system stations located in Baeknyeongdo and Heuksando, which measure various meteorological variables including visibility.
Figure 2. Process flow diagram proposed in this study.
Figure 3. The structure of AutoGluon used in this study.
Figure 4. Quantitative performances of AutoGluon and LDAPS V1KM models for hindcast samples of analysis and forecast data with lead times ranging from +1 to +6 h in 2020. Performance metrics such as the probability of detection, false alarm ratio, F1, and proportion correct are displayed in order.
Figure 5. Variable contributions of input variables identified by the AutoGluon model.
Figure 6. Detected and predicted ocean fog maps and highly contributing input variables for the AutoGluon model in the Yellow Sea at 06 June 2020 05:00 UTC with CALIPSO-based ocean fog observations acquired at 06 June 2020 05:20 UTC. AutoGluon anal3h and anal6h indicate the ocean fog prediction results using the forecast data produced at 03 UTC and 00 UTC as input, respectively. TT indicates correctly classified ocean fog, TF indicates missed ocean fog, FT indicates falsely classified ocean fog, and FF indicates correctly classified non-fog (refer to Section 3.3).
Figure 7. Ocean fog results from the detection and prediction models along with CALIPSO observations on 6 June 2020, at 05:20 UTC. The unknown class includes cases with two or more of the following composite characteristics: ocean fog, clear sky, and cloud.
Figure 8. Timeseries mapping results of ocean fog detection, prediction, and LDAPS V1KM on 20 June 2020, from 12:00 UTC to 18:00 UTC. Analysis indicates the use of analysis data for input variables, and forecast indicates predicted results with lead times.
Figure 9. Timeseries mapping results of relative humidity, pressure, visibility, accumulative shortwave radiance of previous day and from −6 h to −9 h on 20 June 2020, from 12:00 UTC to 18:00 UTC. Analysis indicates the use of analysis data for input variables, and forecast indicates the data with the lead times.
Figure 10. Timeseries of measured visibility, weather report, and ocean fog detection and prediction from 20 June 2020, 13:00 UTC to 21 June 2020, 05:00 UTC at the Baekneoung-do ASOS station. The ocean fog ratio is the proportion of ocean fog coverage within a 100 km² surrounding area of the station.
Figure 11. Timeseries mapping results of ocean fog detection, prediction, and LDAPS V1KM on 17 August 2020, from 00:00 UTC to 06:00 UTC. Analysis indicates the use of analysis data for input variables, and forecast indicates predictions with the lead times.
Figure 12. Timeseries mapping results of relative humidity, pressure, visibility, accumulative shortwave radiance of previous day and from −6 h to −9 h on 17 August 2020, from 00:00 UTC to 06:00 UTC. Analysis indicates the use of analysis data for input variables, and forecast indicates the data with the lead times.
Figure 13. Temporal ocean fog related results of measured visibility, weather report, and ocean fog detection and prediction from 16 August 2020, 19:00 UTC to 17 August 2020, 13:00 UTC at the Heuksan-do ASOS station. The ocean fog ratio is the proportion of ocean fog coverage within a 100 km² surrounding area of the station.
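The four evaluation scores quoted in the abstract follow directly from confusion-matrix counts. A minimal sketch using the standard categorical-forecast definitions (the counts below are illustrative, not the study's data):

```python
def fog_scores(tp, fp, fn, tn):
    """Standard categorical-forecast scores from confusion counts."""
    pod = tp / (tp + fn)                  # probability of detection (recall)
    far = fp / (tp + fp)                  # false alarm ratio
    f1 = 2 * tp / (2 * tp + fp + fn)      # harmonic mean of POD and precision (1 - FAR)
    pc = (tp + tn) / (tp + fp + fn + tn)  # proportion correct (accuracy)
    return pod, far, f1, pc

# illustrative hindcast counts for a fog / no-fog contingency table
pod, far, f1, pc = fog_scores(tp=80, fp=26, fn=18, tn=76)
```

Note that POD and FAR together penalize both misses and false alarms, which matters for a rare event like ocean fog, where proportion correct alone can look good even for a model that never predicts fog.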
16 pages, 743 KiB  
Article
Tiny-Machine-Learning-Based Supply Canal Surface Condition Monitoring
by Chengjie Huang, Xinjuan Sun and Yuxuan Zhang
Sensors 2024, 24(13), 4124; https://doi.org/10.3390/s24134124 - 25 Jun 2024
Cited by 1 | Viewed by 674
Abstract
The South-to-North Water Diversion Project in China is an extensive inter-basin water transfer project, for which ensuring the safe operation and maintenance of infrastructure poses a fundamental challenge. In this context, structural health monitoring is crucial for the safe and efficient operation of hydraulic infrastructure. Currently, most health monitoring systems for hydraulic infrastructure rely on commercial software or algorithms that only run on desktop computers. This study developed for the first time a lightweight convolutional neural network (CNN) model specifically for early detection of structural damage in water supply canals and deployed it as a tiny machine learning (TinyML) application on a low-power microcontroller unit (MCU). The model uses damage images of the supply canals that we collected as input and the damage types as output. With data augmentation techniques to enhance the training dataset, the deployed model is only 7.57 KB in size and demonstrates an accuracy of 94.17 ± 1.67% and a precision of 94.47 ± 1.46%, outperforming other commonly used CNN models in terms of performance and energy efficiency. Moreover, each inference consumes only 5610.18 μJ of energy, allowing a standard 225 mAh button cell to run continuously for nearly 11 years and perform approximately 4,945,055 inferences. This research not only confirms the feasibility of deploying real-time supply canal surface condition monitoring on low-power, resource-constrained devices but also provides practical technical solutions for improving infrastructure security.
Show Figures

Figure 1. Workflow of the paper.
Figure 2. Water supply channel data acquisition device.
Figure 3. Original image examples.
Figure 4. Augmented image examples.
Figure 5. Proposed CNN model structure.
Figure 6. Training and validation loss curves of the proposed model.
Figure 7. Confusion matrix of proposed model.
Figure 8. Confusion matrix of deployed proposed model.
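The battery-life arithmetic behind per-inference energy claims like the one above can be sketched generically. The cell voltage and the assumption of zero sleep current below are ours, not figures from the paper, so the result will not reproduce the paper's estimate (which also accounts for its own duty cycle):

```python
def inferences_per_battery(capacity_mah, cell_voltage, energy_per_inf_uj):
    """Inference budget of a battery, ignoring sleep and conversion losses."""
    # mAh -> coulombs -> joules -> microjoules
    energy_uj = capacity_mah * 1e-3 * 3600 * cell_voltage * 1e6
    return int(energy_uj // energy_per_inf_uj)

# assumed 3.0 V cell; per-inference energy taken from the abstract
n = inferences_per_battery(capacity_mah=225, cell_voltage=3.0,
                           energy_per_inf_uj=5610.18)
```

Spreading such a budget over years then comes down to the sampling interval: one inference per minute versus one per second changes the lifetime by a factor of 60, which is why duty cycling dominates TinyML deployment planning.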
34 pages, 2714 KiB  
Review
Overview of AI-Models and Tools in Embedded IIoT Applications
by Pierpaolo Dini, Lorenzo Diana, Abdussalam Elhanashi and Sergio Saponara
Electronics 2024, 13(12), 2322; https://doi.org/10.3390/electronics13122322 - 13 Jun 2024
Viewed by 865
Abstract
The integration of Artificial Intelligence (AI) models in Industrial Internet of Things (IIoT) systems has emerged as a pivotal area of research, offering unprecedented opportunities for optimizing industrial processes and enhancing operational efficiency. This article presents a comprehensive review of state-of-the-art AI models applied in IIoT contexts, with a focus on their utilization for fault prediction, process optimization, predictive maintenance, product quality control, cybersecurity, and machine control. Additionally, we examine the software and hardware tools available for integrating AI models into embedded platforms, encompassing solutions such as Vitis AI v3.5, TensorFlow Lite Micro v2.14, STM32Cube.AI v9.0, and others, along with their supported high-level frameworks and hardware devices. By delving into both AI model applications and the tools facilitating their deployment on low-power devices, this review provides a holistic understanding of AI-enabled IIoT systems and their practical implications in industrial settings.
Show Figures

Figure 1. Schematic representation of IIoT systems architecture.
Figure 2. A schematic representation of a typical IIoT communication system.
Figure 3. Data safety through HW/SW firewalls.
Figure 4. Schematic representation of the typical internal structure of CNNs.
Figure 5. Schematic representation of the typical internal structure of an RNN.
Figure 6. Schematic representation of the internal structure of an LSTM.
Figure 7. General description of the internal structure of GRU hidden state.
Figure 8. Schematic representation of a GAN model workflow.
Figure 9. Schematic representation of autoencoder model architecture.
Figure 10. Schematic representation of the Vitis AI framework v3.5 [152].
Figure 11. Schematic representation of the Tensorflow/Tensorflow Lite integration framework.
Figure 12. Schematic representation of the STM32Cube AI framework.
12 pages, 514 KiB  
Article
Calibrating Glucose Sensors at the Edge: A Stress Generation Model for Tiny ML Drift Compensation
by Anna Sabatini, Costanza Cenerini, Luca Vollero and Danilo Pau
BioMedInformatics 2024, 4(2), 1519-1530; https://doi.org/10.3390/biomedinformatics4020083 - 9 Jun 2024
Cited by 1 | Viewed by 521
Abstract
Background: Continuous glucose monitoring (CGM) systems offer the advantage of noninvasive monitoring and continuous data on glucose fluctuations. This study introduces a new model that enables the generation of synthetic but realistic databases that integrate physiological variables and sensor attributes into a dataset generation model and this, in turn, enables the design of improved CGM systems. Methods: The presented approach uses a combination of physiological data and sensor characteristics to construct a model that considers the impact of these variables on the accuracy of CGM measures. A dataset of 500 sensor responses over a 15-day period is generated and analyzed using machine learning algorithms (random forest regressor and support vector regressor). Results: The random forest and support vector regression models achieved Mean Absolute Errors (MAEs) of 16.13 mg/dL and 16.22 mg/dL, respectively. In contrast, models trained solely on single sensor outputs recorded an average MAE of 11.01±5.12 mg/dL. These findings demonstrate the variable impact of integrating multiple data sources on the predictive accuracy of CGM systems, as well as the complexity of the dataset. Conclusions: This approach provides a foundation for developing more precise algorithms and introduces its initial application of Tiny Machine Control Units (MCUs). More research is recommended to refine these models and validate their effectiveness in clinical settings.
(This article belongs to the Special Issue Editor's Choices Series for Methods in Biomedical Informatics Section)
Show Figures

Figure 1. Graphical representation of the sensor response: BG(t)—blood glucose concentration, IG(t)—interstitial glucose concentration, η(a)—measurement sensor error, ξ(t)—white noise, and ε(t)—sensor drift over time.
Figure 2. Sensor response; the dotted lines represent the extracted values, while the linear interpolation between these points is shown in red.
Figure 3. 10 sensor responses; the dashed line in red is the bisector that represents the ideal sensor response.
Figure 4. 500 sensor responses; the blue line shows the average sensor response, while the area covers the first and third quartile.
Figure 5. Example of a signal generated by the model for 15 days; the CGM sensor response is shown in orange and the reference signal is shown in blue.
Figure 6. Absolute glucose concentration error; the mean value over time is shown in blue, while the area represents the measures that are within the 25th and 75th percentile.
Figure 7. Cumulative distribution of sensor errors. Mean = 40.79 mg/dL, the 25th percentile is at 21.02 mg/dL, and the 75th percentile is at 58.46 mg/dL.
Figure 8. RMSE evaluation for RF models with a variation of the model max depth parameter.
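The additive structure of the sensor model described above (a reference glucose trace corrupted by slow drift and white noise) can be sketched generically. The drift shape and coefficients here are placeholders of our own, not the paper's fitted model:

```python
import math
import random

def synth_sensor(reference, drift_per_step=0.05, noise_sd=1.5, seed=0):
    """Corrupt a reference glucose trace with linear drift and white noise."""
    rng = random.Random(seed)
    out = []
    for t, bg in enumerate(reference):
        drift = drift_per_step * t        # epsilon(t): slow sensor drift
        noise = rng.gauss(0.0, noise_sd)  # xi(t): white measurement noise
        out.append(bg + drift + noise)
    return out

# smooth reference trace in mg/dL, then one synthetic sensor response
reference = [100 + 20 * math.sin(t / 10) for t in range(60)]
cgm = synth_sensor(reference)
mean_bias = sum(c - r for c, r in zip(cgm, reference)) / len(reference)
# drift accumulates over time, so the mean bias is positive
```

Generating many such responses with randomized drift and noise parameters yields the kind of synthetic database the study uses to train and stress-test its calibration models.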
22 pages, 2903 KiB  
Article
Implementation of Lightweight Machine Learning-Based Intrusion Detection System on IoT Devices of Smart Homes
by Abbas Javed, Amna Ehtsham, Muhammad Jawad, Muhammad Naeem Awais, Ayyaz-ul-Haq Qureshi and Hadi Larijani
Future Internet 2024, 16(6), 200; https://doi.org/10.3390/fi16060200 - 5 Jun 2024
Cited by 1 | Viewed by 1276
Abstract
Smart home devices, also known as IoT devices, provide significant convenience; however, they also present opportunities for attackers to jeopardize homeowners’ security and privacy. Securing these IoT devices is a formidable challenge because of their limited computational resources. Machine learning-based intrusion detection systems [...] Read more.
Smart home devices, also known as IoT devices, provide significant convenience; however, they also present opportunities for attackers to jeopardize homeowners’ security and privacy. Securing these IoT devices is a formidable challenge because of their limited computational resources. Machine learning-based intrusion detection systems (IDSs) have been implemented on the edge and the cloud; however, IDSs have not yet been embedded in the IoT devices themselves. To address this, we propose a novel machine learning-based two-layered IDS for smart home IoT devices, enhancing accuracy and computational efficiency. The first layer of the proposed IDS is deployed on a microcontroller-based smart thermostat, which uploads the data to a website hosted on a cloud server. The second layer of the IDS is deployed on the cloud side for classification of attacks. The proposed IDS can detect threats with an accuracy of 99.50% at the cloud level (multiclassification). For real-time testing, we implemented a Raspberry Pi 4-based adversary to generate a dataset for man-in-the-middle (MITM) and denial of service (DoS) attacks on smart thermostats. The results show that the XGBoost-based IDS detects MITM and DoS attacks in 3.51 ms on a smart thermostat with an accuracy of 97.59%. Full article
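The two-layered split described in the abstract (a cheap on-device check, with only flagged traffic forwarded to a heavier cloud-side multiclass stage) can be sketched as follows. This is an illustrative stand-in, not the paper's code: the feature names (`pkt_rate`, `inter_arrival_ms`, `arp_spoofed`) and thresholds are hypothetical, and hand-written rules stand in for the trained XGBoost and neural models.

```python
# Hypothetical sketch of a two-layered IDS: the device runs a lightweight
# binary anomaly check, and only suspicious records reach the cloud stage.

def device_layer(record):
    """Layer 1 (microcontroller): lightweight binary anomaly check."""
    return record["pkt_rate"] > 1000 or record["inter_arrival_ms"] < 0.5

def cloud_layer(record):
    """Layer 2 (cloud): multiclass attack labelling (illustrative rules)."""
    if record["inter_arrival_ms"] < 0.5:
        return "DoS"
    if record.get("arp_spoofed", False):
        return "MITM"
    return "unknown"

def classify(record):
    if not device_layer(record):
        return "benign"          # handled entirely on the device
    return cloud_layer(record)   # uploaded for fine-grained analysis

assert classify({"pkt_rate": 40, "inter_arrival_ms": 20.0}) == "benign"
assert classify({"pkt_rate": 5000, "inter_arrival_ms": 0.1}) == "DoS"
```

The design point this illustrates is that benign traffic, the overwhelming majority, never leaves the thermostat, which keeps both bandwidth use and cloud-side load low.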
(This article belongs to the Special Issue IoT Security: Threat Detection, Analysis and Defense)
Show Figures

Figure 1
<p>System architecture for distributed IDS.</p>
Full article ">Figure 2
<p>Dataset collection on smart thermostat.</p>
Full article ">Figure 3
<p>Comparison of XGBoost-, RF-, DT-, and ANN-based IDS implementation on smart thermostat using Ton_IoT dataset.</p>
Full article ">Figure 4
<p>Comparison of XGBoost-, RF-, DT-, and ANN-based IDS implementation on smart thermostat using IDSH dataset.</p>
Full article ">
17 pages, 1298 KiB  
Article
A Case Study of a Tiny Machine Learning Application for Battery State-of-Charge Estimation
by Spyridon Giazitzis, Maciej Sakwa, Sonia Leva, Emanuele Ogliari, Susheel Badha and Filippo Rosetti
Electronics 2024, 13(10), 1964; https://doi.org/10.3390/electronics13101964 - 16 May 2024
Cited by 1 | Viewed by 972
Abstract
Growing battery use in energy storage and automotive industries demands advanced Battery Management Systems (BMSs) to estimate key parameters like the State of Charge (SoC) which are not directly measurable using standard sensors. Consequently, various model-based and data-driven approaches have been developed for [...] Read more.
Growing battery use in energy storage and automotive industries demands advanced Battery Management Systems (BMSs) to estimate key parameters like the State of Charge (SoC) which are not directly measurable using standard sensors. Consequently, various model-based and data-driven approaches have been developed for their estimation. Among these, the latter are often favored due to their high accuracy, low energy consumption, and ease of implementation on the cloud or Internet of Things (IoT) devices. This research focuses on creating small, efficient data-driven SoC estimation models for integration into IoT devices, specifically the Infineon Cypress CY8CPROTO-062S3-4343W. The development process involved training a compact Convolutional Neural Network (CNN) and an Artificial Neural Network (ANN) offline using a comprehensive dataset obtained from five different batteries. Before deployment on the target device, model quantization was performed using Infineon’s ModusToolBox Machine Learning (MTB-ML) configurator 2.0 software. The tests show satisfactory results for both chosen models with a good accuracy achieved, especially in the early stages of the battery lifecycle. In terms of the computational burden, the ANN has a clear advantage over the more complex CNN model. Full article
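The data-driven estimator described above maps a window of raw measurements to a SoC value. A minimal sketch of such a forward pass, assuming a 60-sample window of [current, voltage, temperature] and one hidden layer: the weights below are random stand-ins, not the trained model from the paper, and the layer sizes are assumptions.

```python
# Illustrative ANN-style SoC estimator: flatten a (60, 3) measurement
# window and map it through one hidden layer to a value in (0, 1).
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.05, size=(180, 16))  # 60 samples x 3 channels in
b1 = np.zeros(16)
W2 = rng.normal(scale=0.05, size=(16, 1))
b2 = np.zeros(1)

def estimate_soc(window):
    """window: (60, 3) array of current, voltage, temperature samples."""
    x = window.reshape(-1)                    # flatten to 180 features
    h = np.maximum(W1.T @ x + b1, 0)          # ReLU hidden layer (16 units)
    y = 1 / (1 + np.exp(-(W2.T @ h + b2)))    # sigmoid keeps SoC in (0, 1)
    return float(y[0])

window = rng.normal(size=(60, 3))             # one minute of 1 Hz samples
soc = estimate_soc(window)
assert 0.0 < soc < 1.0
```

On the target device, a quantized version of such a network (as produced by the MTB-ML configurator) would replace the float weights with integer arithmetic.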
Show Figures

Figure 1
<p>Different approaches to SoC estimation.</p>
Full article ">Figure 2
<p>SoC values of testing profiles 1 and 2.</p>
Full article ">Figure 3
<p>Prediction process—a tensor of 60 [I, V, T] values corresponding to 60 s of data is used as the input for the model (in blue) to predict the value of the SoC at the 60th second (in red). The output can be described as a time series of SoC data with a 1-min sample rate.</p>
Full article ">Figure 4
<p>A single perceptron mathematical model.</p>
Full article ">Figure 5
<p>One-dimensional convolution operation.</p>
Full article ">Figure 6
<p>CY8CPROTO-062S3-4343W PSoC 62S3 target device.</p>
Full article ">Figure 7
<p>State of Health of the six studied cells. Around cycle 400, the state of the battery starts to degrade drastically.</p>
Full article ">Figure 8
<p>Cycle 100—comparison between the reference model and the quantized model performance.</p>
Full article ">Figure 9
<p>Cycle 400—comparison between the reference model and the quantized model performance.</p>
Full article ">Figure 10
<p>Mean quantization error for each tested model. The mean was calculated based on four test runs of each model for different test cycles.</p>
Full article ">
17 pages, 2074 KiB  
Article
CBin-NN: An Inference Engine for Binarized Neural Networks
by Fouad Sakr, Riccardo Berta, Joseph Doyle, Alessio Capello, Ali Dabbous, Luca Lazzaroni and Francesco Bellotti
Electronics 2024, 13(9), 1624; https://doi.org/10.3390/electronics13091624 - 24 Apr 2024
Cited by 1 | Viewed by 788
Abstract
Binarization is an extreme quantization technique that is attracting research in the Internet of Things (IoT) field, as it radically reduces the memory footprint of deep neural networks without a correspondingly significant accuracy drop. To support the effective deployment of Binarized Neural Networks [...] Read more.
Binarization is an extreme quantization technique that is attracting research in the Internet of Things (IoT) field, as it radically reduces the memory footprint of deep neural networks without a correspondingly significant accuracy drop. To support the effective deployment of Binarized Neural Networks (BNNs), we propose CBin-NN, a library of layer operators that allows the building of simple yet flexible convolutional neural networks (CNNs) with binary weights and activations. CBin-NN is platform-independent and is thus portable to virtually any software-programmable device. Experimental analysis on the CIFAR-10 dataset shows that our library, compared to a set of state-of-the-art inference engines, speeds up inference by 3.6 times and reduces the memory required to store model weights and activations by 7.5 times and 28 times, respectively, at the cost of slightly lower accuracy (2.5%). An ablation study stresses the importance of a Quantized Input Quantized Kernel Convolution layer to improve accuracy and reduce latency at the cost of a slight increase in model size. Full article
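The core trick behind BNN inference engines of this kind is replacing the floating-point multiply-accumulate with XNOR and popcount on bit-packed {−1, +1} vectors. A minimal reimplementation of that binary dot product (illustrative only, not code from the CBin-NN library):

```python
# Binary dot product via XNOR + popcount: with weights and activations in
# {-1, +1} packed as bits (1 = +1, 0 = -1), signs that agree contribute +1
# and signs that disagree contribute -1.

def binary_dot(a_bits, w_bits, n):
    """Dot product of two n-length {-1,+1} vectors packed into integers."""
    xnor = ~(a_bits ^ w_bits) & ((1 << n) - 1)  # 1 where signs agree
    matches = bin(xnor).count("1")              # popcount
    return 2 * matches - n                      # agreements minus disagreements

def pack(v):
    """Pack a {-1,+1} vector into an integer, bit i = 1 iff v[i] > 0."""
    return sum(1 << i for i, s in enumerate(v) if s > 0)

# Reference check against the float version.
a = [+1, -1, -1, +1]
w = [+1, +1, -1, -1]
assert binary_dot(pack(a), pack(w), 4) == sum(x * y for x, y in zip(a, w))
```

On a microcontroller, the XNOR and popcount operate on whole 32-bit words at once, which is where the reported latency and memory savings come from.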
Show Figures

Figure 1
<p>A convolution implemented as a MAC operation in float vs. binary XNOR and PopCount.</p>
Full article ">Figure 2
<p>The sign function (<b>a</b>) and the STE function (<b>b</b>), the surrogate derivative that enables gradient descent [<a href="#B35-electronics-13-01624" class="html-bibr">35</a>].</p>
Full article ">Figure 3
<p>Workflow of BNN training, deployment, and inference on a microcontroller using CBin-NN.</p>
Full article ">Figure 4
<p>SmallCifar topology.</p>
Full article ">Figure 5
<p>SmallCifar latency using various inference engines.</p>
Full article ">Figure 6
<p>SmallCifar memory footprint using various inference engines.</p>
Full article ">
2 pages, 361 KiB  
Abstract
TinyML with Meta-Learning on Microcontrollers for Air Pollution Prediction
by I Nyoman Kusuma Wardana, Suhaib A. Fahmy and Julian W. Gardner
Proceedings 2024, 97(1), 163; https://doi.org/10.3390/proceedings2024097163 - 8 Apr 2024
Viewed by 779
Abstract
Tiny machine learning (tinyML) involves the application of ML algorithms on resource-constrained devices such as microcontrollers. It is possible to improve tinyML performance by using a meta-learning approach. In this work, we proposed lightweight base models running on a microcontroller to predict air [...] Read more.
Tiny machine learning (tinyML) involves the application of ML algorithms on resource-constrained devices such as microcontrollers. It is possible to improve tinyML performance by using a meta-learning approach. In this work, we proposed lightweight base models running on a microcontroller to predict air pollution and show how performance can be improved using a stacking ensemble meta-learning method. We used an air quality dataset for London. Deployed on a Raspberry Pi Pico microcontroller, the tinyML file sizes were 3012 bytes and 5076 bytes for the two base models we proposed. The stacked model could achieve RMSE improvements of up to 4.9% and 14.28% when predicting NO2 and PM2.5, respectively. Full article
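The stacking-ensemble idea in the abstract can be sketched in a few lines: base models produce predictions, and a small meta-model is fit to combine them. The numbers below are made-up stand-ins for pollutant readings, and ordinary least squares stands in for the paper's meta-learner.

```python
# Illustrative stacking ensemble: combine two base regressors with a
# least-squares meta-model fit on their predictions.
import numpy as np

# Hypothetical base-model predictions for 5 samples (e.g. NO2, ug/m3).
base1 = np.array([40.0, 42.0, 39.0, 45.0, 41.0])
base2 = np.array([38.0, 44.0, 40.0, 43.0, 42.0])
truth = np.array([39.0, 43.0, 40.0, 44.0, 41.5])

# Meta-learner: fit truth = w1*base1 + w2*base2 + b by least squares.
X = np.column_stack([base1, base2, np.ones_like(base1)])
w, *_ = np.linalg.lstsq(X, truth, rcond=None)

stacked = X @ w
rmse = lambda p: float(np.sqrt(np.mean((p - truth) ** 2)))
# On the data it was fit on, the stacked prediction is never worse than
# either base model, since each base model is a special case of the fit.
assert rmse(stacked) <= min(rmse(base1), rmse(base2))
```

The improvement carries over to held-out data only when the base models make partially uncorrelated errors, which is what the reported RMSE gains suggest.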
Show Figures

Figure 1
<p>(<b>a</b>) Stacking ensemble concept; (<b>b</b>) Base-1 and Base-2 model architectures.</p>
Full article ">
18 pages, 8259 KiB  
Article
A Portable Tool for Spectral Analysis of Plant Leaves That Incorporates a Multichannel Detector to Enable Faster Data Capture
by Juan Botero-Valencia, Erick Reyes-Vera, Elizabeth Ospina-Rojas and Flavio Prieto-Ortiz
Instruments 2024, 8(1), 24; https://doi.org/10.3390/instruments8010024 - 17 Mar 2024
Viewed by 1456
Abstract
In this study, a novel system was designed to enhance the efficiency of data acquisition in a portable and compact instrument dedicated to the spectral analysis of various surfaces, including plant leaves, and materials requiring characterization within the 410 to 915 nm range. [...] Read more.
In this study, a novel system was designed to enhance the efficiency of data acquisition in a portable and compact instrument dedicated to the spectral analysis of various surfaces, including plant leaves, and materials requiring characterization within the 410 to 915 nm range. The proposed system incorporates two nine-band detectors positioned on the top and bottom of the target surface, each equipped with a digitally controllable LED. The detectors are capable of measuring both reflection and transmission properties, depending on the LED configuration. Specifically, when the upper LED is activated, the lower detector operates without its LED, enabling the precise measurement of light transmitted through the sample. The process is reversed in subsequent iterations, facilitating an accurate assessment of reflection and transmission for each side of the target surface. For reliability, the error estimation utilizes a color checker, followed by a multi-layer perceptron (MLP) implementation integrated into the microcontroller unit (MCU) using TinyML technology for real-time refined data acquisition. The system is constructed with 3D-printed components and cost-effective electronics. It also supports USB or Bluetooth communication for data transmission. This innovative detector marks a significant advancement in spectral analysis, particularly for plant research, offering the potential for disease detection and nutritional deficiency assessment. Full article
(This article belongs to the Special Issue Feature Papers in Instruments 2021–2022)
Show Figures

Figure 1
<p>Relative sensitivity and luminous intensity of the AS7341 integrated board. (<b>a</b>) Relative sensitivity of the AS7341. (<b>b</b>) Relative luminous intensity profile of the EAHC2835WD6 LED.</p>
Full article ">Figure 2
<p>Assembly and list of mechanical parts.</p>
Full article ">Figure 3
<p>Electronic connection diagram.</p>
Full article ">Figure 4
<p>Color checker and reflectivity curves [<a href="#B33-instruments-08-00024" class="html-bibr">33</a>]. In the figure, the distribution of the rows and columns corresponds to the original Color checker, and within each patch the reflectance curve is shown as a reference.</p>
Full article ">Figure 5
<p>Architecture of the proposed MLP.</p>
Full article ">Figure 6
<p>Experimental setup used to measure the reflectance with different colors using an OSA.</p>
Full article ">Figure 7
<p>Photograph of the constructed spectrometer. (<b>a</b>) View of the entire device. (<b>b</b>) Detailed view of the area where the samples were located. The numbers in the figure correspond to the identification in <a href="#instruments-08-00024-f002" class="html-fig">Figure 2</a>.</p>
Full article ">Figure 8
<p>Comparison of the fits of multiple patches. MEA is the color checker reference reflectance. SEN is the raw reflectance, while ADJ is the best MLP setup-adjusted reflectance.</p>
Figure 8 Cont.">
Full article ">Figure 9
<p>Validation with a reference method based on the use of a Yokogawa AQ6373 optical spectrum analyzer. MEA is the OSA reference reflectance. SEN is the raw reflectance, while ADJ is the best MLP setup-adjusted reflectance.</p>
Full article ">Figure 10
<p>Measurements of transmittance and reflectance on a vegetable leaf.</p>
Full article ">
24 pages, 7987 KiB  
Article
Noninvasive Diabetes Detection through Human Breath Using TinyML-Powered E-Nose
by Alberto Gudiño-Ochoa, Julio Alberto García-Rodríguez, Raquel Ochoa-Ornelas, Jorge Ivan Cuevas-Chávez and Daniel Alejandro Sánchez-Arias
Sensors 2024, 24(4), 1294; https://doi.org/10.3390/s24041294 - 17 Feb 2024
Cited by 4 | Viewed by 3192
Abstract
Volatile organic compounds (VOCs) in exhaled human breath serve as pivotal biomarkers for disease identification and medical diagnostics. In the context of diabetes mellitus, the noninvasive detection of acetone, a primary biomarker using electronic noses (e-noses), has gained significant attention. However, employing e-noses [...] Read more.
Volatile organic compounds (VOCs) in exhaled human breath serve as pivotal biomarkers for disease identification and medical diagnostics. In the context of diabetes mellitus, the noninvasive detection of acetone, a primary biomarker, using electronic noses (e-noses) has gained significant attention. However, employing e-noses requires pre-trained algorithms for precise diabetes detection, often requiring a computer with a programming environment to classify newly acquired data. This study focuses on the development of an embedded system integrating Tiny Machine Learning (TinyML) and an e-nose equipped with Metal Oxide Semiconductor (MOS) sensors for real-time diabetes detection. The study encompassed 44 individuals, comprising 22 healthy individuals and 22 diagnosed with various types of diabetes mellitus. Test results highlight the XGBoost Machine Learning algorithm’s achievement of 95% detection accuracy. Additionally, the integration of deep learning algorithms, particularly deep neural networks (DNNs) and one-dimensional convolutional neural network (1D-CNN), yielded a detection efficacy of 94.44%. These outcomes underscore the potency of combining e-noses with TinyML in embedded systems, offering a noninvasive approach for diabetes mellitus detection. Full article
(This article belongs to the Special Issue Sensors for Breathing Monitoring)
Show Figures

Figure 1
<p>Components in exhaled human breath.</p>
Full article ">Figure 2
<p>E-nose, dehumidifier, and Tedlar bag for breath samples.</p>
Full article ">Figure 3
<p>Scheme of the proposed TinyML-powered e-nose measurement system.</p>
Full article ">Figure 4
<p>Calibration procedure for MQ gas sensors in a space with optimal RH conditions.</p>
Full article ">Figure 5
<p>Patients with T2DM and T1DM in the exhaled-breath sample collection procedure.</p>
Full article ">Figure 6
<p>Response of Rs/Ro signals from the e-nose before and after noise elimination with DWT. (<b>a</b>) MQ-135 sensor response in the presence of acetones; (<b>b</b>) MQ-2 sensor response in the presence of carbon monoxide.</p>
Full article ">Figure 7
<p>Importance scores of selected features from the breath sample dataset.</p>
Full article ">Figure 8
<p>Violin plot for acetone concentrations in the breath of HI and DMI groups.</p>
Full article ">Figure 9
<p>Violin plot for carbon monoxide concentrations in the breath of HI and DMI groups.</p>
Full article ">Figure 10
<p>PCA visualization with scaling to assess breath samples among HI and DMI groups.</p>
Full article ">Figure 11
<p>Learning curves generated using XGBoost.</p>
Full article ">Figure 12
<p>XGBoost confusion matrix.</p>
Full article ">Figure 13
<p>Performance of the DNN model during training: (<b>a</b>) model loss of DNN; (<b>b</b>) model accuracy of DNN.</p>
Full article ">Figure 14
<p>Performance of the 1D-CNN model during training: (<b>a</b>) Model loss of 1D-CNN; (<b>b</b>) model accuracy of 1D-CNN.</p>
Full article ">Figure 15
<p>DNN and 1D-CNN confusion matrix.</p>
Full article ">Figure 16
<p>Accuracy, precision, recall, and F1-score comparison of different algorithms.</p>
Full article ">Figure 17
<p>ROC curves comparison of different algorithms.</p>
Full article ">Figure 18
<p>Comparison of classification time by algorithms in seconds.</p>
Full article ">Figure 19
<p>Sizes in bytes of the models in TensorFlow Lite and their conversion to .h files.</p>
Full article ">Figure 20
<p>Confusion matrices of TinyML-implemented algorithms on microcontroller: (<b>a</b>) predictions with XGBoost algorithm; (<b>b</b>) predictions with DNN algorithm.</p>
Full article ">
29 pages, 743 KiB  
Article
TinyML Algorithms for Big Data Management in Large-Scale IoT Systems
by Aristeidis Karras, Anastasios Giannaros, Christos Karras, Leonidas Theodorakopoulos, Constantinos S. Mammassis, George A. Krimpas and Spyros Sioutas
Future Internet 2024, 16(2), 42; https://doi.org/10.3390/fi16020042 - 25 Jan 2024
Cited by 3 | Viewed by 2644
Abstract
In the context of the Internet of Things (IoT), Tiny Machine Learning (TinyML) and Big Data, enhanced by Edge Artificial Intelligence, are essential for effectively managing the extensive data produced by numerous connected devices. Our study introduces a set of TinyML algorithms designed [...] Read more.
In the context of the Internet of Things (IoT), Tiny Machine Learning (TinyML) and Big Data, enhanced by Edge Artificial Intelligence, are essential for effectively managing the extensive data produced by numerous connected devices. Our study introduces a set of TinyML algorithms designed and developed to improve Big Data management in large-scale IoT systems. These algorithms, named TinyCleanEDF, EdgeClusterML, CompressEdgeML, CacheEdgeML, and TinyHybridSenseQ, operate together to enhance data processing, storage, and quality control in IoT networks, utilizing the capabilities of Edge AI. In particular, TinyCleanEDF applies federated learning for Edge-based data cleaning and anomaly detection. EdgeClusterML combines reinforcement learning with self-organizing maps for effective data clustering. CompressEdgeML uses neural networks for adaptive data compression. CacheEdgeML employs predictive analytics for smart data caching, and TinyHybridSenseQ concentrates on data quality evaluation and hybrid storage strategies. Our experimental evaluation of the proposed techniques includes executing all the algorithms on varying numbers of Raspberry Pi devices, ranging from one to ten. The experimental results are promising, as we outperform similar methods across various evaluation metrics. Ultimately, we anticipate that the proposed algorithms offer a comprehensive and efficient approach to managing the complexities of IoT, Big Data, and Edge AI. Full article
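As a flavor of the edge-side data handling such algorithms perform, the sketch below uses simple delta encoding as a stand-in for compressing a slowly varying sensor stream before transmission. CompressEdgeML itself is neural, so this illustrates only the general preprocessing idea, not the paper's algorithm.

```python
# Delta-encode a slowly varying sensor stream: after the first value,
# most deltas are small and fit in one byte, so the stream compresses well.

def delta_encode(samples):
    out, prev = [], 0
    for s in samples:
        out.append(s - prev)  # store the change, not the absolute value
        prev = s
    return out

def delta_decode(deltas):
    out, acc = [], 0
    for d in deltas:
        acc += d              # running sum reconstructs the originals
        out.append(acc)
    return out

readings = [1000, 1001, 1003, 1002, 1005]
deltas = delta_encode(readings)
assert deltas == [1000, 1, 2, -1, 3]
assert delta_decode(deltas) == readings   # lossless round trip
```

A real edge pipeline would follow this with entropy coding or, as in the article, a learned compressor that adapts to the data distribution.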
(This article belongs to the Special Issue Internet of Things and Cyber-Physical Systems II)
Show Figures

Figure 1
<p>Proposed system architecture.</p>
Full article ">Figure 2
<p>Performance evaluation of TinyCleanEDF.</p>
Full article ">Figure 3
<p>Performance evaluation of EdgeClusterML.</p>
Full article ">Figure 4
<p>Performance evaluation of CompressEdgeML.</p>
Full article ">Figure 5
<p>Performance evaluation of CacheEdgeML.</p>
Full article ">Figure 6
<p>Performance evaluation of TinyHybridSenseQ.</p>
Full article ">Figure 7
<p>Cache hit rate comparison of CacheEdgeML with similar methods.</p>
Full article ">Figure 8
<p>Compression efficiency of CompressEdgeML compared to a similar method.</p>
Full article ">Figure 9
<p>Compression speed of CompressEdgeML compared to a similar method.</p>
Full article ">