HH-NIDS: Heterogeneous Hardware-Based Network Intrusion Detection Framework for IoT Security
Figure 1. The proposed HH-NIDS framework architecture.
Figure 2. The HH-NIDS framework’s processing flow from training to hardware implementations.
Figure 3. Neural network inference acceleration on the Zynq-7000 SoC architecture.
Figure 4. The generated FPGA block architecture using the HLS implementation approach.
Figure 5. Pipeline implementation of the Neuron Network Architecture block on an FPGA-based architecture, comprising four phases: Input buffering (P1), Layer_0 calculation (P2), Layer_1 calculation (P3), and Output buffering (P4).
Figure 6. The MULx11 block calculation architecture.
Figure 7. Training results over fifty epochs with three different learning rates (0.0005, 0.001, and 0.01) for the MAX78000 microcontroller.
Figure 8. Training results over fifty epochs with three different learning rates (0.0005, 0.001, and 0.01) for the PYNQ-Z2 SoC FPGA.
Figure 9. Pre-process, inference, and post-process times for the different implementations.
Figure 10. CPU, GPU, and FPGA inference times for different input buffer sizes.
Figure 11. Waveform simulation results from the NN on FPGA using the Verilog approach.
Abstract
1. Introduction
1.1. Contribution
- Deployment on the MAX78000 AI microcontroller with ultra-low power consumption [14].
- Implementation on an SoC FPGA using both high-level synthesis (HLS) and Verilog approaches. The Verilog implementation is customised in a pipeline fashion, achieving high throughput and low latency in processing network packets.
1.2. Organisation
2. Related Works
3. Materials and Methods
3.1. Dataset
3.2. Evaluation Metrics
- Accuracy is one of the most popular metrics for evaluating classification models. Equation (1) depicts the single-class accuracy measurement.
- Precision is the positive predictive value: the proportion of the model’s positive predictions that are true positives. The single-class precision value is given in Equation (2):
- Recall is the true positive rate: the proportion of actual positives in the data that the model correctly identifies. The single-class recall value is given in Equation (3):
- F1-score summarises the model’s performance as the harmonic mean of precision and recall. The single-class F1-score value is given in Equation (4):
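The four metrics above follow the standard single-class definitions in terms of true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN):

```latex
\begin{align}
\text{Accuracy}  &= \frac{TP + TN}{TP + TN + FP + FN} \tag{1}\\
\text{Precision} &= \frac{TP}{TP + FP} \tag{2}\\
\text{Recall}    &= \frac{TP}{TP + FN} \tag{3}\\
\text{F1-score}  &= \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}} \tag{4}
\end{align}
```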
3.3. HH-NIDS Framework
- The Data Filtering block: scans the processed dataset (labelled records) and buffers all of the IPv4 records, which make up the majority of the dataset, as input data for the next blocks. In other words, this block creates a custom filter for selecting input records based on the administrators’ defined rules.
- The Classification block: examines the labelled data, then collects records that share the same label into a new file. The number of new files equals the number of dataset labels/classes (benign and attacks).
- The Scaling block: reads the created data files and transforms the record’s features, which are initially in alphanumeric or numeric forms, to float values (between 0 and 1) using min-max normalisation.
- The Shuffle block: rearranges records according to different training schemes. In addition, data balancing can be applied in this block for better data distribution. Although the network data carry time-based information, this block shuffles the records so that the neural network models learn static information in the network packets.
- The Formatted Data block: holds shuffled data files from the previous block; the data are ready to be used in the training and inference processes.
- The Model Configuration block: has two main functionalities, which are: defining training settings and selecting a custom input feature set from each processed record for training. The output data will be forwarded to the GPU-based Trainer block for training.
- The GPU-based Trainer block: follows the ML library architectures (Keras, Tensorflow, Pytorch, etc.) to train NN models. The HH-NIDS framework proposes GPU-based approaches for better training performance compared to CPU-based approaches.
- The Trained Models block: stores trained models from the GPU-based Trainer block. These models will be used in the next inference phase.
- The Inference block: represents the software-based inference process. This block uses the trained models on the allocated data files to evaluate how well the trained models react to unseen data. The chosen models then need to be quantised and tested before being deployed into hardware accelerator platforms.
- The Parameter Quantisation block: transforms the model’s parameters to the supported bit width to fit into the hardware accelerator platforms.
- The Models block: contains quantised models, which will be evaluated again by the Inference block.
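As a sketch of the Scaling and Shuffle blocks described above, a minimal NumPy version might look like the following (the helper names are hypothetical; the framework’s actual pre-processing code is not shown here):

```python
import numpy as np

def min_max_scale(features: np.ndarray) -> np.ndarray:
    """Scaling block: map each feature column to [0, 1] via min-max normalisation."""
    lo = features.min(axis=0)
    hi = features.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)  # guard against constant columns
    return (features - lo) / span

def shuffle_records(features: np.ndarray, labels: np.ndarray, seed: int = 0):
    """Shuffle block: rearrange records so models learn order-independent
    (static) information in the packets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(features))
    return features[idx], labels[idx]

x = np.array([[0.0, 10.0], [5.0, 20.0], [10.0, 30.0]])
y = np.array([0, 1, 0])
xs = min_max_scale(x)          # every column now spans [0, 1]
xr, yr = shuffle_records(xs, y)
```

The scaled arrays can then be written out as the Formatted Data files consumed by the training and inference stages.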
4. Implementation
4.1. Data Pre-Processing
4.2. Model Generation
4.3. Microcontroller
4.4. SoC FPGA
- The PS Reset block: receives the reset control signals from the PS to reset the PL to the initial state.
- The AXI Interconnect block: assigns user’s input values to registers in PL through General-Purpose Ports (GP0), which is configured as a 32-bit AXI-lite interface. There are two registers to be configured in the NN block: namely, the RESET register for resetting the module and the NUM_OF_INPUT register to indicate the number of records to be sent from PS each time.
- The AXI Smart Connect block: receives grouped input features, which are buffered in the PS, through High-Performance Ports (HP0). Data are then sent to the AXI DMA block over a Memory-Mapped to Stream (MM2S) channel. In addition, the scanned data are transferred back from the AXI DMA block over a Stream to Memory-Mapped (S2MM) channel; these data are intrusion detection results ready to be sent back to the PS for analysis.
- The AXI DMA block: connected to the NN block through an AXI-Stream interface for sending/receiving data to/from the NN block.
- The Neuron Network Architecture block: contains the NN model on the FPGA. This block is implemented using two approaches: high-level synthesis (HLS) and Verilog.
4.4.1. HLS Approach
- Parameter Quantisation: transforms each trained parameter of an NN model into integer values by linearly mapping the floating-point range of each layer’s weights onto the integer range supported by the target bit width.
- HLS Pragmas are applied to optimise the translation of our C++ implementation into RTL. For instance, HLS Dataflow pipelines calculations between layers, while HLS Pipeline and HLS Unroll parallelise the computation of neurons within each layer.
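As an illustration of the Parameter Quantisation step, here is a minimal symmetric linear quantisation sketch in NumPy (an assumption for illustration; the exact mapping and bit width used in the paper may differ):

```python
import numpy as np

def quantise_weights(weights: np.ndarray, bits: int = 8):
    """Symmetric linear quantisation (illustrative): map floats in
    [-max|w|, +max|w|] onto signed integers of the given bit width."""
    qmax = 2 ** (bits - 1) - 1              # e.g. 127 for 8-bit
    scale = np.max(np.abs(weights)) / qmax  # float value of one integer step
    q = np.round(weights / scale).astype(np.int32)
    return q, scale

w = np.array([-0.5, 0.0, 0.25, 1.0])
q, scale = quantise_weights(w, bits=8)
# q == [-64, 0, 32, 127]; q * scale approximately recovers w
```

The integer weights are what the FPGA multipliers consume; the scale factor is folded into the surrounding arithmetic.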
4.4.2. Verilog Implementation Approach
- Input buffering (P1): contains the Pre-processor block to receive the eleven input features from DMA through the AXI4-Stream interface.
- Layer_0 calculation (P2): has 32 neurons running simultaneously. These neurons share the same architecture, which has six calculation stages: a multiplication, four additions, and an activation. Figure 6 illustrates a single neuron architecture on FPGA. From the P1 phase, the eleven input features are fed into the corresponding eleven 2-input multipliers (MUL x11 block). These results are then accumulated by four addition stages, represented as the ADD x6, ADD x3, ADD x2, and ADD x1 blocks. The final sum is passed to the ACT block, which implements a hardware-efficient ReLU (Rectified Linear Unit) activation function.
- Layer_1 calculation (P3): receives buffered results from the P2 phase; these data are then distributed to the sixteen neurons in this block. Each neuron shares the same calculation architecture as the Layer_0 neurons, except that each Layer_1 neuron has five calculation stages: a multiplication (MUL x16 block) and four additions (ADD x8, ADD x4, ADD x2, and ADD x1 blocks). The final results are buffered and read in the next clock cycle.
- Output buffering (P4): includes the Comparator, FIFO buffer, and Post-processor blocks. The Comparator block returns the index of the maximum of its sixteen inputs, corresponding to the sixteen output neurons in the P3 phase. These indices are stored in a FIFO (first in, first out) buffer. The Post-processor block reads data from the buffer and sends them to the DMA; it also signals the Pre-processor block to indicate whether the buffer can store all the results.
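Functionally, the P2–P4 phases compute a two-layer fully connected network (11 inputs → 32 ReLU neurons → 16 outputs → argmax). A NumPy sketch of that computation, with hypothetical integer weights standing in for the real trained and quantised parameters:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical integer parameters; in the real design these come from the
# trained model after the Parameter Quantisation step.
W0 = rng.integers(-128, 128, size=(32, 11))  # Layer_0: 32 neurons, 11 inputs each
b0 = rng.integers(-128, 128, size=32)
W1 = rng.integers(-128, 128, size=(16, 32))  # Layer_1: 16 neurons, 32 inputs each
b1 = rng.integers(-128, 128, size=16)

def infer(record: np.ndarray) -> int:
    """Mirror the FPGA phases: P2 multiply-accumulate plus ReLU (ACT block),
    P3 multiply-accumulate, P4 Comparator returning the maximum index."""
    a0 = np.maximum(W0 @ record + b0, 0)  # Layer_0 with ReLU activation
    a1 = W1 @ a0 + b1                     # Layer_1, no activation
    return int(np.argmax(a1))             # Comparator: index of the largest output

label = infer(rng.integers(0, 256, size=11))  # classify one 11-feature record
```

On the FPGA the same dataflow is unrolled in space: all 32 Layer_0 neurons (and all 16 Layer_1 neurons) run in parallel, and the addition tree replaces the accumulation loop hidden inside the matrix products above.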
5. Results
5.1. Implementation Results
5.2. Accuracy
5.3. Performance
5.3.1. Inference Time
5.3.2. FPGA Performance
6. Discussion
7. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Vailshery, L.S. Number of Internet of Things (IoT) Connected Devices Worldwide from 2019 to 2030. Available online: https://www.statista.com/statistics/1183457/iot-connected-devices-worldwide/ (accessed on 8 November 2022).
- Ahmed, M.; Mahmood, A.N.; Hu, J. A survey of network anomaly detection techniques. J. Netw. Comput. Appl. 2016, 60, 19–31. [Google Scholar] [CrossRef]
- Hubballi, N.; Suryanarayanan, V. False alarm minimization techniques in signature-based intrusion detection systems: A survey. Comput. Commun. 2014, 49, 1–17. [Google Scholar] [CrossRef] [Green Version]
- Heidari, A.; Jabraeil Jamali, M.A. Internet of Things intrusion detection systems: A comprehensive review and future directions. Cluster Comput. 2022, 1–28. [Google Scholar] [CrossRef]
- Garcia-Teodoro, P.; Diaz-Verdejo, J.; Maciá-Fernández, G.; Vázquez, E. Anomaly-based network intrusion detection: Techniques, systems and challenges. Comput. Secur. 2009, 28, 18–28. [Google Scholar] [CrossRef]
- Gao, C.; Braun, S.; Kiselev, I.; Anumula, J.; Delbruck, T.; Liu, S. Real-Time Speech Recognition for IoT Purpose using a Delta Recurrent Neural Network Accelerator. In Proceedings of the 2019 IEEE International Symposium on Circuits and Systems (ISCAS), Sapporo, Japan, 26–29 May 2019; pp. 1–5. [Google Scholar] [CrossRef]
- Maitra, S.; Richards, D.; Abdelgawad, A.; Yelamarthi, K. Performance Evaluation of IoT Encryption Algorithms: Memory, Timing, and Energy. In Proceedings of the 2019 IEEE Sensors Applications Symposium (SAS), Sophia Antipolis, France, 11–13 March 2019; pp. 1–6. [Google Scholar] [CrossRef]
- Antonopoulos, C.P.; Voros, N.S. A data compression hardware accelerator enabling long-term biosignal monitoring based on ultra-low power IoT platforms. Electronics 2017, 6, 54. [Google Scholar] [CrossRef] [Green Version]
- Expertsystem. What Is Machine Learning? A Definition. Available online: https://www.expertsystem.com/machine-learning-definition/ (accessed on 8 November 2022).
- Sidana, M. Types of Classification Algorithms in Machine Learning. Available online: https://medium.com/@Mandysidana/machine-learning-types-of-classification-9497bd4f2e14 (accessed on 8 November 2022).
- Ngo, D.M.; Temko, A.; Murphy, C.C.; Popovici, E. FPGA Hardware Acceleration Framework for Anomaly-based Intrusion Detection System in IoT. In Proceedings of the 2021 31st International Conference on Field-Programmable Logic and Applications (FPL), Dresden, Germany, 30 August–3 September 2021; pp. 69–75. [Google Scholar]
- Garcia, S.; Parmisano, A.; Erquiaga, M. IoT-23: A Labeled Dataset with Malicious and Benign IoT Network Traffic; Stratosphere Lab.: Praha, Czech Republic, 2020. [Google Scholar]
- Moustafa, N.; Slay, J. UNSW-NB15: A comprehensive data set for network intrusion detection systems (UNSW-NB15 network data set). In Proceedings of the 2015 Military Communications and Information Systems Conference (MilCIS), Canberra, Australia, 10–12 November 2015; pp. 1–6. [Google Scholar]
- Maxim Integrated. MAX78000—Artificial Intelligence Microcontroller with Ultra-Low-Power Convolutional Neural Network Accelerator. Available online: https://www.maximintegrated.com/en/products/microcontrollers/MAX78000.html (accessed on 8 November 2022).
- Maxim Integrated. MAX78000EVKIT—Evaluation Kit for the MAX78000. Available online: https://www.maximintegrated.com/en/products/microcontrollers/MAX78000EVKIT.html (accessed on 8 November 2022).
- Xilinx. XUP PYNQ-Z2. Available online: https://www.xilinx.com/support/university/xup-boards/XUPPYNQ-Z2.html (accessed on 8 November 2022).
- Yang, Z.; Liu, X.; Li, T.; Wu, D.; Wang, J.; Zhao, Y.; Han, H. A systematic literature review of methods and datasets for anomaly-based network intrusion detection. Comput. Secur. 2022, 116, 102675. [Google Scholar] [CrossRef]
- Alsoufi, M.A.; Razak, S.; Siraj, M.M.; Nafea, I.; Ghaleb, F.A.; Saeed, F.; Nasser, M. Anomaly-based intrusion detection systems in IoT using deep learning: A systematic literature review. Appl. Sci. 2021, 11, 8383. [Google Scholar] [CrossRef]
- Mishra, A.; Yadav, P. Anomaly-based IDS to detect attack using various artificial intelligence & machine learning algorithms: A review. In Proceedings of the 2nd International Conference on Data, Engineering and Applications (IDEA), Bhopal, India, 28–29 February 2020; pp. 1–7. [Google Scholar]
- Hasan, M.; Islam, M.M.; Zarif, M.I.I.; Hashem, M. Attack and anomaly detection in IoT sensors in IoT sites using machine learning approaches. Internet Things 2019, 7, 100059. [Google Scholar] [CrossRef]
- Kumar, P.; Gupta, G.P.; Tripathi, R. Design of anomaly-based intrusion detection system using fog computing for IoT network. Autom. Control Comput. Sci. 2021, 55, 137–147. [Google Scholar] [CrossRef]
- Thamaraiselvi, D.; Mary, S. Attack and anomaly detection in IoT networks using machine learning. Int. J. Comput. Sci. Mob. Comput 2020, 9, 95–103. [Google Scholar] [CrossRef]
- Vinayakumar, R.; Alazab, M.; Soman, K.; Poornachandran, P.; Al-Nemrat, A.; Venkatraman, S. Deep learning approach for intelligent intrusion detection system. IEEE Access 2019, 7, 41525–41550. [Google Scholar] [CrossRef]
- Xu, C.; Shen, J.; Du, X.; Zhang, F. An intrusion detection system using a deep neural network with gated recurrent units. IEEE Access 2018, 6, 48697–48707. [Google Scholar] [CrossRef]
- Nguyen, T.D.; Marchal, S.; Miettinen, M.; Fereidooni, H.; Asokan, N.; Sadeghi, A.R. DÏoT: A federated self-learning anomaly detection system for IoT. In Proceedings of the 2019 IEEE 39th International Conference on Distributed Computing Systems (ICDCS), Dallas, TX, USA, 7–9 July 2019; pp. 756–767. [Google Scholar]
- Mothukuri, V.; Khare, P.; Parizi, R.M.; Pouriyeh, S.; Dehghantanha, A.; Srivastava, G. Federated-Learning-Based Anomaly Detection for IoT Security Attacks. IEEE Internet Things J. 2021, 9, 2545–2554. [Google Scholar] [CrossRef]
- Vaccari, I.; Chiola, G.; Aiello, M.; Mongelli, M.; Cambiaso, E. MQTTset, a New Dataset for Machine Learning Techniques on MQTT. Sensors 2020, 20, 6578. [Google Scholar] [CrossRef]
- Manimurugan, S.; Al-Mutairi, S.; Aborokbah, M.M.; Chilamkurti, N.; Ganesan, S.; Patan, R. Effective attack detection in internet of medical things smart environment using a deep belief neural network. IEEE Access 2020, 8, 77396–77404. [Google Scholar] [CrossRef]
- Yin, C.; Zhang, S.; Wang, J.; Xiong, N.N. Anomaly detection based on convolutional recurrent autoencoder for IoT time series. IEEE Trans. Syst. Man Cybern. Syst. 2020, 52, 112–122. [Google Scholar] [CrossRef]
- Bovenzi, G.; Aceto, G.; Ciuonzo, D.; Persico, V.; Pescapé, A. A hierarchical hybrid intrusion detection approach in IoT scenarios. In Proceedings of the GLOBECOM 2020 IEEE Global Communications Conference, Taipei, Taiwan, 7–11 December 2020; pp. 1–7. [Google Scholar]
- Protogerou, A.; Papadopoulos, S.; Drosou, A.; Tzovaras, D.; Refanidis, I. A graph neural network method for distributed anomaly detection in IoT. Evol. Syst. 2021, 12, 19–36. [Google Scholar] [CrossRef]
- Dutta, V.; Choraś, M.; Pawlicki, M.; Kozik, R. A deep learning ensemble for network anomaly and cyber-attack detection. Sensors 2020, 20, 4583. [Google Scholar] [CrossRef]
- Ullah, I.; Mahmoud, Q.H. Design and Development of RNN Anomaly Detection Model for IoT Networks. IEEE Access 2022, 10, 62722–62750. [Google Scholar] [CrossRef]
- Hussain, F.; Abbas, S.G.; Fayyaz, U.U.; Shah, G.A.; Toqeer, A.; Ali, A. Towards a Universal Features Set for IoT Botnet Attacks Detection. arXiv 2020, arXiv:2012.00463. [Google Scholar]
- Storcheus, D.; Rostamizadeh, A.; Kumar, S. A survey of modern questions and challenges in feature extraction. In Proceedings of the Feature Extraction: Modern Questions and Challenges. PMLR, Montreal, QC, Canada, 11 December 2015; pp. 1–18. [Google Scholar]
- Stoian, N.A. Machine Learning for Anomaly Detection in IoT Networks: Malware Analysis on the IoT-23 Data Set. Bachelor Thesis, University of Twente, Enschede, The Netherlands, 2020. [Google Scholar]
- Hegde, M.; Kepnang, G.; Al Mazroei, M.; Chavis, J.S.; Watkins, L. Identification of Botnet Activity in IoT Network Traffic Using Machine Learning. In Proceedings of the 2020 International Conference on Intelligent Data Science Technologies and Applications (IDSTA), Valencia, Spain, 19–22 October 2020; pp. 21–27. [Google Scholar]
- Nobakht, M.; Javidan, R.; Pourebrahimi, A. DEMD-IoT: A deep ensemble model for IoT malware detection using CNNs and network traffic. Evol. Syst. 2022, 1–17. [Google Scholar] [CrossRef]
- Alani, M.M.; Miri, A. Towards an Explainable Universal Feature Set for IoT Intrusion Detection. Sensors 2022, 22, 5690. [Google Scholar] [CrossRef] [PubMed]
- Douiba, M.; Benkirane, S.; Guezzaz, A.; Azrour, M. An improved anomaly detection model for IoT security using decision tree and gradient boosting. J. Supercomput. 2022, 1–20. [Google Scholar] [CrossRef]
- Kumar, S.; Sahoo, S.; Mahapatra, A.; Swain, A.K.; Mahapatra, K.K. Security enhancements to system on chip devices for IoT perception layer. In Proceedings of the 2017 IEEE International Symposium on Nanoelectronic and Information Systems (iNIS), Bhopal, India, 18–20 December 2017; pp. 151–156. [Google Scholar]
- Chéour, R.; Khriji, S.; Abid, M.; Kanoun, O. Microcontrollers for IoT: Optimizations, computing paradigms, and future directions. In Proceedings of the 2020 IEEE 6th World Forum on Internet of Things (WF-IoT), New Orleans, LA, USA, 2–16 June 2020; pp. 1–7. [Google Scholar]
- d’Orazio, L.; Lallet, J. Semantic caching framework: An FPGA-based application for IoT security monitoring. Open J. Internet Things (OJIoT) 2018, 4, 150–157. [Google Scholar]
- van Long, N.H.; Lallet, J.; Casseau, E.; d’Orazio, L. Mascara (ModulAr semantic caching framework) towards FPGA acceleration for IoT security monitoring. In Proceedings of the International Workshop on Very Large Internet of Things (VLIoT 2020), Tokyo, Japan, 4 September 2020. [Google Scholar]
- Wielgosz, M.; Karwatowski, M. Mapping neural networks to FPGA-based IoT devices for ultra-low latency processing. Sensors 2019, 19, 2981. [Google Scholar] [CrossRef] [Green Version]
- Kalantar, A.; Zimmerman, Z.; Brisk, P. FA-LAMP: Fpga-accelerated learned approximate matrix profile for time series similarity prediction. In Proceedings of the 2021 IEEE 29th Annual International Symposium on Field-Programmable Custom Computing Machines (FCCM), Orlando, FL, USA, 9–12 May 2021; pp. 40–49. [Google Scholar]
- Ioannou, L.; Fahmy, S.A. Network intrusion detection using neural networks on FPGA SoCs. In Proceedings of the 2019 29th International Conference on Field Programmable Logic and Applications (FPL), Barcelona, Spain, 9–13 September 2019; pp. 232–238. [Google Scholar]
- Hossin, M.; Sulaiman, M.N. A review on evaluation metrics for data classification evaluations. Int. J. Data Min. Knowl. Manag. Process 2015, 5, 1. [Google Scholar]
- Dutta, V.; Choras, M.; Pawlicki, M.; Kozik, R. Detection of Cyberattacks Traces in IoT Data. J. Univers. Comput. Sci. 2020, 26, 1422–1434. [Google Scholar] [CrossRef]
- Idhammad, M.; Afdel, K.; Belouch, M. Dos detection method based on artificial neural networks. Int. J. Adv. Comput. Sci. Appl. 2017, 8, 465–471. [Google Scholar] [CrossRef]
Work | Publication Year | Dataset | Platform | Accuracy |
---|---|---|---|---|
[25] | 2019 | Self-generated | Radeon RX 460 GPU | 95.6%
[26] | 2020 | Modbus network | Lambda GPU | 99.5% |
[27] | 2020 | MQTTset | Core Intel i7 dual, 16 GB of RAM | 98.0% |
[28] | 2020 | CICIDS 2017 | Core Intel i7, 16 GB of RAM | 99.4% |
[29] | 2021 | Yahoo Webscope S5 | Google Colab | 99.6% |
[30] | 2021 | Bot-IoT | Not mentioned | 99.0% |
[31] | 2021 | Self-generated | Not mentioned | 97.0% |
Work | Publication Year | Dataset | Platform | Accuracy |
---|---|---|---|---|
[32] | 2020 | IoT-23, LITNET-2020 and NetML-2020 | Not mentioned | 99.0% |
[33] | 2022 | NSL-KDD, BoT-IoT, IoT-NI, IoT-23, MQTT and IoT-DS2 | Not mentioned | 99.8% |
[34] | 2020 | CICIDS2017, CTU-13 and IoT-23 | Not mentioned | 99.9% |
[36] | 2020 | IoT-23 | GTX 1060, 6 GB of RAM | 99.5%
[37] | 2020 | IoT-23 and Self-generated | Microsoft Azure | 99.8% |
[38] | 2022 | IoT-23 | Intel Core i7-9750H, 32 GB of RAM | 99.9% |
[39] | 2022 | TON-IoT, IoT-23 and IoT-ID | Not mentioned | 99.6% |
[40] | 2022 | NSL-KDD, IoT-23, BoT-IoT and Edge-IIoT | GPU GeForce MX330, 8 GB of RAM | 99.9% |
# | Label | Number of Records |
---|---|---|
1 | C&C-Mirai | 2 |
2 | PartOfAHorizontalPortScan-Attack | 5 |
3 | C&C-HeartBeat-FileDownload | 11
4 | FileDownload | 18 |
5 | C&C-Torii | 30 |
6 | C&C-FileDownload | 53 |
7 | C&C-HeartBeat-Attack | 834 |
8 | C&C-PartOfAHorizontalPortScan | 888 |
9 | Attack | 9398 |
10 | C&C | 21,995 |
11 | C&C-HeartBeat | 33,673 |
12 | DDoS | 19,538,713 |
13 | Benign | 30,858,735 |
14 | Okiru-Attack | 13,609,470 |
15 | Okiru | 47,381,241 |
16 | PartOfAHorizontalPortScan | 213,852,924 |
# | Label | Number of Records |
---|---|---|
1 | Worms | 174 |
2 | Shellcode | 1511 |
3 | Backdoors | 2329 |
4 | Analysis | 2677 |
5 | Reconnaissance | 13,987 |
6 | DoS | 16,353 |
7 | Fuzzers | 24,246 |
8 | Exploits | 44,525 |
9 | Generic | 215,481 |
10 | Benign | 2,218,761 |
# | IoT-23 | UNSW-NB15 | Inputs | Description |
---|---|---|---|---|
1 | id_orig.h | srcip | 4 | Source IP address |
2 | id_orig.p | sport | 1 | Source port number |
3 | id_resp.h | dstip | 4 | Destination IP address |
4 | id_resp.p | dsport | 1 | Destination port number |
5 | proto | proto | 1 | Transaction protocol |
Hardware Resources Usage
Approach | Resources | Utilisation | Available | Utilisation (%)
---|---|---|---|---
HLS | LUT | 13,089 | 53,200 | 24.60
HLS | LUTRAM | 242 | 17,400 | 1.39
HLS | FF | 16,224 | 106,400 | 15.25
HLS | BRAM | 42.5 | 140 | 30.36
HLS | DSP | 152 | 220 | 69.09
Verilog | LUT | 22,329 | 53,200 | 41.97
Verilog | LUTRAM | 1420 | 17,400 | 8.16
Verilog | FF | 28,304 | 106,400 | 26.60
Verilog | BRAM | 20 | 140 | 14.29
Verilog | DSP | 220 | 220 | 100
Timing and Power
Approach | F_max | Static | Dynamic | Total
---|---|---|---|---
HLS | 102.1 MHz | 0.15 W | 1.68 W | 1.83 W
Verilog | 101.2 MHz | 0.14 W | 1.39 W | 1.53 W
Hardware Resources Usage
Approach | Resources | Utilisation | Available | Utilisation (%)
---|---|---|---|---
HLS | LUT | 11,783 | 53,200 | 22.15
HLS | LUTRAM | 242 | 17,400 | 1.39
HLS | FF | 16,236 | 106,400 | 15.26
HLS | BRAM | 44 | 140 | 31.43
HLS | DSP | 152 | 220 | 69.09
Verilog | LUT | 28,004 | 53,200 | 52.64
Verilog | LUTRAM | 1412 | 17,400 | 8.11
Verilog | FF | 33,974 | 106,400 | 31.93
Verilog | BRAM | 20 | 140 | 14.29
Verilog | DSP | 219 | 220 | 99.55
Timing and Power
Approach | F_max | Static | Dynamic | Total
---|---|---|---|---
HLS | 102.5 MHz | 0.15 W | 1.64 W | 1.79 W
Verilog | 102.4 MHz | 0.14 W | 1.42 W | 1.56 W
Dataset | Accuracy (%) | Precision (%) | Recall (%) | F1-Score (%)
---|---|---|---|---|
UNSW-NB15 (1) | 98.57 | 90.11 | 97.09 | 93.47 |
IoT-23 (2) | 92.69 | 95.04 | 95.22 | 95.13 |
Dataset | Accuracy (%) | Precision (%) | Recall (%) | F1-Score (%)
---|---|---|---|---|
UNSW-NB15 (3) | 98.43 | 87.95 | 98.01 | 92.71 |
IoT-23 (4) | 99.66 | 99.97 | 99.65 | 99.81 |
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Ngo, D.-M.; Lightbody, D.; Temko, A.; Pham-Quoc, C.; Tran, N.-T.; Murphy, C.C.; Popovici, E. HH-NIDS: Heterogeneous Hardware-Based Network Intrusion Detection Framework for IoT Security. Future Internet 2023, 15, 9. https://doi.org/10.3390/fi15010009