EIOT Full Notes
Master Out Slave In (MOSI): Signal line carrying the data from the master to the slave device. It is
also known as Slave Input/Slave Data In (SI/SDI).
Master In Slave Out (MISO): Signal line carrying the data from the slave to the master device. It is
also known as Slave Output/Slave Data Out (SO/SDO).
Serial Clock (SCLK): Signal line carrying the clock signal.
Slave Select (SS): Signal line for selecting the slave device. It is an active-low signal.
The master device is responsible for generating the clock signal. It selects the required
slave device by asserting the corresponding slave device's slave select signal 'LOW'.
SPI devices contain a certain set of registers.
• The serial peripheral control register holds the various configuration parameters like
master slave selection for the device, baud rate selection for communication, clock signal
control, etc.
• The status register holds the status of various conditions for - transmission and reception.
• SPI works on the principle of 'Shift Register'. The master and slave devices contain a
special shift register for the data to transmit or receive. During transmission from the
master to slave, the data in the master's shift register is shifted out to the MOSI pin and it
enters the shift register of the slave device through the MOSI pin of the slave device.
• At the same time the shifted-out data bit from the slave device's shift register enters the
shift register of the master device through MISO pin.
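The shift-register principle described above can be simulated in plain C. This is an illustrative sketch of the mechanism only, not driver code, and the function name `spi_exchange` is our own:

```c
#include <stdint.h>

/* Simulation of one full-duplex SPI byte exchange (MSB first).
 * On each clock pulse the master's MSB shifts out on MOSI into the
 * slave's shift register, while the slave's MSB shifts out on MISO
 * into the master's register. After 8 clocks the two registers have
 * effectively swapped their contents. */
uint8_t spi_exchange(uint8_t *master_reg, uint8_t *slave_reg)
{
    for (int clk = 0; clk < 8; clk++) {
        uint8_t mosi = (uint8_t)((*master_reg >> 7) & 1); /* bit leaving the master */
        uint8_t miso = (uint8_t)((*slave_reg  >> 7) & 1); /* bit leaving the slave  */
        *master_reg = (uint8_t)((*master_reg << 1) | miso);
        *slave_reg  = (uint8_t)((*slave_reg  << 1) | mosi);
    }
    return *master_reg; /* byte the master received from the slave */
}
```

After `spi_exchange(&m, &s)` with m = 0xA5 and s = 0x3C, m holds 0x3C and s holds 0xA5: transmission and reception happen simultaneously, which is why SPI is full duplex.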
LPC2148 SPI
• LPC2148 has two inbuilt SPI modules, i.e. SPI0 and SPI1.
• SPI0 supports variable-length transfers of 8 to 16 bits of data per transfer.
• The SPI0 module is compatible with the Motorola SPI format.
• SPI1 supports variable-length frames of 4 to 16 bits.
• The SPI1 module is also compatible with the Motorola SPI format.
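A minimal SPI0 master setup on the LPC2148 can be sketched as below. This is a hedged illustration based on the LPC2148 user manual, not a tested driver: the register names (PINSEL0, S0SPCR, S0SPSR, S0SPDR, S0SPCCR) follow the common `lpc214x.h` header, and the clock divisor of 60 is an arbitrary example value. Verify all of this against the datasheet and your toolchain before use.

```c
#include <stdint.h>
#include <lpc214x.h>   /* register definitions; header name varies by toolchain */

void spi0_init(void)
{
    PINSEL0 |= 0x00001500;   /* P0.4 = SCK0, P0.5 = MISO0, P0.6 = MOSI0 */
    S0SPCCR  = 60;           /* SCK = PCLK / 60 (value must be even, >= 8) */
    S0SPCR   = (1 << 5);     /* MSTR = 1: master mode; CPOL = CPHA = 0 */
}

uint8_t spi0_transfer(uint8_t data)
{
    S0SPDR = data;                  /* writing the data register starts the transfer */
    while (!(S0SPSR & (1 << 7)))    /* wait for SPIF: transfer complete */
        ;
    return (uint8_t)S0SPDR;         /* byte shifted in from the slave */
}
```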
I2C (Inter Integrated Circuit)
I2C is a serial bus interface connection protocol first invented by Philips Semiconductor.
It is also called TWI (two-wire interface) since it uses only two wires for communication.
I2C uses a handshaking mechanism for communication and is therefore also called an
acknowledgment-based communication protocol. It is a synchronous, bidirectional, half-duplex
(one-directional communication at a given point of time) two-wire serial interface bus.
Two wires of the I2C interface are SDA (serial data) and SCL (serial clock).
• SDA is Serial Data wire used for data transfer in between master and slave
• SCL is Serial Clock wire used for clock synchronization. Clock is provided by the
master
The I2C bus is a shared bus system to which a number of I2C devices can be connected.
Devices connected to the I2C bus can act as either 'Master' device or 'Slave' device.
• The 'Master' device is responsible for controlling the communication by
initiating/terminating data transfer, sending data and generating necessary synchronisation
clock pulses.
• The 'Slave' devices wait for the commands from the master and respond upon receiving
the commands.
I2C works in two modes namely,
• Master Mode
Master is responsible for generating clock and initiating communication
• Slave Mode
Slave receives the clock and responds when addressed by the Master
The sequence of operations for communicating with an I2C slave device is listed below:
1. The master device pulls the clock line (SCL) of the bus to 'HIGH'
2. The master device pulls the data line (SDA) 'LOW' while the SCL line is at logic 'HIGH'
(This is the 'Start' condition for data transfer)
3. The master device sends the address (7 bit or 10 bit wide) of the 'slave' device to which it
wants to communicate, over the SDA line. Clock pulses are generated at the SCL line for
synchronizing the bit reception by the slave device. The MSB of the data is always transmitted
first. The data on the bus is valid during the 'HIGH' period of the clock signal.
4. The master device sends the Read or Write bit (Bit value = 1 Read operation; Bit value = 0
Write operation) according to the requirement
5. The master device waits for the acknowledgement bit from the slave device whose address was
sent on the bus along with the Read/Write operation command. Slave devices connected to the
bus compare the received address with the address assigned to them.
6. The slave device with the address requested by the master device responds by sending an
acknowledge bit (Bit value = 1) over the SDA line
7. Upon receiving the acknowledge bit, the master device sends the 8-bit data to the slave device
over the SDA line, if the requested operation is 'Write to device'. If the requested operation is 'Read
from device', then the slave device sends data to the master over the SDA line.
8. The master device waits for the acknowledgement bit from the device upon byte transfer
complete for a write operation and sends an acknowledge bit to the Slave device for a read
operation
9. The master device terminates the transfer by pulling the SDA line 'HIGH' when the clock line
SCL is at logic 'HIGH'
10. The I2C bus supports three different data rates. They are: Standard mode (data rate up to
100 kbits/sec (100 kbps)), Fast mode (data rate up to 400 kbits/sec (400 kbps)) and High-speed
mode (data rate up to 3.4 Mbits/sec (3.4 Mbps)).
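Steps 3 to 6 above hinge on how the 7-bit slave address and the R/W bit share the first byte on the bus. This can be sketched in C; the helper names are our own, for illustration only:

```c
#include <stdint.h>

#define I2C_WRITE 0  /* R/W bit = 0: master writes to the slave */
#define I2C_READ  1  /* R/W bit = 1: master reads from the slave */

/* In 7-bit addressing, the first byte sent after the Start condition
 * carries the slave address in bits 7..1 and the R/W bit in bit 0. */
uint8_t i2c_first_byte(uint8_t addr7, uint8_t rw)
{
    return (uint8_t)((uint8_t)(addr7 << 1) | (rw & 1u));
}

/* Each slave compares the address bits of the received byte
 * with its own assigned address. */
int i2c_addr_match(uint8_t first_byte, uint8_t my_addr7)
{
    return (uint8_t)(first_byte >> 1) == my_addr7;
}
```

For example, addressing a device at 0x50 for a write puts 0xA0 on the bus, while a read puts 0xA1; only the slave whose assigned address is 0x50 responds with an acknowledge.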
LPC2148 I2C
USB (Universal Serial Bus)
Inside the USB cable there are two wires that supply power to the peripherals, +5 volts (red)
and ground (brown), and a twisted pair (yellow and blue) of wires to carry the data.
USB supports four different types of data transfers, namely; Control, Bulk, Isochronous and
Interrupt.
• Control transfer is used by USB system software to query, configure and issue commands
to the USB device.
• Bulk transfer is used for sending a block of data to a device. Bulk transfer supports error
checking and correction. Transferring data to a printer is an example for bulk transfer.
• Isochronous data transfer is used for real-time data communication. In Isochronous
transfer, data is transmitted as streams in real-time. Isochronous transfer doesn't support
error checking and re-transmission of data in case of any transmission loss. All streaming
devices like audio devices and medical equipment for data collection make use of the
isochronous transfer.
• Interrupt transfer is used for transferring small amounts of data. The interrupt transfer
mechanism makes use of a polling technique to see whether the USB device has any data to
send. The frequency of polling is determined by the USB device and it varies from 1 to 255
milliseconds. Devices like mouse and keyboard, which transmit small amounts of data,
use interrupt transfer.
CAN Framing
A CAN network consists of multiple CAN nodes. In the above case, we have considered three
CAN nodes, named node A, node B, and node C. A CAN node consists of three
elements, which are given below:
• Host: A host is a microcontroller or microprocessor which is running some application to do
a specific job. A host decides what the received message means and what message it should
send next.
• CAN Controller: CAN controller deals with the communication functions described by the
CAN protocol. It also triggers the transmission, or the reception of the CAN messages.
• CAN transceiver: It is responsible for the transmission and reception of data on the
CAN bus. It converts the data stream collected from the CAN bus into signals that
the CAN controller can understand, and vice versa.
In the above diagram, unshielded twisted pair cable is used to transmit or receive the data. It is also
known as CAN bus, and CAN bus consists of two lines, i.e., CAN low line and CAN high line,
which are also known as CANH and CANL, respectively. The transmission occurs due to the
differential voltage applied to these lines. The CAN uses twisted pair cable and differential voltage
because of its environment. For example, in a car, the motor, ignition system, and many other devices
can cause data loss and data corruption due to noise. The twisting of the two lines also reduces the
magnetic field. The bus is terminated with 120Ω resistance at each end.
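The differential signalling on CANH/CANL can be sketched as a simple receiver decision in C. This is illustrative only: the nominal 3.5 V/1.5 V dominant levels and the 900 mV decision threshold follow the ISO 11898-2 high-speed CAN convention, and the function name is ours:

```c
/* A dominant bit (logic 0) drives CANH to about 3.5 V and CANL to about
 * 1.5 V; a recessive bit (logic 1) leaves both lines near 2.5 V. The
 * receiver looks only at the difference CANH - CANL, so common-mode
 * noise coupled equally into both wires of the twisted pair cancels.
 * Voltages are given in millivolts. */
int can_bit_from_levels(int canh_mv, int canl_mv)
{
    return (canh_mv - canl_mv > 900) ? 0 /* dominant */ : 1 /* recessive */;
}
```

Note that adding the same noise voltage to both lines leaves the difference, and hence the decoded bit, unchanged; this is why the noisy automotive environment calls for a differential twisted pair.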
ETHERNET
Ethernet is a protocol that allows computers (from servers to laptops) to talk to each other
over wired networks that use devices like routers, switches and hubs to direct traffic. Ethernet works
seamlessly with wireless protocols, too.
It allows organizations to use the same Ethernet protocol in their local area network (LAN)
and their wide-area network (WAN). That means that it works well in data centers, in private or
internal company networks, for internet applications and almost anything in between.
Ethernet works by breaking up information being sent to or from devices, like a personal
computer, into short, variable-sized pieces of information called frames. Those frames contain
standardized information such as the source and destination address that helps the frame route its
way through a network.
And because computers on a LAN typically shared a single connection, Ethernet was built
around the principle of carrier-sense multiple access with collision detection (CSMA/CD). Basically,
the protocol makes sure that the line is not in use before sending any frames out.
Ethernet frame
An Ethernet frame contains three parts: an Ethernet header (Preamble, SFD, Destination,
Source, and Type), encapsulated data (Data and Pad), and an Ethernet trailer (FCS).
The preamble field
It is 7 bytes long, each byte storing the alternating bit pattern '10101010'. The
preamble bytes help the receiving device to identify the beginning
of an Ethernet frame.
The SFD field
The SFD (Start Frame Delimiter) field is 1 byte long. This byte stores the same
alternating pattern, except that the last bit is 1 instead of 0, giving '10101011'.
The SFD byte indicates to the receiving device that the next byte is the destination MAC address of
the Ethernet frame.
Destination MAC address
This field is 6 bytes long. It contains the MAC address of the destination device. MAC
address is 6 bytes or 48 bits (1 byte = 8 bits, 6x8 = 48bits) long. For convenience, usually, it is
written as 12-digit hexadecimal numbers (such as 0000.0A12.1234).
Source MAC address
This field is also 6 bytes long. It contains the MAC address of the source device. It helps
the receiving device in identifying the source device.
Type field
This field is 2 bytes long. This field stores information about the protocol of the upper
layer (network layer). The data link layer of the destination computer can easily determine the
upper layer protocol to which it should hand over the received frame.
Data and Pad field
This field stores the encapsulated data of the upper layer. This field has a size limit of 46
bytes (minimum) to 1500 bytes (maximum). If data is less than the minimum requirement,
padding is added. If data is more than the maximum limit, extra data is packed in the next packet.
Frame Check Sequence
This field stores a 4 bytes value generated by CRC (Cyclic Redundancy Check)
Algorithm that is used to check whether the received frame is correct or not.
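The FCS computation can be sketched in C as a straightforward bitwise CRC-32, using the reflected polynomial 0xEDB88320 specified for Ethernet. Real network controllers compute this in hardware; the function name here is our own:

```c
#include <stdint.h>
#include <stddef.h>

/* Bitwise CRC-32 as used for the Ethernet FCS: reflected polynomial
 * 0xEDB88320, initial value 0xFFFFFFFF, final XOR with 0xFFFFFFFF. */
uint32_t ethernet_crc32(const uint8_t *data, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];                      /* fold in the next byte */
        for (int bit = 0; bit < 8; bit++)    /* process it bit by bit */
            crc = (crc >> 1) ^ (0xEDB88320u & (uint32_t)-(int32_t)(crc & 1u));
    }
    return crc ^ 0xFFFFFFFFu;
}
```

The sender appends this 4-byte value as the trailer; the receiver recomputes it over the received frame and discards the frame on a mismatch. The well-known CRC-32 check value holds: running the function over the ASCII bytes of "123456789" yields 0xCBF43926.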
WI-FI (802.11)
The IEEE 802.11 standard, also known as Wi-Fi, is designed for use in a
limited geographical area (homes, office buildings, campuses).
Wi-Fi is an Internet Protocol (IP) based wireless communication technique; each device
is identified by an IP address, which is unique to each device on the network. Wi-Fi based
communications require an intermediate agent called a Wi-Fi router/wireless access point to
manage the communications.
The Wi-Fi router is responsible for restricting access to a network, assigning IP addresses
to devices on the network, and routing data packets to the intended devices on the network. Wi-Fi
enabled devices contain a wireless adaptor for transmitting and receiving data in the form of radio
signals through an antenna. The hardware part of it is known as Wi-Fi Radio.
PHYSICAL PROPERTIES:
• The original 802.11 standard defined two radio-based physical layer standards,
one using frequency hopping and the other using direct sequence. Both provide up
to 2 Mbps.
• The standard 802.11a transmits at 5 GHz and supports 54 Mbps using the
orthogonal frequency division multiplexing (OFDM) technique.
• The standard 802.11b transmits in the 2.4 GHz frequency band of the radio
spectrum. It can handle up to 11 megabits of data per second, and it uses
complementary code keying (CCK) modulation to improve speeds.
• The standard 802.11g transmits at 2.4 GHz like 802.11b, but supports 54 Mbps
with orthogonal frequency division multiplexing (OFDM) technique.
• The standard 802.11n is the most widely available of the standards and is
backward compatible with a, b and g. It supports up to 150 Mbps.
Bluetooth
Bluetooth is a low-cost, low-power, short-range wireless technology for data and voice
communication. Bluetooth operates at 2.4 GHz in the Radio Frequency spectrum. This frequency
band is reserved for industrial, scientific and medical (ISM) devices. Bluetooth uses the Frequency
Hopping Spread Spectrum (FHSS) technique for communication. It supports a data rate
of up to 1 Mbps and a range of approximately 30 feet (10 m) for data communication.
The protocols in the Bluetooth standard can be loosely grouped into the physical layer, data
link layer, middleware layer, and application layer
Bluetooth Frame consists of three main fields namely Access code, Packet header, and payload.
Access Code: It is the first field of the frame structure and is 72 bits in size. It is
divided into three parts: the first part is the preamble, which is 4 bits in size; the second part is
the synchronization word, which is 64 bits; and the third part is a trailer, which is 4 bits in size.
The Access Code field is used for timing synchronization and piconet identification.
Packet header: Its size is 54 bits. It contains six subfields. The first field is an address,
3 bits in size, which can define up to 7 slaves. The second field is Type, 4 bits in size,
used to identify the type of data. The third subfield is Flow, used for flow control. The fourth
field is ARQN, used for acknowledgement. The fifth part is SEQN, which contains sequence
numbers of frames, and the sixth field is HEC, used to detect errors in the header.
Payload: This field can be 0-2744 bits long and its structure depends on the type of link
established.
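The packet-header subfields can be sketched as a bit-unpacking routine in C. Note that the 54 bits on air are an 18-bit header transmitted with a rate-1/3 repetition FEC (each bit sent three times); this sketch unpacks the 18 information bits with the address field placed in the least-significant bits. The struct and function names are our own, and the exact over-the-air bit ordering is defined by the Bluetooth Core Specification:

```c
#include <stdint.h>

/* The 18 information bits of the packet header:
 * address (3) + type (4) + flow (1) + ARQN (1) + SEQN (1) + HEC (8). */
struct bt_header {
    uint8_t addr;  /* 3-bit member address; can distinguish up to 7 slaves */
    uint8_t type;  /* 4-bit packet type */
    uint8_t flow;  /* 1-bit flow control */
    uint8_t arqn;  /* 1-bit acknowledgement */
    uint8_t seqn;  /* 1-bit sequence number */
    uint8_t hec;   /* 8-bit header error check */
};

struct bt_header bt_parse_header(uint32_t bits18)
{
    struct bt_header h;
    h.addr = (uint8_t)( bits18        & 0x07u);
    h.type = (uint8_t)((bits18 >>  3) & 0x0Fu);
    h.flow = (uint8_t)((bits18 >>  7) & 0x01u);
    h.arqn = (uint8_t)((bits18 >>  8) & 0x01u);
    h.seqn = (uint8_t)((bits18 >>  9) & 0x01u);
    h.hec  = (uint8_t)((bits18 >> 10) & 0xFFu);
    return h;
}
```

The field widths add up to 3 + 4 + 1 + 1 + 1 + 8 = 18 bits, which tripled by the FEC gives the 54-bit header size quoted above.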
UNIT 3
EMBEDDED SYSTEM SOFTWARE DESIGN
Application Software, System Software, Design techniques –
State diagrams, sequence diagrams, flowcharts, etc., Model-
based system engineering (MBSE), Use of High-Level
Languages embedded C / C++ Programming, Integrated
Development Environment tools- Editor- Compiler - Linker-
Automatic Code Generators- Debugger- Board Support
Library- Chip Support Library, Analysis and Optimization-
Execution Time- Energy & Power.
Model-based system engineering (MBSE)
Model-Based Systems Engineering is a model-based approach
where a system is represented through different models that capture the
system’s behavior, functions, and physical characteristics. MBSE
provides a structured framework for the development of the system
through its lifecycle, enabling various stakeholders to collaborate and
communicate using a common language. The models in MBSE are
typically created using specialized software tools that can simulate and
analyze the system’s behavior.
One of the primary benefits of MBSE is that it helps to reduce
errors and inconsistencies in system design and development. MBSE
facilitates better communication and collaboration among stakeholders,
allowing for faster and more accurate decision-making. Additionally,
MBSE enables the system to be developed more efficiently, reducing
overall costs and improving the quality of the final product.
MBSE also offers greater flexibility and adaptability than traditional systems engineering (TSE). As the
system evolves, the models in MBSE can be updated and refined to reflect
the changes, allowing for greater agility in the development process.
Additionally, the use of standardized modeling languages and tools in
MBSE makes it easier for stakeholders to understand the system and its
components, reducing the risk of miscommunication and
misunderstandings.
Comparison of TSE and MBSE
The main difference between TSE and MBSE lies in their
approach to system development. TSE is a document-based approach
that relies on the creation and maintenance of multiple documents to
capture system information. In contrast, MBSE is a model-based
approach that uses models to represent different aspects of the system,
allowing for greater accuracy and consistency.
Another key difference between TSE and MBSE is their level of
efficiency. TSE can be a time-consuming and costly process, requiring
the creation and maintenance of multiple documents. In contrast,
MBSE enables the system to be developed more efficiently, reducing
overall costs and improving the quality of the final product.
Modeling Language
One of the key components of MBSE is the modeling language used to
create system models. The modeling language is a formal notation used to capture
the structure, behavior, and requirements of a system. MBSE can use different
modeling languages such as SysML (Systems Modeling Language), UML
(Unified Modeling Language), or specific domain-specific modeling languages,
depending on the system and the project requirements.
Model Management Tools
Model management tools are another key component of MBSE. These tools
allow engineers to create, organize, and manage system models throughout the
entire systems engineering process. Model management tools provide a graphical
user interface that enables engineers to easily create and modify system models,
as well as to view and analyze the results of simulations and analysis.
Simulation and Analysis Tools
Simulation and analysis tools are also key components of MBSE. These
tools enable engineers to perform simulations and analyses on system models to
better understand and optimize the system’s performance. By using simulations
and analysis tools, engineers can identify potential issues with a system’s design
and develop solutions to mitigate them.
Requirements Management Tools
Another critical component of MBSE is requirements management tools. These
tools allow engineers to capture, manage, and trace system requirements
throughout the entire systems engineering process. By using requirements
management tools, engineers can ensure that system requirements are accurately
captured and that they are being met by the system design.
Integration with other Tools and Systems
Finally, integration with other tools and systems is a critical component of
MBSE. MBSE should be integrated with other tools and systems, including
project management tools, configuration management tools, and software
development tools. This integration ensures that all aspects of the project are
being effectively managed.
Benefits of MBSE
• Improved Communication and Collaboration
• Increased Efficiency and Cost Savings
• Improved System Quality and Performance
• Better Requirements Management
• Enhanced System Understanding and Documentation
• Improved Risk Management
Integrated development environments (IDE):
Integrated development environments (IDEs) are applications that
facilitate the development of other applications. Designed to encompass all
programming tasks in one application, an IDE's main benefit is that
it offers a central interface with all the tools a developer needs, including:
Code editor: Designed for writing and editing source code, these editors are
distinguished from text editors because they work to simplify or enhance the
process of writing and editing code for developers.
Compiler: Compilers transform source code that is written in a human
readable/writable language into object (machine) code that computers can execute.
Linker: A Linker is a computer program that takes one or more object files
generated by a compiler and combines them into a single executable file
Automatic Code Generators: Automatic code generation is a technique by
which software developers can reduce the manual effort required to write source
code. It involves using specialized tools to generate code automatically based on
a model or input data. The end goal is to minimize the time required to write the
code and to reduce the risk of manual coding errors.
Debugger: Debuggers are used during testing and can help developers debug their
application programs.
Board Support Library (BSL): It is a set of application programming interfaces
(APIs) used to configure and control all on-board devices. Some of the advantages
offered by the BSL include: device ease of use, a level of compatibility between
devices, shortened development time, portability, some standardization, and
hardware abstraction.
The Chip Support Library (CSL): It provides an application programming
interface (API) used for configuring and controlling the Processor on-chip
peripherals for ease of use, compatibility between various devices and hardware
abstraction.
• The embedded firmware is written in any high level language like C, C++
• The absolute object file, created from the object files corresponding to
different source code modules, contains information about the address at which
each instruction needs to be placed in code memory.
Definition of IOT:
The Internet of Things (IoT) describes the network of physical
objects(“things”) that are embedded with sensors, software, and other
technologies for the purpose of connecting and exchanging data with other devices
and systems over the internet. These devices range from ordinary household
objects to sophisticated industrial tools.
Characteristics of the Internet of Things:
1. Connectivity
Connectivity is an important requirement of the IoT infrastructure. Things
of IoT should be connected to the IoT infrastructure. Anyone, anywhere, anytime
should be able to connect, and this should be guaranteed at all times. For example, the connection
between people through Internet devices like mobile phones, and other gadgets,
also a connection between Internet devices such as routers, gateways, sensors,
etc.
2. Intelligence and Identity
The extraction of knowledge from the generated data is very important. For
example, a sensor generates data, but that data will only be useful if it is interpreted
properly. Each IoT device has a unique identity. This identification is helpful in
tracking the equipment and at times for querying its status.
3. Scalability
The number of elements connected to the IoT zone is increasing day by day.
Hence, an IoT setup should be capable of handling the massive expansion. The
data generated as an outcome is enormous, and it should be handled appropriately.
4. Dynamic and Self-Adapting (Complexity)
IoT devices should dynamically adapt themselves to changing contexts and
scenarios. Assume a camera meant for surveillance. It should be adaptable to work
in different conditions and different light situations (morning, afternoon, and
night).
5. Architecture
IoT architecture cannot be homogeneous in nature. It should be hybrid,
supporting products from different manufacturers functioning together in the IoT network. IoT is
a reality when multiple domains come together.
6. Safety
There is a danger of the sensitive personal details of the users getting
compromised when all his/her devices are connected to the internet. This can cause
a loss to the user. Hence, data security is a major challenge. Besides, the amount of
equipment involved is huge. IoT networks may also be at risk. Therefore,
equipment safety is also critical.
7. Self Configuring
IoT devices are able to upgrade their software in accordance with
requirements with a minimum of user participation. Additionally, they can set up
the network, allowing for the addition of new devices to an already-existing
network.
8. Interoperability
IoT devices use standardized protocols and technologies to ensure they can
communicate with each other and other systems
MQTT (Message Queuing Telemetry Transport): MQTT is a
publish/subscribe communication protocol used for IoT device communication.
CoAP (Constrained Application Protocol): CoAP is a lightweight
communication protocol for IoT devices with limited resources.
Bluetooth Low Energy (BLE): BLE is a wireless communication
technology used for IoT devices with low power consumption requirements.
Wi-Fi: A wireless communication technology used for IoT devices that
require high data transfer rates.
Zigbee: A low-power, low-cost wireless communication technology used
for IoT devices.
9. Autonomous operation
Autonomous operation refers to the ability of IoT devices and systems to
operate independently and make decisions without human intervention. It enables
a wide range of new applications and services.
10. Data-driven
Data-driven is a key characteristic of the IoT. IoT devices and systems collect
vast amounts of data from sensors and other sources, which can be analyzed and
used to make data-driven decisions.
11. Security
Security is a critical concern for the IoT, as IoT devices and systems handle
sensitive data and are connected to critical infrastructure. The increasing number
of connected devices and the amount of data being transmitted over the Internet
make IoT systems a prime target for cyberattacks.
12. Ubiquity
Ubiquity refers to the widespread and pervasive presence of the Internet of
Things (IoT) devices and systems in our daily lives. The goal of IoT is to create a
seamless and interconnected world where devices and systems can communicate
and share data seamlessly and transparently.
13. Context Awareness
Context awareness refers to the ability of Internet of Things (IoT) devices
and systems to understand and respond to the environment and context in which
they are operating. This is achieved through the use of sensors and other
technologies that can detect and collect data about the environment.
Technical Building blocks of IoT
Sensors:
• These form the front end of the IoT devices. These are the so-called “Things”
of the system. Their main purpose is to collect data from their surroundings
(sensors) or give out data to their surroundings (actuators).
• These have to be uniquely identifiable devices with a unique IP address so
that they can be easily identifiable over a large network.
• These have to be active in nature which means that they should be able to
collect real-time data. These can either work on their own (autonomous in
nature) or can be made to work by the user depending on their needs (user-
controlled).
• Examples of sensors are gas sensor, water quality sensor, moisture sensor,
etc.
Processors:
• Processors are the brain of the IoT system. Their main function is to process
the data captured by the sensors so as to extract the valuable
information from the enormous amount of raw data collected. In a word, we can say
that they give intelligence to the data.
• Processors mostly work on a real-time basis and can be easily controlled by
applications. They are also responsible for securing the data, that is,
performing encryption and decryption of data.
• Embedded hardware devices and microcontrollers are the ones that process the
data, because they have processors attached to them.
Gateways:
• Gateways are responsible for routing the processed data and sending it to the proper
locations for its (data) proper utilization.
• In other words, we can say that the gateway helps in the to-and-fro communication
of the data. It provides network connectivity to the data. Network
connectivity is essential for any IoT system to communicate.
• LAN, WAN, PAN, etc. are examples of the networks that gateways connect.
Applications:
• Applications form the other end of an IoT system. Applications are essential
for the proper utilization of all the data collected.
• These are cloud-based applications which are responsible for rendering
effective meaning to the data collected. Applications are controlled by users
and are the delivery point of particular services.
• Examples of applications are home automation apps, security systems,
industrial control hub, etc.
In Figure, the extreme right block forms the application end of the IoT system.
Communication Technologies:
In IoT, communication refers to the exchange of information and data
between various devices, systems, and networks. Internet of Things devices, such
as actuators, sensors, and other smart devices, rely on communication to interact
with one another and external systems, such as cloud-based platforms and mobile
apps. Communication is an important aspect of IoT because it allows devices to
collaborate to achieve common goals.
For example, sensors in an IoT system might collect data on environmental
conditions like temperature, humidity, and light levels and send it to a cloud platform
for storage and analysis. The platform may then share that data with other IoT
devices, such as a smart thermostat or lighting system, allowing them to make
adjustments to the environment based on the data.
Types of Communications in IoT:
1) Device-to-Device Communication
This type of communication takes place between two or more Internet of Things
(IoT) devices, such as a smart thermostat that's communicating with a smart lighting
system to adjust the lighting and temperature in a room. The devices communicate
with one another to coordinate their actions in response to environmental factors like
temperature and light levels.
A smart home system, for example, may include several devices such as a
thermostat, a lighting system, and a safety system. These devices have to interact with
each other in order to keep the home safe, comfortable, and secure.
2) Device-to-Cloud Communication
This type of communication occurs between an IoT device and a cloud-based
platform, in which the device sends data to the cloud for storage and analysis. A smart
home security camera, for example, sends video to a cloud-based platform for remote
viewing, storage and analysis in the cloud, which can be accessed from anywhere
in the world. This allows users to remotely monitor and control their IoT devices even
when they are away from home.
3) Cloud-to-Device Communication
This kind of communication takes place when a cloud-based platform sends
data or commands to an IoT device. A weather tracking platform, for example, may
send commands to a smart irrigation system to adjust watering schedules based on
upcoming weather conditions. This type of communication is important in IoT
because it allows users to control their IoT devices from anywhere in the world. This
can be useful in scenarios where users need to make changes to their IoT devices in
response to changing conditions or events.
4) Peer-to-Peer Communication
This type of communication takes place between IoT devices without the use
of a cloud platform. This is useful in situations where cloud-based communication is
not possible or desirable, such as in remote or secure locations. A group of IoT
devices, for example, may communicate with one another in a peer-to-peer network
to share data and coordinate their actions. This is useful in manufacturing, where
machines must communicate with one another to optimize production processes.
5) Machine-to-Machine Communication
This type of communication takes place between machines without the need
for human intervention. Communication between IoT devices and non-IoT
machines, such as a manufacturing robot communicating with a conveyor belt
system to optimize production processes, is one example. This is useful in
situations where human intervention is neither practical nor desirable, such as
manufacturing or logistics.
Physical design of IoT
The physical design of an IoT system refers to the things/devices and
protocols that are used to build an IoT system. All these things/devices are called
node devices, and every device has a unique identity and performs remote sensing,
actuating, and monitoring work. The protocols are used to establish
communication between the node devices and servers over the internet.
Things/Devices
Things/devices are used to build connections, process data, and provide interfaces, storage,
and graphical interfaces in an IoT system. They generate data in a form that can be analyzed by an
analytical system and used to improve the system's operation.
1)Connectivity
Devices like USB hosts and Ethernet are used for connectivity between
the devices and the server.
2)Processor
A processor like a CPU and other units are used to process the data. This
data is further used to improve the decision quality of an IoT system.
3)Audio/Video Interfaces
Interfaces like HDMI and RCA are used to record audio and video
in a system.
4)Input/Output interface
To exchange input and output signals with sensors and actuators, we use
interfaces like UART, SPI, CAN, etc.
5)Storage Interfaces
Interfaces like SD, MMC, and SDIO are used to store the data generated by
an IoT device. Other components like DDR memory and the GPU are used to control the
activity of an IoT system.
IoT Protocols
These protocols establish communication between a node device
and a server over the internet. They help to send commands to an IoT device and to
receive data from it over the internet. Different protocols are present on both the
server and the client side, and they are managed by the network layers: application,
transport, network, and link.
1) Application Layer protocol
In this layer, protocols define how data can be sent over the network with
the lower-layer protocols using the application interface. These protocols include
HTTP, WebSocket, XMPP, MQTT, DDS, and AMQP.
a) HTTP
Hypertext Transfer Protocol is an application-layer protocol for transmitting
hypermedia documents. It is used for communication between web browsers and servers.
A client makes a request to a server and then waits until it receives a response;
HTTP is stateless, so the server does not keep any data between two requests.
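As a sketch of this request/response pattern, the snippet below spins up a throwaway HTTP server on the loopback interface and issues a single stateless request; the endpoint path and JSON payload are made up for illustration.

```python
import http.server
import threading
import urllib.request

class SensorHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # Each request is served independently: HTTP itself is stateless.
        body = b'{"temperature": 22.5}'
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep the example quiet

# Port 0 asks the OS for any free port on 127.0.0.1.
server = http.server.HTTPServer(("127.0.0.1", 0), SensorHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

with urllib.request.urlopen(f"http://127.0.0.1:{port}/sensor") as resp:
    status, body = resp.status, resp.read().decode()
print(status, body)  # 200 {"temperature": 22.5}
server.shutdown()
```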
b) WebSocket
This protocol enables two-way communication between a client and a host, and it is
designed to be safe even for untrusted code running in a controlled environment. It
is commonly used by web browsers.
c) MQTT
MQTT is a machine-to-machine connectivity protocol designed as a lightweight
publish/subscribe messaging transport. It is used for remote locations where a
small code footprint is required.
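MQTT itself needs a real broker (such as Mosquitto) and a client library, but the publish/subscribe idea can be sketched in-memory; the topic name and payload below are hypothetical.

```python
from collections import defaultdict

class TinyBroker:
    """In-memory sketch of MQTT-style publish/subscribe (no network, no QoS)."""

    def __init__(self):
        self._subs = defaultdict(list)  # topic -> list of subscriber callbacks

    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)

    def publish(self, topic, payload):
        # Deliver the message to every subscriber of this topic.
        for cb in self._subs[topic]:
            cb(topic, payload)

broker = TinyBroker()
received = []
broker.subscribe("home/greenhouse/temp", lambda t, p: received.append((t, p)))
broker.publish("home/greenhouse/temp", "21.7")
print(received)  # [('home/greenhouse/temp', '21.7')]
```

The publisher never knows who is listening; decoupling sender from receiver is what makes the pattern suit constrained IoT devices.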
2)Transport Layer
This layer controls the flow of data segments and handles error
control. Its protocols also provide end-to-end message transfer capability
independent of the underlying network.
a) TCP
The Transmission Control Protocol defines how to establish
and maintain a connection over which two hosts can exchange data reliably using the
Internet Protocol.
b) UDP
The User Datagram Protocol is a connectionless protocol in the internet
protocol suite. It does not need to establish a connection before
transferring data.
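The connectionless nature of UDP can be seen with two loopback sockets: the sender simply fires a datagram at the receiver's port with no handshake (the payload is a made-up sensor reading).

```python
import socket

# Receiver: bind a UDP socket to an ephemeral loopback port.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
rx.settimeout(2)
port = rx.getsockname()[1]

# Sender: no connection setup at all -- just send the datagram.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"temp=21.7", ("127.0.0.1", port))

data, addr = rx.recvfrom(1024)
msg = data.decode()
print(msg)  # temp=21.7
tx.close()
rx.close()
```

A TCP version would need `listen()`, `accept()`, and `connect()` before any data moved, which is exactly the overhead UDP avoids.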
3)Network Layer
This layer sends datagrams from the source network to the destination
network. We use the IPv4 and IPv6 protocols for host identification; they transfer data
in packets.
IPv4
An IP address is a unique numerical label assigned to each device connected to the
network. It performs two main functions: host identification and location addressing.
An IPv4 address is 32 bits long.
IPv6
It is a successor of IPv4 that uses 128 bits for an IP address.
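Python's standard `ipaddress` module makes the 32-bit vs 128-bit difference concrete (the addresses below are just example values from documentation ranges):

```python
import ipaddress

v4 = ipaddress.ip_address("192.168.1.10")
v6 = ipaddress.ip_address("2001:db8::1")

# max_prefixlen is the address width in bits: 32 for IPv4, 128 for IPv6.
print(v4.version, v4.max_prefixlen)  # 4 32
print(v6.version, v6.max_prefixlen)  # 6 128

# Every address is ultimately an integer of that width.
print(int(v4) < 2**32, int(v6) < 2**128)  # True True
```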
4)Link Layer
Link-layer protocols are used to send data over the network’s physical layer.
This layer also determines how the packets are coded and signaled by the devices.
a) Ethernet
It is a set of technologies and protocols used primarily in LANs. It
defines the physical layer and the medium access control for wired Ethernet
networks.
b) Wi-Fi
It is a set of wireless LAN protocols that specifies the media access control and
physical layer protocols for implementing wireless local area networks.
Cybersecurity
The importance of cybersecurity in sustaining business operations has
increased significantly as the value of data increases every day. Organizations must
successfully prevent employee and customer data breaches if they want to develop
new business connections and sustain long-term relationships. A thorough awareness
of cybersecurity vulnerabilities and the techniques used by threat actors to access
networks is necessary to achieve this level of security.
Effective vulnerability management not only improves security programs but
also lessens the impact of successful attacks. For enterprises across industries, having
a well-established vulnerability management system is now a must.
Cyber Security Vulnerabilities
Any flaw in an organization’s internal controls, system procedures, or
information systems is a vulnerability in cyber security. Cybercriminals and Hackers
may target these vulnerabilities and exploit them through the points of vulnerability.
These hackers can enter the networks without authorization and seriously harm data
privacy. Data being a gold mine in this modern world is something that has to be
secured preciously.
Here are a few examples of cybersecurity vulnerabilities:
• Missing data encryption
• Lack of security cameras
• Unlocked doors at businesses
• Unrestricted upload of dangerous files
• Code downloads without integrity checks
• Using broken algorithms
• URL Redirection to untrustworthy websites
• Weak and unchanged passwords
• Website without SSL
Encryption technologies
The three major encryption types are DES, AES, and RSA.
DES ENCRYPTION
Adopted as a standard of encryption in the 1970s, DES is no longer
considered safe on its own. It uses a short 56-bit key to encrypt data in 64-bit
blocks, and it was found to be breakable by brute force not long after its
introduction. It has, however, served as the standard upon which later, more secure
encryption tools were based.
3DES
3DES, a more modern version of the block cipher, is still in use today. Triple Data
Encryption Standard (3DES) works as its name implies: instead of using a single 56-bit
key, it applies the cipher three times with three separate 56-bit keys for triple
protection. The drawback to 3DES is that it takes longer to encrypt data.
When should you use DES encryption?
You probably won’t use DES or even 3DES on your own today. Banking institutions
and other businesses may use 3DES internally or for their private transmissions. The
industry standard has moved away from it, however, and it’s no longer being
incorporated into the newest tech products.
AES ENCRYPTION
One of the most secure encryption types, Advanced Encryption Standard (AES)
is used by governments and security organizations as well as everyday businesses for
classified communications. AES uses “symmetric” key encryption. Someone on the
receiving end of the data will need a key to decode it.
AES differs from stream ciphers in that it encrypts data in fixed 128-bit blocks,
instead of as individual bits of data. The key length, not the block size, determines
the name of each AES variant:
• AES-128 uses a 128-bit key
• AES-192 uses a 192-bit key
• AES-256 uses a 256-bit key
When should you use AES encryption?
Most of the data tools available on the market today use AES encryption. It
works in many applications, and it is still the most widely accepted and secure
encryption method for the price.
RSA ENCRYPTION
Another popular encryption standard is “Rivest-Shamir-Adleman” or RSA. It is
widely used for data sent online and relies on a public key to encrypt the data. Those
on the receiving end of the data will have their own private key to decode the
messages. It’s proven to be a secure way to send information between people who
may not know each other and want to communicate without compromising their
personal or sensitive data.
When should you use RSA encryption?
Some people use it to verify a digital signature and ensure the person they are
communicating with is really who they say they are. It takes a long time to encrypt
data this way, however, and isn’t practical for large or numerous files.
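The public/private key idea can be illustrated with textbook RSA using tiny primes; real RSA uses keys of 2048 bits or more, so this is strictly a demonstration of the math, never a secure implementation.

```python
# Toy RSA with tiny primes (the classic 61/53 example) -- insecure by design.
p, q = 61, 53
n = p * q                  # public modulus: 3233
phi = (p - 1) * (q - 1)    # Euler's totient: 3120
e = 17                     # public exponent, coprime with phi
d = pow(e, -1, phi)        # private exponent: modular inverse of e mod phi

message = 65
cipher = pow(message, e, n)  # encrypt with the PUBLIC key (anyone can do this)
plain = pow(cipher, d, n)    # decrypt with the PRIVATE key (only the holder can)
print(cipher, plain)  # 2790 65
```

Signing reverses the roles: the sender exponentiates with the private key and anyone can verify with the public one, which is the digital-signature use case mentioned above.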
UNIT V
INTERNET OF MEDICAL THINGS
Case studies – Novel Symmetrical Uncertainty Measure (NSUM)
Technique for Diabetes Patients, Healthcare Monitoring system
through Cyber-physical system, An IoT Model for Neuro sensors,
AdaBoost with feature selection using IoT for somatic mutations
evaluation in Cancer, A Fuzzy Based expert System to diagnose
Alzheimer’s Disease, Secured architecture for IoT enabled
Personalized Healthcare Systems, Healthcare Application
Development in Mobile and Cloud Environments.
Novel Symmetrical Uncertainty Measure (NSUM) Technique for Diabetes Patients
Diabetes is a common disease among both children and adults in this era. To predict whether a patient
has diabetes or not, we introduce a novel filter-method ranking technique called the Novel Symmetrical
Uncertainty Measure (NSUM). Experiments show that, compared to the other algorithms in the filter,
wrapper, embedded, and hybrid methods, NSUM proves more efficient in terms of performance and
accuracy, with less computational complexity. The existing symmetrical uncertainty measure offers low
computational cost and high performance, but it lacks accuracy.
A novel symmetrical uncertainty measure (SUM) technique for diabetes patients is a new
method for measuring the uncertainty associated with predicting the risk of diabetes-related
complications. The technique is symmetrical because it takes into account the uncertainty in both
positive and negative predictions. This is important because diabetes patients are at risk of a variety of
complications, both acute and chronic.
The Novel Symmetrical Uncertainty Measure (NSUM) is a measure used in information theory
and data analysis. It quantifies the mutual dependence or information gain between two variables in a
dataset. NSUM is used for feature selection, clustering, and classification in machine learning and data
mining. It measures the similarity between two variables and provides insights into the relationships
between them. The NSUM value ranges from 0 (no mutual information) to 1 (perfect mutual
information).
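The symmetrical uncertainty underlying NSUM, SU(X, Y) = 2·I(X; Y) / (H(X) + H(Y)), can be computed directly from entropies; the feature and label vectors below are invented toy data.

```python
import math
from collections import Counter

def entropy(xs):
    """Shannon entropy H(X) of a discrete sequence, in bits."""
    n = len(xs)
    return -sum((c / n) * math.log2(c / n) for c in Counter(xs).values())

def symmetrical_uncertainty(x, y):
    """SU(X, Y) = 2 * I(X; Y) / (H(X) + H(Y)), normalized to [0, 1]."""
    hx, hy = entropy(x), entropy(y)
    hxy = entropy(list(zip(x, y)))    # joint entropy H(X, Y)
    mutual_info = hx + hy - hxy       # I(X; Y)
    return 2 * mutual_info / (hx + hy) if (hx + hy) else 0.0

# Hypothetical binary feature (high glucose) vs. diabetes label
glucose_high = [1, 1, 0, 0, 1, 0]
diabetic     = [1, 1, 0, 0, 1, 0]   # identical to the feature -> SU = 1
unrelated    = [0, 1, 0, 1, 0, 1]   # nearly independent -> SU near 0

su_identical = symmetrical_uncertainty(glucose_high, diabetic)
su_unrelated = symmetrical_uncertainty(glucose_high, unrelated)
print(round(su_identical, 3), round(su_unrelated, 3))  # 1.0 0.082
```

Ranking features by this score is exactly how a filter method such as NSUM selects the most informative attributes before a classifier ever sees the data.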
1. Background and objectives: Diabetes is a chronic disease characterized by high blood sugar.
It may cause many complications such as stroke, kidney failure, and heart attack. About 422 million
people worldwide were affected by diabetes in 2014, and the figure is projected to reach 642 million
by 2040. The main objective of this study is to develop a machine learning (ML)-based system for
predicting diabetic patients.
2. Materials and methods: Logistic regression (LR) is used to identify the risk factors for
diabetes disease based on the p value and odds ratio. We have adopted four classifiers, namely naive
Bayes (NB), decision tree (DT), AdaBoost (AB), and random forest (RF), to predict diabetic patients.
Performances of these classifiers are evaluated using accuracy (ACC) and area under the curve (AUC).
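Both evaluation metrics can be computed without any ML library; AUC is estimated here via its pairwise-ranking interpretation, and the labels and risk scores are fabricated for illustration.

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def auc(y_true, scores):
    """AUC via its ranking interpretation: the fraction of
    (positive, negative) pairs where the positive scores higher."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

y_true = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.6, 0.3, 0.1]   # made-up predicted diabetes risks
y_pred = [1 if s >= 0.5 else 0 for s in scores]

acc = accuracy(y_true, y_pred)
auc_val = auc(y_true, scores)
print(round(acc, 3), round(auc_val, 3))
```

Note that AUC looks only at the ranking of scores, so it is unaffected by the 0.5 threshold that accuracy depends on.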
Applications of NSUM:
1. Disease Risk Assessment: NSUM can be used to assess the risk of diabetes in individuals based
on various factors such as genetics, lifestyle, and medical history. It can help identify high-risk
individuals who may benefit from preventive measures.
2. Treatment Planning: NSUM can aid in creating personalized treatment plans for diabetes
patients. By analyzing a patient's data, it can recommend specific medications, lifestyle changes,
and dietary adjustments to optimize their management of the condition.
3. Early Detection: NSUM can be applied for early detection of diabetes. It can analyze health
records and biomarkers to identify individuals at an early stage of the disease, allowing for timely
intervention.
4. Monitoring and Control: The technique can be integrated into wearable devices and continuous
glucose monitors to provide real-time data analysis and feedback for diabetes management. This
helps patients keep their blood sugar levels within a healthy range.
5. Data Integration: NSUM can be used to integrate data from various sources, including medical
records, genetic information, and patient-reported data. This holistic approach can provide a
comprehensive view of a patient's health.
6. Research: Researchers can use NSUM to analyze large datasets related to diabetes. It can help
in identifying trends, correlations, and potential areas for further study, contributing to a better
understanding of the disease.
Advantages and limitations: NSUM can be used to identify the most relevant features for diabetes
diagnosis, but it may not be able to capture all of the relevant features.
Healthcare Monitoring system through Cyber-physical system
A healthcare cyber-physical system (CPS) assisted by cloud and big data refers to the integration
of intelligent healthcare devices, connectivity technologies, cloud computing, and big data analytics to
enhance healthcare services and outcomes. This system enables the collection, storage, and analysis of
vast amounts of healthcare data from various sources, such as wearable devices, electronic health
records, and medical imaging. The cloud serves as a platform for securely storing and processing this
data, while big data analytics techniques extract valuable insights and patterns to support decision-
making in healthcare. This combined approach helps improve the accuracy and efficiency of
diagnostics, personalized treatment plans, remote monitoring, and overall healthcare management.
The architecture of Health-CPS consists of three layers, namely: 1. Data collection layer 2. Data
management layer 3. Application service layer
1. DATA COLLECTION LAYER:
This layer consists of data nodes and adapters and provides a unified system access interface for
multisource heterogeneous data from hospitals, the Internet, or user-generated content.
1. Patient Data Collection: Develop a system to collect patient data through various sensors and
wearable devices, such as heart rate monitors, blood pressure cuffs, and IoT devices.
2. Data Transmission: Implement secure data transmission protocols to send patient data to a cloud-
based server. Ensure data encryption for privacy and compliance with healthcare regulations.
3. Cloud Storage: Utilize cloud storage to store and manage the collected patient data. Choose a reliable
cloud service provider with healthcare compliance certifications.
4. Remote Monitoring: Create a user-friendly interface for patients and healthcare professionals to
access real-time health data and receive alerts.
5. Security: Implement robust security measures to protect patient data from breaches. Compliance with
regulations like HIPAA (Health Insurance Portability and Accountability Act) is crucial.
6. Cost Management: Consider the cost of cloud services, data storage, and maintenance. Ensure the
system remains financially sustainable.
7. Research and Development: Invest in ongoing research to stay at the forefront of healthcare
technology.
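Steps 1-4 above can be sketched as a miniature pipeline, with a Python list standing in for the cloud store and a threshold rule standing in for remote monitoring (the patient ID and heart-rate limits are made up):

```python
import json
import time

CLOUD_STORE = []   # stands in for a cloud database

def collect_reading(patient_id, heart_rate):
    # Step 1: data collection from a (simulated) wearable sensor
    return {"patient": patient_id, "hr": heart_rate, "ts": time.time()}

def transmit(reading):
    # Steps 2-3: serialize and "upload"; a real system would add TLS in
    # transit and encryption at rest for HIPAA compliance
    CLOUD_STORE.append(json.dumps(reading))

def monitor(low=50, high=120):
    # Step 4: remote monitoring with a simple out-of-range alert rule
    alerts = []
    for raw in CLOUD_STORE:
        r = json.loads(raw)
        if not (low <= r["hr"] <= high):
            alerts.append((r["patient"], r["hr"]))
    return alerts

for hr in (72, 135, 64):
    transmit(collect_reading("P001", hr))

alerts = monitor()
print(alerts)  # [('P001', 135)]
```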
ADVANTAGES:
1. Avoiding legal Fines and Penalties.
2. Maintaining Employee and Customer Trust.
3. Safeguarding Business Operations.
4. Better cybersecurity posture.
5. Protecting organizations against paying a ransom.
6. Staying ahead of the competition
DISADVANTAGES:
1. Data protection and data security.
2. Lack of benefit quantification.
3. Lack of prioritization by top management.
4. Industrial broadband structure.
5. Industrial espionage/sabotage.
6. Production outages due to non-availability of data.
An IoT Model for Neuro Sensors
The integration of neuro sensors with the Internet of Things (IoT) has brought about a
transformative paradigm shift in the healthcare industry. Neuro sensors, which encompass a range of
brain-computer interfaces, electroencephalogram (EEG) devices, and other neural monitoring tools,
offer unprecedented opportunities for capturing and interpreting neurological data. The IoT
infrastructure plays a crucial role in efficiently collecting, transmitting, and processing this data,
enabling remote monitoring, early detection of neurological disorders, and personalized healthcare. The
potential applications of neuro sensors in IoT-based healthcare are diverse, including the early detection
of neurodegenerative diseases, brain-computer interfaces for individuals with disabilities, and improved
understanding of brain functions.
1. Real-time Monitoring: IoT neuro systems provide real-time monitoring of neural data. This enables
immediate feedback and the ability to respond to neurological events or conditions as they occur.
2. Remote Accessibility: Users and healthcare providers can access neural data remotely, allowing for
telemedicine and remote patient monitoring. This is particularly beneficial for individuals with mobility
limitations or those in remote areas.
3. Data Aggregation and Integration: IoT neuro systems can aggregate data from multiple sensors
and sources, allowing for a comprehensive view of neurological health and trends. This integrated data
can lead to more accurate diagnoses and treatment plans.
4. Predictive Analytics: Machine learning algorithms can be applied to the collected neural data to
identify patterns and make predictions. This is valuable for early detection of neurological disorders or
predicting seizure events.
5. Scalability: IoT neuro systems can be scaled to accommodate a variety of applications, from personal
use to clinical settings or research environments.
Neurosensors, often in the form of wearable or implantable devices, can help in monitoring
various physiological and neurological parameters, which can be useful for both disease classification
and prediction.
1. Data Collection: Neurosensors are used to continuously or intermittently collect data from patients.
These sensors can measure various parameters, including brain activity (EEG), heart rate variability,
movement patterns (accelerometers), skin conductance, temperature, and more. The data collected is in
the form of time-series data.
2. Feature Extraction: Raw sensor data is often complex, and feature extraction is used to reduce
dimensionality and highlight relevant information. This step involves extracting meaningful features
from the sensor data. For example, in EEG data, features might include spectral power, coherence, and
event related potentials.
3. Data Preprocessing: This step involves cleaning the data to remove noise, artifacts, and anomalies.
Data preprocessing also includes data normalization, filtering, and other techniques to prepare the data
for analysis and prediction.
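A toy version of the feature-extraction step: windowed mean and standard deviation over a simulated 1-D sensor stream (real EEG pipelines would use spectral features such as band power instead).

```python
import statistics

def extract_features(signal, window=4):
    """Per-window (mean, population std dev) features from a 1-D
    time series -- a deliberately simplified stand-in for real
    EEG feature extraction."""
    feats = []
    for i in range(0, len(signal) - window + 1, window):
        w = signal[i:i + window]
        feats.append((statistics.mean(w), statistics.pstdev(w)))
    return feats

# Simulated samples: a quiet segment followed by a high-activity segment
eeg_like = [0.1, 0.3, 0.2, 0.4, 2.0, 2.2, 1.8, 2.0]
feats = extract_features(eeg_like)
print(feats)
```

The raw eight samples collapse into two feature tuples, which is the dimensionality reduction the step above describes.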
Neurological disorders are conditions that affect the nervous system, which includes the brain, spinal
cord, and peripheral nerves. There are many different neurological disorders, each with its own unique
characteristics and symptoms. Here are some examples of common neurological disorders:
1. Epilepsy: A neurological disorder characterized by recurrent seizures, which are caused by abnormal
electrical activity in the brain.
2. Migraine: A type of headache disorder that can cause severe throbbing pain, often accompanied by
other symptoms like nausea and sensitivity to light and sound.
Locked-in syndrome (LIS) is a condition in which a person is conscious and awake but unable
to move or communicate due to complete paralysis, except for eye movements. It is a devastating
condition that severely limits a person's ability to interact with the outside world. In such cases,
Neurosensors (electroencephalography (EEG)) and BCIs (brain-computer interfaces) offer a potential
means of communication and control.
Outcomes:
• The Neurosensor based BCI system enabled the patient to communicate effectively with their
caregivers, family, and medical professionals.
• The patient regained a sense of autonomy and independence in their daily life, being able to express
their needs and desires.
• The BCI system underwent continuous improvement and fine-tuning to enhance its performance and
speed of communication.
AdaBoost with feature selection using IoT for somatic mutations evaluation in Cancer
AdaBoost, a powerful machine learning algorithm, has found a novel application in the realm of
healthcare and cancer research, particularly in the evaluation of somatic mutations. In this context, the
Internet of Things (IoT) plays a pivotal role by providing a network of interconnected devices and
sensors for data collection and real-time monitoring. This innovative combination of AdaBoost and IoT
technology offers a promising approach to enhance the accuracy and efficiency of somatic mutation
evaluation in cancer, potentially revolutionizing the way we diagnose and treat this devastating disease.
• Feature Selection: One approach is to use AdaBoost with different feature selection techniques, such
as Recursive Feature Elimination (RFE) or Principal Component Analysis (PCA), to optimize the
input data for somatic mutation evaluation.
• Classifier Variation: AdaBoost can be used with various base classifiers, including decision trees,
support vector machines, or neural networks, to enhance its performance in detecting somatic
mutations.
• Ensemble Size: The number of weak learners (base classifiers) in the AdaBoost ensemble can vary.
A larger ensemble may increase accuracy but also computational complexity.
• IoT Data Sources: Leveraging different IoT devices and data sources, such as wearable sensors,
remote patient monitoring, or genetic sequencing instruments, can influence the types of data
available for evaluation.
• Real-time vs. Batch Processing: Depending on the IoT infrastructure, AdaBoost can be adapted for
real-time evaluation or batch processing, catering to specific medical scenarios.
• Data Preprocessing: IoT-generated data often requires preprocessing for noise reduction and data
cleansing. AdaBoost can be used with different preprocessing techniques, such as data imputation
or filtering.
• Scalability: Considering the scalability of the solution for handling large datasets and distributed
IoT environments.
• Privacy and Security: Ensuring that sensitive patient data collected via IoT devices is adequately
secured and anonymized to protect patient privacy.
• Clinical Application: Tailoring the AdaBoost-IoT approach to specific clinical applications within
cancer research, such as early detection, treatment response prediction, or monitoring disease
progression.
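A minimal AdaBoost over decision stumps, written from scratch on a made-up 1-D "mutation score" feature; it shows the reweighting loop described above, not any clinical model.

```python
import math

def stump_candidates(X):
    # All 1-D threshold stumps: predict `sign` when x > thr, else -sign.
    return [(t, s) for t in sorted(set(X)) for s in (+1, -1)]

def stump_predict(x, thr, sign):
    return sign if x > thr else -sign

def adaboost(X, y, rounds=8):
    n = len(X)
    w = [1.0 / n] * n          # start with uniform example weights
    model = []                 # list of (alpha, thr, sign)
    for _ in range(rounds):
        # Pick the stump with the lowest weighted error.
        best = min(stump_candidates(X),
                   key=lambda ts: sum(wi for wi, xi, yi in zip(w, X, y)
                                      if stump_predict(xi, *ts) != yi))
        err = max(sum(wi for wi, xi, yi in zip(w, X, y)
                      if stump_predict(xi, *best) != yi), 1e-12)
        alpha = 0.5 * math.log((1 - err) / err)
        model.append((alpha, *best))
        # Reweight: mistakes get heavier, correct examples lighter.
        w = [wi * math.exp(-alpha * yi * stump_predict(xi, *best))
             for wi, xi, yi in zip(w, X, y)]
        z = sum(w)
        w = [wi / z for wi in w]
    return model

def predict(model, x):
    score = sum(a * stump_predict(x, t, s) for a, t, s in model)
    return 1 if score >= 0 else -1

# Hypothetical mutation-score feature; +1 = somatic, -1 = benign.
# No single stump fits this labeling, but the boosted ensemble can.
X = [1, 2, 3, 4, 5, 6, 7, 8]
y = [-1, -1, -1, 1, 1, 1, -1, 1]
model = adaboost(X, y, rounds=8)
preds = [predict(model, x) for x in X]
acc = sum(p == t for p, t in zip(preds, y)) / len(y)
print(preds, round(acc, 3))
```

The `rounds` parameter is the ensemble-size knob from the bullet list above: more weak learners usually mean lower training error but higher computational cost.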
A Fuzzy Based Expert System to Diagnose Alzheimer’s Disease
System Components:
1. Knowledge Base: The expert system boasts an extensive database of medical knowledge pertaining
to Alzheimer's disease. This knowledge encompasses information about risk factors, common and
atypical symptoms, recognized diagnostic criteria, and the range of available diagnostic tests.
2. Fuzzy Inference Engine: At the heart of the system lies the fuzzy logic-based inference engine.
Fuzzy logic is employed to handle the inherent uncertainty and imprecision that is often present in
medical data. The engine applies predefined fuzzy rules based on recognized diagnostic criteria to make
a probabilistic diagnosis.
3. User Interface: The system provides a user-friendly interface through which healthcare professionals
can input patient-specific data and symptoms. For an example patient, Sarah, the healthcare provider
enters information such as age, gender, family history of Alzheimer's, cognitive assessment scores, and
any other pertinent variables.
Diagnostic Process:
1. Data Input: Sarah's healthcare provider inputs her information into the expert system. This includes
details such as age, gender, family history of Alzheimer's, cognitive test results, and any other relevant
medical data.
2. Fuzzy Logic Processing: The system processes this data utilizing fuzzy logic, which permits it to
deal with imprecise and uncertain information effectively. Fuzzy logic applies a set of predefined rules
that reflect diagnostic criteria for Alzheimer's disease.
3. Degree of Membership: The fuzzy inference engine calculates a "degree of membership" or
confidence score for the diagnosis. This score indicates the likelihood of Sarah having Alzheimer's
disease. For instance, the system might determine that there is an 85% probability that Sarah has
Alzheimer's disease based on the input data.
4. Diagnosis Output: The result is presented in an easily understandable format. It provides a
quantitative estimate of the likelihood of Sarah having Alzheimer's disease. For instance, the system
could output a statement like, "There is an 85% probability that Sarah has Alzheimer's disease based on
the data provided."
Advantages:
1) Handling Uncertainty: Fuzzy logic allows for the representation and manipulation of imprecise or
uncertain information, making it well-suited for domains where precise numerical values may not
be available or practical.
2) Linguistic Variables: Fuzzy logic uses linguistic variables (e.g., "high," "low," "warm," "cold") that
are closer to human natural language, making it easier for non-experts to understand and use the
system.
3) Expert Knowledge Capture: Fuzzy expert systems can capture and utilize the knowledge and
heuristics of human experts effectively, enabling them to make complex decisions based on
experience and intuition.
4) Adaptability: Fuzzy systems can be modified and updated easily by adjusting membership functions
and rules, allowing for adaptability to changing conditions or requirements.
5) Multi-criteria Decision Making: Fuzzy logic is suitable for solving multi-criteria decision-making
problems, where multiple factors need to be considered simultaneously.
6) Interpretability: Fuzzy logic results are often more interpretable than traditional binary logic
systems. They provide a degree of membership that quantifies the certainty of a decision.
Disadvantages:
1) Complex Rule Development: Constructing the fuzzy rule base and defining appropriate membership
functions can be complex and require expertise. Developing a comprehensive knowledge base is
time-consuming.
2) Computational Overhead: Fuzzy logic computations can be more computationally intensive than
traditional logic, which can slow down decision-making in real-time applications.
3) Limited Precision: Fuzzy systems are not always precise and may not be suitable for applications
requiring high levels of precision, such as certain scientific or engineering tasks.
4) Difficulty in Validation: It can be challenging to validate the accuracy and reliability of fuzzy expert
systems, especially when the rules and membership functions are complex.
Applications:
1) Medical Diagnosis: Fuzzy expert systems are used in medical diagnosis to handle the uncertainty
and imprecision in patient data, aiding in early diagnosis of diseases like diabetes, heart conditions,
and Alzheimer's disease.
2) Control Systems: Fuzzy logic is applied in control systems for processes such as HVAC (heating,
ventilation, and air conditioning), traffic signal control, and industrial automation to manage
complex and dynamic environments.
3) Financial Decision Making: Fuzzy systems are employed in financial risk assessment, portfolio
management, and credit scoring to assess and manage financial risk.
4) Natural Language Processing: Fuzzy logic can be used to process and understand human language,
making it valuable in chatbots, language translation, and sentiment analysis.
5) Engineering and Robotics: Fuzzy logic is used in robotics for tasks like path planning, obstacle
avoidance, and decision-making in uncertain environments.
6) Consumer Electronics: Fuzzy logic is employed in various household appliances, such as washing
machines and rice cookers, to provide adaptive and user-friendly control.
Secured Architecture for IoT Enabled Personalized Healthcare Systems
Patient monitoring systems have evolved significantly with the integration of the Internet of
Things (IoT). These systems provide real-time health data and allow healthcare providers to remotely
monitor patients, enhancing the quality of care. However, the critical challenge lies in securing patient
data within this IoT architecture. The BFIM (Blockchain-Based Fog IoT Microservice) security
algorithm offers an innovative solution to this challenge.
The BFIM (Blockchain-Based Fog IoT Microservice) security scheme is an innovative solution
designed to bolster data security within a fog-IoT network architecture. This sophisticated system
operates across three primary layers: the cloud, fog, and IoT, each playing a unique role in ensuring the
security and integrity of data exchanged within the network. The following sections delve into the
detailed workflow and functionality of the BFIM security scheme.
The IoT layer is the front-end of the network, consisting of various connected devices. Here,
data security is paramount. The BFIM scheme initiates its workflow by:
a) IoT Device Configuration: IoT devices, residing at the network's edge, form the first layer
of data security.
b) Cluster Node Properties: IoT cluster nodes have specific configurations that align with their
respective gateway entities, establishing baseline security within the IoT layer.
The fog layer acts as a pivotal intermediary in the network, providing essential support to both
the IoT layer and the central cloud. Its role in enhancing data security is multifaceted:
a) Primary IoT Support: The fog layer acts as the primary guardian of IoT devices, protecting
them against various security threats commonly encountered at the network edge.
b) Secondary Cloud Support: It also serves as a secondary layer of defense for the central cloud,
ensuring the integrity of data transmitted between the fog and the cloud.
c) Intrusion Detection: The fog layer continuously monitors the IoT layer for potential attacks.
If an attack is detected, it takes prompt action to safeguard the network.
One of the key functionalities of the BFIM security scheme is its ability to respond to
attacks and recover from security breaches. The process unfolds as follows:
a) IoT Block Shutdown: In the event of an attack within the IoT layer, the affected IoT blocks are
shut down, preventing further unauthorized access.
b) Delayed Block Regeneration: The fog layer takes a strategic approach by introducing a delay
mechanism for block regeneration, which is configured based on an OTSP (One-Time Session
Password) approach.
c) Cloud Repository: Information from the disabled IoT blocks is transferred to the central cloud,
where it is stored as a secure record log.
The BFIM security scheme employs various security mechanisms to protect the network
against potential attacks:
a) Proof of Work (PoW): PoW ensures that only authorized users with adequate hashing power
can participate in the computational activities.
b) IP Targeting: The system utilizes IP targeting to track and counter unauthorized access attempts
effectively.
c) Hashing Power Monitoring: The scheme continuously monitors the hashing power of users to
maintain the integrity of the network.
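The Proof-of-Work mechanism can be illustrated with a toy hash puzzle: find a nonce so that SHA-256(data + nonce) starts with a given number of zero hex digits (the block data and difficulty here are arbitrary).

```python
import hashlib

def proof_of_work(data: bytes, difficulty: int = 4):
    """Search for a nonce whose SHA-256 digest of data+nonce begins
    with `difficulty` zero hex digits. Finding it takes work
    (~16**difficulty hashes on average); verifying takes one hash."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(data + str(nonce).encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

nonce, digest = proof_of_work(b"iot-block-42", difficulty=4)
print(digest.startswith("0000"))  # True
```

The asymmetry (expensive to find, cheap to check) is what lets the scheme restrict participation to users with adequate hashing power.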
The BFIM security scheme places a strong emphasis on efficient service provisioning
across the network, ensuring minimal disruption during both regular operation and attack responses:
a) Delay Minimization: The scheme is designed to minimize delays and latency, enhancing the
user experience and maintaining the efficiency of network operations.
b) Authenticated Sessions: Authenticated user sessions are granted minimal interruption to ensure
the seamless flow of data.
Healthcare Application Development in Mobile and Cloud Environments
Mobile healthcare management focuses on e-health applications that retrieve or access
medical information anywhere and anytime through a mobile device. Mobile pervasive healthcare
technologies can help both patient and physician: they support a wide range of healthcare services,
such as mobile telemedicine, patient monitoring, location-based tracking of medical services,
emergency response, personalized monitoring, and pervasive access to health information.
The health information management services in each hospital differ from one another; the
shared information from every hospital is used to design a uniform information management system.
The existing healthcare management applications keep records on paper and in prescription formats,
which makes storing and retrieving data a frantic process. Shifting the traditional system to an
e-healthcare system (data accessing, data migration, storage, maintenance, updates, etc.) and
developing an atomic distributed system for mobile applications are achieved through cloud
computing, which benefits the common people.
The cloud computing model has several characteristics. The primary characteristic involved in
mobile health services is “on-demand self-service”: a consumer can automatically provision network
storage and server computing time. The next primary characteristic is “broad network access”: a
smartphone can access resources through standard network mechanisms on heterogeneous platforms.
Mobile Healthcare Application
The mobile healthcare application on Android OS provides various services based on requests
from the user's mobile device. Cloud computing services are usually split across two platforms: the
front end and the back end. The front-end interface provides the cloud platform that communicates
directly with the user's mobile device, and the back end manages the stored content.
Developing the mobile healthcare application on Android OS is a challenging task; it gives the
client access to patient records, medical images, and cloud services.
Clinic Module: This module handles the routine activities of patients and physicians. It interacts with
the web server to access patient data, and through it the clinic staff can access the patients who require
treatment and the medicines required for them.
Patient Module: In this module, patients can get medical information through their mobile phones;
patient authentication is provided by the cloud server.
Cloud Module: In this module, cloud computing provides storage and services. In the proposed
healthcare application, medical images are stored in the Amazon S3 cloud service. Authenticated
users access the features of the client–server scheme in the cloud computing paradigm.
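The notes say patient authentication is provided by the cloud server but do not specify a mechanism. A minimal sketch, assuming a shared-secret HMAC session token (the names, token format, and secret below are hypothetical, not the application's actual scheme): the server issues a signed token to the patient's phone, and later requests present it for verification.

```python
import hashlib
import hmac

# Hypothetical server-side secret; a real cloud server would keep this private.
SERVER_SECRET = b"demo-secret"

def issue_token(patient_id: str, issued_at: int) -> str:
    """Cloud server signs the patient id and issue time into a session token."""
    msg = f"{patient_id}|{issued_at}".encode()
    sig = hmac.new(SERVER_SECRET, msg, hashlib.sha256).hexdigest()
    return f"{patient_id}|{issued_at}|{sig}"

def verify_token(token: str, now: int, max_age: int = 3600) -> bool:
    """Server checks the signature and the token's age before serving records."""
    try:
        patient_id, issued_at, sig = token.rsplit("|", 2)
    except ValueError:
        return False
    expected = hmac.new(SERVER_SECRET, f"{patient_id}|{issued_at}".encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    return now - int(issued_at) <= max_age
```

Using `hmac.compare_digest` for the signature check avoids timing side channels; an expired or tampered token is simply rejected, which matches the module's requirement that only authenticated users reach the stored medical images.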
The HealthKit application works with the BYOD (bring your own device) model supported by the
IoT. BYOD provides security and privacy when connecting various ubiquitous devices such as tablets,
smartphones, and laptops. Healthcare organizations focus on the BYOD concept because it satisfies
patients, nurses, and physicians, who can use their own portable devices in the workplace through BYOD.