Ambusens System Overview and Topologies


1a.

Classify network types based on physical topologies and connection types with
schematic diagrams. (10)
Connection types
Depending on the way a host communicates with other hosts, computer networks are of two types (Figure): point-to-point and point-to-multipoint.
(i) Point-to-point: Point-to-point connections establish a direct link between two hosts. Everyday systems such as a remote control for an air conditioner or a television use point-to-point connections, where the whole channel is dedicated to that connection alone. These networks were designed to work over duplex links and are functional for both synchronous and asynchronous systems. In computer networks, point-to-point connections find usage for specific purposes, such as in optical networks.
(ii) Point-to-multipoint: In a point-to-multipoint connection, more than two hosts share the same link. This configuration is similar to a one-to-many connection. Point-to-multipoint connections find popular use in wireless networks and IP telephony. The channel is shared between the various hosts, either spectrally or temporally. A common scheme for spectral sharing of the channel is frequency division multiple access (FDMA); temporal sharing of channels includes approaches such as time division multiple access (TDMA). Each of the spectral and temporal sharing approaches has various schemes and protocols for channel sharing in point-to-multipoint networks. Point-to-multipoint connections find popular use in present-day networks, especially for enabling communication between a massive number of connected devices.
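As a minimal illustrative sketch (not from the source), temporal sharing of a point-to-multipoint channel via TDMA can be modeled by assigning each host a fixed slot in a repeating frame; the slot length is an assumed parameter:

```python
# Minimal TDMA sketch (illustrative, not a real MAC implementation):
# a point-to-multipoint channel is shared temporally by assigning each
# host one fixed slot in a repeating frame.

def tdma_owner(hosts, t, slot_ms=10):
    """Return the host allowed to transmit at time t (in milliseconds)."""
    frame_slots = len(hosts)            # one slot per host per frame
    slot_index = (t // slot_ms) % frame_slots
    return hosts[slot_index]

hosts = ["A", "B", "C"]
# During the first 10 ms host A owns the channel; then B; then C; repeat.
assert tdma_owner(hosts, 5) == "A"
assert tdma_owner(hosts, 15) == "B"
assert tdma_owner(hosts, 35) == "A"
```

FDMA would be the spectral analogue: the same idea with frequency sub-bands in place of time slots.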

Physical topology
Depending on the physical manner in which communication paths between the hosts
are connected, computer networks can have the following four broad topologies (Figure 1.2): Star, Mesh, Bus, and Ring.
(i) Star: In a star topology, every host has a point-to-point link to a central
controller or hub. The hosts cannot communicate with one another directly;
they can only do so through the central hub. The hub acts as the network
traffic exchange. For large-scale systems, the hub essentially has to be a powerful server to handle all the simultaneous traffic flowing through it.
However, as there are fewer links (only one link per host), this topology is
cheaper and easier to set up. The main advantages of the star topology are
easy installation and the ease of fault identification within the network. If the
central hub remains uncompromised, link failures between a host and the hub
do not have a big effect on the network, except for the host that is affected.
However, the main disadvantage of this topology is the danger of a single
point of failure. If the hub fails, the whole network fails.
(ii) Mesh: In a mesh topology, every host is connected to every other host using a
dedicated link (in a point-to-point manner). This implies that for n hosts in a
mesh, there are a total of n(n−1)/2 dedicated full duplex links between the
hosts. This massive number of links makes the mesh topology expensive.
However, it offers certain specific advantages over other topologies. The first
significant advantage is the robustness and resilience of the system. Even if a
link is down or broken, the network is still fully functional as there remain
other pathways for the traffic to flow through. The second advantage is the
security and privacy of the traffic as the data is only seen by the intended
recipients and not by all members of the network. The third advantage is the
reduced data load on a single host, as every host in this network takes care of
its traffic load. However, owing to the complexities in forming physical
connections between devices and the cost of establishing these links, mesh
networks are used very selectively, such as in backbone networks.
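The n(n−1)/2 link count above follows because each unordered pair of hosts needs exactly one dedicated link; a quick sketch makes the quadratic cost growth concrete:

```python
# Number of dedicated full-duplex links in a full mesh of n hosts:
# each pair of hosts needs exactly one link, i.e. n*(n-1)/2 links.

def mesh_links(n):
    return n * (n - 1) // 2

assert mesh_links(2) == 1       # two hosts: one point-to-point link
assert mesh_links(5) == 10
assert mesh_links(100) == 4950  # link count grows quadratically with n
```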
(iii) Bus: A bus topology follows the point-to-multipoint connection. A backbone
cable or bus serves as the primary traffic pathway between the hosts. The
hosts are connected to the main bus employing drop lines or taps. The main
advantage of this topology is the ease of installation. However, there is a
restriction on the length of the bus and the number of hosts that can be
simultaneously connected to the bus due to signal loss over the extended bus.
The bus topology has a simple cabling procedure in which a single bus
(backbone cable) can be used for an organization. Multiple drop lines and taps
can be used to connect various hosts to the bus, making installation very easy
and cheap. However, the main drawback of this topology is the difficulty in
fault localization within the network.
(iv) Ring: A ring topology works on the principle of a point-to-point connection.
Here, each host is configured to have a dedicated point-to-point connection
with its two immediate neighboring hosts on either side of it through repeaters
at each host. The repetition of this system forms a ring. The repeaters at each
host capture the incoming signal intended for other hosts, regenerate the bit stream, and pass it on to the next repeater. Fault identification and setup of
the ring topology is quite simple and straightforward. However, the main
disadvantage of this system is the high probability of a single point of failure.
If even one repeater fails, the whole network goes down.
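The single-point-of-failure behavior can be illustrated with a toy Python model (an assumption-laden sketch, not a real protocol): frames hop clockwise from repeater to repeater, and one failed repeater makes some destinations unreachable.

```python
# Toy model of frame forwarding in a ring: each repeater regenerates the
# bit stream and passes it to its clockwise neighbour until the frame
# reaches its destination. A single failed repeater breaks delivery.

def ring_deliver(hosts, failed, src, dst):
    """Hop clockwise from src to dst; return hop count, or None on failure."""
    i, hops = hosts.index(src), 0
    while hosts[i] != dst:
        i = (i + 1) % len(hosts)      # next repeater in the ring
        if hosts[i] in failed:
            return None               # single point of failure
        hops += 1
    return hops

ring = ["A", "B", "C", "D"]
assert ring_deliver(ring, set(), "A", "C") == 2
assert ring_deliver(ring, {"B"}, "A", "C") is None  # one repeater down
```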

1b. Explain the IoT planes, various enablers of IoT, and the complex interdependencies
among them with a block diagram. (10)
• We can divide the IoT paradigm into four planes: services, local connectivity, global
connectivity, and processing.
Service plane
• The service plane is composed of two parts: 1) things or devices and 2) low-power
connectivity
• Typically, the services offered in this layer are a combination of things and low-power
connectivity.
• The things may be wearables, computers, smartphones, household appliances, smart
glasses, factory machinery, vending machines, vehicles, UAVs, robots, and other such
contraptions (which may even be just a sensor).
• The immediate low-power connectivity, which is responsible for connecting the things in an implementation, may use legacy technologies such as WiFi, Ethernet, or local cellular. In contrast, modern-day technologies are mainly wireless and often programmable, such as Zigbee, RFID, Bluetooth, 6LoWPAN, LoRa, DASH, Insteon, and others. The range of
these connectivity technologies is severely restricted; they are responsible for the
connectivity between the things of the IoT and the nearest hub or gateway to access the
Internet.
• The services offered fall under the control and purview of service providers.
Local connectivity plane
• The local connectivity is responsible for distributing Internet access to multiple local IoT
deployments. This distribution may be on the basis of the physical placement of the
things, on the basis of the application domains, or even on the basis of providers of
services.
• Services such as address management, device management, security, sleep scheduling,
and others fall within the scope of this plane.
• For example, in a smart home environment, the first floor and the ground floor may have
local IoT implementations, which have various things connected to the network via low-
power, low-range connectivity technologies. The traffic from these two floors merges
into a single router or a gateway. The total traffic intended for the Internet from a smart
home leaves through a single gateway or router, which may be assigned a single global IP
address (for the whole house). This helps in the significant conservation of already
limited global IP addresses.
• The local connectivity plane falls under the purview of IoT management as it directly
deals with strategies to use/reuse addresses based on things and applications.
• The modern-day “edge computing” paradigm is deployed in conjunction with these first
two planes: services and local connectivity.
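The address conservation described in the smart-home example can be sketched in Python as NAT-style port mapping at the single gateway; all IP addresses and port numbers below are invented for illustration:

```python
# Sketch of why one gateway with a single global IP conserves addresses:
# many local (private) device addresses are multiplexed through one
# public address using per-connection port mappings (NAT-style).

GLOBAL_IP = "203.0.113.7"          # the one global address for the home
nat_table = {}                     # (local_ip, local_port) -> public_port
next_port = 40000

def translate(local_ip, local_port):
    """Map a local endpoint to the gateway's (global IP, public port)."""
    global next_port
    key = (local_ip, local_port)
    if key not in nat_table:
        nat_table[key] = next_port
        next_port += 1
    return (GLOBAL_IP, nat_table[key])

# Two floors, many devices, still one global IP address:
assert translate("10.0.1.5", 5000) == ("203.0.113.7", 40000)
assert translate("10.0.2.9", 5000) == ("203.0.113.7", 40001)
assert translate("10.0.1.5", 5000) == ("203.0.113.7", 40000)  # reused
```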
Global connectivity plane
• The penultimate plane of global connectivity plays a significant role in enabling IoT in
the real sense by allowing for worldwide implementations and connectivity between
things, users, controllers, and applications.
• The Web, data-centers, remote servers, Cloud, and others make up this plane.
• This plane also falls under the purview of IoT management as it decides how and when
to store data, when to process it, when to forward it, and in which form to forward it.
• The paradigm of “fog computing” lies between the planes of local connectivity and
global connectivity. It often serves to manage the load of global connectivity
infrastructure by offloading the computation nearer to the source of the data itself, which
reduces the traffic load on the global Internet.
Processing plane
• The final plane is the processing plane, which can be considered a top-up of the basic IoT networking framework.
• The continuous rise in the usefulness and penetration of IoT in various application areas
such as industries, transportation, healthcare, and others is the result of this plane.
• The various sub-domains of this plane include intelligence, conversion (data and format
conversion, and data cleaning), learning (making sense of temporal and spatial data
patterns), cognition (recognizing patterns and mapping them to already known patterns),
algorithms (various control and monitoring algorithms), visualization (rendering numbers
and strings in the form of collective trends, graphs, charts, and projections), and analysis
(estimating the usefulness of the generated information, making sense of the information
with respect to the application and place of data generation, and estimating future trends
based on past and present patterns of information obtained).
• Various computing paradigms such as “big data”, “machine learning”, and others fall within the scope of this domain.
• The members of this plane fall under the control and purview of IoT tools, simply because they wring out useful and human-readable information from all the raw data that flows from various IoT devices and deployments.
2a. Explain networked communication between two hosts following the TCP/IP suite with a
block diagram. (8)
The Internet protocol suite is yet another conceptual framework that provides levels of
abstraction for ease of understanding and development of communication and networked
systems on the Internet. However, the Internet protocol suite predates the OSI model and
provides only four levels of abstraction: 1) Link layer, 2) Internet layer, 3) transport layer, and 4)
application layer.
(i) Link Layer: The first and base layer of the TCP/IP protocol suite is also known as the
network interface layer. This layer is synonymous with the collective physical and
data link layer of the OSI model. It enables the transmission of TCP/IP packets over
the physical medium. According to its design principles, the link layer is independent
of the medium in use, frame format, and network access, enabling it to be used with a
wide range of technologies such as the Ethernet, wireless LAN, and the asynchronous
transfer mode (ATM).
(ii) Internet Layer: Layer 2 of the TCP/IP protocol suite is somewhat synonymous with the
network layer of the OSI model. It is responsible for addressing, address translation,
data packaging, data disassembly and assembly, routing, and packet delivery tracking
operations. Some core protocols associated with this layer are address resolution
protocol (ARP), Internet protocol (IP), Internet control message protocol (ICMP), and
Internet group management protocol (IGMP). Traditionally, this layer was built upon
IPv4, which is gradually shifting to IPv6, enabling the accommodation of a much
more significant number of addresses and security measures.
(iii) Transport Layer: Layer 3 of the TCP/IP protocol suite is functionally synonymous
with the transport layer of the OSI model. This layer is tasked with the functions of
error control, flow control, congestion control, segmentation, and addressing in an
end-to-end manner; it is also independent of the underlying network. Transmission
control protocol (TCP) and user datagram protocol (UDP) are the core protocols upon
which this layer is built, which in turn enables it to have the choice of providing
connection-oriented or connectionless services between two or more hosts or
networked devices.
(iv) Application Layer: The functionalities of the application layer, layer 4, of the TCP/IP
protocol suite are synonymous with the collective functionalities of the OSI model’s
session, presentation, and application layers. This layer enables an end-user to access
the services of the underlying layers and defines the protocols for the transfer of data.
Hypertext transfer protocol (HTTP), file transfer protocol (FTP), simple mail transfer
protocol (SMTP), domain name system (DNS), routing information protocol (RIP),
and simple network management protocol (SNMP) are some of the core protocols
associated with this layer.
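As an illustrative sketch (not part of the source answer), the following Python program exchanges one message between two endpoints over the loopback interface. The application layer is the byte protocol we define ourselves; TCP (transport), IP (internet), and the link layer are handled by the operating system beneath the socket API:

```python
# Minimal end-to-end exchange following the TCP/IP suite: the client
# sends an application-layer payload, the server replies, and the
# transport/internet/link layers are provided by the OS network stack.
import socket
import threading

def server(sock):
    conn, _ = sock.accept()
    with conn:
        data = conn.recv(1024)            # segment delivered by TCP
        conn.sendall(b"ACK:" + data)      # application-layer reply

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))                # ephemeral port on loopback
srv.listen(1)
port = srv.getsockname()[1]
threading.Thread(target=server, args=(srv,), daemon=True).start()

cli = socket.create_connection(("127.0.0.1", port))
cli.sendall(b"hello")                     # application-layer payload
reply = cli.recv(1024)
cli.close()
assert reply == b"ACK:hello"
```

Both hosts here live on one machine for convenience; across a real network only the addresses change, not the layered structure.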

2b. Outline the interdependence and reach of IoT over various application domains and
networking paradigms. (8)
Figure shows the various technological interdependencies of IoT with other domains and
networking paradigms such as M2M, CPS, the Internet of environment (IoE), the Internet of
people (IoP), and Industry 4.0. Each of these networking paradigms is a massive domain on its
own, but the omnipresent nature of IoT implies that these domains act as subsets of IoT. The
paradigms are briefly discussed here.
(i) M2M: The M2M or the machine-to-machine paradigm signifies a system of
connected machines and devices, which can talk amongst themselves without human
intervention. The communication between the machines can be for updates on
machine status (stocks, health, power status, and others), collaborative task
completion, overall knowledge of the systems and the environment, and others.
(ii) CPS: The CPS or the cyber physical system paradigm implies a closed control
loop—from sensing, processing, and finally to actuation—using a feedback
mechanism. CPS helps in maintaining the state of an environment through the
feedback control loop, which ensures that until the desired state is attained, the
system keeps on actuating and sensing. Humans have a simple supervisory role in
CPS-based systems; most of the ground-level operations are automated.
(iii) IoE: The IoE paradigm is mainly concerned with minimizing and even reversing the
ill-effects of the permeation of Internet-based technologies on the environment [3].
The major focus areas of this paradigm include smart and sustainable farming,
sustainable and energy-efficient habitats, enhancing the energy efficiency of systems
and processes, and others. In brief, we can safely assume that any aspect of IoT that
concerns and affects the environment, falls under the purview of IoE.
(iv) Industry 4.0: Industry 4.0 is commonly referred to as the fourth industrial revolution
pertaining to digitization in the manufacturing industry. The previous revolutions
chronologically dealt with mechanization, mass production, and automation through electronics and computing, respectively. This paradigm strongly puts forward the concept of smart
factories, where machines talk to one another without much human involvement
based on a framework of CPS and IoT. The digitization and connectedness in
Industry 4.0 translate to better resource and workforce management, optimization of
production time and resources, and better upkeep and lifetimes of industrial systems.
(v) IoP: IoP is a new technological movement on the Internet which aims to decentralize
online social interactions, payments, transactions, and other tasks while maintaining
confidentiality and privacy of its users' data. A well-known IoP site states that, just as the introduction of Bitcoin has severely limited the power of banks and governments, the acceptance of IoP will limit the power of corporations, governments, and their spy agencies.

2c. Summarize the characteristic features of IoT systems. (4)


Ans. “The Internet of Things (IoT) is the network of physical objects that contain embedded
technology to communicate and sense or interact with their internal states or the external
environment.”
Typically, IoT systems can be characterized by the following features [2]:
• Associated architectures, which are also efficient and scalable.
• No ambiguity in naming and addressing.
• Massive number of constrained devices, sleeping nodes, mobile devices, and non-IP devices.
• Intermittent and often unstable connectivity.
3a. Outline the basic differences between transducers, sensors, and actuators. (6)

3b. Compare mechanical, soft, and shape memory polymer based actuators (6)
1. Mechanical actuators
In mechanical actuation, the rotary motion of the actuator is converted into linear motion to
execute some movement. The use of gears, rails, pulleys, chains, and other devices are necessary
for these actuators to operate. These actuators can be easily used in conjunction with pneumatic,
hydraulic, or electrical actuators. They can also work in a standalone mode. The best example of
a mechanical actuator is a rack and pinion mechanism.
2. Soft actuators
Soft actuators (e.g., polymer-based) consist of elastomeric polymers that are used as embedded
fixtures in flexible materials such as cloth, paper, fiber, particles, and others [7]. The conversion
of molecular level microscopic changes into tangible macroscopic deformations is the primary
working principle of this class of actuators. These actuators have a high stake in modern-day
robotics. They are designed for tasks involving fragile objects, such as harvesting fruit in agriculture, or for performing precise operations, like manipulating internal organs during robot-assisted surgeries.
3. Shape memory polymers
Shape memory polymers (SMP) are considered as smart materials that respond to some external
stimulus by changing their shape, and then revert to their original shape once the affecting
stimulus is removed [6]. Features such as high strain recovery, biocompatibility, low density,
and biodegradability characterize these materials. SMP-based actuators function similarly to our muscles. Modern-day SMPs have been designed to respond to a wide range of stimuli such as pH changes, heat differentials, changes in light intensity and frequency, magnetic changes, and
others. Photopolymer/light-activated polymers (LAP) are a particular type of SMP, which
require light as a stimulus to operate. LAP-based actuators are characterized by their rapid
response times. Using only the variation of light frequency or its intensity, LAPs can be
controlled remotely without any physical contact. The development of LAPs whose shape can
be changed by the application of a specific frequency of light has been reported. The polymer
retains its shape after removal of the activating light. In order to change the polymer back to its
original shape, a light stimulus of a different frequency has to be applied to the polymer.
3c. Classify sensing types based on the nature of the environment and the physical sensors.
(8)
Sensing can be broadly divided into four different categories based on the nature of the
environment being sensed and the physical sensors being used to do so (Figure): 1) scalar
sensing, 2) multimedia sensing, 3) hybrid sensing, and 4) virtual sensing.
Scalar sensing:
Scalar sensing encompasses the sensing of features that can be quantified simply by measuring
changes in the amplitude of the measured values with respect to time [3]. Quantities such as
ambient temperature, current, atmospheric pressure, rainfall, light, humidity, flux, and others are
considered as scalar values as they normally do not have a directional or spatial property associated with them. Simply measuring the changes in their values over time provides
enough information about these quantities. The sensors used for measuring these scalar
quantities are referred to as scalar sensors, and the act is known as scalar sensing. A simple
scalar temperature sensing of a fire detection event is shown in Figure (a).
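A minimal sketch of the fire-detection example: the event is flagged purely from the change in amplitude of one scalar quantity (temperature) over time. The threshold value is an illustrative assumption.

```python
# Scalar sensing sketch: detect a fire event from a time series of
# temperature samples alone (no spatial/directional information).

FIRE_THRESHOLD_C = 60.0          # assumed alarm threshold

def detect_fire(readings):
    """readings: list of (timestamp, temperature_C); return alarm times."""
    return [t for t, temp in readings if temp > FIRE_THRESHOLD_C]

samples = [(0, 24.1), (1, 25.0), (2, 78.3), (3, 91.0)]
assert detect_fire(samples) == [2, 3]   # alarm at timestamps 2 and 3
```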
Multimedia sensing
Multimedia sensing encompasses the sensing of features that have a spatial variance property
associated with the property of temporal variance [4]. Unlike scalar sensors, multimedia sensors
are used for capturing the changes in amplitude of a quantifiable property concerning space
(spatial) as well as time (temporal). Quantities such as images, direction, flow, speed,
acceleration, sound, force, mass, energy, and momentum have both direction as well as magnitude. Additionally, these quantities follow the vector law of addition and hence are
designated as vector quantities. They might have different values in different directions for the
same working condition at the same time. The sensors used for measuring these quantities are
known as vector sensors. A simple camera-based multimedia sensing using surveillance as an
example is shown in Figure (b).
Hybrid sensing
The act of using scalar as well as multimedia sensing at the same time is referred to as hybrid
sensing. Many a time, there is a need to measure certain vector as well as scalar properties of an
environment at the same time. Under these conditions, a range of various sensors are employed
(from the collection of scalar as well as multimedia sensors) to measure the various properties of
that environment at any instant of time, and temporally map the collected information to
generate new information.
For example, in an agricultural field, it is required to measure the soil conditions at regular
intervals of time to determine plant health. Sensors such as soil moisture and soil temperature
are deployed underground to estimate the soil’s water retention capacity and the moisture being
held by the soil at any instant of time. However, this setup only determines whether the plant is
getting enough water or not. There may be a host of other factors besides water availability,
which may affect a plant’s health. The additional inclusion of a camera sensor with the plant
may be able to determine the actual condition of a plant by additionally determining the color of
leaves. The aggregate information from soil moisture, soil temperature, and the camera sensor
will be able to collectively determine a plant’s health at any instant of time. Other common
examples of hybrid sensing include smart parking systems, traffic management systems, and
others. Figure(c) shows an example of hybrid sensing, where a camera and a temperature sensor
are collectively used to detect and confirm forest fires during wildlife monitoring.
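The plant-health example above can be sketched as a simple fusion rule (an assumption-laden illustration: the thresholds are invented, and the leaf-greenness score stands in for a feature that real image processing would extract from the camera frame):

```python
# Hybrid sensing sketch: fuse a scalar reading (soil moisture) with a
# multimedia-derived feature (leaf greenness from a camera image) into
# a single plant-health decision.

def plant_health(soil_moisture_pct, leaf_greenness):
    """leaf_greenness in [0, 1] would come from image processing."""
    watered = soil_moisture_pct >= 30.0
    green = leaf_greenness >= 0.6
    if watered and green:
        return "healthy"
    if watered and not green:
        return "check nutrients/pests"   # water alone can't explain it
    return "needs watering"

assert plant_health(45.0, 0.8) == "healthy"
assert plant_health(45.0, 0.3) == "check nutrients/pests"
assert plant_health(10.0, 0.7) == "needs watering"
```

Neither sensor alone reaches the middle verdict; that is the point of hybrid sensing.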
Virtual sensing
Many a time, there is a need for very dense and large-scale deployment of sensor nodes spread
over a large area for monitoring of parameters. One such domain is agriculture [5]. Here, often,
the parameters being measured, such as soil moisture, soil temperature, and water level, do not
show significant spatial variations.
Hence, if sensors are deployed in the fields of farmer A, it is highly likely that the measurements from his sensors will provide reasonably accurate estimates for his neighbor B's fields; this is especially true of fields immediately surrounding A's fields. Exploiting this
property, if the data from A’s field is digitized using an IoT infrastructure and this system
advises him regarding the appropriate watering, fertilizer, and pesticide regimen for his crops,
this advisory can also be used by B for maintaining his crops. In short, A's sensors are being
used for actual measurement of parameters; whereas virtual data (which does not have actual
physical sensors but uses extrapolation-based measurements) is being used for advising B. This
is the virtual sensing paradigm. Figure(d) shows an example of virtual sensing. Two temperature
sensors S1 and S3 monitor three nearby events E1, E2, and E3 (fires). The event E2 does not
have a dedicated sensor for monitoring it; however, through the superposition of readings from
sensors S1 and S3, the presence of fire in E2 is inferred.
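One hedged way to sketch the superposition of S1 and S3 readings is inverse-distance-weighted interpolation; the positions and temperatures below are invented for the example, and real deployments may use other estimators:

```python
# Virtual sensing sketch: no physical sensor sits at location E2, so its
# temperature is estimated from the readings of real sensors S1 and S3
# using inverse-distance weighting along one spatial axis.

def virtual_reading(sensors, x):
    """sensors: list of (position, value); x: the unsensed position."""
    weights = [1.0 / abs(x - pos) for pos, _ in sensors]
    total = sum(weights)
    return sum(w * v for w, (_, v) in zip(weights, sensors)) / total

s1, s3 = (0.0, 80.0), (10.0, 60.0)        # (position, temperature)
e2 = virtual_reading([s1, s3], 5.0)       # midway between the sensors
assert abs(e2 - 70.0) < 1e-9              # equal weights: simple average
assert virtual_reading([s1, s3], 2.5) > e2  # closer to the hotter sensor
```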

4a. Compare the common commercially available sensors used for IoT-based sensing
applications(6).
4b. Outline a simple actuation mechanism. (6)
An actuator can be considered as a machine or system’s component that can affect the movement
or control the said mechanism or the system. Control systems affect changes to the environment
or property they are controlling through actuators. The system activates the actuator through a
control signal, which may be digital or analog. This elicits a response from the actuator, typically in the form of mechanical motion. The control system of an actuator can be a
mechanical or electronic system, a software-based system (e.g., an autonomous car control
system), a human, or any other input. Figure 5.5 shows the outline of a simple actuation system.
A remote user sends commands to a processor. The processor instructs a motor controlled
robotic arm to perform the commanded tasks accordingly. The processor is primarily responsible
for converting the human commands into sequential machine-language command sequences,
which enables the robot to move. The robotic arm finally moves the designated boxes, which
was its assigned task. (Figure labels: Monitoring → Processing → Actuation; Environment, Sensor node, Motor-driven mechanism.)
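The command-to-motion chain described above can be sketched as follows (a hypothetical illustration: the command names, step sequences, and logging format are all invented, not from the source):

```python
# Sketch of the actuation chain: a remote command is translated by the
# processor into a sequence of low-level motor steps that drive the
# robotic arm.

COMMANDS = {                      # human command -> machine-level steps
    "pick":  ["arm_down", "gripper_close", "arm_up"],
    "place": ["arm_down", "gripper_open", "arm_up"],
}

def process(command):
    """Processor: convert a human command into sequential motor steps."""
    if command not in COMMANDS:
        raise ValueError(f"unknown command: {command}")
    return COMMANDS[command]

def actuate(steps):
    """Actuator: 'execute' each step; here we just log the motion."""
    return [f"motor:{s}" for s in steps]

log = actuate(process("pick"))
assert log == ["motor:arm_down", "motor:gripper_close", "motor:arm_up"]
```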
4c. Explain four common characteristics of actuators based on which they are selected. (8)
The correct choice of actuators is necessary for the long-term sustenance and continuity of
operations, as well as for increasing the lifetime of the actuators themselves. A set of four
characteristics can define all actuators:
• Weight: The physical weight of an actuator limits its application scope. For example, heavier actuators are generally preferred for industrial applications and for deployments requiring no mobility. In contrast, lightweight actuators typically find common usage in portable systems such as vehicles, drones, and home IoT applications. It is to be noted that this is
not always true. Heavier actuators also have selective usage in mobile systems, for example,
landing gears and engine motors in aircraft.
• Power Rating: This helps in deciding the nature of the application with which an actuator can
be associated. The power rating defines the minimum and maximum operating power an
actuator can safely withstand without damage to itself. Generally, it is indicated as the power-to-
weight ratio for actuators. For example, smaller servo motors used in hobby projects typically
have a maximum rating of 5 VDC, 500 mA, which is suitable for battery-powered operation. Exceeding this limit might be detrimental to the performance of the
actuator and may cause burnout of the motor. In contrast to this, servo motors in larger
applications have a rating of 460 VAC, 2.5 A, which requires standalone power supply systems
for operations. It is to be noted that actuators with still higher ratings are available and vary
according to application requirements.
• Torque to Weight Ratio: The ratio of torque to the weight of the moving part of an
instrument/device is referred to as its torque/weight ratio. This indicates the sensitivity of the
actuator. The higher the weight of the moving part, the lower its torque-to-weight ratio for a given power.
• Stiffness and Compliance: The resistance of a material against deformation is known as its
stiffness, whereas compliance of a material is the opposite of stiffness. Stiffness can be directly
related to the modulus of elasticity of that material. Stiff systems are considered more accurate
than compliant systems as they have a faster response to changes in the applied load. For
example, hydraulic systems are considered as stiff and non-compliant, whereas pneumatic
systems are considered as compliant.
5a. Explain event detection using an off-site remote processing topology with a block
diagram. (10)
The off-site processing paradigm, as opposed to the on-site processing paradigm, allows for latencies (due to processing or network delays); in return, it is significantly cheaper than on-site
processing topologies. This difference in cost is mainly due to the low demands and
requirements of processing at the source itself. Often, the sensor nodes are not required to
process data on an urgent basis, so having a dedicated and expensive on-site processing
infrastructure is not sustainable for large-scale deployments typical of IoT deployments.
In the off-site processing topology, the sensor node is responsible for the collection and framing
of data that is eventually to be transmitted to another location for processing. Unlike the on-site
processing topology, the off-site topology has a few dedicated high-processing enabled devices,
which can be borrowed by multiple simpler sensor nodes to accomplish their tasks. At the same
time, this arrangement keeps the costs of large-scale deployments extremely manageable [5].
In the off-site topology, the data from these sensor nodes (data generating sources) is transmitted
either to a remote location (which can either be a server or a cloud) or to multiple processing
nodes. Multiple nodes can come together to share their processing power in order to
collaboratively process the data (which is important in case a feasible communication pathway
or connection to a remote location cannot be established by a single node).
Remote processing :
This is one of the most common processing topologies prevalent in present-day IoT solutions. It
encompasses sensing of data by various sensor nodes; the data is then forwarded to a remote
server or a cloud-based infrastructure for further processing and analytics. The processing of
data from hundreds and thousands of sensor nodes can be simultaneously offloaded to a single,
powerful computing platform; this results in massive cost and energy savings by enabling the
reuse and reallocation of the same processing resource while also enabling the deployment of
smaller and simpler processing nodes at the site of deployment [4]. This setup also ensures
massive scalability of solutions, without significantly affecting the cost of the deployment.
Figure shows the outline of one such paradigm, where the sensing of an event is performed
locally, and the decision making is outsourced to a remote processor (here, cloud). However,
this paradigm tends to use up a lot of network bandwidth and relies
heavily on the presence of network connectivity between the sensor nodes and the remote
processing infrastructure.
5b. Explain the data offloading strategies: Offload location and Offload decision making.
(10)
Offload location
The choice of offload location decides the applicability, cost, and sustainability of the IoT application and deployment. We distinguish the offload location into four types:
• Edge: Offloading processing to the edge implies that the data processing is facilitated to a
location at or near the source of data generation itself. Offloading to the edge is done to achieve
aggregation, manipulation, bandwidth reduction, and other data operations directly on an IoT
device [7].
• Fog: Fog computing is a decentralized computing infrastructure that is utilized to conserve
network bandwidth, reduce latencies, restrict the amount of data unnecessarily flowing through
the Internet, and enable rapid mobility support for IoT devices. The data, computing, storage and
applications are shifted to a place between the data source and the cloud resulting in
significantly reduced latencies and network bandwidth usage [8].
• Remote Server: A simple remote server with good processing power may be used with IoT-
based applications to offload the processing from resource constrained IoT devices. Rapid
scalability may be an issue with remote servers, and they may be costlier and hard to maintain in
comparison to solutions such as the cloud [4].
• Cloud: Cloud computing provides access to a shared pool of remotely hosted, configurable
resources, platforms, and high-level services. A cloud is provisioned for processing offloading so
that processing resources can be rapidly allocated with minimal effort over the Internet and
accessed globally. Cloud enables massive scalability of solutions, as the resources allocated to a
user or solution can be enhanced in an on-demand manner, without the user having to go through
the pains of acquiring and configuring new and costly hardware.
Offload decision making
The choice of where to offload and how much to offload is one of the major deciding factors in
the deployment of an offsite-processing topology-based IoT deployment architecture. The
decision making is generally addressed considering data generation rate, network bandwidth, the
criticality of applications, processing resource available at the offload site, and other factors.
Some of these approaches are as follows.
• Naive Approach: This is typically a hard, rule-based approach involving little decision
making: the data from IoT devices are offloaded to the nearest location once certain offload
criteria are met. Although easy to implement, this approach is never recommended, especially for
dense deployments, or deployments where the data generation rate is high or the data being
offloaded is complex to handle (multimedia or hybrid data types). Generally, statistical measures
are consulted for generating the rules for offload decision making.
• Bargaining based approach: This approach, although a bit processing-intensive during the
decision-making stages, alleviates network traffic congestion and enhances service QoS (quality
of service) parameters such as bandwidth and latency. At times, while
trying to maximize multiple parameters for the whole IoT implementation, in order to provide
the most optimal solution or QoS, not all parameters can be treated with equal importance.
Bargaining based solutions try to maximize the QoS by trying to reach a point where the
qualities of certain parameters are reduced, while the others are enhanced. This measure is
undertaken so that the achieved QoS is collaboratively better for the full implementation rather
than a select few devices enjoying very high QoS. Game theory is a common example of the
bargaining based approach. This approach does not need to depend on historical data for
decision making purposes.
• Learning based approach: Unlike the bargaining based approaches, the learning based
approaches generally rely on past behavior and trends of data flow through the IoT architecture.
The optimization of QoS parameters is pursued by learning from historical trends and trying to
optimize previous solutions further and enhance the collective behavior of the IoT
implementation. The memory requirements and processing requirements are high during the
decision making stages. The most common example of a learning based approach is machine
learning.
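The three decision-making approaches can be contrasted with a minimal sketch; the thresholds, utility values, and latency histories below are illustrative assumptions, and the "learning" function is only a stand-in (a mean of past observations) for a real machine learning model.

```python
from statistics import mean

# Naive: fixed rules offload to the nearest site meeting the criteria.
def naive_offload(rate_kbps):
    if rate_kbps < 50:
        return "edge"
    return "fog" if rate_kbps < 500 else "cloud"

# Bargaining (Nash-style): among candidate operating points, pick the
# one maximizing the product of QoS gains, so parameters are traded
# off against each other instead of maximizing any single one.
def bargain(options):
    return max(options, key=lambda o: o["bandwidth"] * o["latency_gain"])

# Learning: predict each site's latency from historical observations
# and offload to the lowest predicted one.
def learned_offload(history):
    return min(history, key=lambda site: mean(history[site]))
```

Note how `bargain` prefers a balanced point over one that maximizes a single parameter, mirroring the collaborative-QoS argument above, while `learned_offload` is the only one that depends on historical data.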
6a. Contrast between structured and unstructured data. Outline various data generating
and storage sources with a block schematic. (10)
The Internet is a vast space where huge quantities and varieties of data are generated regularly
and flow freely. As of January 2018, there are a reported 4.021 billion Internet users worldwide.
The massive volume of data generated by this huge number of users is further enhanced by the
multiple devices utilized by most users. In addition to these data-generating sources, non-human
data generation sources such as sensor nodes and automated monitoring systems further add to
the data load on the Internet. This huge data volume is composed of a variety of data such as e-
mails, text documents (Word docs, PDFs, and others), social media posts, videos, audio files, and
images, as shown in Figure
However, these data can be broadly grouped into two types based on how they can be accessed
and stored: 1) Structured data and 2) unstructured data.
Structured data
These are typically text data that have a pre-defined structure [1]. Structured data are associated
with relational database management systems (RDBMS). These are primarily created by using
length-limited data fields such as phone numbers, social security numbers, and other such
information. Whether the data are human or machine generated, they are easily searchable
by querying algorithms as well as by human-generated queries. Common usage of this type of data
is associated with flight or train reservation systems, banking systems, inventory controls, and
other similar systems. Established languages such as Structured Query Language (SQL) are used
for accessing these data in RDBMS. However, in the context of IoT, structured data holds a
minor share of the total generated data over the Internet.
Unstructured data
In simple words, all the data on the Internet, which is not structured, is categorized as
unstructured. These data types have no pre-defined structure and can vary according to
applications and data-generating sources. Some of the common examples of human-generated
unstructured data include text, e-mails, videos, images, phone recordings, chats, and others [2].
Some common examples of machine-generated unstructured data include sensor data from
traffic, buildings, industries, satellite imagery, surveillance videos, and others. As already
evident from its examples, this data type does not have fixed formats associated with it, which
makes it very difficult for querying algorithms to perform a look-up. Non-relational (NoSQL)
databases are generally used for this data type.
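The contrast can be made concrete with a small sketch: a structured record with a fixed schema answers an SQL query directly, while the same fact buried in unstructured text can only be found by scanning. The table and field names are illustrative assumptions.

```python
import sqlite3

# Structured data: a fixed, length-limited schema, queried with SQL.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE booking (pnr TEXT, phone TEXT)")
db.execute("INSERT INTO booking VALUES ('AB123', '555-0101')")
row = db.execute("SELECT phone FROM booking WHERE pnr = 'AB123'").fetchone()

# Unstructured data: no pre-defined structure, so a look-up degrades
# to a text scan over the raw content.
email = "Hi, my booking AB123 was cancelled, please call 555-0101."
found = "AB123" in email
```

The SQL query exploits the schema (search only the `pnr` column), whereas the e-mail scan has no schema to exploit, which is why unstructured look-ups are so much harder for querying algorithms.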
6b. Outline an IoT deployment (processing offloading) with the various layers of processing
involving different application domains with a diagram. (10)
Figure shows the typical outline of an IoT deployment with the various layers of processing that
are encountered spanning vastly different application domains—from as near as sensing the
environment to as far as cloud-based infrastructure. Starting from the primary layer of sensing,
we can have multiple sensing types tasked with detecting an environment (fire, surveillance, and
others). The sensors enabling these sensing types are integrated with a processor using wired or
wireless connections (mostly, wired). In the event that certain applications require immediate
processing of the sensed data, an on-site processing topology is followed. However, for the
majority of IoT applications, the bulk of the processing is
carried out remotely in order to keep the on-site devices simple, small, and economical.
Typically, for off-site processing, data from the sensing layer can be forwarded to the fog or
cloud or can be contained within the edge layer [6]. The edge layer makes use of devices within
the local network to process data, which is similar to the collaborative processing topology.
The devices within the local network, up to the fog, generally communicate using short-range
wireless connections. In case the data needs to be sent further up the chain to the cloud, long-
range wireless connection enabling access to a backbone network is essential. Fog-based
processing is still considered local because the fog nodes are typically localized within a
geographic area and serve the IoT nodes within a much smaller coverage area as compared to the
cloud. Fog nodes, which are at the level of gateways, may or may not be accessed by the IoT
devices through the Internet.
Finally, the approach of forwarding data to a cloud or a remote server, as shown in the topology
in Figure 6.3, requires the devices to be connected to the Internet through long-range
wireless/wired networks, which eventually connect to a backbone network. This approach is
generally costly concerning network bandwidth, latency, as well as the complexity of the devices
and the network infrastructure involved. This section on data offloading is divided into three
parts: 1) offload location (which outlines where all the processing can be offloaded in the IoT
architecture), 2) offload decision making (how to choose where to offload the processing to and
by how much), and finally 3) offloading considerations (deciding when to offload).
7a. Explain the architecture of a smart irrigation management system. (6)
The architecture of this system consists of three layers: Sensing and actuating layer, remote
processing and service layer, and application layer. These layers perform dedicated tasks
depending on the requirements of the system. Figure depicts the architecture of the system. The
detailed functionalities of different layers of this system are as follows:
(i) Sensing and Actuating layer: This layer deals with different physical devices, such as
sensor nodes, actuators, and communication modules. In the system, a specially
designated sensor node works as a cluster head to collect data from other sensor nodes,
which are deployed on the field for sensing the value of soil moisture and water level.
A cluster head is equipped with two communication modules: ZigBee (IEEE 802.15.4)
and General Packet Radio Service (GPRS). The communication between the deployed
sensor nodes and the cluster head takes place with the help of ZigBee. Further, the
cluster heads use GPRS to transmit data to the remote server. An electrically erasable
programmable read-only memory (EEPROM), integrated with the cluster head, stores
a predefined threshold value of water levels and soil moisture. When the sensed value
of the deployed sensor node drops below this predefined threshold value, a solenoid
(pump) activates to start the irrigation process. In the system, the standard EC-05 soil
moisture sensor is used along with the water level sensor, which is specifically designed
and developed for this project.
(ii) Processing and Service layer: This layer acts as an intermediate layer between the
sensing and actuating layer and the application layer. The sensed and processed data are
stored in the server for future use. Moreover, these data are accessible at any time from
any remote location by authorized users. Depending on the sensed values from the
deployed sensor nodes, the pump actuates to irrigate the field.
(iii) Application layer: The farmer can access the status of the pump (whether it is switched
on or off) and the value of different soil parameters from his/her cell phone. This
information is accessible with the help of the integrated GSM facility of the farmers’
cell phone. Additionally, an LED array indicator and LCD system is installed in the
farmers’ house. Using the LCD and LED, a farmer can easily track the condition of his
respective fields. Apart from this mechanism, a farmer can manually access field
information with the help of a Web-based application. Moreover, the farmer can control
the pump using his/her cell phone from a remote location.
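The threshold logic of the sensing and actuating layer can be sketched as follows; the two threshold values stand in for the ones stored in the cluster head's EEPROM and are illustrative assumptions, not values from the actual system.

```python
# Illustrative stand-ins for the thresholds stored in the cluster
# head's EEPROM (units assumed: percent moisture, cm water level).
SOIL_MOISTURE_THRESHOLD = 30.0
WATER_LEVEL_THRESHOLD = 10.0

def pump_command(soil_moisture, water_level):
    """Activate the solenoid pump when either sensed value drops
    below its predefined threshold, as described for the system."""
    if soil_moisture < SOIL_MOISTURE_THRESHOLD or water_level < WATER_LEVEL_THRESHOLD:
        return "ON"
    return "OFF"
```

In the deployed system this check runs on the cluster head, which then reports the pump status upward via GPRS so the farmer can see it at the application layer.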
7b. Classify the deployment model of Cloud with relevant explanation. (6)
As per the National Institute of Standards and Technology (NIST) [1] and Cloud Computing
Standards Roadmap Working Group, the cloud model can be divided into the following
deployment models:
(a) Private Cloud: This type of cloud is owned explicitly by an end user organization. The
internal resources of the organization maintain the private cloud.
(b) Community Cloud: This cloud forms with the collaboration of a set of organizations for a
specific community. For a community cloud, each organization has some shared interests.
(c) Public Cloud: The public cloud is owned by a third-party organization, which provides
services to the common public. The service of this cloud is available for any user, on a payment
basis.
(d) Hybrid Cloud: This type of cloud comprises two or more clouds (private, public, or
community).
7c. Explain the importance and metrics of Service-Level Agreement (SLA) in Cloud
Computing. (8)
Importance of SLA
An SLA is essential in cloud computing architecture for both CSP and customers. It is important
because of the following reasons:
• Customer Point of View: Each CSP has its SLA, which contains a detailed description of the
services. If a customer wants to use a cloud service, he/she can compare the SLAs of different
organizations. Therefore, a customer can choose a preferred CSP based on the SLAs.
• CSP Point of View: In many cases, certain performance issues may occur for a particular service,
because of which a CSP may not be able to provide the services efficiently. Thus, in such a
situation, a CSP can explicitly mention in the SLA that they are not responsible for inefficient
service.
Metrics for SLA
Depending on the type of services, an SLA is constructed with different metrics. However, a few
common metrics that are required to be included for constructing an SLA are as follows:
(i) Availability: This metric signifies the amount of time the service will be accessible for
the customer.
(ii) Response Time: The maximum time that will be taken for responding to a customer
request is measured by response time.
(iii) Portability: This metric indicates the flexibility of transferring the data to another
service.
(iv) Problem Reporting: How to report a problem, and whom to contact and how, is
explained in this metric.
(v) Penalty: The penalty for not meeting the promises mentioned in the SLA.
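Two of these metrics can be made concrete with a small sketch; the promised level and the penalty rate are illustrative assumptions, not values from any real SLA.

```python
def availability(uptime_hours, total_hours):
    """Availability metric: percentage of time the service is
    accessible to the customer."""
    return 100.0 * uptime_hours / total_hours

def penalty(promised_pct, achieved_pct, fee, rate_per_pct=0.05):
    """Penalty metric: refund a fraction of the fee per percentage
    point of availability missed. The 5% rate per point is an
    illustrative assumption."""
    shortfall = max(0.0, promised_pct - achieved_pct)
    return fee * rate_per_pct * shortfall
```

For example, a CSP that promised 99.9% availability but delivered 98.9% on a 100-unit fee would, under this assumed rate, owe roughly a 5-unit penalty, while meeting the promise costs nothing.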
8a. Classify virtualization based on the requirements of the users. Explain. (8)
Based on the requirements of the users, virtualization can be categorized as shown in Figure.
(i) Hardware Virtualization: This type of virtualization indicates the sharing of hardware
resources among multiple users. For example, a single processor appears as many
different processors in a cloud computing architecture. Different operating systems can
be installed in these processors and each of them can work as stand-alone machines.
(ii) Storage Virtualization: In storage virtualization, the storage space from different
entities is accumulated virtually and appears as a single storage location. Through
storage virtualization, a user’s documents or files exist in different locations in a
distributed fashion. However, the users are under the impression that they have a single
dedicated storage space provided to them.
(iii) Application Virtualization: A single application is stored at the cloud end. However, as
per requirement, a user can use the application in his/her local computer without ever
actually installing the application. Similar to storage virtualization, in application
virtualization, the users get the impression that applications are stored and executed in
their local computer.
(iv) Desktop Virtualization: This type of virtualization allows a user to access and utilize
the services of a desktop that resides at the cloud. The users can use the desktop from
their local desktop.
8b. Explain architecture of a sensor-cloud platform with block diagram.(8)
In a traditional cloud computing architecture, two actors, cloud service provider (CSP) and end
users (customer) play the key role. Unlike cloud computing, in sensor-cloud architecture, the
sensor owners play an important role along with the service provider and end users. However, a
service provider in sensor-cloud architecture is known as a sensor-cloud service provider
(SCSP). The detailed architecture of a sensor-cloud is depicted in Figure
Actors in sensor-cloud architecture
Typically, in a sensor-cloud architecture, three actors are present. We briefly describe the role of
each actor.
(i) End User: This actor is also known as a customer of the sensor-cloud services. Typically,
an end user registers him/herself with the infrastructure through a Web portal.
Thereafter, he/she chooses the template of the services that are available in the sensor-
cloud architecture to which he/she is registered. Finally, through the Web portal, the end
user receives the services, as shown in Figure . Based on the type and usage duration of
service, the end user pays the charges to the SCSP.
(ii) Sensor Owner: We have already discussed that the sensor-cloud architecture is based on
the concept of Se-aaS. Therefore, the deployment of the sensors is essential in order to
provide services to the end users. These sensors in a sensor-cloud architecture are owned
and deployed by the sensor owners, as depicted in Figure . A particular sensor owner can
own multiple homogeneous or heterogeneous sensor nodes. Based on the requirements
of the users, these sensor nodes are virtualized and assigned to serve multiple
applications at the same time. On the other hand, a sensor owner receives rent depending
upon the duration and usage of his/her sensor node(s).
(iii) Sensor-Cloud Service Provider (SCSP): An SCSP is responsible for managing the entire
sensor-cloud infrastructure (including management of sensor owners and end users
handling, resource handling, database management, cloud handling etc.), centrally. The
SCSP receives rent from end users with the help of a pre-defined pricing model. The
pricing scheme may include the infrastructure cost, sensor owners’ rent, and the revenue
of the SCSP. Typically, different algorithms are used for managing the entire
infrastructure. The SCSP receives the rent from the end users and shares a partial amount
with the sensor owners. The remaining amount is used for maintaining the infrastructure.
In the process, the SCSP earns a certain amount of revenue from the payment of the end
users.
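The revenue flow described above can be sketched as a simple split of the end user's payment; the share values below are illustrative assumptions, since real SCSP pricing models differ.

```python
def settle_payment(end_user_rent, owner_share=0.4, infra_share=0.35):
    """Split an end user's payment into the sensor owners' rent,
    infrastructure upkeep, and the SCSP's remaining revenue.
    The 40%/35% shares are illustrative assumptions."""
    owner_rent = end_user_rent * owner_share
    infra_cost = end_user_rent * infra_share
    scsp_revenue = end_user_rent - owner_rent - infra_cost
    return owner_rent, infra_cost, scsp_revenue
```

Whatever the actual shares, the structure matches the text: the SCSP collects the rent, passes a partial amount to sensor owners, spends part on maintaining the infrastructure, and keeps the remainder as revenue.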
8c. Explain the features of CloudSim (4)
Features: CloudSim has different features, which are listed as follows:
(1) The CloudSim simulator provides various cloud computing data centers along with different
data center network topologies in a simulation environment.
(2) Using CloudSim, virtualization of server hosts can be done in a simulation.
(3) A user is able to allocate virtual machines (VMs) dynamically.
(4) It allows users to define their own policies for the allocation of host resources to VMs.
(5) It provides flexibility to add or remove simulation components dynamically.
(6) A user can stop and resume the simulation at any instant of time.
9a. Explain fog framework for intelligent public safety in vehicular environments (fog-
FISVER) with a block diagram. (10)
The system highlights a fog framework for intelligent public safety in vehicular environments
(fog-FISVER) [1]. The primary aim of this system is to ensure smart transportation safety (STS)
in public bus services. The system works through the following three steps:
(i) The vehicle is equipped with a smart surveillance system, which is capable of
executing video processing and detecting criminal activity in real time.
(ii) A fog computing architecture works as the mediator between a vehicle and a police
vehicle.
(iii) A mobile application is used to report the crime to a nearby police agent.
Architecture The architecture of the fog-FISVER consists of different IoT components.
Moreover, the developers utilized the advantages of the low-latency fog computing
architecture for designing their system. Fog-FISVER is based on a three-tiered architecture,
as shown in Figure. We will discuss each of the tiers as follows:
(i) Tier 1—In-vehicle FISVER STS Fog: In this system component, a fog node is placed
for detecting criminal activities. This tier accumulates the real-time sensed data from
within the vehicle and processes it to detect possible criminal activities inside the
vehicle. Further, this tier is responsible for creating crime-level metadata and
transferring the required information to the next tier. For performing all the activities,
Tier 1 consists of two subsystems: the image processor and the event dispatcher.
• Image Processor:
The image processor inside Tier 1 is a potent component, which has a capability
similar to the human eye for detecting criminal activities. Developers of the system
used a deep-learning-based approach for enabling image processing techniques in the
processor. To implement the fog computing architecture in the vehicle, a Raspberry-
Pi-3 processor board is used, which is equipped with a high-quality camera. Further,
this architecture uses template matching and correlation to detect the presence of
dangerous articles (such as a pistol or a knife) in the sub-image of a video frame.
Typically, the image processor stores a set of crime object templates in the fog-
FISVER STS fog infrastructure, which is present in Tier 2 of the system. The image
processor is divided into the following three parts:
(a) Crime definition downloader: This component periodically checks for the
presence of new crime object template definitions in fog-FISVER STS fog
infrastructure. If a new crime object template is available, it is stored locally.
(b) Crime definition storage: In order to use template matching, the crime object
template definition is required to be stored in the system. The crime definition storage
is used to store all the possible crime object template definitions.
(c) Algorithm launcher: This component initiates the instances of the registered
algorithm in order to match the template with the video captured by the camera
attached in the vehicles. If a crime object is matched with the video, criminal activity
is confirmed.
• Event dispatcher: This is another key component of Tier 1. The event dispatcher is
responsible for accumulating the data sensed from vehicles and the image processor.
After the successful detection of criminal activity, the information is sent to the fog-
FISVER STS fog infrastructure. The components of the event dispatcher are as
follows:
(a) Event notifier: It transfers the data to the fog-FISVER STS fog infrastructure, after
receiving it from the attached sensor nodes in the vehicle.
(b) Data gatherer: This is an intermediate component between the event notifier and
the physical sensor; it helps to gather sensed data.
(c) Virtual sensor interface: Multiple sensors that sense data from different locations
of the vehicle are present in the system. The virtual sensor interface helps to maintain
a particular procedure to gather data. This component also cooperates to register the
sensors in the system.
(ii) Tier 2—FISVER STS Fog Infrastructure: Tier 2 works on top of the fog architecture.
Primarily, this tier has three responsibilities—keep updating the new object template definitions,
classifying events, and finding the most suitable police vehicle to notify the event. FISVER STS
fog infrastructure is divided into two sub-components:
• Target Object Training: Practically, there are different types of crime objects. The
system needs to be kept up to date regarding all crime objects. This sub-component of
Tier 2 is responsible for creating, updating, and storing the crime object definition.
The algorithm launcher uses these definitions in Tier 1 for the template matching
process. The template definition includes different features of the crime object such
as color gradient and shape format. A new object definition is stored in the definition
database. The database needs to be updated based on the availability of new
template definitions.
• Notification Factory: This sub-component receives notification about the events in a
different vehicle with the installed system. Further, this component receives and
validates the events. In order to handle multiple events, it maintains a queue.
(iii) Tier 3: This tier consists of mobile applications that are executed on the users’ devices. The
application helps a user, who witnesses a crime, to notify the police.
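The template-matching idea behind the Tier 1 image processor can be illustrated with a minimal sketch over tiny integer "images"; the real fog-FISVER system applies deep learning and correlation to camera video frames, so this is only a toy analogue with assumed data.

```python
def match_score(frame, template, top, left):
    """Sum of squared differences between the template and the
    frame patch at (top, left); 0 means a perfect match."""
    return sum(
        (frame[top + i][left + j] - template[i][j]) ** 2
        for i in range(len(template))
        for j in range(len(template[0]))
    )

def detect(frame, template, threshold=0):
    """Slide the crime-object template over the frame; confirm a
    detection when any patch scores at or below the threshold."""
    th, tw = len(template), len(template[0])
    for top in range(len(frame) - th + 1):
        for left in range(len(frame[0]) - tw + 1):
            if match_score(frame, template, top, left) <= threshold:
                return True
    return False

# Toy 4x4 grayscale frame containing the 2x2 "object" pattern.
frame = [
    [0, 0, 0, 0],
    [0, 9, 7, 0],
    [0, 8, 9, 0],
    [0, 0, 0, 0],
]
template = [[9, 7], [8, 9]]
```

In the deployed system, templates like this one would be fetched by the crime definition downloader, kept in the crime definition storage, and run against live video by the algorithm launcher.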
9b. Explain hardware components and front end design features of AmbuSens system.
(10)
Hardware
In the AmbuSens system, a variety of hardware components are used such as sensors,
communication units, and other computing devices.
• Sensors: The sensors used in the AmbuSens system are non-invasive. The description of the
sensors used for forming the WBAN in the AmbuSens system are as follows:
(i) Optical Pulse Sensing Probe: It senses the photoplethysmogram (PPG) signal and
transmits it to a GSR expansion module. Typically, PPG signals are sensed from the
ear lobe, fingers, or other locations of the human body. Further, the GSR expansion
module transfers the sensed data to a device in real-time.
(ii) Electrocardiogram (ECG) unit and sensor: The ECG module used in AmbuSens is in
the form of a kit, which contains ECG electrodes, biophysical 9” leads, biophysical
18” leads, alcohol swabs, and wrist strap. Typically, the ECG sensor measures the
pathway of electrical impulses through the heart to sense the heart’s responses to
physical exertion and other factors affecting cardiac health.
(iii) Electromyogram (EMG) sensor: This sensor is used to analyze and measure the
biomechanics of the human body. Particularly, the EMG sensor is used to measure
different electrical activity related to muscle contractions; it also assesses nerve
conduction, and muscle response in injured tissue.
(iv) Temperature sensor: The body temperature of patients changes with the condition of
the body. Therefore, a temperature sensor is included in the AmbuSens system, which
can easily be placed on the body of the patient.
(v) Galvanic Skin Response (GSR) sensor: The GSR sensor is used for measuring the
change in electrical characteristics of the skin.
• Local Data Processing Unit (LDPU): In AmbuSens, all the sensors attached to the human
body sense and transmit the sensed data to a centralized device, which is called an LDPU. An
LDPU is a small processing board with limited computation capabilities. The connectivity
between the sensors and the LDPU follows a single-hop star topology. The LDPU is
programmed in such a way that it can receive the physiological data from multiple sensor
nodes, simultaneously. Further, it transmits the data to the cloud for long-term storage and
heavy processing.
• Communication Module: Each sensor node consists of a Bluetooth (IEEE 802.15.1 standard)
module. The communication between the sensor nodes and the LDPU takes place with the help
of Bluetooth, which supports a maximum communication range of 10 meters in line-of-sight.
The LDPU delivers the data to the cloud with 3G/4G communication.
Front End
In the AmbuSens system, three actors—doctor, paramedic/nurse, and patient—are able to
participate and use the services. The web interface is designed as per the requirements of the
actors of the system. Each of the actors has an option to log in and access the system. The
confidentiality of a patient and their physiological data is important in a healthcare system.
Therefore, the system provides different scopes for data accessibility based on the category of an
actor. For example, the detailed health data of a patient is accessible only to the assigned doctor.
These data may not be required for the nurse; therefore, a nurse is unable to access the same set
of data a doctor can access. The system provides the flexibility to a patient to log in to his/her
account and download the details of his/her previous medical/treatment details. Therefore, in
AmbuSens, the database is designed in an efficient way such that it can deliver the customized
data to the respective actor. Each of the users has to register with the system to avail of the
service of the AmbuSens. Therefore, in this system, the registration process is also designed in a
customized fashion; that is, the details a user must enter into the registration form are
different for different actors. For example, a doctor must enter his/her registration number in the
registration form.
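The role-based data accessibility described above can be sketched as a simple per-role field filter; the role names and physiological fields are illustrative assumptions about the AmbuSens database design, not its actual schema.

```python
# Assumed per-role visibility: doctors see everything, nurses a
# subset, and patients their own identity and history.
VISIBLE_FIELDS = {
    "doctor": {"name", "ecg", "emg", "temperature", "history"},
    "nurse": {"name", "temperature"},
    "patient": {"name", "history"},
}

def view_record(role, record):
    """Return only the fields the actor's role may access;
    unknown roles see nothing."""
    allowed = VISIBLE_FIELDS.get(role, set())
    return {k: v for k, v in record.items() if k in allowed}

record = {"name": "P-01", "ecg": [0.1, 0.4], "emg": [0.2],
          "temperature": 37.2, "history": "prior visit notes"}
```

A filter of this kind is one way the back end could deliver "customized data to the respective actor", so that, as in the text, a nurse cannot access the full set of data a doctor can.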
10a. Explain the architecture and components of healthcare IoT with block diagrams. (12)
A typical architecture for healthcare IoT is shown in Figure. We divide the architecture into four
layers. The detailed description of these layers are as follows:
(i) Layer 1: We have already explained in previous chapters that sensors are one of the key
enablers of IoT infrastructure. Layer 1 contains different physiological sensors that are
placed on the human body. These sensors collect the values of various physiological
parameters. The physiological data are analyzed to extract meaningful information.
(ii) Layer 2: Layer 1 delivers data to Layer 2 for short-term storage and low-level
processing. The devices that belong to Layer 2 are commonly known as local processing
units (LPU) or centralized hubs. These units collect the sensed data from the
physiological sensors attached to the body and process it based on the architecture’s
requirement. Further, LPUs or the centralized hubs forward the data to Layer 3.
(iii) Layer 3: This layer receives the data from Layer 2 and performs application-specific
high-level analytics. Typically, this layer consists of cloud architecture or high-end servers.
The data from multiple patients, which may be from the same or different locations, are
accumulated in this layer. Post analysis of data, some inferences or results are provided
to the application in Layer 4.
(iv) Layer 4: The end-users directly interact with Layer 4 through receiver-side applications.
The modes of accessibility of these services by an end user are typically through
cellphones, computers, and tablets.
i) Sensors: We have already explained that Layer 1 mainly consists of physiological
sensors that collect the physiological parameters of the patient.
ii) Wireless Connectivity: Without proper connectivity and communication, the data
sensed by the physiological sensors are of no use in an IoT-based healthcare system.
Typically, the communication between the wearable sensors and the LPU is through
either wired or wireless connectivity. The wireless communication between the
physiological sensors and LPU occurs with the help of Bluetooth and ZigBee. On the
other hand, the communication between the LPU and the cloud or server takes place
with Internet connectivity such as Wi-Fi and WLAN. In Layer 4 of the healthcare
IoT architecture, the healthcare data are received by the end users with different
devices such as laptops, desktops, and cellphones. These communication protocols
vary depending on the type of device in use. For example, when a service is received
by a cellphone, it uses GSM (global system for mobile communications). On the
other hand, if the same service is received on a desktop, it can be through Ethernet or
Wi-Fi. Communication and connectivity in healthcare IoT is an essential component.
iii) Privacy and Security: The privacy and security of health data is a major concern in
healthcare IoT services. In a healthcare IoT architecture, several devices connect with
the external world. Moreover, between LPU and the server/cloud, different
networking devices work via network hops (from one networked device to another) to
transmit the data. If any of these devices are compromised, it may result in the theft of
health data of a patient, leading to serious security breaches and ensuing lawsuits. In
order to increase the security of the healthcare data, different healthcare service
providers and organizations are implementing healthcare data encryption and
protection schemes [3, 4].
iv) Analytics: For converting the raw data into information, analytics plays an important
role in healthcare IoT. Several actors, such as doctors, nurses, and patients, access the
healthcare information in a different customized format. This customization allows
each actor in the system to access only the information pertinent to their job/role. In
such a scenario, analytics plays a vital role in providing different actors in the system
access to meaningful information extracted from the raw healthcare data. Analytics is
also used for diagnosing a disease from the raw physiological data available [1, 2].
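The idea of giving each actor a customized view of the same raw data can be sketched as follows; the roles, thresholds, and sample values here are hypothetical and only illustrate the pattern:

```python
from statistics import mean

# Hypothetical raw physiological samples for one patient.
raw = [
    {"patient": "P1", "heart_rate": 72, "spo2": 98},
    {"patient": "P1", "heart_rate": 110, "spo2": 95},
    {"patient": "P1", "heart_rate": 75, "spo2": 97},
]

def view_for(role: str, samples: list) -> dict:
    """Return only the information pertinent to each actor's role."""
    rates = [s["heart_rate"] for s in samples]
    if role == "doctor":      # full clinical detail, including anomalies
        return {"avg_hr": mean(rates), "max_hr": max(rates),
                "alerts": [r for r in rates if r > 100]}
    if role == "patient":     # a simple, easy-to-read summary
        return {"avg_hr": round(mean(rates))}
    raise ValueError(f"unknown role: {role}")

print(view_for("doctor", raw))   # detailed view with an alert for 110 bpm
print(view_for("patient", raw))  # compact summary only
```

The same analytics layer thus serves both a clinician (who needs anomalies flagged) and a patient (who needs a readable summary) from one raw data stream.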
v) Cloud and Fog Computing: In a healthcare IoT system, several physiological sensors
are attached to a patient’s body. These sensors continuously produce a huge amount
of heterogeneous data. For storing these huge amounts of heterogeneous health data,
efficient storage space is essential. These data are used for reviewing the patient’s
history and current health status, and for diagnosing different diseases from the
symptoms of the patient. Typically, the cloud storage space is scalable, where
payment is made as per the usage of space. Consequently, to store health data in a
healthcare IoT system, cloud storage space is used. Analytics on the stored data in
cloud storage space is used for drawing various inferences. The major challenges in
storage are security and delay in accessing the data. Therefore, cloud and fog
computing play a pivotal role in the storage of these massive volumes of
heterogeneous data.
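One way fog computing reduces both storage cost and access delay is by aggregating the high-rate sensor stream near the patient and forwarding only compact summaries to the cloud. A minimal Python sketch of such pre-aggregation (window size and sample values are hypothetical):

```python
from statistics import mean

def fog_aggregate(samples, window=5):
    """Fog-node step: reduce a high-rate sensor stream to one compact
    summary record (mean/min/max) per fixed-size window before the
    records are uploaded to cloud storage."""
    summaries = []
    for i in range(0, len(samples), window):
        w = samples[i:i + window]
        summaries.append({"mean": mean(w), "min": min(w), "max": max(w)})
    return summaries

# Ten raw heart-rate samples become two compact records for the cloud.
stream = [72, 74, 71, 73, 75, 90, 92, 95, 91, 94]
summaries = fog_aggregate(stream, window=5)
assert len(summaries) == 2
```

The cloud then stores and analyzes two records instead of ten, while the fog node retains the raw stream locally for any low-latency queries.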
vi) Interface: The interface is the most important component for users in a healthcare IoT
system. Among IoT applications, healthcare IoT is a very crucial and sensitive
application. Thus, the user interface must be designed in such a way that it can depict
all the required information clearly and, if necessary, reformat or represent it such
that it is easy to understand. Moreover, an interface must also contain all the useful
information related to the services.
10b. Summarize the advantages of Machine Learning (ML) in IoT. (6)
(i) Self-learner: An ML-empowered system is capable of learning from its prior and run-
time experiences, which helps in improving its performance continuously. For
example, an ML-assisted weather monitoring system predicts the weather report of
the next seven days with high accuracy from data collected in the last six months. The
system offers even better accuracy when it analyzes weather data that extends back to
three more months.
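The self-learning property, that prediction error shrinks as the system accumulates more experience, can be illustrated with a toy predictor; the temperature data and the "true" long-run mean here are made up for the sketch:

```python
from statistics import mean

# Hypothetical daily temperatures; the long-run true mean is 30.0 degrees.
history = [27, 33, 28, 29, 31, 30, 32, 30, 29, 31]

def predict(data):
    """A toy 'learner': predict tomorrow's value as the historical mean."""
    return mean(data)

TRUE_MEAN = 30.0
err_small = abs(predict(history[:4]) - TRUE_MEAN)  # trained on 4 days
err_large = abs(predict(history) - TRUE_MEAN)      # trained on all 10 days
assert err_large < err_small  # more experience, lower prediction error
```

Real ML systems are far more sophisticated, but the principle is the same: each additional batch of data refines the model's estimate.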
(ii) Time-efficient: ML tools are capable of producing faster results as compared to
human interpretation. For example, the weather monitoring system generates a
weather prediction report for the upcoming seven days, using data that goes back to
6–9 months. A manual analysis of such sizeable data for predicting the weather is
difficult and time-consuming. Moreover, the manual process of data analysis also
affects accuracy. In such a situation, ML is beneficial, predicting the weather with
less delay and higher accuracy than humans.
(iii) Self-guided: An ML tool uses a huge amount of data for producing its results. These
tools have the capability of analyzing the huge amount of data for identifying trends
autonomously. As an example, when we search for a particular item on an online e-
commerce website, an ML tool analyzes our search trends. As a result, it shows a
range of products similar to the original item that we searched for initially.
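The trend-identification step behind such recommendations can be sketched as simple co-occurrence counting; the session data and item names below are hypothetical:

```python
from collections import Counter

# Hypothetical browsing/purchase sessions: items viewed together.
sessions = [
    {"phone", "charger", "case"},
    {"phone", "case"},
    {"phone", "earbuds"},
    {"laptop", "mouse"},
]

def recommend(item, logs, top=2):
    """Autonomously identify a trend: suggest the items that most often
    co-occur with the item the user searched for."""
    co = Counter()
    for session in logs:
        if item in session:
            co.update(session - {item})
    return [i for i, _ in co.most_common(top)]

# "case" co-occurs with "phone" most often, so it is suggested first;
# unrelated items such as "laptop" never appear.
suggestions = recommend("phone", sessions)
```

No human labels the trends here; the association emerges from the data itself, which is the essence of the self-guided property.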
(iv) Minimum Human Interaction Required: In an ML algorithm, humans do not need
to participate in every step of its execution. The ML algorithm trains itself
automatically, based on available data inputs. For instance, let us consider a
healthcare system that predicts diseases. In traditional systems, humans need to
determine the disease by analyzing different symptoms using standard “if–else”
observations. However, the ML algorithm determines the same disease, based on the
health data available in the system and matching the same with the symptoms of the
patient.
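The contrast between hand-coded "if–else" rules and a data-driven match can be made concrete with a toy nearest-neighbour sketch; the symptom vectors and disease labels are invented for illustration:

```python
# Hypothetical historical records: symptom vector (fever, cough, fatigue),
# each 0 or 1, paired with the confirmed diagnosis.
records = [
    ((1, 1, 0), "flu"),
    ((1, 1, 1), "flu"),
    ((0, 1, 0), "cold"),
    ((0, 0, 1), "anemia"),
]

def diagnose(symptoms):
    """Data-driven prediction: instead of hand-written if-else rules,
    return the diagnosis of the closest stored case
    (1-nearest-neighbour by Hamming distance)."""
    def distance(a, b):
        return sum(x != y for x, y in zip(a, b))
    return min(records, key=lambda rec: distance(rec[0], symptoms))[1]

assert diagnose((1, 1, 1)) == "flu"
assert diagnose((0, 1, 1)) in {"cold", "flu"}  # ties resolve to first match
```

Adding new confirmed cases to `records` updates the system's behaviour automatically, with no human rewriting any decision rules.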
(v) Diverse Data Handling: Typically, IoT systems consist of different sensors and
produce diverse and multi-dimensional data, which are easily analyzed by ML
algorithms. For example, consider the profit of an industry in a financial year. Profits
in such industries depend on the attendance of laborers, consumption of raw
materials, and performance of heavy machinery. The attendance of laborers is
associated with an RFID (radio frequency identification)-based system. On the other
hand, industrial sensors help in the detection of machinery failures, and a scanner
helps in tracking the consumption of raw materials. ML algorithms use these diverse
and multi-dimensional data to determine the profit of the industry in the financial
year.
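How an ML model combines such diverse, multi-dimensional inputs into one estimate can be sketched as a simple linear model; the weights, bias, and feature values below are entirely hypothetical (in practice they would be fitted from past financial-year data):

```python
# Hypothetical model fitted offline on past financial years.
# Features come from three heterogeneous sources: RFID attendance (%),
# scanner-tracked raw-material consumption (tonnes), machinery uptime (%).
WEIGHTS = {"attendance": 0.5, "materials": -0.2, "uptime": 0.8}
BIAS = 10.0

def estimate_profit(features: dict) -> float:
    """Fuse multi-source IoT features into a single profit estimate."""
    return BIAS + sum(WEIGHTS[k] * v for k, v in features.items())

year = {"attendance": 90.0, "materials": 50.0, "uptime": 95.0}
profit = estimate_profit(year)  # one number from three data sources
```

The point is not the linear form itself but that RFID, scanner, and industrial-sensor data, each with different units and dimensions, feed one model without manual reconciliation.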
(vi) Diverse Applications: ML is flexible and can be applied to different application
domains such as healthcare, industry, smart traffic, smart home, and many others.
Two similar ML algorithms may serve two different applications.