Unit-2 Etgbe PDF
AI for Good is an ITU initiative supporting institutions employing AI to tackle some of the world’s
greatest economic and social challenges. For example, the University of Southern California launched
the Centre for Artificial Intelligence in Society, with the goal of using AI to address socially relevant
problems such as homelessness. At Stanford, researchers are using AI to analyze satellite images to
identify which areas have the highest poverty levels.
1. Agriculture
In agriculture, new AI advancements show improvements in crop yield and increase the research and development of growing crops. New artificial intelligence can now predict the time it takes for a crop such as a tomato to be ripe and ready for picking, thus increasing the efficiency of farming. These advances include crop and soil monitoring, agricultural robots, and predictive analytics. Crop and soil monitoring uses new algorithms and data collected in the field to manage and track the health of crops, making farming easier and more sustainable for farmers.
Due to the increase in population and the growing demand for food, there will need to be at least a 70% increase in agricultural yield to sustain this new demand. A growing share of the public perceives that the adoption of these new techniques and the use of artificial intelligence will help reach that goal.
2. Finance
Several large financial institutions have invested in AI engines to assist with their investment practices.
BlackRock’s AI engine, Aladdin, is used both within the company and by clients to help with investment
decisions. Its wide range of functionalities includes the use of natural language processing to read text
such as news, broker reports, and social media feeds. It then gauges the sentiment on the companies
mentioned and assigns a score. Banks such as UBS and Deutsche Bank use an AI engine called Sqreem
(Sequential Quantum Reduction and Extraction Model) which can mine data to develop consumer
profiles and match them with the wealth management products they’d most likely want. Goldman Sachs
uses Kensho, a market analytics platform that combines statistical computing with big data and natural
language processing. Its machine learning systems mine through vast amounts of data on the web and assess correlations between world events and their impact on asset prices. Information extraction, a part of artificial intelligence, is used to extract information from live news feeds and to assist with investment decisions.
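The systems named above are proprietary, but the basic idea of scoring sentiment in financial text can be sketched in a few lines of Python. The example below uses NLTK's general-purpose VADER analyzer (an assumption made for illustration, not the tool any of these firms actually use) on two invented headlines.

# A minimal sketch of lexicon-based sentiment scoring over news snippets.
# Assumes nltk is installed; downloads the VADER lexicon on first run.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)

headlines = [
    "Acme Corp beats earnings expectations and raises guidance",
    "Regulators open an investigation into Acme Corp accounting",
]

sia = SentimentIntensityAnalyzer()
for text in headlines:
    # 'compound' is a normalised score in [-1, 1]; positive values suggest
    # positive sentiment about the company mentioned.
    score = sia.polarity_scores(text)["compound"]
    print(f"{score:+.2f}  {text}")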
3. Personal finance
Several products are emerging that utilize AI to assist people with their personal finances. For example,
Digit is an app powered by artificial intelligence that automatically helps consumers optimize their
spending and savings based on their own personal habits and goals. The app can analyze factors such as
monthly income, current balance, and spending habits, then make its own decisions and transfer money
to the savings account. Wallet.AI, an upcoming startup in San Francisco, builds agents that analyze data
that a consumer would leave behind, from Smartphone check-ins to tweets, to inform the consumer
about their spending behaviour.
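Digit's actual logic is proprietary; the toy Python heuristic below only illustrates the kind of inputs such an app analyses (monthly income, current balance, recent spending) and the kind of decision it automates. All names and thresholds are hypothetical.

def suggest_transfer(monthly_income: float, current_balance: float,
                     recent_spending: float, safety_buffer: float = 500.0) -> float:
    """Toy rule: move a slice of the cash the user is unlikely to need soon.

    This is NOT Digit's algorithm; it only sketches the idea of deciding a
    savings transfer from a few personal-finance signals.
    """
    expected_spending = recent_spending                 # naive forecast for the next period
    spare_cash = current_balance - expected_spending - safety_buffer
    if spare_cash <= 0:
        return 0.0
    # Save a conservative 10% of spare cash, capped at 5% of monthly income.
    return round(min(0.10 * spare_cash, 0.05 * monthly_income), 2)

print(suggest_transfer(monthly_income=3000, current_balance=2200, recent_spending=1400))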
4. Portfolio Management
Robo-advisors are becoming more widely used in the investment management industry. Robo-advisors
provide financial advice and portfolio management with minimal human intervention. This class of financial advisers works on the basis of algorithms built to automatically develop a financial portfolio according to the investment goals and risk tolerance of the client, and can adjust to real-time changes in the market and calibrate the portfolio accordingly.
Another application of AI is in the human resources and recruiting space. There are three ways AI is
being used by human resources and recruiting professionals: to screen resumes and rank candidates
according to their level of qualification, to predict candidate success in given roles through job matching
platforms, and to roll out recruiting chatbots that can automate repetitive communication tasks.
Typically, resume screening involves a recruiter or other HR professional scanning through a database
of resumes.
Some AI applications are geared towards the analysis of audiovisual media content such as movies, TV
programs, advertisement videos or user-generated content. The solutions often involve computer vision,
which is a major application area of AI.
Typical use case scenarios include the analysis of images using object recognition or face recognition
techniques, or the analysis of video for recognizing relevant scenes, objects or faces. The motivation for
using AI-based media analysis can be among other things the facilitation of media search, the creation
of a set of descriptive keywords for a media item, media content policy monitoring (such as verifying
the suitability of content for a particular TV viewing time), speech to text for archival or other purposes,
and the detection of logos, products or celebrity faces for the placement of relevant advertisements.
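As a small illustration of one such building block, the sketch below runs face detection on a single image with OpenCV's bundled Haar cascade. The image file name is hypothetical, and a production media-analysis pipeline would typically use far more capable detectors.

# A minimal face-detection sketch using OpenCV's bundled Haar cascade.
# Assumes opencv-python is installed and "frame.jpg" is a local image file.
import cv2

cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

image = cv2.imread("frame.jpg")                          # e.g. a video still
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Each detection is an (x, y, width, height) box that downstream logic could
# use for ad placement, content policy checks, or descriptive keywords.
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("frame_annotated.jpg", image)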
Machine Learning
Machine learning (ML) is the study of computer algorithms that can improve automatically through
experience and by the use of data. It is seen as a part of artificial intelligence. Machine learning
algorithms build a model based on sample data, known as training data, in order to make predictions or
decisions without being explicitly programmed to do so. Machine learning algorithms are used in a wide
variety of applications, such as in medicine, email filtering, speech recognition, and computer vision,
where it is difficult or unfeasible to develop conventional algorithms to perform the needed tasks.
A subset of machine learning is closely related to computational statistics, which focuses on making
predictions using computers; but not all machine learning is statistical learning. The study of
mathematical optimization delivers methods, theory and application domains to the field of machine
learning. Data mining is a related field of study, focusing on exploratory data analysis through
unsupervised learning. Some implementations of machine learning use data and neural networks in a
way that mimics the working of a biological brain. In its application across business problems, machine
learning is also referred to as predictive analytics.
History:
The term machine learning was coined in 1959 by Arthur Samuel, an American IBMer and pioneer in
the field of computer gaming and artificial intelligence. The synonym self-teaching computers was also used in this time period. A representative book of machine learning research during the 1960s was Nilsson’s book Learning Machines, dealing mostly with machine learning for pattern
classification. Interest related to pattern recognition continued into the 1970s, as described by Duda and
Hart in 1973. In 1981 a report was given on using teaching strategies so that a neural network learns to
recognize 40 characters (26 letters, 10 digits, and 4 special symbols) from a computer terminal.
Tom M. Mitchell provided a widely quoted, more formal definition of the algorithms studied in the
machine learning field: “A computer program is said to learn from experience E with respect to some
class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves
with experience E.” This definition of the tasks in which machine learning is concerned offers a
fundamentally operational definition rather than defining the field in cognitive terms. This follows Alan
Turing’s proposal in his paper “Computing Machinery and Intelligence”, in which the question “Can
machines think?” is replaced with the question “Can machines do what we (as thinking entities) can
do?”.
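Mitchell's definition can be made concrete with a small sketch: here the task T is classifying handwritten digits, the performance measure P is accuracy on held-out digits, and the experience E is an increasing amount of labelled training data. The dataset and model (scikit-learn's digits data and logistic regression) are only illustrative choices.

from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for n in (50, 200, 800):                                  # growing experience E
    model = LogisticRegression(max_iter=2000)
    model.fit(X_train[:n], y_train[:n])                   # learn from n labelled examples
    print(n, round(model.score(X_test, y_test), 3))       # performance P on task T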
Modern-day machine learning has two objectives: one is to classify data based on models which have been developed; the other is to make predictions for future outcomes based on these models. A hypothetical algorithm specific to classifying data may use computer vision of moles coupled with supervised learning in order to train it to classify cancerous moles. A machine learning algorithm for stock trading, on the other hand, may inform the trader of potential future outcomes.
Artificial intelligence
As a scientific endeavour, machine learning grew out of the quest for artificial intelligence. In the early
days of AI as an academic discipline, some researchers were interested in having machines learn from
data. They attempted to approach the problem with various symbolic methods, as well as what was then
termed “neural networks”; these were mostly perceptrons and other models that were later found to be
reinventions of the generalized linear models of statistics. Probabilistic reasoning was also employed,
especially in automated medical diagnosis.
However, an increasing emphasis on the logical, knowledge-based approach caused a rift between AI
and machine learning. Probabilistic systems were plagued by theoretical and practical problems of data
acquisition and representation. By 1980, expert systems had come to dominate AI, and statistics was out
of favour. Work on symbolic/knowledge-based learning did continue within AI, leading to inductive
logic programming, but the more statistical line of research was now outside the field of AI proper, in
pattern recognition and information retrieval. Neural networks research had been abandoned by AI and
computer science around the same time. This line, too, was continued outside the AI/CS field, as
“connectionism”, by researchers from other disciplines including Hopfield, Rumelhart and Hinton. Their
main success came in the mid-1980s with the reinvention of backpropagation.
Machine learning (ML), reorganized as a separate field, started to flourish in the 1990s. The field
changed its goal from achieving artificial intelligence to tackling solvable problems of a practical nature.
It shifted focus away from the symbolic approaches it had inherited from AI, and toward methods and
models borrowed from statistics and probability theory.
The difference between ML and AI is frequently misunderstood. ML learns and predicts based on
passive observations, whereas AI implies an agent interacting with the environment to learn and take
actions that maximize its chance of successfully achieving its goals.
As of 2020, many sources continue to assert that ML remains a subfield of AI. Others have the view that
not all ML is part of AI, but only an ‘intelligent subset’ of ML should be considered AI.
Data mining
Machine learning and data mining often employ the same methods and overlap significantly, but while
machine learning focuses on prediction, based on known properties learned from the training data, data
mining focuses on the discovery of (previously) unknown properties in the data (this is the analysis step
of knowledge discovery in databases). Data mining uses many machine learning methods, but with
different goals; on the other hand, machine learning also employs data mining methods as
“Unsupervised Learning” or as a pre-processing step to improve learner accuracy. Much of the
confusion between these two research communities (which do often have separate conferences and
separate journals, ECML PKDD being a major exception) comes from the basic assumptions they work
with: in machine learning, performance is usually evaluated with respect to the ability to reproduce
known knowledge, while in knowledge discovery and data mining (KDD) the key task is the discovery
of previously unknown knowledge. Evaluated with respect to known knowledge, an uninformed
(unsupervised) method will easily be outperformed by other supervised methods, while in a typical
KDD task, supervised methods cannot be used due to the unavailability of training data.
Optimization
Machine learning also has intimate ties to optimization: many learning problems are formulated as
minimization of some loss function on a training set of examples. Loss functions express the
discrepancy between the predictions of the model being trained and the actual problem instances (for
example, in classification, one wants to assign a label to instances, and models are trained to correctly
predict the pre-assigned labels of a set of examples).
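As a minimal illustration, the mean squared error loss over a tiny training set can be computed directly; this is the quantity an optimizer would try to drive down during training. The numbers are invented.

import numpy as np

y_true = np.array([3.0, -0.5, 2.0, 7.0])   # pre-assigned labels
y_pred = np.array([2.5,  0.0, 2.0, 8.0])   # model outputs

mse = np.mean((y_true - y_pred) ** 2)      # discrepancy between predictions and problem instances
print(mse)                                 # 0.375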
Generalization
The difference between optimization and machine learning arises from the goal of generalization: while
optimization algorithms can minimize the loss on a training set, machine learning is concerned with
minimizing the loss on unseen samples. Characterizing the generalization of various learning algorithms
is an active topic of current research, especially for deep learning algorithms.
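A deliberately over-flexible model makes this gap visible: its loss on the training set can be driven close to zero while its loss on unseen samples usually stays much higher. The sketch below uses a high-degree polynomial fit on synthetic data purely for illustration.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(40, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.3, size=40)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Degree-15 polynomial: flexible enough to memorise the noise in the training set.
model = make_pipeline(PolynomialFeatures(degree=15), LinearRegression())
model.fit(X_train, y_train)

print("train loss:", mean_squared_error(y_train, model.predict(X_train)))
print("test  loss:", mean_squared_error(y_test, model.predict(X_test)))   # typically much larger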
Statistics
Machine learning and statistics are closely related fields in terms of methods, but distinct in their
principal goal: statistics draws population inferences from a sample, while machine learning finds
generalizable predictive patterns. According to Michael I. Jordan, the ideas of machine learning, from
methodological principles to theoretical tools, have had a long pre-history in statistics. He also
suggested the term data science as a placeholder to call the overall field.
Leo Breiman distinguished two statistical modelling paradigms: data model and algorithmic model,
wherein “algorithmic model” means more or less the machine learning algorithms like Random forest.
Some statisticians have adopted methods from machine learning, leading to a combined field that they
call statistical learning.
Approaches
Machine learning approaches are traditionally divided into three broad categories, depending on the
nature of the “Signal” or “Feedback” available to the learning system:
Supervised learning: The computer is presented with example inputs and their desired outputs,
given by a “teacher”, and the goal is to learn a general rule that maps inputs to outputs.
Unsupervised learning: No labels are given to the learning algorithm, leaving it on its own to
find structure in its input. Unsupervised learning can be a goal in itself (discovering hidden
patterns in data) or a means towards an end (feature learning).
Reinforcement learning: A computer program interacts with a dynamic environment in which
it must perform a certain goal (such as driving a vehicle or playing a game against an opponent).
As it navigates its problem space, the program is provided feedback that’s analogous to rewards,
which it tries to maximize.
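A short sketch contrasts the first two paradigms on the same dataset (scikit-learn's iris data, chosen only for illustration); reinforcement learning needs an interactive environment and is only noted in a comment.

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
from sklearn.cluster import KMeans

X, y = load_iris(return_X_y=True)

# Supervised: example inputs plus desired outputs provided by a "teacher".
clf = DecisionTreeClassifier(random_state=0).fit(X, y)
print("predicted label:", clf.predict(X[:1]))

# Unsupervised: no labels; the algorithm finds structure in the inputs on its own.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("discovered cluster:", km.labels_[:1])

# Reinforcement learning is different again: an agent acts in an environment
# and learns from reward feedback; it is not shown here.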
Building robust machine learning models requires substantial computational resources to process the
features and labels. Coding a complex model requires significant effort from data scientists and software
engineers. Complex models can require substantial computing power to execute and can take longer to
derive a usable result.
This represents a trade-off for businesses. They can choose a faster response but a potentially less
accurate outcome. Or they can accept a slower response but receive a more accurate result from the
model. But these compromises aren’t all bad news. The decision of whether to go for a higher cost and
more accurate model over a faster response comes down to the use case.
For example, making recommendations to shoppers on a retail shopping site requires real-time
responses, but can accept some unpredictability in the result. On the other hand, a stock trading system
requires a more robust result. So, a model that uses more data and performs more computations is likely
to deliver a better outcome when a real-time result is not needed.
As Machine Learning as a Service (MLaaS) offerings enter the market, the complexity and quality of
trade-offs will get greater attention. Researchers from the University of Chicago looked at the
effectiveness of MLaaS and found that “they can achieve results comparable to standalone classifiers if
they have sufficient insight into key decisions like classifiers and feature selection”.
Data plays a significant role in the machine learning process. One of the significant issues that machine
learning professionals face is the absence of good quality data. Unclean and noisy data can make the
whole process extremely exhausting. We don’t want our algorithm to make inaccurate or faulty
predictions. Hence the quality of data is essential to enhance the output. Therefore, we need to ensure
that the process of data preprocessing which includes removing outliers, filtering missing values, and
removing unwanted features, is done with the utmost level of perfection.
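A typical pre-processing pass might look like the following sketch, with invented column names and a simple interquartile-range rule standing in for whatever outlier definition a real project would use.

import pandas as pd

# Toy data with the three classic problems: missing values, an outlier, and
# a feature we do not want.
df = pd.DataFrame({
    "income": [42_000, 47_000, 48_000, 52_000, 55_000, 61_000, 9_900_000, None],
    "age":    [34, 29, 41, 38, 45, 52, 31, 27],
    "internal_id": range(8),                        # unwanted feature
})

df = df.drop(columns=["internal_id"])               # remove unwanted features
df = df.dropna()                                    # filter missing values

# Remove outliers with a simple IQR rule on income.
q1, q3 = df["income"].quantile([0.25, 0.75])
iqr = q3 - q1
df = df[df["income"].between(q1 - 1.5 * iqr, q3 + 1.5 * iqr)]

print(df)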
Consider how a child learns what an apple is: all it takes is for you to point to an apple and say “apple” repeatedly, and soon the child can recognize all sorts of apples.
Machine learning is still not up to that level; it takes a lot of data for most algorithms to function properly. For a simple task, an algorithm may need thousands of examples, and for advanced tasks like image or speech recognition it may need lakhs (hundreds of thousands) of examples.
Over-fitting refers to a machine learning model that has learned its training data too closely, including its noise and bias, so that its performance on new data is negatively affected. It is like trying to fit into oversized jeans. Unfortunately, this is one of the significant issues faced by machine learning professionals: an algorithm trained with noisy and biased data will suffer in its overall performance. Let’s understand this with the help of an example. Consider a model trained to differentiate between a cat, a rabbit, a dog, and a tiger. The training data contains 1,000 cats, 1,000 dogs, 1,000 tigers, and 4,000 rabbits. Then there is a considerable probability that the model will identify a cat as a rabbit. In this example, we had a vast amount of data, but it was biased; hence the prediction was negatively affected.
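The effect of such biased data, and one common mitigation (class weighting), can be sketched on synthetic data that mimics the 1,000/1,000/1,000/4,000 split above; the dataset generator and model choices are illustrative only.

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split

# Four classes with proportions mimicking 1,000 cats/dogs/tigers and 4,000 rabbits.
X, y = make_classification(n_samples=7000, n_classes=4, n_informative=6,
                           weights=[1/7, 1/7, 1/7, 4/7], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

plain = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
weighted = LogisticRegression(max_iter=1000, class_weight="balanced").fit(X_tr, y_tr)

# Balanced accuracy exposes how the minority classes fare; class weighting
# typically (though not always) improves it on imbalanced data like this.
print("plain   :", balanced_accuracy_score(y_te, plain.predict(X_te)))
print("weighted:", balanced_accuracy_score(y_te, weighted.predict(X_te)))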
Machine learning models operate within specific contexts. For example, ML models that power
recommendation engines for retailers operate at a specific time when customers are looking at certain
products. However, customer needs change over time, and that means the ML model can drift away
from what it was designed to deliver.
Models can decay for a number of reasons. Drift can occur when new data is introduced to the model.
This is called data drift. It can also occur when our interpretation of the data changes. This is concept
drift.
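A very simple data-drift check compares the distribution a feature had at training time with its live distribution, for example with a two-sample Kolmogorov-Smirnov test; the numbers below are synthetic and the 0.01 threshold is an arbitrary choice. Concept drift is harder to spot automatically because it also needs fresh labels.

import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train_feature = rng.normal(loc=50.0, scale=10.0, size=5_000)  # distribution at training time
live_feature = rng.normal(loc=58.0, scale=10.0, size=5_000)   # customer behaviour has since shifted

result = ks_2samp(train_feature, live_feature)
if result.pvalue < 0.01:                                      # arbitrary alert threshold
    print("Possible data drift detected - consider retraining the model.")
else:
    print("No significant drift detected.")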
To accommodate this drift, you need a model that continuously updates and improves itself using data
that comes in. That means you need to keep checking the model.
That requires collecting features and labels and reacting to changes so the model can be updated and retrained. While some aspects of the retraining can be conducted automatically, some human intervention is needed. It’s critical to recognise that the deployment of a machine learning tool is not a one-off activity.
Machine learning tools require regular review and update to remain relevant and continue to deliver
value.
Creating a model is easy. Building a model can be automatic. However, maintaining and updating the
models requires a plan and resources.
Machine learning models are part of a longer pipeline that starts with the features that are used to train
the model. Then there is the model itself, which is a piece of software that can require modification and
updates. That model requires labels so that the results of an input can be recognised and used by the
model. And there may be a disconnect between the model and the final signal in a system.
In many cases when an unexpected outcome is delivered, it’s not the machine learning that has broken
down but some other part of the chain. For example, a recommendation engine may have offered a
product to a customer, but sometimes the connection between the sales system and the recommendation
could be broken, and it takes time to find the bug. In this case, it would be hard to tell the model if the
recommendation was successful. Troubleshooting issues like this can be quite labour intensive.
Machine learning offers significant benefits to businesses. The ability to predict future outcomes to
anticipate and influence customer behaviour and to support business operations are substantial.
However, ML also brings challenges to businesses. By recognising these challenges and developing
strategies to address them, companies can ensure they are prepared and equipped to handle them and get
the most out of machine learning technology.
7. Slow Implementation
This is one of the common issues faced by machine learning professionals. Machine learning models are highly efficient at providing accurate results, but doing so can take a tremendous amount of time. Slow programs, data overload, and excessive requirements usually mean it takes a long time to produce accurate results. Further, it requires constant monitoring and maintenance to deliver the best output.
Deep Learning
Deep-learning architectures such as deep neural networks, deep belief networks, deep reinforcement
learning, recurrent neural networks and convolutional neural networks have been applied to fields
including computer vision, speech recognition, natural language processing, machine translation,
bioinformatics, drug design, medical image analysis, material inspection and board game programs,
where they have produced results comparable to and in some cases surpassing human expert
performance.
Artificial neural networks (ANNs) were inspired by information processing and distributed
communication nodes in biological systems. ANNs have various differences from biological brains.
Specifically, artificial neural networks tend to be static and symbolic, while the biological brain of most
living organisms is dynamic (plastic) and analogue.
The adjective “deep” in deep learning refers to the use of multiple layers in the network. Early work
showed that a linear perceptron cannot be a universal classifier, but that a network with a nonpolynomial
activation function with one hidden layer of unbounded width can. Deep learning is a modern variation
which is concerned with an unbounded number of layers of bounded size, which permits practical
application and optimized implementation, while retaining theoretical universality under mild
conditions. In deep learning the layers are also permitted to be heterogeneous and to deviate widely from biologically informed connectionist models, for the sake of efficiency, trainability and understandability; hence the “structured” part in the alternative name “deep structured learning”.
Most modern deep learning models are based on artificial neural networks, specifically convolutional
neural networks (CNNs), although they can also include propositional formulas or latent variables
organized layer-wise in deep generative models such as the nodes in deep belief networks and deep
Boltzmann machines.
In deep learning, each level learns to transform its input data into a slightly more abstract and composite
representation. In an image recognition application, the raw input may be a matrix of pixels; the first
representational layer may abstract the pixels and encode edges; the second layer may compose and
encode arrangements of edges; the third layer may encode a nose and eyes; and the fourth layer may
recognize that the image contains a face. Importantly, a deep learning process can learn which features
to optimally place in which level on its own. This does not completely eliminate the need for hand-
tuning; for example, varying numbers of layers and layer sizes can provide different degrees of
abstraction.
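The layer-by-layer progression can be seen in the shape of a small convolutional network. The PyTorch sketch below is untrained, and the comments describe the kind of feature each stage typically ends up learning; in practice the network decides for itself which features land at which level.

import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),    # early layer: tends to pick up edges
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1),   # next layer: arrangements of edges
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # deeper layer: object parts (e.g. eyes, nose)
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 2),                             # final layer: face / not-a-face decision
)

x = torch.randn(1, 1, 64, 64)                     # one fake grey-scale image
print(model(x).shape)                             # torch.Size([1, 2])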
The word “Deep” in “Deep learning” refers to the number of layers through which the data is
transformed. More precisely, deep learning systems have a substantial credit assignment path (CAP)
depth. The CAP is the chain of transformations from input to output. CAPs describe potentially causal
connections between input and output. For a feedforward neural network, the depth of the CAPs is that
of the network and is the number of hidden layers plus one (as the output layer is also parameterized).
For recurrent neural networks, in which a signal may propagate through a layer more than once, the
CAP depth is potentially unlimited. No universally agreed-upon threshold of depth divides shallow
learning from deep learning, but most researchers agree that deep learning involves CAP depth higher
than 2. CAP of depth 2 has been shown to be a universal approximator in the sense that it can emulate
any function. Beyond that, more layers do not add to the function approximator ability of the network.
Deep models (CAP > 2) are able to extract better features than shallow models and hence, extra layers
help in learning the features effectively.
Deep learning architectures can be constructed with a greedy layer-by-layer method. Deep learning
helps to disentangle these abstractions and pick out which features improve performance.
Deep neural networks are generally interpreted in terms of the universal approximation theorem or
probabilistic inference.
The classic universal approximation theorem concerns the capacity of feedforward neural networks with
a single hidden layer of finite size to approximate continuous functions. In 1989, the first proof was
published by George Cybenko for sigmoid activation functions and was generalised to feed-forward
multi-layer architectures in 1991 by Kurt Hornik. Recent work showed that universal approximation also holds for non-bounded activation functions such as the rectified linear unit.
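In the usual notation, the classic single-hidden-layer statement says that for any continuous f on a compact set K and any ε > 0 there exist a width N, output weights α_i and hidden-unit parameters w_i, b_i such that

\[
\sup_{x \in K} \Bigl| f(x) - \sum_{i=1}^{N} \alpha_i \, \sigma\!\bigl(w_i^{\top} x + b_i\bigr) \Bigr| < \varepsilon ,
\]

where σ is the (non-polynomial, e.g. sigmoidal or ReLU) activation function.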
The universal approximation theorem for deep neural networks concerns the capacity of networks with
bounded width where the depth is allowed to grow. Lu et al. proved that if the width of a deep neural network with ReLU activation is strictly larger than the input dimension, then the network can approximate any Lebesgue-integrable function; if the width is smaller than or equal to the input dimension, then a deep neural network is not a universal approximator.
The probabilistic interpretation derives from the field of machine learning. It features inference, as well
as the optimization concepts of training and testing, related to fitting and generalization, respectively.
More specifically, the probabilistic interpretation considers the activation nonlinearity as a cumulative
distribution function. The probabilistic interpretation led to the introduction of dropout as regularizer in
neural networks. The probabilistic interpretation was introduced by researchers including Hopfield,
Widrow and Narendra and popularized in surveys such as the one by Bishop.
Architectures:
Deep Neural Network: It is a neural network with a certain level of complexity (having multiple hidden
layers in between input and output layers). They are capable of modeling and processing non-linear
relationships.
Deep Belief Network (DBN): It is a class of Deep Neural Network. It is a multi-layer belief network, trained as follows:
1. Learn a layer of features from visible units using Contrastive Divergence algorithm.
2. Treat activations of previously trained features as visible units and then learn features of
features.
3. Finally, the whole DBN is trained when the learning for the final hidden layer is achieved.
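Step 1 above is usually implemented as Contrastive Divergence on a restricted Boltzmann machine. The NumPy sketch below shows a single CD-1 update for one binary training vector with toy dimensions; a real DBN would repeat this over a dataset and then stack further layers as described in step 2.

import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden, lr = 6, 3, 0.1
W = 0.01 * rng.standard_normal((n_visible, n_hidden))
b_vis, b_hid = np.zeros(n_visible), np.zeros(n_hidden)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

v0 = rng.integers(0, 2, size=n_visible).astype(float)   # one binary training vector

# Positive phase: hidden activations driven by the data (the visible units).
p_h0 = sigmoid(v0 @ W + b_hid)
h0 = (rng.random(n_hidden) < p_h0).astype(float)

# Negative phase: reconstruct the visibles, then recompute hidden probabilities.
p_v1 = sigmoid(h0 @ W.T + b_vis)
p_h1 = sigmoid(p_v1 @ W + b_hid)

# CD-1 update: data-driven correlations minus reconstruction-driven correlations.
W += lr * (np.outer(v0, p_h0) - np.outer(p_v1, p_h1))
b_vis += lr * (v0 - p_v1)
b_hid += lr * (p_h0 - p_h1)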
Recurrent Neural Network (performs the same task for every element of a sequence): Allows for parallel and sequential computation, similar to the human brain (a large feedback network of connected neurons). Recurrent networks are able to remember important things about the input they received, which enables them to be more precise.
Limitations:
Learning through observations only
The issue of biases
Advantages:
Reduces need for feature engineering.
Best in-class performance on problems.
Eliminates unnecessary costs.
Identifies defects easily that are difficult to detect.
Disadvantages:
Computationally expensive to train.
Large amount of data required.
No strong theoretical foundation.
Applications:
Automatic Text Generation: A corpus of text is learned, and from this model new text is generated word-by-word or character-by-character. Such a model is capable of learning how to spell, punctuate and form sentences, and it may even capture the style of the corpus.
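A neural language model is the usual choice today, but the generate-character-by-character idea can be shown with something much simpler: a character-level Markov chain learned from a tiny, invented corpus.

import random
from collections import defaultdict

corpus = "machine learning models learn patterns from data. models then generate text. "
order = 3                                    # how many previous characters to condition on

# "Learn" the corpus: record which character follows each context of length `order`.
transitions = defaultdict(list)
for i in range(len(corpus) - order):
    transitions[corpus[i:i + order]].append(corpus[i + order])

random.seed(0)
context = corpus[:order]
output = context
for _ in range(80):                          # generate character by character
    next_chars = transitions.get(context)
    if not next_chars:                       # dead end: restart from the beginning
        context = corpus[:order]
        continue
    ch = random.choice(next_chars)
    output += ch
    context = context[1:] + ch

print(output)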
Technological Singularity
The first to use the concept of a “singularity” in the technological context was John von Neumann.
Stanislaw Ulam reports a discussion with von Neumann “centered on the accelerating progress of
technology and changes in the mode of human life, which gives the appearance of approaching some
essential singularity in the history of the race beyond which human affairs, as we know them, could not
continue”. Subsequent authors have echoed this viewpoint.
Public figures such as Stephen Hawking and Elon Musk have expressed concern that full artificial
intelligence (AI) could result in human extinction. The consequences of the singularity and its potential
benefit or harm to the human race have been intensely debated.
Some machines are programmed with various forms of semi-autonomy, including the ability to locate
their own power sources and choose targets to attack with weapons. Also, some computer viruses can evade elimination and, according to scientists attending a conference on the topic, could therefore be said to have reached a “cockroach” stage of machine intelligence. The conference attendees noted that self-awareness as depicted in science fiction is probably unlikely, but that other potential hazards and pitfalls exist.
Frank S. Robinson predicts that once humans achieve a machine with the intelligence of a human,
scientific and technological problems will be tackled and solved with brainpower far superior to that of
humans. He notes that artificial systems are able to share data more directly than humans, and predicts
that this would result in a global network of super-intelligence that would dwarf human capability.
Robinson also discusses how vastly different the future would potentially look after such an intelligence
explosion. One example of this is solar energy, where the Earth receives vastly more solar energy than
humanity captures, so capturing more of that solar energy would hold vast promise for civilizational
growth.
The term “Technological Singularity” reflects the idea that such change may happen suddenly, and that
it is difficult to predict how the resulting new world would operate. It is unclear whether an intelligence
explosion resulting in a singularity would be beneficial or harmful, or even an existential threat. Because
AI is a major factor in singularity risk, a number of organizations pursue a technical theory of aligning
AI goal-systems with human values.
Some researchers, such as Theodore Modis and Jonathan Huebner, argue that the rate of technological innovation has not only ceased to rise, but is actually now declining. Evidence for this decline is that the rise in computer clock rates is slowing, even while Moore’s prediction of exponentially increasing circuit density continues to hold. This is due to excessive heat build-up from
the chip, which cannot be dissipated quickly enough to prevent the chip from melting when operating at
higher speeds. Advances in speed may be possible in the future by virtue of more power-efficient CPU
designs and multi-cell processors. While Kurzweil used Modis’ resources, and Modis’ work was around
accelerating change, Modis distanced himself from Kurzweil’s thesis of a “Technological singularity”,
claiming that it lacks scientific rigor.
Some intelligence technologies, like “Seed AI”, may also have the potential to not just make themselves
faster, but also more efficient, by modifying their source code. These improvements would make further
improvements possible, which would make further improvements possible, and so on.
The mechanism for a recursively self-improving set of algorithms differs from an increase in raw
computation speed in two ways. First, it does not require external influence: whereas machines designing faster hardware would still require humans to create the improved hardware or to program factories appropriately, an AI rewriting its own source code could do so while contained in an AI box.
Second, as with Vernor Vinge’s conception of the singularity, it is much harder to predict the outcome.
While speed increases seem to be only a quantitative difference from human intelligence, actual
algorithm improvements would be qualitatively different. Eliezer Yudkowsky compares it to the
changes that human intelligence brought: humans changed the world thousands of times more rapidly
than evolution had done, and in totally different ways. Similarly, the evolution of life was a massive
departure and acceleration from the previous geological rates of change, and improved intelligence
could cause change to be as different again.
There are substantial dangers associated with an intelligence explosion singularity originating from a
recursively self-improving set of algorithms. First, the goal structure of the AI might not be invariant
under self-improvement, potentially causing the AI to optimise for something other than what was
originally intended. Secondly, AIs could compete for the same scarce resources humankind uses to
survive.
Even if not actively malicious, there is no reason to think that AIs would actively promote human goals unless they were programmed to do so; if not, they might use the resources currently used to support humankind to promote their own goals, causing human extinction.
Carl Shulman and Anders Sandberg suggest that algorithm improvements may be the limiting factor for
a singularity; while hardware efficiency tends to improve at a steady pace, software innovations are
more unpredictable and may be bottlenecked by serial, cumulative research. They suggest that in the
case of a software-limited singularity, intelligence explosion would actually become more likely than
with a hardware-limited singularity, because in the software-limited case, once human-level AI is
developed, it could run serially on very fast hardware, and the abundance of cheap hardware would
make AI research less constrained. An abundance of accumulated hardware that can be unleashed once
the software figures out how to use it has been called “computing overhang.”
Kurzweil claims that technological progress follows a pattern of exponential growth, following what he
calls the “law of accelerating returns”. Whenever technology approaches a barrier, Kurzweil writes, new
technologies will surmount it. He predicts paradigm shifts will become increasingly common, leading to
“technological change so rapid and profound it represents a rupture in the fabric of human history”.
Kurzweil believes that the singularity will occur by approximately 2045. His predictions differ from
Vinge’s in that he predicts a gradual ascent to the singularity, rather than Vinge’s rapidly self-improving
superhuman intelligence.
Since there is no direct evolutionary motivation for an AI to be friendly to humans, the challenge lies in evaluating whether an artificial-intelligence-driven singularity would, under evolutionary pressure, promote its own survival over ours. The reality remains that artificial intelligence evolution has no inherent tendency to produce outcomes valued by humans, and there is little reason to expect any super-intelligent machine to deliver an outcome desired by mankind.
We humans are living a paradox, as the achievements of artificial intelligence are shaping human ecosystems with more dangerous and more valuable opportunities than ever before.
Augmented Reality
Amid the rise of data collection and analysis, one of augmented reality’s primary goals is to highlight specific features of the physical world, increase understanding of those features, and derive smart and accessible insight that can be applied to real-world applications. Such big data can help inform companies’ decision-making and provide insight into consumer spending habits, among other things.
The primary value of augmented reality is the manner in which components of the digital world blend
into a person’s perception of the real world, not as a simple display of data, but through the integration
of immersive sensations, which are perceived as natural parts of an environment. The earliest functional
AR systems that provided immersive mixed reality experiences for users were invented in the early
1990s, starting with the Virtual Fixtures system developed at the U.S. Air Force’s Armstrong
Laboratory in 1992. Commercial augmented reality experiences were first introduced in entertainment
and gaming businesses. Subsequently, augmented reality applications have spanned commercial
industries such as education, communications, medicine, and entertainment. In education, content may
be accessed by scanning or viewing an image with a mobile device or by using markerless AR
techniques.
Augmented reality is used to enhance natural environments or situations and offer perceptually enriched
experiences. With the help of advanced AR technologies (e.g. adding computer vision, incorporating
AR cameras into smartphone applications and object recognition) the information about the surrounding
real world of the user becomes interactive and digitally manipulated. Information about the environment
and its objects is overlaid on the real world. This information can be virtual (augmented reality is any experience which is artificial and which adds to the already existing reality) or real, e.g. seeing other real, sensed or measured information such as electromagnetic radio waves overlaid in exact alignment with where they actually are in space. Augmented reality also has a lot of potential in the gathering and
sharing of tacit knowledge. Augmentation techniques are typically performed in real time and in
semantic contexts with environmental elements. Immersive perceptual information is sometimes
combined with supplemental information like scores over a live video feed of a sporting event. This
combines the benefits of both augmented reality technology and heads up display technology (HUD).
Applications:
1. Archaeology
AR has been used to aid archaeological research. By augmenting archaeological features onto the
modern landscape, AR allows archaeologists to formulate possible site configurations from extant
structures. Computer generated models of ruins, buildings, landscapes or even ancient people have been
recycled into early archaeological AR applications. For example, implementing a system like VITA
(Visual Interaction Tool for Archaeology) will allow users to imagine and investigate instant excavation
results without leaving their home.
2. Architecture
3. Commerce:
AR is used to integrate print and video marketing. Printed marketing material can be designed with
certain “trigger” images that, when scanned by an AR-enabled device using image recognition, activate
a video version of the promotional material. A major difference between augmented reality and
straightforward image recognition is that one can overlay multiple media at the same time in the view
screen, such as social media share buttons, in-page video, and even audio and 3D objects. Traditional
print-only publications are using augmented reality to connect different types of media.
4. Urban design and planning
AR systems are being used as collaborative tools for design and planning in the built environment. For example, AR can be used to create augmented reality maps, buildings and data feeds projected onto tabletops for collaborative viewing by built environment professionals. Outdoor AR promises that designs and plans can be superimposed on the real world, redefining the remit of these professions to bring in-situ design into their process. Design options can be articulated on site, and appear closer to reality than traditional desktop mechanisms such as 2D maps and 3D models.
5. Industrial manufacturing
AR is used to substitute paper manuals with digital instructions which are overlaid on the manufacturing
operator’s field of view, reducing mental effort required to operate. AR makes machine maintenance
efficient because it gives operators direct access to a machine’s maintenance history. Virtual manuals
help manufacturers adapt to rapidly-changing product designs, as digital instructions are more easily
edited and distributed compared to physical manuals.
6. Education
In educational settings, AR has been used to complement a standard curriculum. Text, graphics, video,
and audio may be superimposed into a student’s real-time environment. Textbooks, flashcards and other
educational reading material may contain embedded “markers” or triggers that, when scanned by an AR
device, produce supplementary information for the student rendered in a multimedia format. The 2015
Virtual, Augmented and Mixed Reality: 7th International Conference mentioned Google Glass as an
example of augmented reality that can replace the physical classroom. First, AR technologies help
learners engage in authentic exploration in the real world, and virtual objects such as texts, videos, and
pictures are supplementary elements for learners to conduct investigations of the real-world
surroundings.
Virtual Reality
Currently, standard virtual reality systems use either virtual reality headsets or multi-projected
environments to generate realistic images, sounds and other sensations that simulate a user’s physical
presence in a virtual environment. A person using virtual reality equipment is able to look around the
artificial world, move around in it, and interact with virtual features or items. The effect is commonly
created by VR headsets consisting of a head-mounted display with a small screen in front of the eyes,
but can also be created through specially designed rooms with multiple large screens. Virtual reality
typically incorporates auditory and video feedback, but may also allow other types of sensory and force
feedback through haptic technology.
Virtual reality applications are applications that make use of virtual reality (VR), an immersive sensory
experience that digitally simulates a virtual environment. Applications have been developed in a variety
of domains, such as education, architectural and urban design, digital marketing and activism,
engineering and robotics, entertainment, virtual communities, fine arts, healthcare and clinical therapies,
heritage and archaeology, occupational safety, social science and psychology.
Figure: An example of a nature-oriented virtual environment made with the real-time rendering engine Unity.
Studies on exposure to nature environments show that it can produce relaxation, restore attention capacity and cognitive function, reduce stress and stimulate positive mood.
Immersive virtual reality technology is able to replicate believable restorative nature experiences, either using 360-degree video footage or environments created from 3D real-time rendering, often developed using game engines (for example Unreal Engine or Unity). This is useful for users who are deprived of access to certain areas due to, for example, physical restraints or complications, such as senior citizens or nursing home residents. Restorative virtual environments can replicate and mediate real-world experiences using video footage, replicate them using 3D rendering, or be based loosely on a real-world environment using real-time 3D rendering.
Figure: An immersive VR environment used to motivate senior citizens to exercise regularly by driving along a path and exploring the nature surroundings.
VR began to appear in rehabilitation in the 2000s. For Parkinson’s disease, evidence of its benefits
compared to other rehabilitation methods is lacking. A 2018 review on the effectiveness of VR mirror
therapy and robotics found no benefit. Virtual reality exposure therapy (VRET) is a form of exposure
therapy for treating anxiety disorders such as post traumatic stress disorder (PTSD) and phobias. Studies
have indicated that when VRET is combined with behavioral therapy, patients experience a reduction of
symptoms. In some cases, patients no longer met the DSM-V criteria for PTSD.
Virtual reality is also being tested in the field of behavioral activation therapy. BA therapy encourages patients to change their mood by scheduling positive activities into their day-to-day life. Due to a lack of access to
trained providers, physical constraints or financial reasons, many patients are not able to attend BA
therapy. Researchers are trying to overcome these challenges by providing BA via Virtual Reality. The
idea of the concept is to enable especially elderly adults to participate in engaging activities that they
wouldn’t be able to attend without VR. Possibly, these so-called “BA-inspired VR protocols” will mitigate low mood, improve life satisfaction, and reduce the likelihood of depression.
3. VR in Military
Both the UK and the US militaries have employed virtual reality in their training, as it enables them to take up a wide range of simulations. Virtual reality is utilized by all branches of service, from the navy, the army, the air force and the marines to the coast guard. Virtual reality can effectively transport a trainee into a variety of scenarios, locations and environments for the purpose of facilitating training.
4. VR in Sports
Virtual Reality has been steadily shifting the sports industry for all its participants. This technology can
be employed by coaches as well as players for training effectively across various sports, with them
being able to view as well as experience particular scenarios repeatedly and enhancing their
performance every time.
VR is also adopted to serve as a training aid for assisting in assessing athletic performance and
examining techniques. It’s also been known to enhance the cognitive capabilities of athletes while
injured by allowing them to virtually experience gameplay situations.
5. Education:
VR is also deployed in the education sector for teaching and learning scenarios. It aids the students in
conversing together, in the vicinity of a 3D environment. The students can also be carried on virtual
field trips such as to museums, embarking on tours of the solar system as well as traveling back in time
to varying eras.
Virtual reality can prove to be specifically advantageous for students with special needs. Research has found that VR can be a motivating platform to safely train children, including children with autism spectrum disorders, and teach them social skills. For instance, the technology company Floreo has created virtual reality scenarios that enable children to learn and practise skills such as making eye contact, pointing, and developing social connections.
Mixed Reality
There are many practical applications of mixed reality, including design, entertainment, military
training, and remote working. There are also different display technologies used to facilitate the
interaction between users and mixed reality applications.
Applications:
1. Education
Simulation-based learning includes VR and AR based training and interactive, experiential learning.
There are many potential use cases for Mixed Reality in both educational settings and professional
training settings. Notably in education, AR has been used to simulate historical battles, providing an
unparalleled immersive experience for students and potentially enhanced learning experiences. In
addition, AR has shown effectiveness in university education for health science and medical students
within disciplines that benefit from 3D representations of models, such as physiology and anatomy.
2. Entertainment
From television shows to game consoles, mixed reality has many applications in the field of
entertainment.
The 2004 British game show Bamzooki called upon child contestants to create virtual “Zooks” and
watch them compete in a variety of challenges. The show used mixed reality to bring the Zooks to life.
The television show ran for one season, ending in 2010.
The 2003 game show FightBox also called upon contestants to create competitive characters and used mixed reality to allow them to interact. Unlike Bamzooki’s generally non-violent challenges, the goal of FightBox was for contestants to create the strongest fighter to win the competition.
3. Military training
The first fully immersive mixed reality system was the Virtual Fixtures platform, which was developed
in 1992 by Louis Rosenberg at the Armstrong Laboratories of the United States Air Force. It enabled
human users to control robots in real-world environments that included real physical objects and 3D virtual overlays (“fixtures”) that were added to enhance human performance of manipulation tasks.
Published studies showed that by introducing virtual objects into the real world, significant performance
increases could be achieved by human operators.
4. Remote Working
Mixed reality allows a global workforce of remote teams to work together and tackle an organization’s
business challenges. No matter where they are physically located, an employee can wear a headset and
noise-canceling headphones and enter a collaborative, immersive virtual environment. As these
applications can accurately translate in real time, language barriers become irrelevant. This process also
increases flexibility. While many employers still use inflexible models of fixed working time and
location, there is evidence that employees are more productive if they have greater autonomy over
where, when, and how they work. Some employees prefer loud work environments, while others need
silence. Some work best in the morning; others work best at night. Employees also benefit from
autonomy in how they work because of different ways of processing information. The classic model for
learning styles differentiates between Visual, Auditory, and Kinesthetic learners.
Block Chain
A blockchain is a growing list of records, called blocks, that are linked together using cryptography.
Each block contains a cryptographic hash of the previous block, a timestamp, and transaction data
(generally represented as a Merkle tree). The timestamp proves that the transaction data existed when
the block was published in order to get into its hash. As blocks each contain information about the block
previous to it, they form a chain, with each additional block reinforcing the ones before it. Therefore,
blockchains are resistant to modification of their data because once recorded, the data in any given block
cannot be altered retroactively without altering all subsequent blocks.
Blockchain seems complicated, and it definitely can be, but its core concept is really quite simple. A
blockchain is a type of database. To be able to understand blockchain, it helps to first understand what a
database actually is.
Storage Structure
One key difference between a typical database and a blockchain is the way the data is structured. A
blockchain collects information together in groups, also known as blocks, that hold sets of information.
Blocks have certain storage capacities and, when filled, are chained onto the previously filled block,
forming a chain of data known as the “blockchain.” All new information that follows that freshly added
block is compiled into a newly formed block that will then also be added to the chain once filled.
A database structures its data into tables whereas a blockchain, like its name implies, structures its data
into chunks (blocks) that are chained together. This makes it so that all blockchains are databases but not
all databases are blockchains. This system also inherently makes an irreversible timeline of data when
implemented in a decentralized nature. When a block is filled it is set in stone and becomes a part of this
timeline. Each block in the chain is given an exact timestamp when it is added to the chain.
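The structure described above can be sketched in a few lines of Python: each block stores a timestamp, its data, and the hash of the previous block, and the chain is valid only while every link still matches. This is a toy illustration with invented class names, not how any production blockchain is implemented.

import hashlib
import json
import time

class Block:
    def __init__(self, data, previous_hash):
        self.timestamp = time.time()
        self.data = data
        self.previous_hash = previous_hash
        self.hash = self.compute_hash()

    def compute_hash(self):
        # Hash covers the timestamp, the data and the previous block's hash.
        payload = json.dumps(
            {"timestamp": self.timestamp, "data": self.data,
             "previous_hash": self.previous_hash},
            sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

class Blockchain:
    def __init__(self):
        self.chain = [Block("genesis", "0" * 64)]     # first block has no real predecessor

    def add_block(self, data):
        self.chain.append(Block(data, self.chain[-1].hash))

    def is_valid(self):
        # Every block must still hash to its stored hash and point at its predecessor.
        for prev, curr in zip(self.chain, self.chain[1:]):
            if curr.previous_hash != prev.hash or curr.hash != curr.compute_hash():
                return False
        return True

ledger = Blockchain()
ledger.add_block({"from": "alice", "to": "bob", "amount": 5})
ledger.add_block({"from": "bob", "to": "carol", "amount": 2})
print(ledger.is_valid())   # True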
A blockchain can be thought of as consisting of several layers:
Infrastructure (hardware)
Networking (node discovery, information propagation and verification)
Consensus (proof of work, proof of stake)
Data (blocks, transactions)
Application (smart contracts/decentralized applications, if applicable)
Uses
Blockchain technology can be integrated into multiple areas. The primary use of blockchains is as a
distributed ledger for cryptocurrencies such as bitcoin; there were also a few other operational products
which had matured from proof of concept by late 2016. As of 2016, some businesses have been testing
the technology and conducting low-level implementation to gauge blockchain’s effects on
organizational efficiency in their back office.
Features of Blockchain:
Blockchain is decentralized as well as an open ledger. A ledger is the record of the transactions done, and because it is visible to everyone it is called an open ledger. No individual or organisation is in charge of the transactions. Each and every node in the blockchain network has the same copy of the ledger.
Data stored in a blockchain is immutable and cannot be changed easily, as explained above. Also, data is added to a block only after it is approved by everyone in the network, thus allowing secure transactions. Those who validate the transactions and add them to a block are called miners.
Blockchain provides a peer-to-peer network. This characteristic of blockchain allows a transaction to involve only two parties, the sender and the receiver. Thus, it removes the requirement of ‘third-party authorisation’, because everyone in the network is able to authorise transactions themselves.
Immutability
There are some exciting blockchain features, but among them “immutability” is undoubtedly one of the key features of blockchain technology. But why is this technology considered incorruptible? Let’s start by connecting blockchain with immutability.
Immutability means something that can’t be changed or altered. This is one of the top blockchain features that helps to ensure that the technology remains a permanent, unalterable network.
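Continuing the hypothetical Block/Blockchain sketch from the storage-structure section above, immutability can be demonstrated directly: retroactively editing a recorded block breaks the hash links, so the tampering is detected.

# Assumes the Block and Blockchain classes defined in the earlier sketch.
ledger = Blockchain()
ledger.add_block({"from": "alice", "to": "bob", "amount": 5})
ledger.add_block({"from": "bob", "to": "carol", "amount": 2})

print(ledger.is_valid())                  # True

ledger.chain[1].data["amount"] = 500      # retroactive edit attempt
print(ledger.is_valid())                  # False - block 1 no longer matches its hash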
Scope:
User Control: With decentralization, users now have control over their properties. They don’t
have to rely on any third party to maintain their assets. All of them can do it simultaneously by
themselves.
Less Failure: Everything in the blockchain is fully organized, and as it doesn’t depend on human calculations it’s highly fault-tolerant. So, accidental failures of this system are not a common outcome.
Less Prone to Breakdown: As decentralization is one of the key features of blockchain technology, it can survive malicious attacks. This is because attacking the system is expensive for hackers and not an easy option. So, it’s less likely to break down.
Zero Scams: As the system runs on algorithms, there is no chance for people to scam you out of
anything. No one can utilize blockchain for their personal gains.
No Third-Party: Decentralized nature of the technology makes it a system that doesn’t rely on
third-party companies; No third-party, no added risk.
Authentic Nature: This nature of the system makes it a unique kind of system for every kind of
person. And hackers will have a hard time cracking it.
Transparency: The decentralized nature of technology creates a transparent profile of every
participant. Every change on the blockchain is viewable and makes it more concrete.
Challenges in Adopting Block chain
1. Low Scalability
Another one of the challenges of implementing blockchain is scalability. In reality, blockchains work fine for a small number of users. But what happens when mass adoption takes place? Ethereum and Bitcoin now have the highest number of users on their networks and, needless to say, they are having a hard time dealing with the situation.
The blockchain is complex, so it takes more time to process any transaction. The encryption of the system makes it even slower. Although blockchains claim to be faster than traditional payment methods, in some cases they can’t deliver on that claim.
Completing a transaction can take up to several hours. So, if you want to pay for a cup of coffee, it will cause you trouble. Blockchain is best suited to large transactions where time isn’t a vital element. The delay is itself an element of risk; and wasn’t blockchain supposed to take the ‘unsecured’ element out of the equation?
Theoretically, the principle extends to blockchain networks used for something other than storing value, for example logging transactions or interactions in an IoT environment. These networks can also become slow and impractical. It is not always the case: the network only slows down when it is loaded with too many users, and the more it grows, the slower it gets.
Energy consumption is another blockchain adoption challenge. Most blockchain technologies follow Bitcoin’s infrastructure and use Proof of Work as a consensus algorithm.
However, Proof of Work is not as great as it looks. Keeping the system live requires a great deal of computational power. You have probably heard about mining.
Mining requires solving computationally difficult puzzles with your computer, so your PC will consume more and more electricity once you start mining.
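What those "puzzles" amount to in Proof of Work is a search for a nonce whose hash meets a difficulty target. The toy sketch below uses SHA-256 and a target of four leading zeros; real networks use vastly harder targets, which is what drives the energy cost.

import hashlib

def mine(block_data, difficulty=4):
    prefix = "0" * difficulty                     # target: hash must start with N zeros
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest                  # found a valid proof of work
        nonce += 1                                # otherwise keep burning CPU cycles

nonce, digest = mine("alice pays bob 5 coins")
print(nonce, digest)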
4. Lack of Privacy
Blockchain and privacy don’t go really well with each other. The public ledger system fuels the system,
so full privacy is not the first concern.
But can any organization function without privacy? Well, no. Many companies that work with sensitive data need to have defined privacy boundaries, because their consumers trust them with sensitive information.
Transparency is an essential requirement in the case of Bitcoin and other cryptocurrencies. On the other hand, it raises concerns for governments and companies, which always need to protect and restrict access to their data for various reasons.
The main additive manufacturing (AM) technologies fall into three broad categories:
a) The first is sintering, whereby the material is heated without being liquified to create complex high-resolution objects. Direct metal laser sintering uses metal powder, whereas selective laser sintering uses a laser on thermoplastic powders so that the particles stick together.
b) The second AM technology fully melts the materials; this includes direct metal laser melting, which uses a laser to melt layers of metal powder, and electron beam melting, which uses electron beams to melt the powders.
c) The third broad type of technology is stereolithography, which uses a process called photopolymerisation, whereby an ultraviolet laser is fired into a vat of photopolymer resin to create torque-resistant ceramic parts able to endure extreme temperatures.
Advantages:
Similar to standard 3D printing, AM allows for the creation of bespoke parts with complex geometries
and little wastage. Ideal for rapid prototyping, the digital process means that design alterations can be
done quickly and efficiently during the manufacturing process. Unlike with more traditional subtractive
manufacturing techniques, the lack of material wastage provides cost reduction for high value parts,
while AM has also been shown to reduce lead times.
In addition, parts that previously required assembly from multiple pieces can be fabricated as a single
object which can provide improved strength and durability. AM can also be used to fabricate unique
objects or replacement pieces where the original parts are no longer produced.
3. Part flexibility: Additive manufacturing is appealing to companies that need to create unusual or complex components that are difficult to manufacture using traditional processes. AM enables the design and creation of nearly any geometric form, including forms that reduce the weight of an object while still maintaining stability. Part flexibility is also a major waste-reduction aspect of AM: the ability to develop products on demand inherently reduces inventory and other waste.
4. Legacy parts: AM has gifted companies the ability to recreate impossible-to-find, no longer
manufactured, legacy parts. For example, the restoration of classic cars has greatly benefited from
additive manufacturing technology. Where legacy parts were once difficult and expensive to find, they
can now be produced through the scanning and X-ray analysis of original material and parts. In
combination with the use of CAD software, this process facilitates fast and easy reverse engineering to
create legacy parts.
5. Inventory stock reduction: AM can reduce inventory, eliminating the need to hold surplus inventory
stock and associated carrying costs. With additive manufacturing, components are printed on demand,
meaning there is no over-production, no unsold finished goods, and a reduction in inventory stock.
6. Energy savings: In conventional manufacturing, machinery and equipment often require auxiliary tools that have greater energy needs. AM uses fewer resources and has less need for ancillary equipment, thereby reducing manufacturing waste material. AM also reduces the amount of raw material needed to manufacture a product, so there is lower energy consumption associated with raw material extraction, and AM has fewer energy needs overall.
7. Customisation: AM offers design innovation and creative freedom without the cost and time constraints of traditional manufacturing. The ability to easily alter original specifications means that AM gives businesses greater opportunity to provide customised designs to their clients. Because designs are easy to adjust digitally, product customisation becomes a simple proposition, and short production runs are easily tailored to specific needs.
Disadvantages:
1. Production costs: Production costs are high because materials for AM are frequently required in the form of exceptionally fine or small particles, which can considerably increase the raw material cost of a project. Additionally, the inferior surface quality often associated with AM means there is an added cost for surface finishing and the post-processing required to meet quality specifications and standards.
2. Cost of entry: With additive manufacturing, the cost of entry is still prohibitive to many
organisations and, in particular, smaller businesses. The capital costs to purchase necessary equipment
can be substantial and many manufacturers have already invested significant capital into the plant and
equipment for their traditional operations. Making the switch is not necessarily an easy proposition and
certainly not an inexpensive one.
3. Additional materials: Currently there is a limit to the types of materials that can be processed within
AM specifications and these are typically pre-alloy materials in a base powder. The mechanical
properties of a finished product are entirely dependent upon the characteristics of the powder used in the
process. All the materials and traits required in an AM component have to be included early in the mix.
It is, therefore, impossible to successfully introduce additional materials and properties later in the
process.
5. It’s slow: As mentioned, additive manufacturing technology has been around since the eighties, yet
even in 2021, AM is still considered a niche process. That is largely because AM still has slow build
rates and doesn’t provide an efficient way to scale operations to produce a high volume of parts.
Depending on the final product sought, additive manufacturing may take up to 3 hours to produce a
shape that a traditional process could create in seconds. It is virtually impossible to realise economies of
scale.
Applications:
1. Medical: The rapidly innovating medical industry utilises AM solutions to deliver breakthroughs in
functional prototypes, surgical grade components, and true to life anatomical models. AM in the medical
field is producing advancements in the areas of orthopaedic implant and dental devices, as well as tools
and instrumentation such as seamless medical carts, anatomical models, custom saw and drill guides,
and custom surgical tools.
Material development in the medical industry is critical: certified biocompatible materials could revolutionise customised implants, life-saving devices, and pre-surgical tools, improving patient outcomes.
2. Transportation: The transportation industry requires parts that withstand extreme speeds and heat,
while still being lightweight enough to avoid preventable drag. Additive manufacturing's ability to produce lightweight components has led to more efficient vehicles.
Many of the AM applications transforming the transportation industry include complex ductwork that is
unable to be fabricated using conventional methods, resilient prototypes, custom interior features,
grilles, and large panelling.
3. Consumer products: Marketing teams, designers, and graphic artists work to develop ideas and deliver products to market as quickly as possible while adapting to fluctuating trends and consumer demand. Part of this process is spent simulating the look and feel of the final product.
4. Aerospace: With some of the most demanding industry standards in terms of performance, the
aerospace industry was one of the first to adopt additive manufacturing. The commercial and military
aerospace domain needs flight-worthy components that are made from high-performance materials.
5. Oil, gas and energy: Key AM applications that have developed in the gas, oil, and energy industries include various control-valve components, pressure gauge pieces, turbine nozzles, rotors, flow meter parts, and pump manifolds. With the capability to process corrosion-resistant metal materials, AM has the potential to create customised parts for use underwater or in other harsh environments associated with the industry.
AM technology will continue to evolve product design and on-demand manufacturing. As design
software becomes more integrated and easier to use, the benefits of additive manufacturing will grow to
significantly influence an increasing number of industries.
AM can reduce the impact of these trade issues through its ability to produce parts on demand and locally. In addition, the technology can produce complex geometries, meaning that an assembly of several parts can be produced as a single piece. Producing an assembly all at once involves procuring a single raw material rather than procuring multiple parts and then assembling them.
This is a natural ability of AM since it has virtually no geometric restrictions, unlike other
manufacturing techniques. GE, for example, has used the technology in their newest TurboProp engine.
In this engine they replaced over 850 metal parts with just 12 3D printed complex parts.
Clearly, any delay in receiving a part presents an immediate bottleneck in the production of any product (or assembly) that includes it, impacting manufacturing and delivery schedules. Cutting down the number of suppliers needed reduces this risk, and procurement departments welcome a shorter supplier list. This is most obvious when the parts are highly specialised (as in an engine), but it also applies to simpler products.
From Virtual to Reality: 3D printed parts when and where required on demand
The assembly replacement ability of AM is just one of the technology's many advantages, and these advantages enhance or compound each other. Probably the most important efficiency-enhancing AM capability within supply chains is its ability to enable virtual inventories and a digital (for the most part)
rather than physical supply chain. This is quite literally the ability to access and pull parts from a digital
(rather than physical) inventory and then quickly and effortlessly 3D print them anywhere at any time in
the exact quantity desired. The digital inventory can be stored on a local disk, in a central disk, or even
in the cloud.
This has several positive implications, the first of which is the huge cost saving that arises from
eradicating the need for large physical inventories. Let’s face it, physical inventory is the weak spot in
any supply chain; it has no benefits beyond the availability of parts and is a burden for companies that
pay enormous amounts of money to maintain it. Similarly, from a logistics perspective using AM with
virtual inventories cuts out the headache and costs of balancing excess and shortages in physical
inventory at individual locations. Indeed, the logistical benefits are even greater as virtual inventories
simplify and streamline the entire distribution network at the geographic level. Think about it: there is no longer any physical inventory, which means the traditional central-to-region-to-local distribution model is eliminated, as is the need for demand projections, which of course have to be exact lest the company suffer further delays and costs. In contrast, working digitally takes almost no time and is much cheaper.
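A hypothetical sketch of what such a virtual inventory might look like in code (all names, part numbers, and file paths here are invented for illustration): parts exist only as design files keyed by part number, and a print job is created on demand at whichever site needs it.

from dataclasses import dataclass

@dataclass
class PartRecord:
    part_number: str
    cad_file: str      # a reference to the digital design, not a stock location
    material: str

# The "inventory" is just data; there is no warehouse and no stock count.
digital_inventory = {
    "BRKT-017": PartRecord("BRKT-017", "designs/bracket_017.stl", "AlSi10Mg"),
    "DUCT-221": PartRecord("DUCT-221", "designs/duct_221.stl", "PA12"),
}

def request_part(part_number, quantity, site):
    # Pull the design and raise an on-demand print job at the requesting site.
    record = digital_inventory[part_number]
    return {"site": site, "cad_file": record.cad_file,
            "material": record.material, "quantity": quantity}

print(request_part("BRKT-017", quantity=3, site="Plant-B"))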
Mass customization
Mass customization is the process of delivering market goods and services that are modified to satisfy
a specific customer’s needs. Mass customization is a marketing and manufacturing technique that
combines the flexibility and personalization of custom-made products with the low unit costs associated
with mass production. Other names for mass customization include made-to-order or built-to-order.
Mass customization, in marketing, manufacturing, call centres, and management, is the use of flexible
computer-aided manufacturing systems to produce custom output. Such systems combine the low unit
costs of mass production processes with the flexibility of individual customization.
Mass customization is the method of “effectively postponing the task of differentiating a product for a specific customer until the latest possible point in the supply network”. When customization is postponed to the retail stage, as in online shopping, research has found that users perceive greater usefulness and enjoyment with a mass customization interface than with a more typical shopping interface, particularly in tasks of moderate complexity. From a collaborative engineering perspective, mass customization can be viewed as a collaborative effort between customers and manufacturers, who have different sets of priorities and need to jointly search for solutions that best match customers’ individual specific needs with manufacturers’ customization capabilities.
Types:
1. Collaborative customization: Firms talk to individual customers to determine the precise product offering that best serves their needs, then use that information to produce a product tailored to that customer. Example: made-to-measure clothing.
2. Adaptive customization: Firms produce a standardized product, but this product is customizable in
the hands of the end-user (the customers alter the product themselves). Example: Lutron lights, which
are programmable so that customers can easily customize the aesthetic effect.
3. Transparent customization: Firms provide individual customers with unique products, without
explicitly telling them that the products are customized. In this case there is a need to accurately assess
customer needs. Example: Google AdWords and AdSense
4. Cosmetic customization: Firms produce a standardized physical product, but market it to different
customers in unique ways. Example: Soft Drink served in: A can, 1.25L bottle, 2L bottle.
Processes
The process of mass customization entails an interlinked set of activities to capture individual
requirements and translate them into a physical product to be produced and delivered to the client.
Companies provide their customers with a toolkit for product innovation during the process.
Mass customization attains its goals if the product is developed and tailored to the user requirements at a
reduced cost. The development of sub-processes helps transform the various customer requirements into
generic product architecture from which several customized products can be derived.
Modularity facilitates the creation of customized-product variety. Other than minimizing the
development lead times considerably, modularity enables companies to realize economies of scope,
economies of substitution, and economies of scale.
Companies use additional concepts to increase the re-usability of customized products, such as platform and commonality approaches. Under the commonality approach, end users can use multiple components of a product for various purposes.
Similarly, a product platform strategy can help companies customize products into several end variants
of the product family. The originator of the innovation gives customers the ability to develop new
product concepts on their own.
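The modularity and platform ideas above can be pictured as a simple configurator: one shared product platform, a set of interchangeable modules, and customer choices that select among them to derive a customised variant. The product, module names, and prices below are hypothetical.

# A hypothetical configurator built on a common product platform.
platform = {"frame": 200.0}            # shared base that every variant reuses

modules = {                            # interchangeable modules (economies of scope)
    "motor":  {"standard": 80.0, "high_torque": 140.0},
    "finish": {"matte": 10.0, "gloss": 25.0},
    "panel":  {"plastic": 15.0, "aluminium": 40.0},
}

def configure(choices):
    # Derive a customised variant from the generic product architecture.
    price = sum(platform.values())
    bill_of_materials = dict(platform)
    for slot, option in choices.items():
        price += modules[slot][option]
        bill_of_materials[slot] = option
    return bill_of_materials, price

variant, price = configure({"motor": "high_torque", "finish": "matte", "panel": "aluminium"})
print(variant, price)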
Customer experience
Customer experience (CX) is a totality of cognitive, affective, sensory, and behavioral consumer
responses during all stages of the consumption process including pre-purchase, consumption, and post-
purchase stages. Pine and Gilmore described the experience economy as the next level after
commodities, goods, and services with memorable events as the final business product. Four realms of
experience include esthetic, escapist, entertainment, and educational components.
Different dimensions of customer experience include senses, emotions, feelings, perceptions, cognitive
evaluations, involvement, memories, as well as spiritual components, and behavioral intentions. The
pre-consumption anticipation experience can be described as the amount of pleasure or displeasure
received from savoring future events, while the remembered experience is related to a recollection of
memories about previous events and experiences of a product or service.
Stages:
Product orientation: Companies simply manufacture goods and offer them the best way possible.
Market orientation: Some consideration of customer needs and segmentation arises, with different marketing-mix bundles developed for each segment.
Customer experience: Added to the previous two is a recognition of the importance of providing customers with an emotionally positive experience.
Authenticity: This is the top maturity stage. Products and services emerge from the real soul of the brand and connect naturally, on a long-term sustainable basis, with clients and other stakeholders.
Customer experience management (CEM or CXM) is the process that companies use to oversee and
track all interactions with a customer during their relationship. This involves the strategy of building
around the needs of individual customers. According to Jeananne Rae, companies are realizing that
“building great consumer experiences is a complex enterprise, involving strategy, integration of
technology, orchestrating business models, brand management and CEO commitment.”
Managing the communication
The classical linear communication model includes having one sender or source sending out a message
that goes through the media (television, magazines) then to the receiver. The classical linear model is a
form of mass marketing which targets a large number of people, of whom only a few may be customers; this is a form of non-personal communication. The adjusted model shows the source sending a message either to the media or directly to one or more opinion leaders and/or opinion formers (a model, actress, credible source, trusted figure in society, YouTuber/reviewer), who send a decoded message on to the receiver.
The adjusted model is a form of interpersonal communication where feedback is almost instantaneous
with receiving the message. The adjusted model means that there are many more platforms of marketing
with the use of social media, which connects people with more touchpoints. Marketers use digital
experience to enhance the customer experience. Enhancing digital experiences influences changes to the
CEM, the customer journey map and IMC. The adjusted model allows marketers to communicate a
message designed specifically for the ‘followers’ of the particular opinion leader or opinion former,
sending a personalised message and creating a digital experience.
Persuasion techniques
Persuasion techniques are used when trying to send a message in order for an experience to take place.
Marcom Projects (2007) came up with five mind shapers to show how humans view things. The five
mind shapers of persuasion include:
Frames: only showing what they want you to see (a paid ad post)
Setting and context: The surrounding objects of items for sale
Filters: Previous beliefs that shape thoughts after an interaction
Social influence: How behaviours of others impact us
Belief (placebo effect): The expectation
The D4 Company Analysis is an audit tool that considers the four aspects of strategy, people,
technology and processes in the design of a CRM strategy. The analysis includes four main steps.
“Define the existing customer relationship management processes within the company.
Determine the perceptions of how the company manages their customer relationships, both
internally and externally.
Design the ideal customer relationship management solutions relative to the company or
industry.
Deliver a strategy for the implementation of the recommendations based on the findings”.
The modern business environment is constantly evolving. As a result of this rapid change, there’s an
increase in the amount of information that needs to be processed and problems that need to be solved.
Now, more than ever, there is a demand for resilient and agile leaders who can effectively adapt to
change and drive innovation.
Although business leaders do not have control over the external factors impacting their businesses, they
can prepare themselves and their organizations to better respond to, and navigate through, change. The
Neuroscience for Business online short course takes a scientific approach to leadership. Drawing on the
importance of neuroscience principles like neuroplasticity, it looks at promoting organizational and
personal resilience, leadership development, and business performance.
Gain an in-depth understanding of the brain and the tools you need to rewire it to maximize your
leadership potential. Over six weeks, you’ll learn how to change and refine the way you think, in order
to enhance how you engage with and motivate others, and boost personal and organizational
performance. With key insights from industry experts, you’ll gain a better understanding of the areas for
improvement in your business, and create a strategy that maps out your vision for your organization, as
well as the steps required to achieve it.
In addition to conducting traditional research in laboratory settings, neuroscientists have also been
involved in the promotion of awareness and knowledge about the nervous system among the general
public and government officials. Such promotions have been done by both individual neuroscientists
and large organizations. For example, individual neuroscientists have promoted neuroscience education
among young students by organizing the International Brain Bee, which is an academic competition for
high school or secondary school students worldwide. In the United States, large organizations such as
the Society for Neuroscience have promoted neuroscience education by developing a primer called
Brain Facts, collaborating with public school teachers to develop Neuroscience Core Concepts for K-12
teachers and students, and cosponsoring a campaign with the Dana Foundation called Brain Awareness
Week to increase public awareness about the progress and benefits of brain research. In Canada, the
CIHR Canadian National Brain Bee is held annually at McMaster University.
Neuroscience educators formed Faculty for Undergraduate Neuroscience (FUN) in 1992 to share best
practices and provide travel awards for undergraduates presenting at Society for Neuroscience meetings.
The field has evolved due to the convergence of multiple technologies, including ubiquitous computing,
commodity sensors, increasingly powerful embedded systems, and machine learning. Traditional fields
of embedded systems, wireless sensor networks, control systems, automation (including home and
building automation), independently and collectively enable the Internet of things. In the consumer
market, IoT technology is most synonymous with products pertaining to the concept of the “Smart
home”, including devices and appliances (such as lighting fixtures, thermostats, home security systems
and cameras, and other home appliances) that support one or more common ecosystems, and can be
controlled via devices associated with that ecosystem, such as smartphones and smart speakers. The IoT
can also be used in healthcare systems.
There are a number of concerns about the risks in the growth of IoT technologies and products,
especially in the areas of privacy and security, and consequently, industry and governmental moves to
address these concerns have begun, including the development of international and local standards,
guidelines, and regulatory frameworks.
Applications
The extensive set of applications for IoT devices is often divided into consumer, commercial, industrial,
and infrastructure spaces.
1. Consumer applications: A growing portion of IoT devices are created for consumer use, including
connected vehicles, home automation, wearable technology, connected health, and appliances with
remote monitoring capabilities.
2. Smart home: IoT devices are a part of the larger concept of home automation, which can include
lighting, heating and air conditioning, media and security systems and camera systems. Long-term
benefits could include energy savings by automatically ensuring lights and electronics are turned off or
by making the residents in the home aware of usage.
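As a toy illustration of the energy-saving point (everything here, including the sensor data, is hypothetical), a smart-home rule might switch lights off in any room that has reported no motion for a set period:

from datetime import datetime, timedelta

# Hypothetical state: last motion detected per room and current light state.
last_motion = {"living_room": datetime(2024, 1, 1, 20, 0),
               "kitchen":     datetime(2024, 1, 1, 21, 55)}
lights_on = {"living_room": True, "kitchen": True}

def apply_energy_rule(now, idle_limit=timedelta(minutes=30)):
    # Switch lights off in any room that has been idle longer than the limit.
    for room, last_seen in last_motion.items():
        if lights_on[room] and now - last_seen > idle_limit:
            lights_on[room] = False
            print(f"{room}: no motion for {now - last_seen}, lights switched off")

apply_energy_rule(now=datetime(2024, 1, 1, 22, 0))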
3. Elder care: One key application of a smart home is to provide assistance for those with disabilities
and elderly individuals. These home systems use assistive technology to accommodate an owner’s
specific disabilities. Voice control can assist users with sight and mobility limitations while alert
systems can be connected directly to cochlear implants worn by hearing-impaired users. They can also
be equipped with additional safety features. These features can include sensors that monitor for medical
emergencies such as falls or seizures. Smart home technology applied in this way can provide users with
more freedom and a higher quality of life.
4. Transportation: The IoT can assist in the integration of communications, control, and information
processing across various transportation systems. Application of the IoT extends to all aspects of
transportation systems (i.e. the vehicle, the infrastructure, and the driver or user). Dynamic interaction
between these components of a transport system enables inter- and intra-vehicular communication,
smart traffic control, smart parking, electronic toll collection systems, logistics and fleet management,
vehicle control, safety, and road assistance.
5. Building and home automation: IoT devices can be used to monitor and control the mechanical,
electrical, and electronic systems used in various types of buildings (e.g., public and private, industrial, institutional, or residential) in home automation and building automation systems. In this context, three
main areas are being covered in literature:
The integration of the Internet with building energy management systems in order to create
energy-efficient and IoT-driven “smart buildings”.
The possible means of real-time monitoring for reducing energy consumption and monitoring
occupant behaviors.
The integration of smart devices in the built environment and how they might be used in future
applications.
Industrial applications
Also known as IIoT, industrial IoT devices acquire and analyze data from connected equipment, operational technology (OT), locations, and people. Combined with OT monitoring devices, IIoT helps regulate and monitor industrial systems. The same implementation can also be used for automated record-keeping of asset placement in industrial storage units; since assets can range in size from a small screw to a whole motor spare part, misplacing them causes a measurable loss of labour time and money.
1. Manufacturing: The IoT can connect various manufacturing devices equipped with sensing,
identification, processing, communication, actuation, and networking capabilities. Network control and
management of manufacturing equipment, asset and situation management, or manufacturing process
control allow IoT to be used for industrial applications and smart manufacturing. IoT intelligent systems
enable rapid manufacturing and optimization of new products, and rapid response to product demands.
Digital control systems to automate process controls, operator tools and service information systems to
optimize plant safety and security are within the purview of the IIoT. IoT can also be applied to asset
management via predictive maintenance, statistical evaluation, and measurements to maximize
reliability. Industrial management systems can be integrated with smart grids, enabling energy
optimization. Measurements, automated controls, plant optimization, health and safety management, and
other functions are provided by networked sensors.
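Predictive maintenance in the IIoT sense often begins with nothing more exotic than statistical thresholds on a sensor stream. The sketch below flags a machine whose latest vibration reading sits well above its historical baseline; the readings and limits are invented for illustration.

import statistics

# Hypothetical vibration readings (mm/s) collected from a networked sensor.
history = [2.1, 2.0, 2.2, 2.1, 2.3, 2.0, 2.2, 2.1]
latest = 3.4

mean = statistics.mean(history)
stdev = statistics.stdev(history)

# Flag the asset if the latest reading is more than three standard deviations high.
if latest > mean + 3 * stdev:
    print(f"Schedule maintenance: reading {latest} vs baseline {mean:.2f} +/- {stdev:.2f}")
else:
    print("Within normal operating range")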
2. Agriculture: There are numerous IoT applications in farming such as collecting data on temperature,
rainfall, humidity, wind speed, pest infestation, and soil content. This data can be used to automate
farming techniques, make informed decisions to improve quality and quantity, minimize risk and waste,
and reduce the effort required to manage crops. For example, farmers can now monitor soil temperature
and moisture from afar, and even apply IoT-acquired data to precision fertilization programs. The
overall goal is that data from sensors, coupled with the farmer’s knowledge and intuition about his or
her farm, can help increase farm productivity, and also help reduce costs.
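A minimal sketch of the farming use case (all sensor values and thresholds are hypothetical): decide per field whether to irrigate, based on the latest soil-moisture reading and the rainfall forecast.

# Hypothetical per-field sensor data: soil moisture (%) and forecast rain (mm).
fields = {
    "north": {"moisture": 18.0, "forecast_rain_mm": 0.5},
    "south": {"moisture": 34.0, "forecast_rain_mm": 2.0},
}

def irrigation_plan(fields, dry_threshold=25.0, rain_threshold=5.0):
    # Irrigate only fields that are dry and not expecting meaningful rain.
    plan = {}
    for name, data in fields.items():
        needs_water = (data["moisture"] < dry_threshold
                       and data["forecast_rain_mm"] < rain_threshold)
        plan[name] = "irrigate" if needs_water else "skip"
    return plan

print(irrigation_plan(fields))   # {'north': 'irrigate', 'south': 'skip'}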
3. Maritime: IoT devices are used to monitor the environments and systems of boats and yachts. Many pleasure boats are left unattended for days in summer and months in winter, so such devices provide valuable early alerts of flooding, fire, and deep discharge of batteries. Global internet data networks such as Sigfox, combined with long-life batteries and microelectronics, allow engine rooms, bilges, and batteries to be constantly monitored and reported to connected Android and Apple applications, for example.
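A small sketch of the kind of status message such a monitoring unit might upload (the boat name, thresholds, and readings are all hypothetical; the actual network and app integration are omitted):

import json
from datetime import datetime, timezone

# Hypothetical readings from an unattended boat's monitoring unit.
reading = {"bilge_level_cm": 12.0, "battery_voltage": 11.8, "cabin_temp_c": 4.5}

def status_message(boat_id, reading):
    # Package the readings, plus any derived alerts, for upload over a low-power network.
    alerts = []
    if reading["bilge_level_cm"] > 10:
        alerts.append("possible flooding")
    if reading["battery_voltage"] < 12.0:
        alerts.append("battery deeply discharged")
    return json.dumps({
        "boat": boat_id,
        "time": datetime.now(timezone.utc).isoformat(),
        "readings": reading,
        "alerts": alerts,
    })

print(status_message("SY-Example", reading))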