Challenges in Deploying Machine Learning: a Survey of Case Studies
Additional Key Words and Phrases: Machine learning applications, software deployment
ACM Reference Format:
Andrei Paleyes, Raoul-Gabriel Urma, and Neil D. Lawrence. 2022. Challenges in Deploying Machine Learning:
a Survey of Case Studies. ACM Comput. Surv. 1, 1, Article 1 (January 2022), 29 pages. https://doi.org/10.1145/3533378
1 INTRODUCTION
Machine learning (ML) has evolved from being an area of academic research to an applied field.
According to a recent global survey conducted by McKinsey, machine learning is increasingly
adopted in standard business processes with nearly 25 percent year-over-year growth [1] and
with growing interest from the general public, business leaders [2] and governments [3].
This shift comes with challenges. Just as with any other field, there are significant differences
between what works in an academic setting and what is required by a real world system. Cer-
tain bottlenecks and invalidated preconceptions should always be expected in the course of that
process. As more solutions are developed and deployed, practitioners report their experience in
various forms, including publications and blog posts. Motivated by such reports and our personal
experiences, in this study we aim to survey the challenges in deploying machine learning in pro-
duction1 with the objective of understanding what parts of the deployment process cause the most
difficulties. First, we provide an overview of the machine learning deployment workflow. Second,
1 By production we understand the setting where a product or service is made available for use by its intended audience.
Authors’ addresses: Andrei Paleyes, University of Cambridge, Department of Computer Science and Technology, Cam-
bridge, United Kingdom, ap2169@cam.ac.uk; Raoul-Gabriel Urma, Cambridge Spark, Cambridge, United Kingdom,
raoul@cambridgespark.com; Neil D. Lawrence, University of Cambridge, Department of Computer Science and Technol-
ogy, Cambridge, United Kingdom, ndl21@cam.ac.uk.
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee
provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and
the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses,
contact the owner/author(s).
© 2022 Copyright held by the owner/author(s).
0360-0300/2022/1-ART1
https://doi.org/10.1145/3533378
we review case studies to extract problems and concerns practitioners have at each particular de-
ployment stage. Third, we discuss cross-cutting aspects that affect every stage of the deployment
workflow: ethical considerations, law, end users’ trust and security. Finally, we conclude with a
brief discussion of potential solutions to these issues and further work.
Ours is not the first survey of machine learning in production. A decade ago, such surveys were
already conducted, albeit with a different purpose in mind. Machine learning was mainly a research
discipline, and it was uncommon to see ML solutions deployed for a business problem outside of
“Big Tech” companies in the information technology industry. So, the purpose of such a survey
was to show that ML can be used to solve a variety of problems, and illustrate it with examples,
as was done by Pěchouček and Mařík [4]. Nowadays the focus has changed: machine learning
is commonly adopted in many industries, and the question becomes not “Where is it used?”, but
rather “How difficult is it to use?”
One approach to assess the current state of machine learning deployment for businesses is a
survey conducted among professionals. Such surveys are primarily undertaken by private compa-
nies, and encompass a variety of topics. Algorithmia’s report ([5, 6]) goes deep into the deployment
timeline, with the majority of companies reporting between 8 and 90 days to deploy a single model,
and 18% taking even more time. A report by IDC [7] surveyed 2,473 organizations and found that
a significant portion of their attempted AI deployments fail, quoting lack of expertise, bias in data
and high costs as primary reasons. O’Reilly has conducted an interview study that focused on ML
practitioners’ work experience and tools they use [8]. Broader interview-based reports have also
been produced by dotscience [9] and dimensional research [10].
While there are a number of business reports on the topic, the challenges of the entire machine
learning deployment pipeline are not covered nearly as widely in the academic literature. There
is a burgeoning field of publications focusing on specific aspects of deployed ML, such as Bhatt
et al. [11] which focuses on explainable ML or Amershi et al. [12] which discusses software en-
gineering aspects of deploying ML. A large number of industry-specific surveys also exist, and
we review some of these works in the appropriate sections below. However, the only general pur-
pose survey we found is Baier et al. [13], which combines a literature review and interviews with
industry partners. In contrast with that paper, which concentrates on the experience of the infor-
mation technology sector, we aim at covering case studies from a wide variety of industries, thus
increasing the breadth of our survey. We also discuss the most commonly reported challenges in
the literature in much greater detail.
In our survey we consider three main types of papers:
• Case study papers that report experience from a single ML deployment project. Such works
usually go deep into discussing each challenge the authors faced and how it was overcome.
• Review papers that describe applications of ML in a particular field or industry. These re-
views normally give a summary of challenges that are most commonly encountered during
the deployment of the ML solutions in the reviewed field.
• “Lessons learned” papers where authors reflect on their past experiences of deploying ML
in production.
To ensure that this survey focuses on current challenges, only papers published in the last five years are considered, with a few exceptions to this rule. We also refer to other types of papers
where it is appropriate, e.g. practical guidance reports, interview studies and regulations. We have
not conducted any interviews ourselves as part of this study. Further work and extensions are
discussed at the end of this paper.
The main contribution of our paper is to show that practitioners face challenges at each stage
of the machine learning deployment workflow. This survey supports our goal to raise awareness
in the academic community of the variety of problems that practitioners face when deploying
machine learning, and to start a discussion on what can be done to address these problems.
Table 1 (continued). All considerations, issues and concerns explored in this study. Each is assigned to the stage and step of the deployment workflow where it is commonly encountered.
3 DATA MANAGEMENT
Data is an integral part of any machine learning solution. The overall effectiveness of the solution
depends on the training and test data as much as on the algorithm. The process of creating quality
datasets is usually the very first stage in any production ML pipeline. Unsurprisingly, practitioners
face a range of issues while working with data as reported by Polyzotis et al. [18]. Consequently,
this stage consumes time and energy that is often not anticipated beforehand. In this section, we
describe issues concerning four steps within data management: data collection, data preprocessing,
data augmentation and data analysis. Note that we consider storage infrastructure challenges, such
as setting up databases and query engines, beyond the scope of this survey. We refer readers to
the survey by Cai et al. [19] for further discussion of big data storage.
relevant aspects of each property monitored by the Fire Department. The authors particularly highlight data joining as a difficult problem. Since buildings can be identified through their geospatial location, each dataset contained spatial data specifying the building's address. Spatial information can be presented in different formats and sometimes contains minor discrepancies, such as different spellings. All of this needed to be cleaned and corrected, which can be highly time consuming and proved to be so in the Firebird project.
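To make the joining difficulty concrete, the sketch below shows one simple way of matching records across two datasets keyed on building addresses, using normalisation plus fuzzy string matching. It is a minimal illustration under assumed data and rules, not the Firebird pipeline; the abbreviation table and similarity cutoff are invented.

```python
# Minimal sketch of joining two datasets on approximately matching addresses.
# The normalisation rules, cutoff and data are illustrative assumptions.
import difflib
import re

def normalize(address: str) -> str:
    """Lower-case, collapse whitespace and expand a few common abbreviations."""
    addr = re.sub(r"\s+", " ", address.lower().strip())
    for short, full in {" st.": " street", " ave.": " avenue", " rd.": " road"}.items():
        addr = addr.replace(short, full)
    return addr

def join_on_address(left: dict, right: dict, cutoff: float = 0.9) -> dict:
    """Join two {address: record} maps by the closest normalised address."""
    normalized_right = {normalize(a): a for a in right}
    joined = {}
    for address, record in left.items():
        match = difflib.get_close_matches(normalize(address), list(normalized_right), n=1, cutoff=cutoff)
        if match:
            joined[address] = (record, right[normalized_right[match[0]]])
    return joined

inspections = {"12 Peachtree St.": {"last_inspection": "2015-03-01"}}
permits = {"12 Peachtree  Street": {"permit": "A-113"}}
print(join_on_address(inspections, permits))
```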
not trained to recognize an unsafe situation and make the right decision in it. A practical example
of this issue can be seen in the area of autonomous vehicle control [35]. In that domain, simulations are often used for training, but complex interactions, such as friction, can be hard to model, and small variations in the simulation may result in an agent that does not transfer to the real world.
Budd et al. [30] show that interface design directly impacts the quality of applications built to
collect annotations for unlabeled data. They discuss a range of projects that collected labels for
medical images, all of which benefited from a well designed user interface. The authors conclude
that the end user interface plays a large part in the overall success of the annotation applications.
4 MODEL LEARNING
Model learning is the stage of the deployment workflow that enjoys the most attention within
the academic community. As an illustration of the scale of the field’s growth, the number of sub-
missions to NeurIPS, a primary conference on ML methods, has quadrupled in six years, going
from 1678 submissions in 2014 to 6743 in 2019 [38]. Nevertheless, there are still plenty of practical
considerations that affect the model learning stage. In this section, we discuss issues concerning
three steps within model learning: model selection, training and hyper-parameter selection.
Another advantage that less complex models can offer is their relatively modest hardware re-
quirements. This becomes a key decision point in resource constrained environments, as shown
by Wagstaff et al. [40]. They worked on deploying ML models to a range of scientific instruments
onboard the Europa Clipper spacecraft. Spacecraft design is always a trade-off between the total
weight, robustness and the number of scientific tools onboard. Therefore computational resources
are scarce and their usage has to be kept as small as possible. These requirements naturally favor models that are light on computational demands. The team behind Europa Clipper used machine learning for three anomaly detection tasks; some models took time series data as input and some took images, and on all three occasions simple threshold- or PCA-based techniques were implemented. They were specifically chosen because of their robust performance and low demand for computational power.
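As an illustration of this kind of lightweight technique, the sketch below flags anomalies by the reconstruction error of a low-rank PCA model fitted to presumed-nominal telemetry. The data, number of components and threshold are illustrative assumptions, not the Europa Clipper implementation.

```python
# Minimal sketch of PCA-based anomaly detection: flag samples whose
# reconstruction error under a low-rank model exceeds a threshold learned
# from nominal data. Data and parameters are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
nominal = rng.normal(size=(500, 20))              # telemetry assumed mostly nominal
pca = PCA(n_components=5).fit(nominal)

def reconstruction_error(x: np.ndarray) -> np.ndarray:
    """Squared distance between samples and their low-rank reconstruction."""
    reconstructed = pca.inverse_transform(pca.transform(x))
    return ((x - reconstructed) ** 2).sum(axis=1)

threshold = np.percentile(reconstruction_error(nominal), 99)
new_samples = rng.normal(size=(10, 20)) + 5.0     # shifted readings we expect to flag
print(reconstruction_error(new_samples) > threshold)
```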
A further example of a resource-constrained environment is wireless cellular networks, where
energy, memory consumption and data transmission are very limited. Most advanced techniques,
such as deep learning, are not considered yet for practical deployment, despite being able to handle
high dimensional mobile network data [41].
The ability to interpret the output of a model into understandable business domain terms often
plays a critical role in model selection, and can even outweigh performance considerations. For
that reason decision trees (DT) [42], which can be considered a fairly basic ML algorithm, are
widely used in practice. A DT presents its branching logic in a transparent way that resembles a human decision process, which allows for interpretation and auditing. Hansson et al. [43] describe several
cases in manufacturing that adopt DT because of their interpretability.
Banking is another industry where DT finds extensive use. As an illustrative example, it is used
by Keramati et al. [44] where the primary goal of the ML application is predicting churn in an
interpretable way through if-then rules. While it is easy to imagine more complicated models
learning the eventual input-output relationship for this specific problem, interpretability is a key
requirement here because of the need to identify the features of churners. The authors found DT
to be the best model to fulfill this requirement.
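The appeal of decision trees in such settings is that the learned logic can be read directly as if-then rules. The sketch below trains a small tree on synthetic churn-like data and exports those rules as text; the feature names and data are invented for illustration and are not taken from Keramati et al.

```python
# Minimal sketch: a shallow decision tree on synthetic churn-like data,
# exported as human-readable if-then rules. Features and labels are invented.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.random((1000, 3))                                   # [tenure, monthly_usage, complaints]
y = ((X[:, 0] < 0.3) & (X[:, 2] > 0.5)).astype(int)         # synthetic "churner" rule

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["tenure", "monthly_usage", "complaints"]))
```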
Nevertheless, deep learning (DL) is commonly used for practical background tasks that require
analysis of a large amount of previously acquired data. This notion is exemplified by the field of un-
manned aerial vehicles (UAV) [45]. Data collected by image sensors is the most common UAV data
modality being exploited by DL, because of the sensors’ low cost, low weight, and low power con-
sumption. DL offers excellent capabilities for processing and presentation of raw images acquired
from sensors, but computational resource demands still remain the main blocker for deploying DL
as an online processing instrument on board UAVs.
4.2 Training
Model training is the process of feeding the chosen model with a collected dataset in order to learn
certain patterns or representations of the data. One of the biggest concerns with the model training
stage is the economic cost associated with carrying out the training procedure due to the compu-
tational resources required. This is often true in the field of natural language processing (NLP),
as illustrated by Sharir et al. [46]. The authors observe that while the cost of individual floating-
point operations is decreasing, the overall cost of training NLP models is only growing. They took one of the state-of-the-art models in the field, BERT [47], and found that, depending on the chosen model size, a full training procedure can cost anywhere between $50k and $1.6M in cloud computing
resources, which is unaffordable for most research institutions and many companies. The authors
observe that training dataset size, number of model parameters and number of operations used by
the training procedure are all contributing towards the overall cost. Of particular importance here
is the second factor: novel NLP models are already using billions of parameters, and this number
is expected to increase further in the near future [48].
A related concern is raised by Strubell et al. [49] regarding the impact the training of ML models
has on the environment. By consuming more and more computational resources, ML model train-
ing is driving up energy consumption and greenhouse gas emissions. According to the estimates
provided in the paper, one full training cycle utilizing neural architecture search emits an amount
of CO2 comparable to what four average cars emit in their whole lifetime. The authors stress how
important it is for researchers to be aware of such impact of model training, and argue that the com-
munity should give higher priority to computationally efficient hardware and algorithms. Similar
concerns around environmental impact are voiced by Bender et al. [50].
As more businesses start using ML techniques on their users’ data, concerns are being raised
over the privacy of data and how well individuals’ sensitive information is preserved over the
course of the model training process [51]. Illustrating the gravity of this concern, Shokri et al.
constructed a membership inference attack, that is, an attack that determines whether a given input record was part of the model's training dataset. They verified it on models trained by leading ML-as-a-service providers [52], achieving anywhere from 70% up to 94% membership inference accuracy. Consequently, companies have to consider privacy-aware approaches, which most likely come at
a cost of the model accuracy. Navigating the trade-off between privacy and utility is considered an
open challenge for practitioners dealing with sensitive data [53]. Some of the ways this trade-off
is resolved in practice are differential privacy that explicitly corrupts the data [54], homomorphic
encryption that restricts the class of learning algorithm but allows for training on encrypted data
[55], and federated learning that distributes training across personal devices to preserve privacy,
but thereby constrains the algorithms that can be used for model fitting [56].
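As a minimal illustration of the first of these approaches, the sketch below releases a differentially private mean via the Laplace mechanism, adding noise calibrated to the query's sensitivity. The clipping bounds, privacy budget epsilon and data are illustrative assumptions, not a recommendation for any particular deployment.

```python
# Minimal sketch of the Laplace mechanism, the basic building block of
# differential privacy: release a statistic with noise scaled to how much
# any single record can change it. Parameters are illustrative assumptions.
import numpy as np

def private_mean(values: np.ndarray, lower: float, upper: float, epsilon: float) -> float:
    """Differentially private mean of values clipped to [lower, upper]."""
    clipped = np.clip(values, lower, upper)
    sensitivity = (upper - lower) / len(clipped)    # max change from altering one record
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return float(clipped.mean() + noise)

salaries = np.random.default_rng(0).uniform(20_000, 120_000, size=1_000)
print(private_mean(salaries, lower=20_000, upper=120_000, epsilon=0.5))
```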
and memory constraints imposed by such devices. This creates a need for customized hardware-aware optimization techniques that jointly optimize for model accuracy and efficient hardware usage.
5 MODEL VERIFICATION
Verification is considered an essential step in any software development cycle as it ensures the
quality of the product and reduces maintenance costs. As is the case with any software, ML mod-
els should generalize well to unseen inputs, demonstrate reasonable handling of edge cases and
overall robustness, as well as satisfy all functional requirements. In this section, we discuss issues
concerning three steps within model verification: requirement encoding, formal verification and
test-based verification.
6 MODEL DEPLOYMENT
Machine learning systems running in production are complex software systems that have to be
maintained over time. This presents developers with another set of challenges, some of which are
shared with running regular software services, and some are unique to ML.
There is a separate discipline in engineering, called DevOps, that focuses on techniques and
tools required to successfully maintain and support existing production systems. Consequently,
there is a necessity to apply DevOps principles to ML systems. However, even though some of the
DevOps principles apply directly, there are also a number of challenges unique to productionizing
machine learning. This is discussed in detail by Dang et al. [69] which uses the term AIOps5 for
DevOps tasks for ML systems. Some of the challenges mentioned include the lack of high quality telemetry data and of a standard way to collect it, the difficulty of acquiring labels, which makes
5 Readers might also have encountered the term MLOps (https://ml-ops.org/).
supervised learning approaches inapplicable6, and the lack of agreed best practices around the handling of
machine learning models. In this section, we discuss issues concerning three steps within model
deployment: integration, monitoring and updating.
6.1 Integration
The model integration step consists of two main activities: building the infrastructure to run
the model and implementing the model itself in a form that can be consumed and supported. While
the former is a topic that belongs almost entirely in systems engineering and therefore lies out of
scope of this work, the latter is of interest for our study, as it exposes important aspects at the
intersection of ML and software engineering. In fact, many concepts that are routinely used in
software engineering are now being reinvented in the ML context.
Code reuse is a common topic in software engineering, and ML can benefit from adopting the
same mindset. Reuse of data and models can directly translate into savings in terms of time, effort
or infrastructure. An illustrative case is an approach Pinterest took towards learning image em-
beddings [70]. Pinterest internally used three models that relied on similar embeddings, and initially they were maintained completely separately in order to make it possible to iterate on each model individually. However, this created engineering challenges, as every effort in working
with these embeddings had to be multiplied by three. Therefore the team decided to investigate
the possibility of learning a universal set of embeddings. It turned out to be possible, and this reuse
ended up simplifying their deployment pipelines as well as improving performance on individual
tasks.
A broad selection of engineering problems that machine learning practitioners now face is given
in Sculley et al. [71]. Many of them are known anti-patterns in engineering7 , but are currently wide-
spread in machine learning software. Some of these issues, such as abstraction boundary erosion
and correction cascades, are caused by the fact that ML is used in cases where the software has to
take an explicit dependency on external data. Others, such as glue code or pipeline jungles, stem
from the general tendency in the field to develop general-purpose software packages. Yet another
source of problems discussed in the paper is configuration debt: in addition to all the configuration a regular software system may require, ML systems add a sizable number of ML-specific configuration settings that have to be set and maintained.
Researchers and software engineers often find themselves working together on the same project
aiming to reach a business goal with a machine learning approach. On the surface there seems to
be a clear separation of responsibilities: researchers produce the model while engineers build the
infrastructure to run it. In reality, their areas of concern often overlap when considering the devel-
opment process, model inputs and outputs and performance metrics. Contributors in both roles
often work on the same code. Thus it is beneficial to include researchers in the whole develop-
ment journey, making sure they own the product codebase along with the engineers, use the same
version control and participate in code reviews. Despite obvious onboarding and slow-start chal-
lenges, this approach was seen to bring long term benefits in terms of speed and quality of product
delivery [12].
6.2 Monitoring
Monitoring is one of the issues associated with maintaining machine learning systems as reported
by Sculley et al. [71]. While monitoring is crucial for the maintenance of any software service,
6 Please refer to Section 3.3 for a detailed discussion of data labeling.
7 In software engineering, an anti-pattern is understood as a common response to a recurring problem that is considered ineffective or counterproductive.
the ML community is still in the early stages of understanding which key metrics of data and models to monitor and how to trigger system alarms when they deviate from normal behavior. Monitoring of evolving input data, prediction bias and overall performance of ML models is an open problem. Another maintenance issue highlighted by this paper that is specific to data-driven decision making is feedback loops: ML models in production can influence their own behavior over time via regular retraining. While retraining keeps the model up to date, it can also create a feedback loop in which the input to the model is adjusted to influence its behavior. This can be done intentionally, as well as inadvertently, which is a unique challenge when running live ML
systems.
Klaise et al. [72] point out the importance of outlier detection as a key instrument to flag model
predictions that cannot be used in a production setting. The authors name two reasons for such
predictions to occur: the inability of the models to generalize outside of the training dataset and
overconfident predictions on out-of-distribution instances due to poor calibration. Deployment of
the outlier detector can be a challenge in its own right, because labeled outlier data is scarce, and
the detector training often becomes a semi-supervised or even an unsupervised problem.
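A minimal sketch of this pattern is shown below: an unsupervised detector fitted to the model's training features gates predictions at serving time, routing suspected outliers for review. The detector choice (an isolation forest), contamination rate and data are illustrative assumptions rather than the detectors discussed by Klaise et al.

```python
# Minimal sketch of gating a model's predictions with an unsupervised
# outlier detector trained on the same features. Everything here is
# illustrative; real deployments tune the detector to their data.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X_train = rng.normal(size=(2000, 10))                        # features seen during training
detector = IsolationForest(contamination=0.01, random_state=0).fit(X_train)

def guarded_predictions(model_predict, x: np.ndarray):
    """Serve predictions for inliers; flag out-of-distribution inputs for review."""
    inlier = detector.predict(x) == 1                        # +1 inlier, -1 outlier
    return [model_predict(row) if ok else "needs-review" for row, ok in zip(x, inlier)]

dummy_model = lambda row: float(row.sum() > 0)               # stand-in for the deployed model
print(guarded_predictions(dummy_model, rng.normal(size=(3, 10)) + 6.0))
```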
Additional insight on monitoring of ML systems can be found in Ackermann et al. [73]. This
paper describes an early intervention system (EIS) for two police departments in the US. On the
surface their monitoring objectives seem completely standard: data integrity checks, anomaly de-
tection and performance metrics. One would expect to be able to use out-of-the-box tooling for
these tasks. However, the authors explain that they had to build all these checks from scratch in
order to maintain good model performance. For instance, the data integrity check meant verifying
updates of a certain input table and checksums on historical records, the performance metric was
defined in terms of the number of changes in top k outputs, and anomalies were tracked on rank-
order correlations over time. All of these monitoring tools required considerable investigation and
implementation. This experience report highlights a common problem with currently available
end-to-end ML platforms: the final ML solutions are usually so sensitive to a problem’s specifics
that out-of-the-box tooling does not fit their needs well.
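To make such bespoke metrics concrete, the sketch below computes two signals of the kind described above, the number of changes in the top-k outputs and the rank-order correlation between consecutive scoring runs, on synthetic scores. It is illustrative only and not the EIS monitoring code.

```python
# Minimal sketch of custom monitoring signals for a ranking model:
# top-k churn and rank correlation between two scoring runs. Synthetic data.
import numpy as np
from scipy.stats import spearmanr

def topk_changes(scores_old: np.ndarray, scores_new: np.ndarray, k: int) -> int:
    """How many entities entered or left the top-k between two runs."""
    top_old = set(np.argsort(scores_old)[-k:])
    top_new = set(np.argsort(scores_new)[-k:])
    return len(top_old.symmetric_difference(top_new))

rng = np.random.default_rng(0)
yesterday = rng.random(500)
today = yesterday + rng.normal(scale=0.05, size=500)   # small day-to-day movement

rho, _ = spearmanr(yesterday, today)
print("top-50 changes:", topk_changes(yesterday, today, k=50))
print("rank-order correlation:", round(float(rho), 3))
```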
As a final remark we note that there is an overlap between the choice of metrics for monitoring
and validation. The latter topic is discussed in Section 5.1.
6.3 Updating
Once the initial deployment of the model is completed, it is often necessary to be able to update
the model later on in order to make sure it always reflects the most recent trends in data and the
environment. There are multiple techniques for adapting models to new data, including sched-
uled regular retraining and continual learning [74]. Nevertheless in the production setting model
updating is also affected by practical considerations.
A particularly important problem that directly impacts the quality and frequency of the model update procedure is concept drift, also known as dataset shift [75]. Concept drift in ML is understood as a change in the joint distribution p(X, y), where X is the model input and y is
the model output. Such changes can occur discretely, for example after some outside event that
affects the input data distribution, or continuously, when data is gradually changing over time.
Undetected, this phenomenon can have major adverse effects on model performance, as is shown
by Jameel et al. [76] for classification problems or by Celik and Vanschoren [77] in the AutoML
context. Concept drift can arise due to a wide variety of reasons. For example, the finance industry
faced turbulent changes as the financial crisis of 2008 was unfolding; had advanced detection techniques been employed, they could have provided additional insights into the ongoing crisis,
as explained by Masegosa et al. [78]. Changes in data can also be caused by an inability to avoid
fluctuations in the data collection procedure, as described in the paper by Langenkämper et al.
[79] which studies the effects of slight changes in marine images on deep learning models’ perfor-
mance. Data shifts can have noticeable consequences even when occurring at a microscopic scale,
as Zenisek et al. [80] show in their research on predictive maintenance for wear and tear of indus-
trial machinery. Even though concept drift has been known for decades [81], these examples show
that it remains a critical problem for applications of ML today. Consequently, detection of concept
drift becomes a growing concern for teams that maintain ML models in production [82, 83], which
makes this challenge directly related to monitoring discussed in the previous section.
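As a minimal illustration of drift detection, the sketch below compares each input feature's live distribution against a reference window with a two-sample Kolmogorov-Smirnov test, a common proxy for the change in p(X, y) described above (it only checks the marginal input distributions). The window sizes and significance level are illustrative assumptions, not a production-grade detector.

```python
# Minimal sketch of a per-feature covariate drift check using the
# two-sample Kolmogorov-Smirnov test. Windows and alpha are assumptions.
import numpy as np
from scipy.stats import ks_2samp

def drifted_features(reference: np.ndarray, live: np.ndarray, alpha: float = 0.01):
    """Indices of features whose live distribution differs from the reference."""
    flagged = []
    for j in range(reference.shape[1]):
        _, p_value = ks_2samp(reference[:, j], live[:, j])
        if p_value < alpha:
            flagged.append(j)
    return flagged

rng = np.random.default_rng(0)
reference = rng.normal(size=(5000, 4))       # data the model was trained/validated on
live = rng.normal(size=(1000, 4))            # recent production inputs
live[:, 2] += 0.5                            # inject a shift into one feature
print(drifted_features(reference, live))     # typically flags feature 2
```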
On top of the question of when to retrain the model to keep it up to date, there is an infrastruc-
tural question on how to deliver the model artifact to the production environment. In software
engineering such tasks are commonly solved with continuous delivery (CD), which is an approach
for accelerating the development cycle by building an automated pipeline for building, testing and
deploying software changes. CD for machine learning solutions is complicated because, unlike
in regular software products where changes only happen in the code, ML solutions experience
change along three axes: the code, the model and the data. An attempt to formulate CD for ML as
a separate discipline can be seen in Sato et al. [63]. This work describes the pieces involved and the
tools that can be used at each step of building the full pipeline. A direct illustration of the benefits
that a full CD pipeline can bring to the real-life ML solution can be found in Wider and Deger [84].
While updating is necessary for keeping a model up to date with recent fluctuations in the data,
it may also inflict damage on users or downstream systems because of the changes in the model’s
behavior, even without causing obvious software errors. To study updates in teams where AI is
used to support human decision making, Bansal et al. [85] introduced the notion of compatibility of an AI update. The authors define an update as compatible only if it does not violate the trust the user has built up based on the model's past behavior. They proceeded to show that updating an AI model
to increase accuracy, at the expense of compatibility, may degrade overall AI-Human team perfor-
mance. This line of research is continued by Srivastava et al. [86], who provide a detailed empirical
study of backward compatibility issues in ML systems. They show how ML model updates may
become backward incompatible due to optimization stochasticity or noisy training datasets. These
findings motivate the need for de-noising and compatibility-aware training methods as the means
to ensure reliable updates of deployed ML models.
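In the spirit of this line of work, the sketch below computes a simple backward-compatibility score: the fraction of examples the old model classified correctly that the new model still classifies correctly. The exact formulation used by Bansal et al. differs in its details, and the labels here are invented.

```python
# Minimal sketch of a backward-compatibility check between two model
# versions. The metric and data are illustrative, not Bansal et al.'s exact definition.
import numpy as np

def backward_compatibility(y_true, y_old, y_new) -> float:
    """Fraction of the old model's correct predictions preserved by the new model."""
    y_true, y_old, y_new = map(np.asarray, (y_true, y_old, y_new))
    old_correct = y_old == y_true
    if not old_correct.any():
        return 1.0
    return float((y_new[old_correct] == y_true[old_correct]).mean())

y_true = [1, 0, 1, 1, 0, 1]
y_old  = [1, 0, 1, 0, 0, 1]    # old model: 5 of 6 correct
y_new  = [1, 1, 1, 0, 0, 1]    # new model keeps 4 of those 5 correct
print(backward_compatibility(y_true, y_old, y_new))   # 0.8
```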
7 CROSS-CUTTING ASPECTS
In this section we describe four additional aspects that ML projects have to consider: ethics, law, end users' trust and security. These aspects can affect every stage of the deployment pipeline.
7.1 Ethics
Ethical considerations should always inform data collection and modeling activities8. As stated
in the report on ethical AI produced by the Alan Turing Institute [87], “it is essential to establish
a continuous chain of human responsibility across the whole AI project delivery workflow”. If
researchers and developers do not follow this recommendation, complications may come up due
to a variety of reasons, some of which we illustrate in this section.
Since ML models use previously seen data to make decisions, they can rely on hidden biases that
already exist in data - a behavior that is hard to foresee and detect. This effect is discussed in detail
by O’Neil [88] in the field of criminal justice. Models that calculate a person’s criminal “risk score”
are often marketed as a way to remove human bias. Nevertheless, they use seemingly neutral
demographic information, such as a person's neighborhood, that often ends up serving as a proxy for more
8 Here by “ethics” we understand moral principles and techniques that inform the responsible development and use of ML
or AI solutions.
sensitive data such as race. As a result, people are disadvantaged on the basis of race or income.
Machine translation is another example of an area where such hidden biases exist in data and can
be exploited via an ML system. Prates et al. [89] show a strong tendency towards male defaults
in a popular online translation service, in particular for fields typically associated with an unbalanced gender distribution, such as STEM.
Likewise, Soden et al. [90] mention the aggravation of social inequalities through the use of
biased training datasets as one of the main hurdles in applying ML to Disaster Risk Management
(DRM). It is argued that ML causes privacy and security9 concerns through a combination of pre-
viously distinct datasets. Reducing the role of both experts and the general public is also seen as
an ethical issue by DRM professionals, because they feel it increases the probability of error or
misuse. Similar worries about unintentional or malign misuse of ML decision making systems are
expressed by Muthiah et al. [91]. Their software for predicting civil unrest, called EMBERS, is de-
signed to be used as a forecasting and communication tool; however, the authors remark that it can also be potentially misused by governments, either due to a misunderstanding of its role in society,
or deliberately.
ML models for facial analysis often become subject to criticism due to their unethical behavior. For example, Buolamwini and Gebru [92] analyzed popular datasets for facial analysis and discovered them to be imbalanced on the basis of skin colour. They also evaluated three commercial classification systems and showed darker-skinned females to be the most misclassified group. The authors conclude that urgent attention towards gender and skin type is needed for businesses that
want to build genuinely fair facial analysis algorithms. Such questions about fairness and account-
ability of ML algorithms and data are studied by the branch of machine learning known as Fairness
in ML [93].
An interesting ethical aspect arises in the usage of ML in the field of creative arts, discussed by
Anantrasirichai and Bull [94]. When a trained model is used to create a piece of visual art, it is
not entirely clear where the authorship of this piece resides. The question of originality therefore requires special attention. Closely related to this question is the growing concern of fake content
being generated with ML, such as deepfake images and video, which can be easily used for the
wrong purposes [95].
7.2 Law
As ML grows its influence on society’s everyday life, it is natural to expect more regulations to
govern how ML models should function and how businesses, governments and other bodies can
use them. Such legal frameworks can sometimes be used to guide decisions on ethics, although in general ethics and legality should be considered separate aspects.
Various countries have produced regulations to protect personal data rights. Typically, the more
sensitive the information collected from the individual, the stronger the regulations governing its
use. Examples of such regulations include the General Data Protection Regulation in the European
Union [96] and ethical screening laws in a range of Asian countries [97]. One domain that deals
with some of the most sensitive information is healthcare. According to Han et al. [98], many
countries have strict laws in place to protect the data of patients, which makes the adoption of ML
in healthcare particularly difficult. On the one hand, there is no doubt that these rules are absolutely necessary to make sure people are comfortable with their data being used. On the other hand, the number of reviews, software updates and cycles of data collection and annotation that are required makes it exceptionally hard to keep up with technical advances in ML, as Han et al. [98] explain
following their experience deploying ML solutions in the healthcare sector in Japan.
9 We discuss related cross-cutting security concerns in Section 7.4.
Legislation takes time to develop, and often cannot keep up with the speed of progress in ML.
This phenomenon is well known in policymaking and has been discussed in the context of ML as
well as other technological advances by Marchant [99] or more recently by the World Economic
Forum [100, 101]. As Malan [101] explains, by the time regulations are written they can already
be out of date, resulting in a cat-and-mouse game that is wasteful on resources and causes legal
framework abuses. Additionally, it is generally challenging to formulate specific and unambiguous
laws for such a rapidly developing area as ML. For example, Wachter et al. show that GDPR lacks
precise language as well as explicit and well-defined rights and safeguards, therefore failing to
guarantee the ‘right to explanation’ [102]. As a result of these challenges, ML applications often
have to abide by the existing laws of the area where they are deployed. Minssen et al. analyze
the current regulatory approaches to medical ML in the US and Europe, and discuss how existing
laws evaluate ML applications [103]. At the same time governments have to provide ML adoption
plans. For instance, the US Food and Drug Administration released an action plan outlining steps
the agency plans to take to produce a regulatory framework for medical ML solutions [104].
Companies should not be focusing solely on the technological side of their solutions, as Deep-
Mind and Royal Free NHS Foundation Trust discovered while working on Streams, an applica-
tion for automatic review of test results for serious conditions. Their initial collaboration was not
specific enough on the use of patient data and on patient involvement overall, which triggered
an investigation on their compliance with data protection regulations. The revised collaboration
agreement was far more comprehensive and included a patient and public engagement strategy in
order to ensure data is being used ethically [105].
is also emphasized by Mutembesa et al. [110] as they analyzed their experience of deploying a
nation-wide cassava disease surveillance system in Uganda.
While the projects described above focused on engagement and accountability, in other circum-
stances explainability is the key to building the trust of the target audience. This is often the case
when the users have experience and an understanding of ML. Rudin [111] called on the ML community to stop using black box models and explaining their behavior after the fact, and to instead design models that are inherently interpretable. Bhatt et al. [11] analyzed explainability as a feature of ma-
chine learning models deployed within enterprises, and found that it is a must-have requirement
for most stakeholders, including executives, ML engineers, regulators, and others. Moreover, their
survey showed that an explainability score is a desired model metric, along with measures of fair-
ness and robustness. Explainability is also necessary in cases where it is demanded by the existing
regulations10 , and users will not trust decisions that are made automatically without provided ex-
planations. Wang et al. [112] describe such a requirement in the context of credit risk scoring. They
observed that the XGBoost algorithm outperforms traditional scorecard approaches, but lacks the
necessary explainability component. This prompted the authors to develop a custom loan decision
explanation technique for XGBoost, subsequently deployed by QuickBooks Capital.
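To make the requirement concrete, the sketch below attaches per-decision feature attributions to a gradient boosted classifier using SHAP values, one widely used option. It is not the custom explanation technique developed by Wang et al.; the feature names and data are invented, and exact return shapes can vary between library versions.

```python
# Minimal sketch of per-decision explanations for a boosted-tree credit
# model using SHAP values. Illustrative only; synthetic data and features.
import numpy as np
import shap
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.random((2000, 4))                                    # e.g. income, debt, history, age
y = (X[:, 0] - X[:, 1] + 0.1 * rng.normal(size=2000) > 0).astype(int)
model = xgb.XGBClassifier(n_estimators=100, max_depth=3).fit(X, y)

explainer = shap.TreeExplainer(model)
applicant = X[:1]                                            # one loan application to explain
contributions = np.ravel(explainer.shap_values(applicant))   # per-feature contribution (log-odds)
for name, value in zip(["income", "debt", "history", "age"], contributions):
    print(f"{name}: {value:+.3f}")
```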
A poorly designed user interface can be one of the main obstacles in the adoption of any new
technology. While this problem is not specific to ML, it is nevertheless worth mentioning as a challenge ML applications face. For example, Wang et al. studied the deployment of the AI-powered medical diagnosis tool "Brilliant Doctor" in rural China and discovered that the majority of doctors could not use it productively. One of the main reasons quoted was a UX design that did not take into account particularities of the environment (screen sizes, interaction with other software in the clinic) where it was installed, often resulting in unusable software [113]. By contrast, investing time in specialised user interfaces with a tailored user experience can pay off
with fast user adoption. Developers of Firebird [27], a system that helps identify priority targets
for fire inspection in the city of Atlanta, USA, found that the best way to avoid resistance from
the end users while transitioning to an ML solution as a replacement of the previously used pen-
and-paper method was to develop a user interface that presented the results of modelling in a
way that the end users (fire officers and inspectors in the fire department) found most useful and clear. Similarly, the authors of EMBERS [91], a system that forecasts population-level events (such as protests) in Latin America, noticed that their users have two modes of using the system: (a) high recall: obtain most events, and then filter them using other methods; (b) high precision: focus on a specific
area or a specific hypothesis. To improve the user experience and thus increase their confidence
in the product, the user interface was improved to easily support both modes. This case study
emphasizes the importance of context-aware personalization for ML systems’ interfaces, one of
the key observations delivered by “Project explAIn” [109].
7.4 Security
Machine Learning opens up opportunities for new types of security attacks across the whole ML
deployment workflow [114]. Specialised adversarial attacks for ML can occur on the model itself,
the data used for training and also the resulting predictions. The field of adversarial machine
learning studies the effect of such attacks against ML models and how to protect against them
[115, 116]. Recent work from Siva et al. found that industry practitioners are not equipped to protect against, detect and respond to attacks on their ML systems [117]. In this section, we describe the
three most common attacks reported in practice which affect deployed ML models: data poisoning,
model stealing and model inversion. We focus specifically on adversarial machine learning and
10 Effects of regulations on ML deployment are also discussed in Section 5.2
consider other related general security concerns in deploying systems such as access control and
code vulnerabilities beyond the scope of our work.
In data poisoning, the goal of the adversarial attack is to deliberately corrupt the integrity of the
model during the training phase in order to manipulate the produced results. In data poisoning
scenarios an attacker is usually assumed to have access to the data that will ultimately be used
for training, for example by sending emails to victims to subvert their spam filter [118]. Poisoning
attacks are particularly relevant in situations where the machine learning model is continuously
updated with new incoming training data. Jagielski et al. reported that in a medical setting using
a linear model, the introduction of specific malicious samples with an 8% poisoning rate in the
training set resulted in incorrect dosage for half of the patients [119].
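The toy sketch below illustrates the general mechanism: a small fraction of deliberately mislabeled training points biases the decisions of a linear model. It is a simple label-flipping example on synthetic data, not the optimization-based attack studied by Jagielski et al.

```python
# Minimal sketch of training-set poisoning: flip the labels of a small
# fraction of influential points and compare held-out accuracy. Synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))
y = (X[:, 0] > 0).astype(int)
clean_model = LogisticRegression().fit(X, y)

n_poison = int(0.08 * len(y))                       # 8% poisoning rate
idx = np.argsort(X[:, 0])[-n_poison:]               # the most extreme positive points
y_poisoned = y.copy()
y_poisoned[idx] = 0                                  # attacker-controlled labels
poisoned_model = LogisticRegression().fit(X, y_poisoned)

X_test = rng.normal(size=(5000, 2))
y_test = (X_test[:, 0] > 0).astype(int)
print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```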
Data poisoning can also occur as a result of a coordinated collective effort that exploits feedback
loops we have discussed in Section 6.2, as happened with Microsoft’s Twitter bot Tay [120]. Tay
was designed to improve its understanding of the language over time but was quickly inundated
with a large number of deliberately malevolent tweets. Within 16 hours of its release a troubling
percentage of Tay’s messages were abusive or offensive, and the bot was taken down.
Another type of adversarial attack is reverse engineering a deployed model by submitting inputs (e.g. via a public prediction API) and monitoring the outputs. The adversarial queries are
crafted to maximize the extraction of information about the model in order to train a substitute
model. This type of attack is referred to as model stealing. In a nutshell, this attack results in the
loss of intellectual property which could be a key business advantage for the defender. Tramèr et
al. [121] have shown that it is possible to replicate models deployed in production from ML ser-
vices offered by Google, Amazon and Microsoft across a range of ML algorithms including logistic
regression, decision trees, SVMs and neural networks. In their work, they report extracting an equivalent model with between 650 and 4,013 queries, in times ranging from 70 to 2,088 seconds.
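The sketch below illustrates the setup with a local stand-in for a prediction API: an attacker sends queries to the deployed model, records its answers and fits a substitute on the resulting pairs, then measures how often the two agree. The models, query budget and data are illustrative assumptions, not a reproduction of Tramèr et al.'s experiments.

```python
# Minimal sketch of model stealing: train a substitute on (query, response)
# pairs obtained from a prediction interface. The "victim" is a local stand-in.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X_private = rng.normal(size=(5000, 5))
y_private = (X_private @ np.array([1.0, -2.0, 0.5, 0.0, 1.5]) > 0).astype(int)
victim = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_private, y_private)

def prediction_api(x: np.ndarray) -> np.ndarray:
    """Stand-in for a public prediction endpoint that returns only labels."""
    return victim.predict(x)

queries = rng.normal(size=(2000, 5))                 # attacker-chosen inputs
substitute = LogisticRegression().fit(queries, prediction_api(queries))

X_eval = rng.normal(size=(2000, 5))
agreement = (substitute.predict(X_eval) == prediction_api(X_eval)).mean()
print(f"substitute agrees with the victim on {agreement:.1%} of inputs")
```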
A related attack is model inversion, where the goal of the adversary is to recover
parts of the private training set, thereby breaking its confidentiality. Fredrikson et al. have shown
that they could recover training data by exploiting models that report confidence values along with
their predictions [122]. Veale et al. [123] emphasize the importance of protecting against model
inversion attacks as a critical step towards compliance with data protection laws such as GDPR.
For example, this is most likely the case for operational maintenance of ML models, discussed in
Sections 6.2 and 6.3. Many platforms on the market offer end-to-end experience for the user, taking
care of such things as data storage, retraining and deployment. Examples include AWS SageMaker
[125], Microsoft ML [126], Uber Michelangelo [127], TensorFlow TFX [128], MLflow [129] and
more. A typical ML platform would include, among other features, a data storage facility, model
hosting with APIs for training and inference operations, a set of common metrics to monitor model
health and an interface to accept custom changes from the user. By offering managed infrastructure
and a range of out-of-the-box implementations for common tasks, such platforms greatly reduce
the operational burden associated with maintaining the ML model in production.
Quality assurance, which is the focus of Section 5, also looks to be an area where better tools
can be of much assistance. Models can greatly benefit from the development of a test suite to verify
their behavior, and the community actively develops tools for that purpose. Jenga [130] helps ensure a model's robustness against errors in data, which very commonly occur in practice as was men-
tioned in Section 3. CheckList methodology [131] provides a formal approach towards assessing
the quality of NLP models. The Data Linter [132] inspects ML data sets to identify potential issues
in the data.
As discussed in Section 3.3, obtaining labels is often a problem with real world data. Weak
supervision has emerged as a separate field of ML which looks for ways to address this challenge.
Consequently, a number of weak supervision libraries are now actively used within the community,
and show promising results in industrial applications. Some of the most popular tools include
Snorkel [133], Snuba [134] and cleanlab [135].
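The sketch below illustrates the weak supervision idea behind these tools: several noisy labeling functions vote on each example and their votes are combined, here by a simple majority with abstentions, whereas Snorkel fits a probabilistic label model instead. The rules and data are invented for illustration.

```python
# Minimal sketch of weak supervision: noisy labeling functions vote on
# unlabeled examples and the votes are aggregated. Rules and data are invented.
from collections import Counter

ABSTAIN, HAM, SPAM = None, 0, 1

def lf_contains_offer(text):    return SPAM if "limited offer" in text.lower() else ABSTAIN
def lf_has_unsubscribe(text):   return SPAM if "unsubscribe" in text.lower() else ABSTAIN
def lf_from_known_thread(text): return HAM if text.startswith("Re:") else ABSTAIN

labeling_functions = [lf_contains_offer, lf_has_unsubscribe, lf_from_known_thread]

def weak_label(text):
    """Majority vote over non-abstaining labeling functions (None if all abstain)."""
    votes = [vote for lf in labeling_functions if (vote := lf(text)) is not ABSTAIN]
    return Counter(votes).most_common(1)[0][0] if votes else ABSTAIN

emails = ["Limited offer, click now! Unsubscribe below.", "Re: meeting notes"]
print([weak_label(e) for e in emails])   # [1, 0]
```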
A growing field of AutoML [136] aims to address challenges around model selection and hyper-parameter tuning, discussed in Sections 4.1 and 4.3. There is a large variety of tools that provide general-purpose implementations of AutoML algorithms, such as Auto-keras [137], Auto-sklearn [138] or TPOT [139]. However, practical reports of applying AutoML to real world problems indicate that practitioners need to exercise extreme caution, as AutoML methods might not yet be ready for decision making in high-stakes areas [140, 141].
Given the potential damage an unnoticed dataset shift can cause to the quality of predictions of
deployed ML models (see Section 6.3), there are many techniques to detect and mitigate its effects.
A large variety of methods based on dimensionality reduction and statistical hypothesis testing is reviewed by Rabanser et al. [142]; many are now implemented in software libraries (e.g. Alibi Detect [143]) and services (e.g. Azure ML11 and AWS Sagemaker12), available for use in a deployment's monitoring suite. The community has also made strides in developing strategies for dealing with the shift once it has been detected. We refer interested readers to the works on using domain adaptation
[144], meta-learning [145] and transfer learning [146, 147] as ways of addressing dataset shift.
Using specific tools to solve individual problems is a straightforward approach. However, practitioners need to be aware that every tool they use introduces an additional dependency into their solution. While a single additional dependency seems manageable, their number can quickly grow and become a maintenance burden. Besides, as we mentioned above, new tools for ML are being released constantly, presenting practitioners with the challenge of choosing the right one, which requires learning each tool's strengths and shortcomings.
11 https://docs.microsoft.com/en-us/azure/machine-learning/how-to-monitor-datasets
12 https://docs.aws.amazon.com/sagemaker/latest/dg/clarify-model-monitor-feature-attribution-drift.html
a lack of standard practices, and in that context MLTRL suggests a possible way to define such
standards.
A very widespread practice in software engineering is to define a set of guidelines and best
practices to help developers make decisions at various stages of the development process. These
guidelines can cover a wide range of questions, from variable names to execution environment
setup. For example, Zinkevich [158] compiled a collection of best practices for machine learning that are utilized at Google. While this cannot be viewed as a coherent paradigm for doing ML deployment, the document gives practical advice on a variety of important aspects, drawing on the real life experiences of engineers and researchers in the company. Among others, it describes rules and suggestions for such important deployment topics as monitoring (discussed in Section 6.2), end user experience (Section 7.3) and infrastructure (Section 6.1).
Besides serving as a collection of advice for common problems, guidelines can also be used as
a way to unify approaches towards deploying ML in a single area. The Association of German
Engineers (VDI) has released a series of guidelines on various aspects of big data applications
in the manufacturing industry [159]. These documents cover a wide range of subjects, including
data quality, modeling, user interfaces, and more. The series aims to harmonize the available technologies used in the industry and to facilitate cooperation and implementation. Such initiatives can help bridge the gap between ML solutions and the regulations of a particular applied area, as discussed in Section 7.2 of this survey.
Holistic approaches are created with ML applications in mind, and therefore have the potential to make deploying ML significantly easier. It should be noted, however, that all such approaches require a substantial time investment, because they represent major changes to current norms in project management and development. Therefore a careful assessment of risks versus benefits should be carried out before adopting any of them.
9 FURTHER WORK
Even though the set of challenges we reviewed covers every stage of the ML deployment workflow,
it is far from complete. Identifying other, especially non-technical, challenges is a natural direction
of further work and could be augmented by conducting interviews with industry representatives
about their experiences of deploying ML.
In this paper, we reviewed reports from a variety of industries, which shows the ubiquity and
variety of challenges with deploying ML in production. An interesting extension would be the
comparative analysis of industries. Quantitative and qualitative analysis of most commonly re-
ported challenges may open interesting transferability opportunities, as approaches developed in one field may be applicable in another.
Our work includes a brief discussion of existing tools in Section 8.1. However, the community
would benefit from a comprehensive review of currently available tools and services, mapped to
challenges reported in our study. This new work could be combined with our survey to enable
practitioners to identify the problem they are facing and choose the most appropriate tool that
addresses that problem.
10 CONCLUSION
In this survey, we showed that practitioners deal with challenges at every step of the ML deploy-
ment workflow due to practical considerations of deploying ML in production. We discussed chal-
lenges that arise during the data management, model learning, model verification and model de-
ployment stages, as well as considerations that affect the whole deployment pipeline including
ethics, end users’ trust, law and security. We illustrated each stage with examples across different
fields and industries by reviewing case studies, experience reports and the academic literature.
We argue that it is worth the academic community’s time and focus to think about these prob-
lems, rather than expect each applied field to figure out their own approaches. We believe that
ML researchers can drive improvements to the ML deployment experience by exploring holistic
approaches and taking into account practical considerations.
ML shares a lot of similarities with traditional computer science disciplines, and consequently
faces similar challenges, albeit with its own peculiarities. Therefore ML as a field would benefit
from a cross-disciplinary dialog with such fields as software engineering, human-computer interaction, systems and policymaking. Many of the pain points we have described in this work have already been experienced by communities in those fields, and the ML community should turn to them for solutions and inspiration.
As an observation that follows from the process of collecting papers to review in this survey, we
note the relative shortage of deployment experience reports in the academic literature. Valuable
knowledge obtained by industry ML practitioners goes unpublished. We would like to encourage
organisations to prioritize sharing such reports, not only because they provide valuable information for the wider community, but also as a way to self-reflect, collect feedback and improve on their own solutions. Nevertheless, several venues focused on ML deployment already exist, and we encourage interested
readers to follow them:
• International Conference on Knowledge Discovery and Data Mining (ACM SIGKDD), Applied
Data Science track.
• Conference on Innovative Applications of Artificial Intelligence (IAAI).
• European Conference on Machine Learning and Principles and Practice of Knowledge Dis-
covery in Databases (ECML PKDD), Applied Data Science track.
• IEEE International Conference on Machine Learning and Applications (ICMLA).
• Applied Artificial Intelligence journal.
• Neural Computing and Applications journal.
We hope this survey will encourage discussions within the academic community about prag-
matic approaches to deploying ML in production.
ACKNOWLEDGMENTS
A.P. and N.L. are grateful for funding from a Senior AI Fellowship from the Alan Turing Institute
(ATI) and UK Research & Innovation (UKRI). We would like to thank our reviewers for thought-
ful and detailed comments that helped improve the paper. We would also like to thank Jessica
Montgomery, Diana Robinson and Ferenc Huszar for insightful discussions.
REFERENCES
[1] Arif Cam, Michael Chui, and Bryce Hall. Global AI survey: AI proves its worth, but few scale impact. McKinsey
Analytics, 2019.
[2] Thomas H. Davenport and Rajeev Ronanki. Artificial intelligence for the real world. Harvard business review,
96(1):108–116, 2018.
[3] Royal Society (Great Britain). Machine Learning: The Power and Promise of Computers that Learn by Example: an
Introduction. Royal Society, 2017.
[4] Michal Pěchouček and Vladimír Mařík. Industrial deployment of multi-agent technologies: review and selected case
studies. Autonomous agents and multi-agent systems, 17(3):397–431, 2008.
[5] Kyle Wiggers. Algorithmia: 50% of companies spend between 8 and 90 days deploying a single AI model, 2019. Avail-
able at https://venturebeat.com/2019/12/11/algorithmia-50-of-companies-spend-upwards-of-three-months-deploying-a-single-ai
[6] Lawrence E. Hecht. Add it up: How long does a machine learning deployment take?, 2019. Available at
https://thenewstack.io/add-it-up-how-long-does-a-machine-learning-deployment-take/.
[7] Kyle Wiggers. IDC: For 1 in 4 companies, half of all AI projects fail, 2019. Available at
https://venturebeat.com/2019/07/08/idc-for-1-in-4-companies-half-of-all-ai-projects-fail/.
[8] Ben Lorica and Nathan Paco. The State of Machine Learning Adoption in the Enterprise. O’Reilly Media, 2018.
[9] The state of development and operations of AI applications. Dotscience, 2019. Available at
https://dotscience.com/assets/downloads/Dotscience_Survey-Report-2019.pdf.
[10] Artificial intelligence and machine learning projects are obstructed by data issues. Dimensional Research, 2019. Avail-
able at https://telecomreseller.com/wp-content/uploads/2019/05/EMBARGOED-UNTIL-800-AM-ET-0523-Dimensional-Research-
[11] Umang Bhatt, Alice Xiang, Shubham Sharma, Adrian Weller, Ankur Taly, Yunhan Jia, Joydeep Ghosh, Ruchir Puri,
José MF Moura, and Peter Eckersley. Explainable machine learning in deployment. In Proceedings of the 2020 Con-
ference on Fairness, Accountability, and Transparency, pages 648–657, 2020.
[12] Saleema Amershi, Andrew Begel, Christian Bird, Robert DeLine, Harald Gall, Ece Kamar, Nachiappan Nagappan,
Besmira Nushi, and Thomas Zimmermann. Software engineering for machine learning: A case study. In 2019
IEEE/ACM 41st International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP), pages
291–300. IEEE, 2019.
[13] Lucas Baier, Fabian Jöhren, and Stefan Seebacher. Challenges in the deployment and operation of machine learning
in practice. In ECIS, 2019.
[14] Rob Ashmore, Radu Calinescu, and Colin Paterson. Assuring the machine learning lifecycle: Desiderata, methods,
and challenges. ACM Computing Surveys (CSUR), 54(5):1–39, 2021.
[15] Colin Shearer. The CRISP-DM model: the new blueprint for data mining. Journal of data warehousing, 5(4):13–22,
2000.
[16] WAR Roald Bradley Severtson, L Franks, and G Ericson. What is the team data science process? Microsoft Azure,
2017.
[17] Hironori Takeuchi and Shuichiro Yamamoto. Business analysis method for constructing business–AI alignment
model. Procedia Computer Science, 176:1312–1321, 2020.
[18] Neoklis Polyzotis, Sudip Roy, Steven Euijong Whang, and Martin Zinkevich. Data lifecycle challenges in production
machine learning: a survey. ACM SIGMOD Record, 47(2):17–28, 2018.
[19] Hongming Cai, Boyi Xu, Lihong Jiang, and Athanasios V Vasilakos. IoT-based big data storage systems in cloud
computing: perspectives and challenges. IEEE Internet of Things Journal, 4(1):75–87, 2016.
[20] The DELVE Initiative. Data readiness: Lessons from an emergency. Technical report, 2020.
[21] Jimmy Lin and Dmitriy Ryaboy. Scaling big data mining infrastructure: the Twitter experience. ACM SIGKDD
Explorations Newsletter, 14(2):6–19, 2013.
[22] Robert C. Martin. The single responsibility principle. The principles, patterns, and practices of Agile Software Devel-
opment, pages 149–154, 2002.
[23] Ziawasch Abedjan, Xu Chu, Dong Deng, Raul Castro Fernandez, Ihab F. Ilyas, Mourad Ouzzani, Paolo Papotti,
Michael Stonebraker, and Nan Tang. Detecting data errors: Where are we and what needs to be done? Proceed-
ings of the VLDB Endowment, 9(12):993–1004, 2016.
[24] Rajashree Y Patil and RV Kulkarni. A review of data cleaning algorithms for data warehouse systems. International
Journal of Computer Science and Information Technologies, 3(5):5212–5214, 2012.
[25] Fakhitah Ridzuan and Wan Mohd Nazmee Wan Zainon. A review on data cleansing methods for big data. Procedia
Computer Science, 161:731–738, 2019.
[26] Alfredo Nazabal, Christopher KI Williams, Giovanni Colavizza, Camila Rangel Smith, and Angus Williams. Data
engineering for data analytics: a classification of the issues, and case studies. arXiv preprint arXiv:2004.12929, 2020.
Unpublished.
[27] Michael Madaio, Shang-Tse Chen, Oliver L. Haimson, Wenwen Zhang, Xiang Cheng, Matthew Hinds-Aldrich,
Duen Horng Chau, and Bistra Dilkina. Firebird: Predicting fire risk and prioritizing fire inspections in Atlanta.
In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages
185–194, 2016.
[28] F. Pacheco, E. Exposito, M. Gineste, C. Baudoin, and J. Aguilar. Towards the deployment of machine learning solu-
tions in network traffic classification: A systematic survey. IEEE Communications Surveys & Tutorials, 21(2):1988–
2014, 2019.
[29] Maurizio Dusi, Francesco Gringoli, and Luca Salgarelli. Quantifying the accuracy of the ground truth associated
with internet traffic traces. Computer Networks, 55(5):1158–1167, 2011.
[30] Samuel Budd, Emma C Robinson, and Bernhard Kainz. A survey on active learning and human-in-the-loop deep
learning for medical image analysis. Medical Image Analysis, 71:102062, 2021.
[31] Jun Du and Charles X Ling. Active learning with human-like noisy oracle. In 2010 IEEE International Conference on
Data Mining, pages 797–802. IEEE, 2010.
[32] Julia Peyre, Josef Sivic, Ivan Laptev, and Cordelia Schmid. Weakly-supervised learning of visual relations. In Pro-
ceedings of the IEEE International Conference on Computer Vision, pages 5179–5188, 2017.
[33] Wendi Ren, Yinghao Li, Hanting Su, David Kartchner, Cassie Mitchell, and Chao Zhang. Denoising multi-source
weak supervision for neural text classification. arXiv preprint arXiv:2010.04582, 2020. Unpublished.
[34] Gabriel Dulac-Arnold, Nir Levine, Daniel J Mankowitz, Jerry Li, Cosmin Paduraru, Sven Gowal, and Todd Hes-
ter. Challenges of real-world reinforcement learning: definitions, benchmarks and analysis. Machine Learning,
110(9):2419–2468, 2021.
[35] Sampo Kuutti, Richard Bowden, Yaochu Jin, Phil Barber, and Saber Fallah. A survey of deep learning applications to
autonomous vehicle control. IEEE Transactions on Intelligent Transportation Systems, 22(2):712–733, 2020.
[36] Sean Kandel, Andreas Paepcke, Joseph M. Hellerstein, and Jeffrey Heer. Enterprise data analysis and visualization:
An interview study. IEEE Transactions on Visualization and Computer Graphics, 18(12):2917–2926, 2012.
[37] Miryung Kim, Thomas Zimmermann, Robert DeLine, and Andrew Begel. Data scientists in software teams: State of
the art and challenges. IEEE Transactions on Software Engineering, 44(11):1024–1038, 2017.
[38] Diego Charrez. NeurIPS 2019 stats, 2019. Available at https://medium.com/@dcharrezt/neurips-2019-stats-c91346d31c8f.
[39] Malay Haldar, Mustafa Abdool, Prashant Ramanathan, Tao Xu, Shulin Yang, Huizhong Duan, Qing Zhang, Nick
Barrow-Williams, Bradley C. Turnbull, Brendan M. Collins, and et al. Applying deep learning to AirBnB search.
Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Jul 2019.
[40] Kiri L. Wagstaff, Gary Doran, Ashley Davies, Saadat Anwar, Srija Chakraborty, Marissa Cameron, Ingrid Daubar,
and Cynthia Phillips. Enabling onboard detection of events of scientific interest for the Europa Clipper spacecraft.
In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’19,
page 2191–2201, New York, NY, USA, 2019. Association for Computing Machinery.
[41] Ursula Challita, Henrik Ryden, and Hugo Tullberg. When machine learning meets wireless cellular networks: De-
ployment, challenges, and applications. IEEE Communications Magazine, 58(6):12–18, 2020.
[42] J. Ross Quinlan. Induction of decision trees. Machine learning, 1(1):81–106, 1986.
[43] Karl Hansson, Siril Yella, Mark Dougherty, and Hasan Fleyeh. Machine learning algorithms in heavy process manu-
facturing. American Journal of Intelligent Systems, 6(1):1–13, 2016.
[44] Abbas Keramati, Hajar Ghaneei, and Seyed Mohammad Mirmohammadi. Developing a prediction model for cus-
tomer churn from electronic banking services using data mining. Financial Innovation, 2(1):10, 2016.
[45] Adrian Carrio, Carlos Sampedro, Alejandro Rodriguez-Ramos, and Pascual Campoy. A review of deep learning
methods and applications for unmanned aerial vehicles. Journal of Sensors, 2017, 2017.
[46] Or Sharir, Barak Peleg, and Yoav Shoham. The cost of training NLP models: A concise overview. arXiv preprint
arXiv:2004.08900, 2020. Unpublished.
[47] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of Deep Bidirectional
Transformers for Language Understanding. In NAACL, 2019.
[48] Nathan Benaich and Ian Hogarth. State of AI report 2020, 2020. Available at www.stateof.ai.
[49] Emma Strubell, Ananya Ganesh, and Andrew McCallum. Energy and policy considerations for deep learning in NLP.
2019.
[50] Emily M Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. On the dangers of stochastic
parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and
Transparency, pages 610–623, 2021.
[51] Nicolas Papernot, Patrick McDaniel, Arunesh Sinha, and Michael P Wellman. Sok: Security and privacy in machine
learning. In 2018 IEEE European Symposium on Security and Privacy (EuroS&P), pages 399–414. IEEE, 2018.
[52] Reza Shokri, Marco Stronati, Congzheng Song, and Vitaly Shmatikov. Membership inference attacks against machine
learning models. In 2017 IEEE Symposium on Security and Privacy (SP), pages 3–18. IEEE, 2017.
[53] Brendan Avent, Javier González, Tom Diethe, Andrei Paleyes, and Borja Balle. Automatic discovery of privacy–utility
Pareto fronts. Proceedings on Privacy Enhancing Technologies, 2020(4):5–23, 2020.
[54] Cynthia Dwork. Differential privacy. In International Colloquium on Automata, Languages, and Programming, pages
1–12. Springer, 2006.
[55] Craig Gentry. Fully homomorphic encryption using ideal lattices. In Proceedings of the forty-first annual ACM
symposium on Theory of computing, pages 169–178, 2009.
[56] Jakub Konečnỳ, Brendan McMahan, and Daniel Ramage. Federated optimization: Distributed optimization beyond
the datacenter. OPT Workshop on Optimization for Machine Learning, NIPS, 2015.
[57] Li Yang and Abdallah Shami. On hyperparameter optimization of machine learning algorithms: Theory and practice.
Neurocomputing, 415:295–316, 2020.
[58] Lisha Li, Kevin Jamieson, Giulia DeSalvo, Afshin Rostamizadeh, and Ameet Talwalkar. Hyperband: A novel bandit-
based approach to hyperparameter optimization. The Journal of Machine Learning Research, 18(1):6765–6816, 2017.
[59] Jasper Snoek, Hugo Larochelle, and Ryan P Adams. Practical Bayesian optimization of machine learning algorithms.
In Advances in neural information processing systems, pages 2951–2959, 2012.
[60] Bobak Shahriari, Alexandre Bouchard-Côté, and Nando de Freitas. Unbounded Bayesian optimization via regularization. In Arthur Gretton and Christian C. Robert, editors, Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, volume 51 of Proceedings of Machine Learning Research, pages 1168–1176, Cadiz, Spain, 2016. PMLR.
[84] Arif Wider and Christian Deger. Getting smart: Applying continuous delivery to data science to drive car sales, 2017.
Available at https://www.thoughtworks.com/insights/blog/getting-smart-applying-continuous-delivery-data-science-drive-car-s
[85] Gagan Bansal, Besmira Nushi, Ece Kamar, Daniel S Weld, Walter S Lasecki, and Eric Horvitz. Updates in human-AI
teams: Understanding and addressing the performance/compatibility tradeoff. In Proceedings of the AAAI Conference
on Artificial Intelligence, volume 33, pages 2429–2437, 2019.
[86] Megha Srivastava, Besmira Nushi, Ece Kamar, Shital Shah, and Eric Horvitz. An empirical analysis of backward
compatibility in machine learning systems. In Proceedings of the 26th ACM SIGKDD International Conference on
Knowledge Discovery & Data Mining, pages 3272–3280, 2020.
[87] David Leslie. Understanding artificial intelligence ethics and safety: A guide for the responsible design and imple-
mentation of AI systems in the public sector. Available at SSRN 3403301, 2019.
[88] Cathy O’Neil. Weapons of math destruction: How big data increases inequality and threatens democracy. Broadway
Books, 2016.
[89] Marcelo OR Prates, Pedro H Avelar, and Luís C Lamb. Assessing gender bias in machine translation: a case study
with Google Translate. Neural Computing and Applications, 32(10):6363–6381, 2020.
[90] Robert Soden, Dennis Wagenaar, Dave Luo, and Annegien Tijssen. Taking ethics, fairness, and bias seriously in
machine learning for disaster risk management, 2019. Unpublished.
[91] Sathappan Muthiah, Patrick Butler, Rupinder Paul Khandpur, Parang Saraf, Nathan Self, Alla Rozovskaya, Liang
Zhao, Jose Cadena, Chang-Tien Lu, Anil Vullikanti, et al. Embers at 4 years: Experiences operating an open source
indicators forecasting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Dis-
covery and Data Mining, pages 205–214, 2016.
[92] Joy Buolamwini and Timnit Gebru. Gender shades: Intersectional accuracy disparities in commercial gender classi-
fication. In Conference on fairness, accountability and transparency, pages 77–91. PMLR, 2018.
[93] Solon Barocas, Moritz Hardt, and Arvind Narayanan. Fairness in machine learning. NIPS Tutorial, 1, 2017.
[94] Nantheera Anantrasirichai and David Bull. Artificial intelligence in the creative industries: a review. Artificial
Intelligence Review, pages 1–68, 2021.
[95] Yisroel Mirsky and Wenke Lee. The creation and detection of deepfakes: A survey. ACM Computing Surveys (CSUR),
54(1):1–41, 2021.
[96] John Mark Michael Rumbold and Barbara Pierscionek. The effect of the general data protection regulation on medical
research. Journal of medical Internet research, 19(2):e47, 2017.
[97] Syed Mohamed Aljunid, Samrit Srithamrongsawat, Wen Chen, Seung Jin Bae, Raoh-Fang Pwu, Shunya Ikeda, and
Ling Xu. Health-care data collecting, sharing, and using in Thailand, China mainland, South Korea, Taiwan, Japan,
and Malaysia. Value in Health, 15(1):S132–S138, 2012.
[98] Changhee Han, Leonardo Rundo, Kohei Murao, Takafumi Nemoto, and Hideki Nakayama. Bridging the gap between
AI and healthcare sides: towards developing clinically relevant AI-powered diagnosis systems. In IFIP International
Conference on Artificial Intelligence Applications and Innovations, pages 320–333. Springer, 2020.
[99] Gary E Marchant. The growing gap between emerging technologies and the law. In The growing gap between
emerging technologies and legal-ethical oversight, pages 19–33. Springer, 2011.
[100] D Malan et al. Values and the fourth industrial revolution: Connecting the dots between value, values, profit and
purpose. In World Economic Forum, 2016.
[101] Daniel Malan. The law can’t keep up with new tech. Here’s how to close the gap. In World Economic Forum, volume 21,
2018.
[102] Sandra Wachter, Brent Mittelstadt, and Luciano Floridi. Why a right to explanation of automated decision-making
does not exist in the general data protection regulation. International Data Privacy Law, 7(2):76–99, 2017.
[103] Timo Minssen, Sara Gerke, Mateo Aboy, Nicholson Price, and Glenn Cohen. Regulatory responses to medical ma-
chine learning. Journal of Law and the Biosciences, 2020.
[104] Keri Stephens. FDA releases artificial intelligence/machine learning action plan. AXIS Imaging News, 2021.
[105] Mustafa Suleyman and Dominic King. The Information Commissioner, the Royal Free, and what we’ve learned, 2017.
[106] M-C Laï, M Brian, and M-F Mamzer. Perceptions of artificial intelligence in healthcare: findings from a qualitative
survey study among actors in France. Journal of Translational Medicine, 18(1):1–13, 2020.
[107] Ramprakash Ramamoorthy, P Satya Madhuri, and Malini Christina Raj. AI from labs to production - challenges and
learnings. Santa Clara, CA, May 2019. USENIX Association.
[108] Mark Sendak, Madeleine Clare Elish, Michael Gao, Joseph Futoma, William Ratliff, Marshall Nichols, Armando
Bedoya, Suresh Balu, and Cara O’Brien. “The human body is a black box”: Supporting clinical decision-making
with deep learning. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, FAT* ’20,
page 99–109, New York, NY, USA, 2020. Association for Computing Machinery.
[109] Information Commissioner’s Office. Project ExplAIn interim report, 2019. Available at
https://ico.org.uk/about-the-ico/research-and-reports/project-explain-interim-report/.
[110] Daniel Mutembesa, Christopher Omongo, and Ernest Mwebaze. Crowdsourcing real-time viral disease and pest
information: A case of nation-wide cassava disease surveillance in a developing country. In Sixth AAAI Conference
on Human Computation and Crowdsourcing, 2018.
[111] Cynthia Rudin. Stop explaining black box machine learning models for high stakes decisions and use interpretable
models instead. Nature Machine Intelligence, 1(5):206–215, 2019.
[112] Wei Wang, Christopher Lesner, Alexander Ran, Marko Rukonic, Jason Xue, and Eric Shiu. Using small business
banking data for explainable credit risk scoring. In Proceedings of the AAAI Conference on Artificial Intelligence,
volume 34, pages 13396–13401, 2020.
[113] Dakuo Wang, Liuping Wang, Zhan Zhang, Ding Wang, Haiyi Zhu, Yvonne Gao, Xiangmin Fan, and Feng Tian.
“Brilliant AI Doctor” in Rural Clinics: Challenges in AI-Powered Clinical Decision Support System Deployment. In
Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, pages 1–18, 2021.
[114] Ram Shankar Siva Kumar, David O Brien, Kendra Albert, Salomé Viljöen, and Jeffrey Snover. Failure modes in
machine learning systems. arXiv preprint arXiv:1911.11034, 2019. Unpublished.
[115] Battista Biggio and Fabio Roli. Wild patterns: Ten years after the rise of adversarial machine learning. Pattern
Recognition, 84:317–331, 2018.
[116] Alexey Kurakin, Ian Goodfellow, and Samy Bengio. Adversarial machine learning at scale. International Conference
on Learning Representations, 2017.
[117] Ram Shankar Siva Kumar, Magnus Nystrom, John Lambert, Andrew Marshall, Mario Goertzel, Andi Comissoneru,
Matt Swann, and Sharon Xia. Adversarial machine learning-industry perspectives. Available at SSRN 3532474, 2020.
[118] Blaine Nelson, Marco Barreno, Fuching Jack Chi, Anthony D Joseph, Benjamin IP Rubinstein, Udam Saini, Charles
Sutton, J Doug Tygar, and Kai Xia. Exploiting machine learning to subvert your spam filter. LEET, 8:1–9, 2008.
[119] Matthew Jagielski, Alina Oprea, Battista Biggio, Chang Liu, Cristina Nita-Rotaru, and Bo Li. Manipulating machine
learning: Poisoning attacks and countermeasures for regression learning. In 2018 IEEE Symposium on Security and
Privacy (SP), pages 19–35. IEEE, 2018.
[120] Oscar Schwartz. In 2016 Microsoft’s racist chatbot revealed the dangers of online conversation. IEEE Spectrum, 11,
2019.
[121] Florian Tramèr, Fan Zhang, Ari Juels, Michael K Reiter, and Thomas Ristenpart. Stealing machine learning models via prediction APIs. In 25th USENIX Security Symposium (USENIX Security 16), pages 601–618, 2016.
[122] Matt Fredrikson, Somesh Jha, and Thomas Ristenpart. Model inversion attacks that exploit confidence information
and basic countermeasures. In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications
Security, pages 1322–1333, 2015.
[123] Michael Veale, Reuben Binns, and Lilian Edwards. Algorithms that remember: model inversion attacks and data
protection law. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences,
376(2133):20180083, 2018.
[124] Chip Huyen. What I learned from looking at 200 machine learning tools, 2020. Available at
https://huyenchip.com/2020/06/22/mlops.html.
[125] Kumar Venkateswar. Using Amazon SageMaker to operationalize machine learning. Santa Clara, CA, 2019. USENIX
Association.
[126] AML Team. AzureML: Anatomy of a machine learning service. In Conference on Predictive APIs and Apps, pages
1–13, 2016.
[127] Jeremy Hermann and Mike Del Balso. Meet Michelangelo: Uber's machine learning platform, 2017. Available at
https://eng.uber.com/michelangelo.
[128] Denis Baylor, Kevin Haas, Konstantinos Katsiapis, Sammy Leong, Rose Liu, Clemens Menwald, Hui Miao, Neoklis
Polyzotis, Mitchell Trott, and Martin Zinkevich. Continuous training for production ML in the TensorFlow extended
(TFX) platform. In 2019 USENIX Conference on Operational Machine Learning (OpML 19), pages 51–53, 2019.
[129] Matei Zaharia, Andrew Chen, Aaron Davidson, Ali Ghodsi, Sue Ann Hong, Andy Konwinski, Siddharth Murching,
Tomas Nykodym, Paul Ogilvie, Mani Parkhe, et al. Accelerating the machine learning lifecycle with MLflow. IEEE
Data Eng. Bull., 41(4):39–45, 2018.
[130] Sebastian Schelter, Tammo Rukat, and Felix Biessmann. JENGA - a framework to study the impact of data errors on
the predictions of machine learning models. 2021.
[131] Marco Tulio Ribeiro, Tongshuang Wu, Carlos Guestrin, and Sameer Singh. Beyond accuracy: Behavioral testing of
NLP models with CheckList. In Association for Computational Linguistics (ACL), 2020.
[132] Nick Hynes, D Sculley, and Michael Terry. The Data Linter: Lightweight, automated sanity checking for ML data
sets. In NIPS MLSys Workshop, 2017.
[133] Stephen H Bach, Daniel Rodriguez, Yintao Liu, Chong Luo, Haidong Shao, Cassandra Xia, Souvik Sen, Alex Ratner,
Braden Hancock, Houman Alborzi, et al. Snorkel DryBell: A case study in deploying weak supervision at industrial
scale. In Proceedings of the 2019 International Conference on Management of Data, pages 362–375, 2019.
[134] Paroma Varma and Christopher Ré. Snuba: Automating weak supervision to label training data. In Proceedings of
the VLDB Endowment. International Conference on Very Large Data Bases, volume 12, page 223. NIH Public Access,
2018.
[135] Curtis Northcutt, Lu Jiang, and Isaac Chuang. Confident learning: Estimating uncertainty in dataset labels. Journal
of Artificial Intelligence Research, 70:1373–1411, 2021.
[136] Yaliang Li, Zhen Wang, Bolin Ding, and Ce Zhang. AutoML: A perspective where industry meets academy. In
Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, pages 4048–4049, 2021.
[137] Haifeng Jin, Qingquan Song, and Xia Hu. Auto-keras: An efficient neural architecture search system. In Proceedings
of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 1946–1956, 2019.
[138] Matthias Feurer, Aaron Klein, Katharina Eggensperger, Jost Springenberg, Manuel Blum, and Frank Hutter. Efficient
and robust automated machine learning. In C. Cortes, N. Lawrence, D. Lee, M. Sugiyama, and R. Garnett, editors,
Advances in Neural Information Processing Systems, volume 28. Curran Associates, Inc., 2015.
[139] Randal S Olson and Jason H Moore. TPOT: A tree-based pipeline optimization tool for automating machine learning.
In Workshop on automatic machine learning, pages 66–74. PMLR, 2016.
[140] Ashkan Ebadi, Yvan Gauthier, Stéphane Tremblay, and Patrick Paul. How can automated machine learning help
business data science teams? In 2019 18th IEEE International Conference On Machine Learning And Applications
(ICMLA), pages 1186–1191. IEEE, 2019.
[141] Livia Faes, Siegfried K Wagner, Dun Jack Fu, Xiaoxuan Liu, Edward Korot, Joseph R Ledsam, Trevor Back, Reena
Chopra, Nikolas Pontikos, Christoph Kern, et al. Automated deep learning design for medical image classification by
health-care professionals with no coding experience: a feasibility study. The Lancet Digital Health, 1(5):e232–e242,
2019.
[142] Stephan Rabanser, Stephan Günnemann, and Zachary Chase Lipton. Failing loudly: An empirical study of methods
for detecting dataset shift. In NeurIPS, 2019.
[143] Arnaud Van Looveren, Janis Klaise, Giovanni Vacanti, Oliver Cobb, Ashley Scillitoe, and Robert Samoilescu. Alibi
Detect: Algorithms for outlier, adversarial and drift detection, March 2022.
[144] João Gama, Indrė Žliobaitė, Albert Bifet, Mykola Pechenizkiy, and Abdelhamid Bouchachia. A survey on concept
drift adaptation. ACM computing surveys (CSUR), 46(4):1–37, 2014.
[145] Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks.
In International conference on machine learning, pages 1126–1135. PMLR, 2017.
[146] Peilin Zhao, Steven CH Hoi, Jialei Wang, and Bin Li. Online transfer learning. Artificial intelligence, 216:76–102,
2014.
[147] Ge Xie, Yu Sun, Minlong Lin, and Ke Tang. A selective transfer learning method for concept drift adaptation. In
International Symposium on Neural Networks, pages 353–361. Springer, 2017.
[148] A Zujus. AI project development - how project managers should prepare, 2018. Retrieved August 31, 2019.
[149] Amine Barrak, Ellis E Eghan, and Bram Adams. On the co-evolution of ML pipelines and source code–empirical study
of DVC projects. In 2021 IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER),
pages 422–433. IEEE, 2021.
[150] Neil D Lawrence. Data readiness levels. arXiv preprint arXiv:1705.02245, 2017. Unpublished.
[151] Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daumé III,
and Kate Crawford. Datasheets for datasets. Communications of the ACM, 64(12):86–92, 2021.
[152] Ruslan Kuprieiev, Saugat Pachhai, Dmitry Petrov, Paweł Redzyński, Casper da Costa-Luis, Peter Rowlands, Alexan-
der Schepanovski, Ivan Shcheklein, Batuhan Taskaya, Jorge Orpinel, Gao, Fábio Santos, David de la Iglesia Castro,
Aman Sharma, Zhanibek, Dani Hodovic, Nikita Kodenko, Andrew Grigorev, Earl, Nabanita Dash, George Vyshnya,
maykulkarni, Max Hora, Vera, Sanidhya Mangal, Wojciech Baranowski, Clemens Wolff, and Kurian Benoy. DVC:
Data version control - Git for data & models, February 2022.
[153] Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer,
Inioluwa Deborah Raji, and Timnit Gebru. Model cards for model reporting. In Proceedings of the conference on
fairness, accountability, and transparency, pages 220–229, 2019.
[154] Neil Lawrence. Modern data oriented programming, 2019. Available at
http://inverseprobability.com/talks/notes/modern-data-oriented-programming.html.
[155] Tom Borchert. Milan: An evolution of data-oriented programming, 2020. Available at
https://tborchertblog.wordpress.com/2020/02/13/28/.
[156] Andrei Paleyes, Christian Cabrera, and Neil D Lawrence. Towards better data discovery and collection with flow-
based programming. Data-centric AI workshop, NeurIPS, 2021.
[157] Alexander Lavin, Ciarán M. Gilligan-Lee, Alessya Visnjic, Siddha Ganju, Dava Newman, Sujoy Ganguly, Danny
Lange, Atılım Güneş Baydin, Amit Sharma, Adam Gibson, Yarin Gal, Eric P. Xing, Chris Mattmann, and James Parr.
Technology readiness levels for machine learning systems, 2021. Unpublished.
[158] Martin Zinkevich. Rules of machine learning: Best practices for ML engineering, 2017. Available at
https://developers.google.com/machine-learning/guides/rules-of-ml.
[159] Verein Deutscher Ingenieure (VDI). Einsatz von Big Data in produzierenden Industrien [Use of big data in manufacturing industries], 2019. Available at
https://www.vdi.de/news/detail/einsatz-von-big-data-in-produzierenden-industrien.