Practical Swift Concurrency: Make the most of Concurrency in Swift 6.2 and beyond
Chapter overview

Chapter 1 - Understanding concurrency in programming
Chapter 2 - Looking at Asynchronous programming in Swift pre-Concurrency
Chapter 3 - Awaiting your first async methods
Chapter 4 - Understanding Swift Concurrency's tasks
Chapter 5 - Existing code and Swift Concurrency
Chapter 6 - Preventing data races with Swift Concurrency
Chapter 7 - Working with asynchronous sequences
Chapter 8 - Async algorithms and Combine
Chapter 9 - Performing and awaiting work in parallel
Chapter 10 - Swift Concurrency and your unit tests
Chapter 11 - Debugging and profiling your asynchronous code
Chapter 12 - Adopting the Swift 6 language mode
I'm happy with the result of this book and I can only hope that you, the reader, will agree with me.
If you find any mistakes, errors, or inconsistencies in this book don’t hesitate to send me an
email at feedback@donnywals.com. I’ve put a lot of care and attention into this book but
I’m only human and I need your feedback to make this book the best resource it can be. Make
sure you also reach out if you have any questions that aren’t answered by this book even
though you hoped it would so I can answer your questions directly, or possibly update the
book if needed.
Cheers,
Donny
Chapter overview
and sync contexts, and how you can get yourself into an async context.
We’ll also dig in and take a brief look at what happens when you await something; this ties
back into chapter one. You will also learn how you can write your own async functions, and
how these functions don’t have to suspend.
By the end of this chapter you will be able to start writing your very first async code, and you
will have a sense of confidence when doing this!
next chapters you will get a proper introduction to AsyncSequence to iterate over multiple emitted values.
func getFullName() {
let givenName = getGivenName()
let familyName = getFamilyName()
print("\(givenName) \(familyName)")
}
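The two helper functions aren't shown here; based on how they're described below, a minimal sketch could look like this (the exact prompt texts are illustrative):

func getGivenName() -> String {
    print("What's your given name?")
    return readLine() ?? ""
}

func getFamilyName() -> String {
    print("What's your family name?")
    return readLine() ?? ""
}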
getFullName()
Regardless of whether you understand exactly what readLine() is or does, I’m sure you
can reason about what the execution of this code looks like.
Tip:
The sample code above is included in the code bundle for this chapter and it builds as a
command line tool. If you open the project and run it you will see each question printed
in the Xcode console. You can then type an answer right in the console and press enter to
resume the program.
First, getFullName() is called. In turn, that function calls getGivenName() which will
print some text to the standard output (usually Xcode’s console when you’re running your
app through Xcode) and then the function waits for input (that’s what readLine() does).
This user input is returned and then getFamilyName() is called. The getFamilyName()
function also prints text to the console and awaits user input again. This input is then returned
as well and we use the provided given name and family name to print the user’s full name to
the console.
Even though our code has to wait a while for the user to provide input, our code runs synchronously. This means that while we're waiting for input when readLine() is called, nothing else is happening in our program.
It’s a single-threaded, non-concurrent program.
If you were to visualize this example in a diagram it would look a bit like this:
We can see that we move from block to block in a linear fashion. It’s relatively straightforward
to reason about what our program is doing, what it will do next, and what it did previously.
When we think about this example in complete isolation, you can imagine that our CPU is
doing one thing at a time. This means that our simple program only requires a single CPU
core to run, and that CPU core is entirely dedicated to our program. This in turn means that
while our program is running no other programs are doing background work. It also means
that if the CPU only had a single core our OS would be pretty much unresponsive while our
program is running.
This is obviously not how computers work because while any program is running we can have
other programs running at the same time, and our OS should always be responsive. A single
program taking over the entire CPU is undesirable and hard to imagine these days.
You might be thinking “Ah! So that’s why a CPU has multiple cores. That means it can do
multiple things at once” and you would be partially correct. However, CPUs didn’t have more
than a single core for a very long time. Until quite recently, affordable consumer CPUs most certainly didn't have the eight or ten cores they commonly have today.
To allow a computer to run smoothly without programs taking over the entire CPU, we make
use of concurrency. A CPU can do multiple things concurrently. The way this works is that
the CPU will give each process, or application, some time to run before it switches to another
process until it eventually comes back to the original program that the CPU was running.
If we visualize what our actual CPU usage looks like in the simple example you saw earlier, it
would look a little like this:
Notice how our app is still super predictable in terms of what it does and when. But in between
every block of our program, the CPU will briefly switch to doing some other task.
The ability to run multiple tasks in this fashion is called concurrency. We perform work on multiple tasks concurrently. This doesn't mean we perform this work simultaneously. It just means that we're alternating between multiple tasks over the same period of time.
Now, imagine that in one of these “Run other app” blocks an app is performing a very lengthy
task that can’t be interrupted. Operations that run in this manner where they must run from
start to finish without interruption are sometimes called atomic operations.
In other words, imagine that “Run other app” is performing a very slow atomic operation. This
means that the CPU would be stuck crunching numbers for this other app, and our app is now
frozen until the CPU finds time to perform some work for us again.
This is less than ideal, and to fix that we need to have multiple CPUs, and multiple threads.
Understanding multi-threading
A thread in computing can be thought of as a single execution context that’s often used to
group related tasks or jobs together. For example, all of our application code would be grouped
into a single thread. Each of the other apps running on our system will have one or more
threads too. The OS will spawn several threads to deal with user input, UI rendering, and
performing background tasks too.
This is true regardless of the number of CPUs that are available. You can have more than one
thread of execution on a single CPU, and the CPU will run these threads concurrently. In other
words, the CPU will allow each thread to run for a little while and then schedules the next
thread, and so on.
When we introduce multiple CPU cores, we can allow these cores to run multiple threads in
parallel. The difference between concurrency and parallelism is subtle but important. You
already know that concurrency means to work on multiple tasks at the same time. Working
on multiple tasks in parallel means that you’re not alternating between doing multiple tasks
like you are in concurrency, but you’re literally doing multiple things at once.
Let’s update our graph once more to see what it looks like if our program would run in its own
thread on a dedicated CPU core:
Notice how we have three different colored blocks, each color represents a thread that is run
on a CPU core. I’ve simplified the image a little bit compared to before because I’m sure that
you get the point without showing everything that our program does again.
You can see that our getFullName() function can run in parallel with an OS task, and even
with the system handling mouse movement. If the OS task runs a very slow operation, our program won't have to wait for it like it would when we only had concurrency on a single CPU core. Because the OS task's thread runs on its own CPU core, our program's thread can't be blocked by it.
In this example, our code is still single-threaded but the CPU is leveraging multiple cores
to run multiple threads in parallel. As an application grows, odds are that it might have to
perform a lengthy task that shouldn’t prevent your user from interacting with your app, and
that shouldn’t prevent your app from performing other work on behalf of the user.
A common example of using multi-threading in an application is performing network calls.
Let’s look at a diagram that’s intended to provide an overview of how a network call is made
in response to a button tap on iOS.
Notice how the dark colored boxes represent our app’s UI thread. On Apple platforms, we
refer to the UI thread as the main thread. This thread is responsible for rendering UI and
handling user interactions like taps and swipes. When the user taps the screen, we call
fetchFeedData(). This method kicks off a URLSession data task.
If we performed this task on the main thread, we would be waiting for the response for a while,
and we wouldn’t be able to animate the loading spinner at the same time because the main
thread would be blocked while waiting for our data to be fetched.
By default, URLSession will run its network calls on a separate thread (the lighter boxes).
While that thread is waiting for a response from the network, receives it, and decodes the
received data, the main thread will animate our spinner. Eventually the data is fully decoded
and we can pass the decoded data back to the main thread and update the UI.
This example clearly demonstrates how our main thread is free to do other things while a
different thread performs a lengthy operation. If we would run this example on a single-core
CPU, we would be running these two threads concurrently. Each thread would get some time
to do work before being paused so the next thread can do some work. Imagine that our CPU
would alternate between animating the spinner for the next UI draw cycle, and then back to
receiving and parsing a packet of data that was received from the server really quickly until all
the work is done.
If we run this example on modern hardware, we’ll get parallelism. Our application can leverage
multiple CPU cores on all of Apple’s platforms, which means that our CPU won’t have to
alternate between tasks. In theory, it can simply leverage one CPU core for each thread which
is far more efficient.
Notice that I mentioned that this is the case “in theory”. In practice, our app won’t be the
only thing running at all times; especially in a desktop environment like on the mac. In that
case the main thread and a background thread might share a CPU core while other processes
leverage the other cores. This is a detail that we, as programmers, typically shouldn’t concern
ourselves with. We shouldn’t focus on which thread is run on which CPU core; the system
should handle this.
In this networking example, our setup was relatively simple and we probably would never encounter issues. However, one last topic I want to cover before we move on to more Swift-specific code in the next chapter is the topic of data races and thread safety.
Imagine that we want to search a few files for a specific keyword, collecting any matches in a shared matchesArray. In this diagram you can see that each file is being read on its own thread. The work that needs
to be done for each file is the same. Loop over the lines in the file, check if there’s a match
with our keyword, and write any matches to the matchesArray. On first sight, this looks
perfectly fine and incredibly efficient. We’re parsing three files in parallel and writing matches
as we find them.
Unfortunately, there’s a huge potential issue in this approach. Multiple threads might want to
write a match at the exact same time. If this happens, we’ll encounter what’s called a data
race (unless we protect ourselves against data races).
When a data race occurs, multiple threads attempt to access the same memory resource (our
matchesArray) at the same time where at least one of these accesses is a write operation.
This is a data race because readers of the data will get an inconsistent representation of our
matchesArray. Things could get worse when multiple threads try to write to this resource.
Data races can lead to memory corruption and can be notoriously hard to debug because
they’re not guaranteed to occur.
For example, if all threads involved in the diagram above would run concurrently on the same
CPU core we’re probably fine; only one thread is active at the same time. If we’re running
in parallel, we’d have an issue when two threads access matchesArray at the exact same
time.
To fix this, we need to synchronize access to matchesArray. You will learn a little bit about
this in the next chapter, and you’ll learn about data races and access synchronization in-depth
in the chapter on actors.
For now, it’s important that you know data races exist, and why they can occur in a program
that leverages multi-threading.
In Summary
In this first chapter of the book, you’ve learned a lot about multi-threading fundamentals. You
should have a solid understanding of the differences between concurrency and parallelism,
and what threads are. You learned that multi-threading can be done on systems with one or
more CPU cores, and that parallelism is achieved by running two or more threads in parallel
on two or more CPU cores. When a single CPU core runs multiple threads, it runs the threads
concurrently; not in parallel.
You also learned that multi-threaded environments allow for a UI to remain responsive while
expensive and slow work is done in the background. After learning this, you learned that
multi-threading comes with the risk of data races because threads can run in parallel. As a
result we open ourselves up to multiple threads accessing the same memory address at the
exact same time. This isn’t a problem until one or more of these access operations attempts
to write to the memory address.
In the next chapter, you’ll learn about concurrency in a pre-Swift Concurrency world. If you
are already familiar with the basics of Grand Central Dispatch, feel free to jump right over to
chapter three. If you feel like you might need a bit of a refresher on GCD, please go ahead and
read the next chapter to refresh your mind. I won’t talk about GCD much in this book, but
having basic GCD knowledge will be incredibly helpful to understand some of the comparisons
made in this book.
Exploring DispatchQueue
In GCD (Grand Central Dispatch), we don’t interact with threads directly. Instead, we create
queues that we schedule work on. A dispatch queue will receive work items in the form of
closures, and the queue will schedule them to run as soon as possible. Often this might mean
that the work item is executed immediately, but other times it could mean that the work is
performed later depending on the queue’s configuration, and how busy the queue is.
For example, a dispatch queue can be configured to run work serially. In this case, the dispatch queue will perform one work item at a time; in other words, the work is performed in a serial manner. If we schedule multiple work items on a serial queue, the work will be executed in a first-come-first-served manner.
You could also configure a queue to be a concurrent queue. This allows us to schedule several work items at once and have the queue work on all of them at the same time.
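To make this concrete, here's how you'd create both kinds of queues yourself (the label strings are just illustrative):

let serialQueue = DispatchQueue(label: "com.example.serial")

let concurrentQueue = DispatchQueue(
    label: "com.example.concurrent",
    attributes: .concurrent
)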
By default, GCD provides us with two dispatch queues that we commonly use. The first (and most important) queue is DispatchQueue.main. This queue is directly linked to our app's
main thread which is responsible for rendering our UI. The main queue is a serial queue which
means that it only runs one body of work at a given time. This means that it’s extremely
important that we don’t schedule work that takes a long time to complete because that would
block our main thread from performing any other work.
The second queue is DispatchQueue.global(). This queue can perform multiple work items at the same time, and it's commonly used to kick off expensive processes that shouldn't block the main queue. Since this queue is configured as a concurrent queue, it will run many work items concurrently.
The actual work that’s scheduled by our queues is run on threads and threads are run by CPU
cores. When our global dispatch queue runs a work item, it will do this as soon as possible. If
no available threads exist, GCD might spin up a new thread for us to perform work on. These
threads are created without taking into consideration how many CPU cores we have. The
result is that we can have way more threads than CPU cores.
Whenever we have more threads running than we have CPU cores in our processor, the CPU
will have to switch between threads so that we can run these threads concurrently. This
process of thread switching is quite expensive and relatively slow, and as the name for this
phenomenon — thread explosion — suggests, that’s not good.
Sadly, when we’re working with dispatch queues, we can’t really prevent thread explosion
from happening. It’s part of how GCD was designed, and it’s something that we have to accept.
As you’ll learn in later chapters, this problem is no longer relevant in Swift Concurrency.
Whenever we want to run work on a dispatch queue, there are two ways for us to do this. We can either use the sync method on DispatchQueue, or we can use the async method.
It’s important to understand that these two methods do not apply to how the dispatch queue
will schedule the work. Instead, they determine whether the call site (where we schedule the work from) will wait for the work item we passed to our queue to be completed or not.
Consider the following example:
func performSync() {
print("sync: before")
DispatchQueue.global().sync {
print("sync: inside")
}
print("sync: after")
}
func performAsync() {
print("async: before")
DispatchQueue.global().async {
print("async: inside")
}
print("async: after")
}
Calling performSync produces the following output:

sync: before
sync: inside
sync: after

Because we scheduled our work using sync, our performSync function will block until the global dispatch queue has performed our work. Sometimes this is exactly what you want to happen, but the way the second function works is what you'd more commonly expect. Let's see what output a call to performAsync produces.
async: before
async: after
async: inside
You can see that in the second situation, our function did not block until the work item was
executed. Instead, the work got scheduled and we didn’t wait for the outcome; the last print
statement was executed immediately.
Very often, this is the exact behavior that you’re looking for. You don’t want to block execution
of your code since it might take a while for your work item to be executed.
Tip: The code bundle for this chapter contains a sample app that allows you to run all of the examples shown in this chapter. You can run the app on your iPhone, Mac, or iPad to see the results for yourself.
At this point, I think you should have a decent enough understanding of dispatch queues and
how they work. Of course, there’s still a lot to learn if you’re truly interested in the nitty gritty
details of GCD, but since this is a book on Swift Concurrency, and my goal is to provide you
with just enough information to understand the world pre-concurrency, I’d like to move on to
the next tool in our GCD toolbox: DispatchGroup.
func fetchWebsites() {
    let group = DispatchGroup() // 1
    var results = [Data]()
    let urls = [
        "https://practicalcoredata.com",
        "https://practicalcombine.com",
        "https://practicalswiftconcurrency.com"
    ].compactMap(URL.init)

    // 2
    for url in urls {
        group.enter()
        URLSession.shared.dataTask(with: url) { data, _, _ in
            if let data {
                results.append(data)
            }
            group.leave()
        }.resume()
    }

    // 3
    group.notify(queue: DispatchQueue.main) {
        print(results)
    }
}
The code above constructs several urls, kicks off a network call for each url, appends the
fetched data to an array, and once all network calls are complete the data that we’ve fetched
is printed.
Of course, this example on its own isn’t particularly useful; but the pattern of fetching data
from various sources before using this data is useful and not too uncommon.
Let’s look at the code in more detail by going through the numbered comments.
1. First, we create a new dispatch group. We don't need to pass this group anything. The group, as mentioned, will only track the number of times we start work, and the number of times work is completed.
Dispatch groups are a very useful tool, and as you’ll learn in this book — we can replace them
with a Swift Concurrency version. For now I would like to move on to a brief discussion on
semaphores and locks before we actually start digging into Swift Concurrency properly.
Let’s take a quick look at what this looks like before we move on to locks and semaphores.
The code above will work fine, but it uses a dispatch queue which means that we’re (potentially)
creating threads whenever we want our queue to perform some work. This might lead to
thread explosion and we know from the previous section that we don’t want that.
More importantly, the section you're reading right now is supposed to teach you about locks and semaphores, so let's switch gears and see how locks and semaphores relate to synchronizing access to a resource like we did in the code snippet you just saw.
A semaphore is an object that can keep track of how much of a given resource is available, and
it can force code to wait for resources to become available before that code can proceed.
This description is pretty technical, so let’s use an analogy to clarify what a semaphore does.
Imagine that you’re going to a restaurant. There’s only a certain number of tables available
(let’s say ten), and there’s a waiter at the door that will either point you to your table, or they
will make you wait at the door for a table to become available.
Once the first dinner party comes in, the waiter will note down that instead of ten tables, there
are now nine tables available, and they’ll take you to your table. The next party comes in, the
number of available tables goes down, and the people are pointed towards their table. This
keeps happening until no more tables are available.
Once all tables are taken, the people that show up at the door must wait for somebody to
free up their table. When that happens, the waiter increments the number of available tables back to one. And if somebody is waiting, they will decrement the available table count back
to zero and point the people to their table.
In this example, the tables in the restaurant are the resource, and we have ten resources
available. The waiter is our semaphore that keeps track of how much of the resource is
available, and whether or not new guests will need to wait for a table to become available.
When we talk about semaphores, there are essentially two kinds of semaphores that we can
distinguish. One is a counting semaphore. The waiter example is an example of this. We have
n resources and the semaphore counts how many resources are taken and / or available.
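In code, a counting semaphore maps directly onto DispatchSemaphore; a minimal sketch of the restaurant example (the function name is illustrative):

let tables = DispatchSemaphore(value: 10)

func seatGuests() {
    tables.wait()   // take a table, or wait until one frees up
    // ... dinner happens here ...
    tables.signal() // free the table up again
}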
A semaphore like this can be useful in certain situations, but we generally don’t use a counting
semaphore to prevent data races where we only want at most one thread to have access to a
given resource.
To achieve the kind of exclusive access we’re after, we use a so-called binary semaphore, also
known as a lock. This kind of semaphore will only have one resource available at maximum.
A binary semaphore can, for this reason, ensure that we only access a resource in a serial
manner. Just like we did with the dispatch queue earlier.
Let’s take a quick look at what this looks like with a semaphore.
final class Cache {
    private var cache = [String: String]()
    private let semaphore = DispatchSemaphore(value: 1)

    func set(_ value: String, for key: String) {
        semaphore.wait()
        cache[key] = value
        semaphore.signal()
    }
}
Every time we call wait on the semaphore, the number of available resources either de-
creases, or we wait for a resource to become available for us. After we’ve accessed the cache
dictionary, the semaphore’s signal method is called to increment the number of avail-
able resources so that whoever is waiting to access our cache dictionary can eventually gain
access.
When running this code you might see some warnings pop up about priority inversions due to
a thread with no priority (QOS) waiting for a thread that does have a priority (QOS). This is
fine; it doesn’t impact the point that we’re trying to make in this example. It mainly shows us
that a semaphore is maybe not the most straightforward way to achieve synchronized data
access.
Another way to achieve the same effect without the warnings is to use an object that was built
for this purpose: an NSLock.
The principles behind a lock are exactly the same as those for a binary semaphore so let’s
jump right into a code sample of how it’s used.
final class Cache {
    private var cache = [String: String]()
    private let lock = NSLock()

    func set(_ value: String, for key: String) {
        lock.lock()
        cache[key] = value
        lock.unlock()
    }
}
Looks familiar, right? It’s exactly the same as how I used the semaphore, except it’s called a
lock and we lock and unlock instead of wait and signal.
There are other kinds of locks available to us, but for the purposes of understanding how and
when a lock is useful, it’s not really relevant for us to explore these options.
In Summary
In this chapter, you’ve learned about some of the fundamental tools and principles that
were available to developers pre Swift Concurrency. Knowing about these fundamentals is
important because you’ll sometimes encounter code that uses these principles, you might
have to refactor existing code based on GCD, or you might just stumble across something on
the internet that refers back to GCD.
More importantly, as I’ve mentioned, I will sometimes refer back to GCD in the book and
assume you know the basics of what was outlined in this chapter.
In this chapter, you’ve learned about DispatchQueues and how they are used. You’ve learned
about the main dispatch queue that’s used to run our UI code, and you’ve learned that you
should never block this queue. You’ve also learned that we have a global dispatch queue that
will run code in parallel, away from the main thread.
After dispatch queues you’ve learned a little bit about data races, and how we can solve them.
After seeing an initial dispatch queue based solution, you learned about semaphores, and
that we have counting and binary semaphores. You saw how a semaphore is used, and then
you saw how we can replace a binary semaphore with an NSLock.
Now that you have a basic understanding of concurrency, parallelism, and concurrency with
GCD it’s finally time for you to start learning about Swift Concurrency and async / await. In
the next chapter, you will be taking your first steps into a whole new world where you’ll await
your first asynchronous method call!
In this chapter, you will learn about the following topics:

• How to call asynchronous functions in Swift Concurrency, and how to handle potential errors
• What happens when you call an asynchronous function
• How to define an asynchronous function of your own
Before we dig into async / await, let’s make sure that you’re all set up with the sample app for
this chapter so you’re able to follow along with the code-along parts in this chapter.
Most Macs have Python available, so you should have no issues running this command. If you do run into issues because you don't have Python available, you can install Python via the official Python website, or you can run a local server on port 8080 through any other means you have available. For example, if you have node and npm installed you could make use of the http-server package.
When you have your local server running, you have a static file server in the current working di-
rectory. In other words, it allows us to access URLs like http://127.0.0.1:8080/1.json
to access the json files in the movies directory.
With the server running, you’re all good to work on the sample app for this chapter and for
other chapters too. Whenever a chapter requires you to leverage our local server I will make
sure to remind you to start your local server.
Note that the local server will only be available when you're running the sample code on your Mac or in the iOS simulator. If you want to run the sample code on a physical iOS device you should make sure that:

• your iOS device is on the same network as your Mac
• you replace 127.0.0.1 in the sample code with your Mac's local IP address

The simplest way to find your Mac's IP address is to open the System Settings app on your Mac, navigate to the Network tab, and then click on your active network connection (usually WiFi or Ethernet; it will have a green dot alongside it). This will show you your Mac's local IP address, which allows you to connect your phone to your Mac as long as they're both on the same network. Note that your Mac's IP address will be different on a different network, and some networks rotate IP addresses, so if something's not working, always double-check that you're using the right IP address.
You can find the same instructions that are listed in this section in the README.md file for
the book’s code bundle.
Here's what fetching the contents of a web page looks like with async / await:

let url = URL(string: "https://practicalswiftconcurrency.com")!

do {
    let (data, response) = try await URLSession.shared.data(from: url)
    let htmlBody = String(data: data, encoding: .utf8)!
    print(htmlBody)
} catch {
    print(error)
}
In just a few lines of code, we can fetch data from the network using URLSession. In case
you’re wondering what the same code would roughly look like without async / await, here’s a
small code sample for you:
let url = URL(string: "https://practicalswiftconcurrency.com")!

URLSession.shared.dataTask(with: url) { data, response, error in
    if let error {
        print(error)
        return
    }

    guard let data else {
        return
    }

    let htmlBody = String(data: data, encoding: .utf8)!
    print(htmlBody)
}.resume()
It doesn’t take much effort to argue that the async / await version of this code is a lot easier to
write, read, and reason about.
The async / await based version of the code reads a lot like synchronous code while still being
non-blocking for the duration of the network call.
Let’s talk a little bit about the syntax and semantics of awaiting a function for a bit.
We had to call the data(from:) method on URLSession as follows:
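let (data, response) = try await URLSession.shared.data(from: url)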
First, we must use the try keyword when calling the data(from:) function because
it’s a throwing function. This means that if anything goes wrong with the network call,
data(from:) will throw an error. This is quite convenient because we don’t have to un-
pack a Result object like we commonly do when we’re calling asynchronous functions with
completion handlers.
After the try keyword, we write await. This keyword is mandatory when calling any asyn-
chronous function that’s marked with the async keyword. We’ll explore why that’s the case a
bit later in the chapter. For now, it’s important that you know that an async function should
always be called with an await.
It’s important that we always put try and await in the correct order. For example, you can’t
write await try. It’s either await myFunction() or try await myFunction().
If you do end up writing await try, Xcode will help you correct this which is quite nice.
We’ll take a close look at what happens when you await something in the next section, but
for now all you need to know is that our function is essentially “paused” (suspended) until
the function we’re awaiting is completed and then our initial function can resume.
The following image visualizes this idea of being paused until the work we were waiting on
has completed.
Note that the someAsyncWork() function has a different color than the myFunction() block. This is to indicate that awaiting the async function pauses execution of myFunction(), and frees up the thread that myFunction() was running on. Once someAsyncWork() completes, our original function can continue executing where it left off.
The explanation above paints a somewhat simplified picture of what it’s actually like to call
an async function. There are some rules and requirements that we must respect whenever
we call an asynchronous function.
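One of those rules is that you can only await an async function from an asynchronous context. In SwiftUI, the easiest way to get such a context is the task view modifier; a minimal sketch (the view and its contents are illustrative):

import SwiftUI

struct MoviesView: View {
    var body: some View {
        Text("Movies")
            .task {
                // async work can be awaited here
            }
    }
}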
The body of our task will be executed by SwiftUI using the same constraints and rules as
onAppear. Essentially this means that our task will be run whenever our view will be shown
to the user. If SwiftUI removes the view from the view hierarchy it will automatically mark the
task as cancelled.
A Task in Swift Concurrency is the main unit of concurrency that we reason about. We’ll take
a deep dive into tasks and how they work in the next chapter, so I won’t explain them too
deeply for now. Just know that a Task is your basic unit of concurrency, and that any async
work that you do in Swift Concurrency is part of a Task in one way or another.
While it’s great that SwiftUI has a convenient way to create a Task for us, we probably want to
go async outside of SwiftUI too. We can do this by creating our own Task object as follows:
Task {
    let url = URL(string: "https://practicalswiftconcurrency.com")!

    do {
        let (data, response) = try await URLSession.shared.data(from: url)
        let htmlBody = String(data: data, encoding: .utf8)!
        print(htmlBody)
    } catch {
        print(error)
    }
}
You can put this code wherever you want; for example, you could use it in a viewDidLoad function if you're using UIKit:
func viewDidLoad() {
Task {
// ...
}
}
Note that the body of your task is run asynchronously, so any code that you put after your
task will not wait for the work inside of your task to be completed. For example:
func viewDidLoad() {
print("one")
Task {
// ...
print("two")
}
print("three")
}
one
three
two
That’s because the code that we write inside of the Task is scheduled to run as soon as
possible. This usually means that the function we were already in has to finish first, and after
that the contents of the Task can start running.
Earlier in this chapter, you saw the await keyword for the very first time. We know that we
need to write await in front of any calls to async functions, and we know that we can wait
for that function to complete in a non-blocking way, but let’s take a closer look at that so we
can understand why using await does not block whatever function you’re in.
In the image above, you can see the call stack for a synchronous function. As a function calls
other functions we build up a stack of function calls. Once a function completes, it’s popped
off of the stack and the function that came before that function continues running. Once that
function completes, it too is popped off of the stack and the function that called it resumes.
This continues until the outermost function is completed.
In a function that runs synchronously, once a function starts running, a thread must continue
running this function until it’s completed. When one of the functions in the call stack is
extremely slow, that means that the thread is stuck running that function until the slow work
is completed.
An asynchronous function however, is slightly different. Where a regular function must be
executed in one go, building up and unwinding its call stack uninterrupted, an asynchronous
function does not have this limitation. It has several predefined places (where we write await)
where it’s possible for Swift to take the call stack for a function and put it aside for a while.
Let’s see what this looks like in an image.
The image above shows that we call a function and that function has an await in its body.
The system then takes the call stack for that function call, and it puts it aside. Once this has
happened, we can run other work. Notice how the other work is run on a different thread as
indicated by the background color of these blocks. At the point of calling our async function
the original call stack is moved aside as indicated by its background becoming transparent.
Once the work by processFile(_:) is completed, the original call stack is restored and
our code continues to run. In this image, the original function continues running on the thread
we started off on. This is not guaranteed to happen though. If the system determines that
it’s safe (and allowed) for another thread to resume a function then the system will place the
relevant call stack on an available thread.
Because of how a suspended task has its call stack put aside, using an await does not block
a thread. Instead, it does the exact opposite; it allows the system to take your function and
temporarily set it aside until it can be resumed. This frees up the original thread to do other
work until our original function can continue running.
Note that when an awaited function completes, all of the work that is awaited within that
function must also have completed. This might sound obvious to you, but it’s actually a
specific feature of Swift Concurrency that has a name. It’s called structured concurrency
and we’ll learn more about it in Chapter 9 - Performing and awaiting work in parallel.
Now that you know more about the await part in async / await, let’s dig into the async
part a bit more.
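Defining an async function is a matter of adding the async keyword to its signature:

func myFunction() async {
}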
What we’ve defined here is a function that will be run asynchronously. It doesn’t do anything
if you would literally copy this into a project, but it would do so asynchronously.
An async function does not always run in parallel with other functions or in a non-blocking
fashion if it is somehow tied to an actor. You will learn more about what this means, and how
it works in Chapter 6 - Preventing data races with Swift Concurrency. So for now, let’s just
assume that our function will run asynchronously.
Determining where an async function will run can be a bit of a puzzle depending on the
Swift version you’re using. Throughout the rest of this section, you will learn about the
default behavior that Swift 6.2 and earlier have for async functions. In the next section,
you will learn more about the different configuration options that are available in Swift
6.2 and how they can change the way your code runs.
Note that in a plain Swift 6.2 program that has none of the newer build settings enabled, it
doesn’t matter where we call our asynchronous function from. What matters is whether or
not the function itself is somehow tied to an actor. You’ll learn more about isolating functions
to actors in Chapter 6 - Preventing data races with Swift Concurrency but it’s important
that I mention this now so that you more or less understand when an async function runs
asynchronously.
Generally speaking, you’ll only mark a function as async if:
In the first case, your function would end up looking a little bit as follows:
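func pauseAndResume() async {
    // a sketch; Task.sleep stands in for any async work you might await
    try? await Task.sleep(for: .seconds(1))
}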
A more sensible example would be making a network call that you want to await the results
of. We can do this as follows:
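func loadData() async -> [Movie] {
    let url = URL(string: "http://127.0.0.1:8080/1.json")!

    do {
        let (data, _) = try await URLSession.shared.data(from: url)
        return try JSONDecoder().decode([Movie].self, from: data)
    } catch {
        print(error)
        return []
    }
}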
This example is a lot more useful than the previous one. We load data and decode the loaded
data into a model. Since the network call can fail by throwing an error, we must use a do {}
catch {} block to handle any thrown errors.
Sometimes, you might not want to handle the error immediately and instead have callers of
your function handle the errors. In this case that means that instead of handling errors inside
of the loadData function we expect callers of loadData to handle any thrown errors.
If we want to do this, we can add a throws to our method declaration as follows:
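func loadData() async throws -> [Movie] {
    let url = URL(string: "http://127.0.0.1:8080/1.json")!
    let (data, _) = try await URLSession.shared.data(from: url)
    return try JSONDecoder().decode([Movie].self, from: data)
}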
If we want to write a throwing asynchronous function, we must mark the method as async
throws. The async keyword must always come before the throws keyword. Similarly, if
we call a throwing asynchronous function we must write try await. Writing await try
would be a compiler error, but you already knew that of course.
Sometimes a function does a lot of work without ever suspending, which means it holds on to its thread for the entire duration of that work. For those situations there's Task.yield(); calling it gives the system a chance to run other tasks. A sketch of what that can look like:

func processFiles(_ files: [URL]) async {
    for file in files {
        processFile(file) // expensive, synchronous work
        await Task.yield()
    }
}
Adding calls to Task.yield in strategic places allows for your code to run in a highly con-
current manner while you make sure that you don’t claim a thread for longer than needed.
Adding a call to Task.yield does not mean that your function will always yield and be
suspended. If no other work needs to be performed, your loop would continue as if you never
yielded. If other work does need to be performed, your loop would be suspended until the
system ends your yield.
While it’s important that you’re aware of Task.yield and how it allows you to optimize
your code for maximum concurrency, it’s not a construct you’ll have to add to your code often.
In fact, I would argue that Task.yield is quite obscure and will mostly be used by people
writing code that interacts with low-level components that don’t have support for concurrency
yet.
Note that just like any other await, calling Task.yield() makes it so that there’s a chance
that your loop continues on a different thread than the thread you were on before your yield.
This might sound scary to folks familiar with thread safety and data races but it’s really not
that scary. As you’ll learn in later chapters, Swift does a really good job of helping us write
code that is safe, even when crossing thread boundaries.
Consider the following example:

class MovieRepository {
    @MainActor
    func loadMovies() async throws -> [Movie] {
        // ...
    }

    @MainActor
    func makeRequest() -> URLRequest {
        // ...
    }

    func perform<T: Decodable>(_ request: URLRequest) async throws -> T {
        // ...
    }

    func decode<T: Decodable>(_ data: Data) throws -> T {
        // ...
    }
}
The code above contains four different kinds of functions. There are two functions that are
isolated to the main actor through an explicit @MainActor annotation. One is async, the
other isn’t.
We also have two nonisolated functions. These functions are just written as plain func-
tions, and because MovieRepository isn’t an actor or annotated with @MainActor, we
consider both perform and decode to be nonisolated.
Let’s take a look at where these functions will run using Swift 6.1’s semantics.
As a baseline, a plain synchronous function simply runs wherever it's called from:

func testFunction() {
    print(Thread.current)
}

DispatchQueue.main.async {
    // testFunction will run on main
    testFunction()
}
DispatchQueue.global().async {
    // testFunction will run on a background thread
    testFunction()
}
In Swift 6.1 things work a bit differently for functions that are isolated to an actor and for
nonisolated async functions. Let’s go through the functions we defined earlier one by
one to explain what they are, and how they run when they get called from different places.
@MainActor
func loadMovies() async throws -> [Movie] {
// This function will _always_ run on the main actor
}
The first function we’ll look at is an isolated async function. It’s isolated to the main actor
which means that when this function is called it will run on the main actor. It’s also an async
function which means that the function can suspend when calling other async functions. It
doesn’t matter where we call this function from. It’s isolated to the main actor so it runs on
the main actor.
Next, let’s look at an isolated non-async function:
@MainActor
func makeRequest() -> URLRequest {
// This function will _always_ run on the main actor
}
Similarly to the previous function, this one is isolated to the main actor which means it’s
always going to run on the main actor. No matter where it’s called from. The function is not
async which means that it cannot suspend to call any async functions.
Next up, let’s talk about a nonisolated async function:
In Swift 6.1, a nonisolated async function will always run on the global executor, no matter
where it’s called from. In terms of running on the main actor or not running on the main actor,
this means that perform will never run on the main actor. It doesn’t matter whether you
call this function from a main actor isolated function or from another nonisolated function. A
nonisolated async function in Swift 6.1 never runs on main.
Now, let’s look at a nonisolated non-async function:
A nonisolated non-async function is really just a normal function that you would have written
prior to adopting Swift Concurrency. The nice thing is that these functions will behave exactly
like you’re used to. Call it from the main actor, it will run on the main actor. Call it from
elsewhere, it will run where you called it from.
That said, it does introduce some confusion around how nonisolated behaves in Swift.
Depending on whether a function is async, in Swift 6.1 and earlier, nonisolated can mean
“will inherit the actor that we called it from” or “will never run on an actor no matter where it
was called from”. That’s why Swift 6.2 contains some changes to where your code runs, and
how you can control where code runs.
Time to explore default actor isolation and actor inheritance in Swift 6.2.
Default Actor Isolation

Generally speaking, less concurrency in your app will make your app more stable and your code easier to work on. It's rare for apps to perform tons of work that truly benefits from being async throughout. That's why, in Swift 6.2, the new default for all of your code is to run on the main actor unless you specifically opt out of running on the main actor. This might sound scary at first, but in reality it means that most of your code will behave the same as it did without Swift Concurrency. Try and remember how frequently you explicitly decided to run code on a global dispatch queue versus just running your code "wherever". Wherever most frequently would have been the main dispatch queue for most apps.
By making running on the main actor the new default, you would basically change the following code:

@MainActor
class MovieRepository {
    // ...
}

into this:

class MovieRepository {
    // ...
}

You simply remove your @MainActor annotations because in Swift 6.2 you will receive these annotations by default.
When you create a new Xcode project in Xcode 26, default isolation is set to be the MainActor. You can look for the "Default actor isolation" build setting to change this for your project.
To leverage global isolation in a Swift Package, you can add the following configuration:
swiftSettings: [
.defaultIsolation(MainActor.self)
]
This will isolate all code in your package to the main actor by default (unless you explicitly opt out for a type or function). Because the setting applies on a per-package level, you can mix different isolation defaults throughout your project. Your app should benefit from being main actor by default in most cases, but if your project contains a package that does loads of heavy processing, it might make sense to use a nonisolated default for that package instead. To do that, you can set the defaultIsolation setting to nil.
In addition to this "main actor by default" mode, Swift 6.2 comes with a feature that changes the behavior of nonisolated async functions to be more in line with how nonisolated non-async functions work.
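Here's our perform function again, updated for this behavior:

nonisolated func perform<T: Decodable>(_ request: URLRequest) async throws -> T {
    // With nonisolated(nonsending), this runs on the caller's actor
}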
Our function is now marked nonisolated explicitly because it would otherwise receive an implicit @MainActor annotation when we use global isolation. And with the new nonisolated(nonsending) feature, the function will inherit the caller's isolation context. This means that the function will run on the main actor when it's called from the main actor, or elsewhere if that's where it was called from. This makes the behavior of nonisolated async functions consistent with nonisolated non-async functions.
This feature is automatically turned on when you create a new project in Xcode 26 through
the “Approachable Concurrency” setting. For existing projects you can opt-in by setting the
“Approachable Concurrency” build setting to “YES”.
Or if you’re using SPM, you must add the following feature flag:
swiftSettings: [
.enableExperimentalFeature("NonisolatedNonsendingByDefault")
]
To gain the same features as approachable concurrency would get you, you should also enable
a second feature:
swiftSettings: [
.enableExperimentalFeature("NonisolatedNonsendingByDefault"),
.enableUpcomingFeature("InferIsolatedConformances")
]
While inheriting a caller’s isolation makes it much easier to write code that doesn’t require
everything to be sendable, there are times where you want to make 100% sure that a function
does not inherit its caller’s isolation. To do this, you can mark a nonisolated function as
@concurrent:
@concurrent
nonisolated func perform<T: Decodable>(_ request: URLRequest) async throws -> T {
    // This function will never run on the main actor in Swift 6.2 (it will run on the global executor)
}
Note that only nonisolated async functions can be marked with @concurrent. For example, the following is not allowed:
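@concurrent
@MainActor
func loadMovies() async throws -> [Movie] {
    // a sketch of one disallowed combination; this function is isolated to the main actor
}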
However, a function that isn’t explicitly isolated can be marked @concurrent and it will be
nonisolated automatically:
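@concurrent
func perform<T: Decodable>(_ request: URLRequest) async throws -> T {
    // implicitly nonisolated, and always runs on the global executor
}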
So while nonisolated in Swift 6.2 will inherit the caller's isolation if you opt in to nonisolated(nonsending) by default, an @concurrent function will not:
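A minimal sketch contrasting the two (the function names are just illustrative):

nonisolated func runsOnCallersActor() async {
    // inherits the caller's isolation with nonisolated(nonsending)
}

@concurrent
nonisolated func alwaysRunsOnGlobalExecutor() async {
    // never inherits isolation
}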
So, to summarize: when you want code to just run wherever while also being able to call other async functions, you write a nonisolated async function. If you want code to
never run on the main actor, you make it nonisolated @concurrent so it always runs
on the global executor.
Using @concurrent should not be your “default”. You should think carefully about intro-
ducing concurrency because while it might sound like a good idea to run as little work on the
main actor as possible, context switching isn’t free, and writing good concurrent code is hard.
You need far less concurrency in your app than you might initially think.
Before we move on, here's a list of every way that a function can be declared now, and where that function will (or might) run:
class MovieRepository {
    func loadMovies() async throws -> [Movie] {
        // runs on the main actor due to implicit main actor isolation
    }

    // ...
}
This list shows where every function will run, and it shows that with Swift 6.2, default actor isolation, and nonisolated(nonsending), the behavior between async and non-async functions is consistent again.
For the rest of the book, I will assume that you're using Swift 6.2 with default actor isolation enabled since that's an opt-in feature. Here's how that changes the list of functions you just saw:
class MovieRepository {
    func loadMovies() async throws -> [Movie] {
        // runs on the main actor due to implicit main actor isolation
    }

    func makeRequest() -> URLRequest {
        // runs on the main actor due to implicit main actor isolation
    }

    nonisolated func perform<T: Decodable>(_ request: URLRequest) async throws -> T {
        // runs on the caller's actor due to nonisolated(nonsending)
    }

    nonisolated func decode<T: Decodable>(_ data: Data) throws -> T {
        // non-async and nonisolated: runs wherever it's called from
    }
}
I understand this might be a little confusing and it will take time to develop an intuition for
where things run. The sample code for this chapter contains a lot of examples that show
where different functions will run depending on project settings. I highly recommend that you
explore these samples to get an understanding of different ways your code can be written,
and what the impact of Swift 6.2’s settings is on where your code will run.
Let’s put everything you’ve learned so far to the test by adding some network calls to this
chapter’s sample app!
class MovieDataSource {
func fetchMovies(_ page: Int) async throws -> [Movie] {
}
}
Each method will follow the same pattern which is to fetch data, decode the data into the
expected model object, and then return the decoded array of models.
Here’s what the implementation for fetchMovies(_:) should look like:
Notice how we can fit all of this logic in just four lines of code. It’s really quite cool. If anything
goes wrong with either our network call or the JSON decoding an error will be thrown from
fetchMovies and the caller of this method can handle the error as needed (and possibly
retry the network call).
Before looking at the implementation for the other two methods, try writing them yourself.
The URLs that you should use are:
• http://127.0.0.1:8080/cast-\(movieID).json
• http://127.0.0.1:8080/crew-\(movieID).json
Once you’re done implementing these methods, the solution is available below:
As I mentioned earlier, the implementation for all three methods is very similar.
Note that, since we’ve enabled global actor isolation which means that these functions all run
on the main actor. Calling await from the main actor to suspend our functions is fine; an
await is not blocking. That said, decoding JSON might take a while. With moderately small
responses JSON decoding is super fast, you won’t have any issues. But if you’d fetch a JSON
body that’s lots of megabytes large, decoding is going to cost time. Calling decode on the
main actor is potentially an expensive operation.
To avoid paying a decoding cost on the main actor, you have several options. The simplest is to mark the data source itself as nonisolated:
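nonisolated class MovieDataSource {
    // ...
}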
This removes the inferred main actor annotation from MovieDataSource and makes it so all members of MovieDataSource are nonisolated too. However, if you're using the default settings for a new project in Xcode 26, "Approachable Concurrency" is turned on, which means your nonisolated async functions run on the caller's actor. This means that calling your functions from the main actor would result in them running there. To avoid this, you can mark the functions as @concurrent so they always run on a background thread:
@concurrent
func fetchMovies(_ page: Int) async throws -> [Movie] {
    let url = URL(string: "http://127.0.0.1:8080/\(page).json")!
    let (data, response) = try await URLSession.shared.data(from: url)
    let movies = try JSONDecoder().decode([Movie].self, from: data)
    return movies
}
Note that @concurrent can be used to offload work from the main actor even if you don't mark the data source as nonisolated, which is what I've done in the sample project for this chapter.
Now that these methods are defined, we can call them from the relevant places in our code.
Start by opening PopularMoviesViewModel.swift. This is where we’ll add the code
that calls fetchMovies(_:) to retrieve the pages of popular movies.
The pattern that we’ll follow is to have our SwiftUI view call a regular synchronous method. In-
side of that method, we’ll start a new Task to call our async method to retrieve movies. After
that, we’ll update our @Published property movies with the newly retrieved movies.
We’ll start by implementing fetchPage(_:) and then we’ll implement the other two meth-
ods.
We really only need to change a single line in fetchPage(_:) to call our data source’s
fetchMovies(_:) method instead of returning an empty array:
isLoading = true
defer { isLoading = false }

return try await movieDataSource.fetchMovies(page)
Next, let’s go ahead and implement fetchMovies(). Based on what you’ve seen already,
you might have some idea of how to do that. The key is to start a new Task and call the
fetchPage(_:) method from inside of your task:
func fetchMovies() {
currentPage = 1
Task {
do {
let fetchedMovies = try await fetchPage(currentPage)
movies = fetchedMovies
} catch {
// handle the error in some way
}
}
}
After adding this code, you can go ahead and run the project. You should see a list of movies
appear in the app.
To handle our error, there are several options available. One option is to just ignore any errors
thrown like we do now. We could also add a @Published var error: Error? to our
PopularMoviesViewModel and have our SwiftUI view present an alert with a relevant
message to the user when the error is set to a non-nil value. It’s really up to you and the
requirements of your app to decide what to do with certain errors. Just know you can catch
errors like you would normally and decide what to do from there.
func fetchNextpage() {
    currentPage += 1

    Task {
        do {
            let fetchedMovies = try await self.fetchPage(currentPage)
            await MainActor.run {
                movies += fetchedMovies
            }
        } catch {
            // handle the error in some way
        }
    }
}
Note that a Task in Swift Concurrency has an interesting quirk. The closure that we pass to
a Task can be a throwing closure which means that we don’t have to catch errors that are
thrown from within the closure.
Usually, this is quite frustrating because the compiler doesn’t inform us when we accidentally
forget to handle errors. But in this case, we might choose to ignore errors anyway which
means that it’s kind of nice that we’re allowed to write the following instead:
func fetchNextpage() {
    currentPage += 1

    Task {
        let fetchedMovies = try await self.fetchPage(currentPage)
        await MainActor.run {
            movies += fetchedMovies
        }
    }
}
The errors thrown by us are now essentially ignored because the Task doesn’t require us
to handle the errors. In essence, we let our task throw an error but since we’re not explicitly
waiting for any results from our Task object, we don’t have to handle the error. Again, usually
this is not what we want and it’s quite frustrating that Task makes it so easy to not handle
errors.
In the code snippet above it’s kind of nice that we don’t have to deal with errors since we’ve
decided we’re ignoring errors anyway. At the same time I would probably argue that having
an empty catch is better than this because it’s easy to overlook that errors could occur here,
and an empty catch would make the decision to ignore errors more explicit.
Doing that would look as follows:
func fetchNextpage() {
    currentPage += 1

    Task {
        do {
            let fetchedMovies = try await self.fetchPage(currentPage)
            await MainActor.run {
                movies += fetchedMovies
            }
        } catch {
            // we're ignoring the error.
        }
    }
}
Now that we have our main page done, let’s add some more network calls to fetch cast and
crew for a movie’s detail page.
Open CastList.swift and look at the code that’s already there:
    VStack {
        ForEach(cast, id: \.uniqueId) { castMember in
            PersonCell(person: castMember)
        }
    }
}
}
You can see that this view has an @State property for the list of cast members, and that
the view has access to our movieDataSource object. This means that we can directly use
the movie data source in our view to update the list of cast members with the task view
modifier:
VStack {
    ForEach(cast, id: \.uniqueId) { castMember in
        PersonCell(person: castMember)
    }
}.task {
    do {
        cast = try await movieDataSource.fetchCastMembers(for: movie.id)
    } catch {
        cast = []
    }
}
}
Notice that I have to use a do { } catch { } here because unlike the Task initializer,
the task view modifier does not take a throwing closure. All errors thrown by our async work
must be handled inside of our closure. In this case, I’ve decided to ignore errors but you could
leverage a second @State property to keep track of any errors that might have occurred and
present an alert or render text if needed.
Next, go to CrewList.swift and apply the same pattern you just saw there to fetch the movie's list of crew members; a sketch is shown below.
With this code in place, you can run the MovieWatch app and see that all screens are now
populated with data. Use the segmented control on the movie detail page to activate the crew
or cast tabs and notice how the data for each section appears nicely.
This is great! You’ve just completed your first super-simple networking feature built with async
/ await. Feels good, right?
In Summary
In this chapter you've made your first steps with Swift's async / await features. You learned
how you can call asynchronous functions with the await keyword and when you’re allowed
to do so. You learned that you can only call asynchronous functions from an asynchronous
context like a function that is already async or from a new Task.
After establishing some of the calling basics you learned what happens when you await a
function and why using await does not block your current thread. After that, we moved on
to defining asynchronous functions.
As the final part of this chapter, you’ve written your very first basic networking layer for
this book’s MovieWatch sample application. You saw that you can get a lot of work done
with relatively little code, and that the code you write with Swift Concurrency is much more
straightforward than code you write with callbacks.
In the next chapter of this book, you will learn more about Swift’s Task objects that we can
use to jump from a synchronous context to an asynchronous context.
By the end of this chapter you will have a strong sense of how tasks fit in Swift’s Concurrency
system. It’s important to bear in mind that this chapter will not teach you everything there
is to know about tasks. For example, I won’t go into actor inheritance and child tasks very
deeply. These topics will be covered in their respective chapters. Covering them now wouldn’t
make too much sense since we’re still at the start of your journey into working with Swift
Concurrency, and you’ve already had to digest loads of information as-is.
The main reason for you to create your own tasks is when you wish to execute a piece of
asynchronous code from a place that otherwise doesn’t support concurrency. An example of
this is kicking off asynchronous code from within a UIViewController in its viewDidLoad function. Or maybe you want to call an asynchronous function from a SwiftUI Button's
action handler.
A common sign that tells you that you need to go async is when Xcode shows you a compiler error along the lines of: 'async' call in a function that does not support concurrency.
This tells you that you're currently in a non-async context that doesn't support being suspended. Or in other words, the place we're calling our function from isn't part of a Task.
Another reason to create a new task is when you want to run a body of work concurrently with
other work. Every Task you create will begin running immediately after it’s created, and it
will run concurrently with other tasks.
Knowing exactly when it makes sense to create an extra Task to make a piece of code run
concurrently with other pieces of code can be a pretty complex decision to make. For example,
when you’re processing a relatively large amount of data you’ll want to make sure you perform
this operation efficiently.
Initially, you might think of Task as a perfect tool to split all the work out into pieces of
work that run concurrently, and that might actually work perfectly fine. On the other hand
introducing concurrency into a part of your codebase that doesn’t need concurrency increases
complexity without offering a significant benefit.
Sometimes the benefit of running lots of work concurrently is quite evident. For example, if
you’re writing code to fetch data from many different URLs and you want this operation to be
done as fast as possible, you can create a task for each request you’ll make so you’re making
requests in parallel.
On the other hand it’s less likely for you to gain benefits from asynchronously mapping over
an array of model objects when all you’re doing is transforming them into a different domain
model. This kind of operation will generally be quite fast even when you do it all on the main
thread. And if you do want to perform your work away from the main thread it’s usually more
desirable to create an async function that’s nonisolated (and @concurrent depending
on your project settings) and to perform your mapping in there. You’d map one object at a
time, but without blocking the main thread.
The bottom line here is that you should stay away from solving performance problems that you don't have, especially when you're trying to solve a non-existent problem with concurrency. If you're unsure whether or not a piece of work is causing you performance issues, run your code with Instruments and use the Time Profiler instrument to find out which code is actually causing you trouble.
Task {
    let userInfo = try await fetchUserInfo()
}
This way of creating tasks is the recommended and most commonly used way to create and kick off asynchronous work in Swift Concurrency. You already know that creating a task as shown above will run work concurrently with other tasks that are active.
A task created in this manner will be scheduled immediately but it’s important to know that
the task might not run immediately. This is fully dependent on where you create the task.
For example, imagine that you create a new task from a main actor annotated function.
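The book's exact function isn't reproduced here; a minimal sketch that matches the printed output below:

@MainActor
func startATask() {
    print("Pre Task")

    Task {
        print("In Task")
    }

    print("After Task")
}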
This code would start a new task that runs on the main actor but since the main actor is already
running our function, the printed output of this function would be:
Pre Task
After Task
In Task
It’s possible to create a task that does start running immediately as follows:
By using Task.immediate you ensure that the task starts execution immediately instead
of as soon as possible. In this case, that means that our task starts running on the main actor
immediately, interrupting execution of startATask entirely. Our new task will run until it
runs into a suspension point (an await or yield() call) or until the task completes. If our
task suspends, the main actor will continue running startATask until our newly created
task is resumed.
So while immediate forces our newly created task to start running immediately, it doesn’t
mean that our task must finish before the function that created it can finish.
My general recommendation is to just use Task and only reach for immediate when you
find that it would truly benefit your situation.
We call tasks that are created as shown above unstructured tasks. As the name suggests, an
unstructured task does not participate in structured concurrency. For now, you don’t need to
worry about what structured concurrency is; we’ll cover this in depth in Chapter 9 - Performing
and awaiting work in parallel.
Note that an unstructured task is not a child task of the context you created the task from.
Unstructured tasks inherit certain bits and pieces from the context that they are created in:
• Actor isolation
• Task local values
I won’t go into details on these two topics right now. For now, it’s important that you know
that an unstructured task inherits the two bits of context I just mentioned from the place that
the task is created from. In practical terms this means that a task that is started from a context
that runs on the main actor will mean that the newly created unstructured task will also be
running on the main actor.
Creating new tasks should be a rare occurrence in most of your code, especially when you're
already in a function that’s async.
There are scenarios where you might want to make sure you don’t inherit anything from the
context you start out from. For example, you might be creating a new task from your main
actor (which runs code on the main thread) while you want to make sure that your task runs
its body anywhere except the main thread. In this kind of situation, a detached task might be
the tool you’re looking for.
Generally speaking it won’t be common for you to run into situations where you must use
a detached task. Offloading work from the main thread can be done by correctly defining
your async functions as nonisolated (and @concurrent) and spawning new tasks for
parallelism is less practical than using tools like a task group which we’ll cover later.
We can create a detached task using the Task.detached method:
Task.detached {
    let userInfo = try await fetchUserInfo()
}
While a task that’s created with the plain Task initializer inherits the actor and task local values
from the context it was created in, Task.detached inherits neither of those attributes from
its context.
Like I mentioned earlier, this means that a Task that’s created from within a function or task
that runs on the main actor will, itself, run on the main actor. It inherits the actor from the
context it was created in. A detached task that’s created from within that same function will
not run on the main actor. It does not inherit the actor from the context it was created in.
Earlier, I mentioned that detached tasks should not be needed often, and you might be wondering why you wouldn't want to detach your tasks from the main actor.
The example you saw earlier is a perfect example of a detached task that probably should not
have been detached in the first place. Regardless of where the task runs exactly, and which
context it was created in, we’re not blocking any threads with the work in this task.
In other words, if we would use an unstructured task instead of a detached task to await
fetchUserInfo, the result would be the exact same.
When you look at the contents of the task, you will see that the task immediately suspends to
await the call to fetchUserInfo.
Unless fetchUserInfo was explicitly annotated or written to run on the main actor (or any other actor), that method is considered nonisolated. And if you're not using nonisolated(nonsending) by default, all nonisolated async methods will run on a background thread. If you are using nonisolated(nonsending) by default, annotating your function with @concurrent ensures that your function never runs on the main actor.
And since an await is not a blocking operation but a suspension point, this means that if we have an unstructured task that's running on the main thread, an await inside of that task does not block that thread. Instead it frees up the thread to do other work while our task is suspended and awaiting the result of our method call. Of course, the method you're calling might block the main actor but that's not the point. The point here is that the await itself is never blocking.
You will learn more about this reasoning in Chapter 6 - Preventing data races with Swift
Concurrency which is where we’ll take a deep dive into actors and isolation.
Your key takeaways from this section should be the following:
• Depending on your compiler settings and how your async function is defined, your async function might already be running on a background thread.
• Awaiting an async function creates a suspension point; this allows the thread that was running your task to make progress on other tasks while you're suspended.
• In Swift Concurrency, an async function does not always run on the thread it was called from. The actual run destination depends on the function definition and your compiler settings.
A third way to create a new task that is quite common in SwiftUI apps is to use the task view
modifier. In the previous chapter you’ve leveraged this view modifier to kick off the fetching
of crew and cast members in the MovieWatch sample app. The task view modifier creates
a new task whenever your SwiftUI view is about to appear for the first time, very much like the
onAppear view modifier. The main reason to favor task over creating a new Task instance
yourself in onAppear is that the task created in the task view modifier is automatically
cancelled whenever your SwiftUI view disappears. This means that there’s less cleanup for
you to do, and it’s harder to make mistakes.
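To make the difference concrete, here's a sketch; the view and its load function are hypothetical:

struct MovieDetailView: View {
    @State private var details: String?

    var body: some View {
        Text(details ?? "Loading...")
            .task {
                // cancelled automatically when this view disappears;
                // a Task created in onAppear would keep running
                details = await loadDetails()
            }
    }

    func loadDetails() async -> String {
        // placeholder for real async work
        return "..."
    }
}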
Consider the following example:

class TaskRunner {
    static func run(tasks: Int) {
        for i in 0..<tasks {
            spawnTaskAndSleep(for: 3)
            print("Spawned task for \(i)")
        }
    }

    // sketch of the helper: it spawns a task that blocks its thread with a sleep
    static func spawnTaskAndSleep(for seconds: UInt32) {
        Task {
            sleep(seconds) // blocking; the task never suspends during the sleep
        }
    }
}
When you call the run method that's defined on TaskRunner with a nice high number like 10 and run this sample on the iOS simulator, you'll find that every task starts and ends before the next task can start. In other words, we're only running one task at a time. This is due to a limitation in the number of threads that are available on the simulator.
When you run the example's macOS target you'll find that you see a different output; you should see that the console prints information about multiple tasks in parallel because there are more threads available on your Mac.
Try bumping the number of tasks you spawn in ContentView.swift to something very
high like 100 and you’ll see that we process batches of tasks instead of processing only one at
a time.
You’ll observe the same effect as you saw on macOS when you run the example app on an iOS
device since iOS devices have multiple CPU cores just like a mac does.
The reason we’re seeing the output we saw is that sleep blocks our task for the duration of
the sleep by sleeping the underlying thread. The task does not give up its thread because
the thread is sleeping. This means that nothing else can leverage that thread for the duration
of the sleep.
If we refactor the spawnTaskAndSleep to use a non-blocking way of sleeping you’ll find
that this problem goes away:
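A sketch of the non-blocking version, matching the shape of the helper above:

static func spawnTaskAndSleep(for seconds: Int) {
    Task {
        // suspends the task instead of blocking its thread
        try await Task.sleep(for: .seconds(seconds))
    }
}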
Running this version of the code produces very different output: all tasks start in rapid succession, then give up their thread while they are suspended to await the Task.sleep call, and then resume once the sleep is over.
The lesson here is that you want to be mindful of the work you’re doing in your tasks. A key
rule for tasks is that they should always be making forward progress (or give up their thread).
If you decide that you want to perform blocking work on a task it’s a good idea to manually
allow Swift Concurrency to suspend your task so other tasks can make progress in between
the work you’re doing.
As you’ve learned in the previous chapter, you can voluntarily give up your thread by calling
Task.yield() in between performing heavy work. For example, if you’re processing a
large number of files in a single task it might make sense to allow the concurrency system to
suspend your task in between files as follows:
Task {
    for file in files {
        processFile(file)
        await Task.yield()
    }
}
As robust as Swift Concurrency aims to be, it relies a lot on developers being good citizens. I've mentioned it before, but the key rule here is to ensure that your tasks are either making progress or yielding their thread like we do in the example above.
Generally speaking, most of the code you write is unlikely to be problematic. You can safely assume that any code that's provided by lower level systems will correctly suspend and resume as needed. An example of this is URLSession's
async data method. When you’re calling this method from a Task you can be certain that
you’re not blocking anything.
As always, if you suspect that you might be performing a slow blocking task in a Task, measure
your code with Instruments using the Time Profiler and take it from there. You will learn more
about profiling your async work with Instruments in Chapter 11.
We can tell the system how important a given piece of work is. To do this, we can mark our tasks with one of three priority levels:
• userInitiated
• utility
• background
These priorities range in importance from "the user absolutely needs this to use the app" all the way down to "we need to do this, but it's okay if it's done a bit later". The default task priority is userInitiated. This will ensure that your task is run as soon as possible, on processor cores that are as fast as possible (on systems with different types of cores), and before tasks that have a lower priority.
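Specifying a priority is a matter of passing it to the Task initializer. For example, with a hypothetical exportData() function:

Task(priority: .utility) {
    // runs with a lower priority than the default userInitiated
    await exportData()
}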
While it sounds great that the default priority is to mark your task as extremely important,
it’s likely that a lot of the tasks that you want to perform should really be utility tasks or
possibly even background tasks.
Deciding the correct priority for your tasks can be pretty tough. The thing that you should
always ask yourself is how limited the user is within your app while you’re running a certain
task.
For example, if you’re fetching data that should be shown in your app it makes sense to run
this as a userInitiated task. You want the data to be shown as soon as possible because
until the data is shown the user is looking at a spinner.
On the other hand, if your app has a data exporting option where you convert a bunch of
model objects to a JSON file and write it to disk it might make sense to do this as a utility
level task. This task can run while the user is using the app so it doesn’t need to compete with
tasks that help the user actually use the UI. You could even run a task like this as background
because the user will most likely not be sitting and waiting for the export to complete.
A classic example of a task that can run as a background task is when you have a process in
place to synchronize data that was created locally over to a backend service. This sync can
happen at any time, doesn’t need to run with a super high priority, and the user doesn’t even
need to know when this process starts or finishes. They just need to be able to trust that the
sync is performed automatically at an appropriate time. That’s exactly what a background
priority task gets you.
Being mindful of the task priorities that you use when scheduling your tasks will help the Swift
Concurrency system to optimally schedule and perform your work.
At the same time, you shouldn't get too hung up on trying to decide whether some piece of work should be userInitiated or utility. The system default is userInitiated and more often than not this default is absolutely fine, unless it's crystal clear that a lower priority level is more appropriate.
Let's look at an example; see if you can spot a potential problem with this code:

class TaskLifecycle {
    let networking = Networking()
    var items = [Response]()

    func loadAllPages() {
        Task {
            var hasMorePages = true
            var currentPage = 0

            while hasMorePages {
                let page = await networking.fetchPage(currentPage)

                if page.hasMorePages {
                    currentPage += 1
                    items.append(page)
                } else {
                    hasMorePages = false
                }
            }
        }
    }
}
The first issue you might have noticed is that we use items instead of self.items. Normally, this would result in a compiler error that informs you about having to explicitly use self to make the capture semantics of self explicit.
There are various situations where you don’t need to capture self explicitly. The reason for
this is that some closures that capture self have a clearly defined lifecycle with a beginning
and end of the work. This means that these closures don’t create a retain cycle that will never
be broken. The idea is that even if self is captured for a little longer than we'd like, eventually the retain cycle resolves itself.
In most cases in Swift Concurrency this is fine; a Task will usually end (eventually) so we’re
not keeping self alive forever.
However, in the code above we’re loading potentially lots of pages from the network. Imagine
that our API returns up to a hundred pages and a user dismisses the screen that initiates this
loading after the first couple of pages were fetched.
Our implicit self capture would stick around until after all pages are loaded even though we
already know we’re never going to present these pages. We can fix this with a [weak self]
which will work exactly like you’re used to. A weakly captured self will make sure that self
can be released when the user dismisses the screen they were looking at which is great.
However, this doesn’t fully solve our issue. We know self will be deallocated but unfortu-
nately, we’ll still be performing lots of work before our Task completes.
The reason our work doesn’t stop is that our Task’s lifecycle is not bound to anything. In
other words, our task begins running when it’s created and it doesn’t stop until all work is
done. If we want to make sure our task is cancelled when self goes out of scope we need to
explicitly check whether self exists, and we should cancel our task when self is gone.
The following code shows how we can do this:
class TaskLifecycle {
    let networking = Networking()
    var items = [Response]()

    func loadAllPages() {
        // we've added a weak self here
        Task { [weak self] in
            var hasMorePages = true
            var currentPage = 0

            while hasMorePages {
                guard let self = self else {
                    return
                }

                let page = await self.networking.fetchPage(currentPage)

                if page.hasMorePages {
                    currentPage += 1
                    self.items.append(page)
                } else {
                    hasMorePages = false
                }
            }
        }
    }
}
On every iteration of the while loop, we check if self still exists. If it doesn’t we return
from the Task which will end the task and stop any work that we would otherwise perform.
The way that Task captures self implicitly, and the fact that a Task doesn’t have its lifecycle
bound to anything means that we should always be very mindful of whether or not we should
be capturing self weakly, and whether or not we should cancel our tasks at certain points
when self is no longer around.
If you’re only performing a single networking call in a task, you might not have to check
whether self still exists, and a [weak self] might not make sense.
The reason for this is that your network call will most likely start while self still exists because
your Task begins running as soon as it’s created. If you capture self weakly and unwrap it
right after your task starts you’re still retaining self for the entire duration of your Task. In
other words, you didn’t gain anything from your weak self if you’re going to require self to
exist for the entire duration of your task anyway.
The above is essentially the reasoning behind why a Task has an implicit self capture. The
work you’re doing will usually complete eventually so keeping self around for just a bit longer
usually doesn’t cause any issues. In Chapter 8 we will see that tasks that can run for a poten-
tially infinite amount of time exist, and you will learn how to properly avoid retain cycles for
these kinds of tasks.
In Chapter 9 we will dig deeper into task cancellation, and more specifically we’ll take a look
at how cancellation propagates through our tasks using a concept called cooperative task
cancellation.
Let's circle back to how tasks deal with errors, using a snippet you saw earlier:

Task {
    let userInfo = try await fetchUserInfo()
}
Notice how we call a throwing method with the try keyword but there's no do or catch in that code. The Task will happily swallow thrown errors without any complaint. Quietly ignoring errors like this isn't great because it hides our intent; it's never clear whether we're ignoring errors on purpose or not. Ignoring errors should be a conscious decision that we make clearly and obviously.
Unfortunately this is something you have to know and be aware of; the compiler can’t help
you here. Whenever you call a throwing method in a Task at the very least add a do {}
catch {} with an empty catch clause that holds a comment. Something like this would
do just fine:
Task {
    do {
        let userInfo = try await fetchUserInfo()
    } catch {
        // ignore errors...
    }
}
This makes our intent clear and will not leave our coworkers wondering whether we meant to
ignore errors or not.
When you create a task, you can assign the task itself to a property. Your task can then
eventually produce a value which we can extract through the task’s value property. Note
that even when our task returns nothing, we can access the value property to know when
our task completed.
If the task can throw an error, we have to await the value with the try keyword. We don't need the try when our task can never throw an error. The following code demonstrates this by showing an incorrect and a correct example of awaiting a task's value property:
func fetchUserInfoIfPossible() async -> UserInfo? {
    // sketch: the surrounding function is reconstructed; the code bundle's example may differ
    let task = Task { try await fetchUserInfo() }

    // incorrect: awaiting the value without try won't compile for a throwing task
    // let userInfo = await task.value

    // correct: await the value with try and handle the error
    do {
        return try await task.value
    } catch {
        return nil
    }
}
The fact that it’s allowed for a task’s value to produce an error is the entire reason why your
task allows you to throw errors from within your task closure. The closure itself is allowed
to be throwing which means that it’s perfectly fine for us to “leak” our errors from the task
without any consequences.
In Summary
In this chapter, you’ve learned a lot about how Task can be used in Swift concurrency. We
started by looking at situations where you’ll want to (or have to) create new Task objects.
You learned that a Task in Swift encapsulates a body of asynchronous work and that a Task runs concurrently with other Task objects.
You also learned that there are three ways to create a Task: two that inherit certain bits and pieces from the context they're created in, and one that runs completely detached from that context. Generally speaking, you won't be using detached tasks much; it's preferred to use a regular unstructured Task. And even then, you should always carefully consider whether a new Task is what you need.
After that, we talked about Task lifecycles and capture semantics. You learned that a Task
implicitly captures a reference to self which means that we can access members of self
without explicitly referencing self and without capturing it in our Task closure’s capture
list. You also learned that a Task runs until all its work is completed, and that its lifecycle is
not bound to the place it’s created from. This means that we need to carefully consider if and
how we should be cancelling our tasks if the task’s creator is deallocated.
We wrapped this chapter up with a brief overview of how Task swallows errors without any
compiler errors or warnings. In my opinion this is an unfortunate gotcha of how Task works
and it’s something you have to know and remember. Ignoring errors should always be a
conscious decision so it’s a good idea to make your intent clear at all times, even when your
code compiles when you don’t make your intent clear.
In the next chapter, we’ll take a look at bridging your existing code over to the world of Swift
Concurrency through continuations.
If you’re interested in learning more about Swift 6.2, make sure to check out Chapter 12 -
Migrating to Swift 6.2. That chapter covers strategies to apply to projects that have already
adopted Swift Concurrency but have not yet migrated to Swift 6.2.
Imagine that your view talks to an object that wraps an object representing your networking layer. Depending on your app's architecture, this object could have many names, but let's call it a view model.
Your view model might have a function that retrieves a certain page worth of content from a
service via a network call. It might also handle caching and enrich the data in some way.
The following code represents what this might look like when modeled as protocols. Note that I'm not suggesting protocols are the best way to model networking interactions; I'm only using them because they're a convenient way to have something that compiles, while also allowing me to demonstrate the API shapes for this kind of setup.
protocol Networking {
    func performRequest(_ request: URLRequest, completion: @escaping (Result<(Data, URLResponse), Error>) -> Void)
}

protocol PagesViewModel {
    var network: Networking { get }
    func fetchPage(_ index: Int, completion: @escaping (Result<Page, Error>) -> Void)
}
In the snippet above, you can see how the view model object has a fetchPage(_:completion:)
method. That method calls the networking service which in turn would perform a network
call using URLSession.
A great first step to migrating that code over to async / await would be to update the view model to have an async version of fetchPage alongside the existing fetchPage(_:completion:) method:
protocol PagesViewModel {
    var network: Networking { get }
    func fetchPage(_ index: Int, completion: @escaping (Result<Page, Error>) -> Void)

    // the new async counterpart
    func fetchPage(_ index: Int) async throws -> Page
}

At this point, there are two ways to go about implementing the async version:
1. We implement this function using async / await and we have duplicated business logic
between the two versions of fetchPage.
2. We create a bridge between the async / await world and the callback based world so we
can slowly migrate over and avoid duplicating business logic.
The second option is the one I would suggest. Reimplementing everything in one go is almost
never a good idea because you’d be making way too many changes all at once and you would
have no way to properly test your work while you’re migrating. Furthermore, having duplicated
business logic is a good way to increase your maintenance burden and introduce bugs over
time. You don’t want this.
In order to build a bridge between async / await and callbacks, you can leverage continuations.
A continuation is used as follows:
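The full example isn't shown here; as a sketch, bridging the callback based fetchPage with a checked throwing continuation could look like this:

func fetchPage(_ index: Int) async throws -> Page {
    try await withCheckedThrowingContinuation { continuation in
        fetchPage(index) { result in
            switch result {
            case .success(let page):
                continuation.resume(returning: page)
            case .failure(let error):
                continuation.resume(throwing: error)
            }
        }
    }
}

You can resume a continuation by returning a value, by throwing an error, or by handing it a Result as a whole via resume(with:).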
All three of these options are defined as overloads for the resume method on continuations.
While using a continuation in a situation like this doesn’t look too complex, there are a couple
of rules to keep in mind when working with continuations.
Always make sure that you complete your continuations at some point.
It doesn’t matter how long your continuation takes to complete, as long as it completes
eventually. You only need to ensure that every codepath within the code you’re bridging into
Swift Concurrency has a correctly implemented exit point that resumes your continuation.
This usually means that you should pay extra attention to making sure that every codepath in
a callback based function actually calls the completion handler instead of failing silently.
It’s also important that you only resume your continuations once. Resuming a continuation
more than once is an error and will result in a crash. Depending on what kind of continuation
you’ve used, this might be a very vague crash, or a pretty clear one.
There are two kinds of continuations that you can use:
• Checked
• Unsafe
Both kinds come in throwing and non-throwing variants. A throwing continuation supports propagation of errors, which means that you would have to try await your calls to withCheckedThrowingContinuation.
A checked continuation is a continuation that performs runtime checks to ensure that you don't resume your continuation more than once, and to ensure that your continuation doesn't get deallocated without being resumed first. The latter would cause any code that awaits your continuation to hang forever, and any resources held by the continuation to stay retained, which is a leak.
When you accidentally break one of the rules while using a checked continuation, you will
get a nice runtime crash or warning that has a clear message and stack trace. This is exactly
what you want during your development phase because debugging a checked continuation is
much, much easier than debugging an unsafe one. The image below shows an example of
what a crash for a checked continuation looks like in Xcode:
An unsafe continuation is used to solve the exact same problem as a checked continuation
except it doesn’t eagerly perform runtime checks. This means that a checked continuation
has a little bit of overhead that an unsafe continuation doesn’t have, but otherwise they work
the same and have the same rules and requirements.
When you accidentally misuse an unsafe continuation, your app will crash at runtime with a
much vaguer error message. Debugging this will be much harder so it’s recommended to only
use an unsafe continuation when you know that you’re using continuations correctly. The
image below shows an example of a crash for an unsafe continuation:
The code bundle for this chapter contains a sample app that allows you to run examples of
continuation misuse in both a checked and unsafe continuation. I highly recommend that
you take a look at the code to see exactly what’s happening and to experience these crashes
for yourself.
Generally speaking, you should avoid using unsafe continuations. There’s a very minor runtime
overhead associated with checked continuations but in my experience that overhead is not
worth the hassle of dealing with mistakes that can occur in unsafe continuations. If you’re
concerned about the exact amount of overhead that you’re incurring by using a checked
continuation, I can only recommend that you profile your code with Instruments and then
make an informed decision about whether or not you should switch to unsafe continuations.
// sketch: the surrounding class with its @Published properties is
// reconstructed here; the book's original listing shows the init only
class SearchFeature: ObservableObject {
    @Published var searchText = ""
    @Published var results = [String]()

    init() {
        $searchText
            .debounce(for: 0.3, scheduler: DispatchQueue.main)
            .flatMap { query in
                let url = URL(string: "https://practicalswiftconcurrency.com?q=\(query)")!

                return URLSession.shared.dataTaskPublisher(for: url)
                    .map(\.data)
                    .decode(type: [String].self, decoder: JSONDecoder())
                    .replaceError(with: [])
            }
            .receive(on: DispatchQueue.main)
            .assign(to: &$results)
    }
}
The key component in this code is the flatMap: we take the current search query and create a new publisher that performs a search using URLSession. Using a flatMap to kick off a new asynchronous bit of work in a Combine pipeline is the correct approach, because it allows us to construct and return a new publisher whose values will be sent downstream by the flatMap.
It’s highly unlikely (and probably incorrect) that you’ll encounter code that performs asyn-
chronous work in a regular Combine operator like map or filter. These operators take an
input and synchronously transform the received value into a new value rather than taking an
input and returning a new publisher that performs asynchronous work.
As a rule of thumb, if you're looking to kick off new work in response to receiving output from a publisher, flatMap is the operator you're looking for. Note that flatMap is not an asynchronous operator. It requires us to synchronously create and return a publisher; it's the returned publisher that can perform work and emit values asynchronously.
With that knowledge in your head, we can start refactoring the code above to leverage an async
function to perform the search query. For example, we might have the following function
defined on the SearchFeature class:
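The definition isn't shown here; a sketch of such a function, reusing the URL from the earlier example, might look like this:

func search(for query: String) async throws -> [String] {
    let url = URL(string: "https://practicalswiftconcurrency.com?q=\(query)")!
    let (data, _) = try await URLSession.shared.data(from: url)

    return try JSONDecoder().decode([String].self, from: data)
}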
We can leverage a flatMap, Combine’s Future and a Task to call this function instead of
using a dataTaskPublisher:
$searchText
    .debounce(for: 0.3, scheduler: DispatchQueue.main)
    .flatMap { query in
        // sketch: the Future wrapper is reconstructed from the surrounding text
        Future { promise in
            Task {
                do {
                    let results = try await self.search(for: query)
                    promise(.success(results))
                } catch {
                    // instead of failing, complete with empty result
                    promise(.success([]))
                }
            }
        }
    }
    .receive(on: DispatchQueue.main)
    .assign(to: &$results)
We didn’t necessarily make our code any shorter (nor do I consider this version of the code to
be easier to read) but it does work as a means to call async functions from within a Combine
publisher.
Regardless of the Combine operator you choose, you always provide that operator with a closure that receives some input, synchronously transforms that input into something new, and returns that something new as the output of the closure.
So in our flatMap we take the output of a publisher and use that to create and return a
Future. That Future can perform asynchronous work by calling a callback based API, or by
creating a Task and using that to perform async work. Eventually the Future’s promise is
called with a result, and that result then becomes the output of the Future. And subsequently
that value is what’s passed down the Combine pipeline.
Attempting to call and await an asynchronous function inside of a map or other operator is not
the way to go. None of Combine’s operators expect us to perform asynchronous work inside
of the operator’s closure so we have to resort to constructs like a Future and flatMap to
bridge our Swift Concurrency code into the world of Combine.
class Networking {
    func fetchHomepage() -> AnyPublisher<String, Error> {
        let url = URL(string: "https://practicalswiftconcurrency.com")!

        // sketch: the publisher body is reconstructed; the book's version may differ
        return URLSession.shared.dataTaskPublisher(for: url)
            .map { String(decoding: $0.data, as: UTF8.self) }
            .mapError { $0 as Error }
            .eraseToAnyPublisher()
    }
}
Subscribing to a publisher that will only emit one value and then complete is a perfect example
of something that could be easier to work with when it can be called as an async function
instead since it would mean that we don’t have to go through the process of setting up a
Combine subscription and retaining our cancellables.
When you have an existing method that returns a publisher, you can bridge that method into the world of Swift Concurrency by converting the returned publisher into an async sequence, and using the very first value emitted by the sequence as your return value. The code below shows how you can do this:
class Networking {
    func fetchHomepage() -> AnyPublisher<String, Error> {
        // unchanged
    }

    // sketch: the name of the async variant is an assumption
    func fetchHomepageValue() async throws -> String {
        for try await value in fetchHomepage().values {
            return value
        }

        // only reached if the publisher completes without emitting a value
        throw URLError(.cannotLoadFromNetwork)
    }
}
Every Combine publisher has a values property that we can access to obtain an async
sequence that emits all values that are emitted from the publisher as soon as the publisher
emits them. Since we know our publisher will only emit a single value, we can leverage an
async sequence and return the first emitted value from our asynchronous function. This is a
nice pattern to leverage when bridging single-value publishers from Combine over to async /
await.
Because we know that our fetchHomepage publisher only emits a single value, I wrote return value in the async for loop's body. Just like in a regular for loop, doing this will return from the enclosing function. In other words, it makes the function return the first (and only) string that's emitted by the fetchHomepage publisher.
Calling this function is now as straightforward as calling any other async function:
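For example, assuming a networking instance of the class above:

let networking = Networking()
let homepage = try await networking.fetchHomepageValue()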
Note that it would also be possible to accumulate all values emitted by a publisher and return
them as an array:
func allHomepageValues() async throws -> [String] {
    // sketch: accumulate every value the publisher emits
    var values = [String]()

    for try await value in fetchHomepage().values {
        values.append(value)
    }

    return values
}
This would be useful if our publisher emits multiple values and we want to return them all from our async function. Note that our async for loop will keep running until the publisher whose values we're iterating over has completed. If the publisher never completes, our for loop will also never complete. This is an important detail to keep in mind in situations where you're bridging never-ending publishers over to Swift Concurrency.
As mentioned before, we'll take a much deeper dive into async sequences in Chapter 7 - Working with asynchronous sequences. For now, this is everything you should know about async sequences and how they can be used to bridge Combine code over to async / await.
The nice thing about what we’ve done above is that we can keep both the Combine version
of the code as well as have an async version of the code. This allows us to slowly but surely
migrate over to Swift Concurrency where it makes sense, without changing our entire codebase
all at once.
You saw examples of this earlier when you learned about continuations and when you saw how you can convert a Combine based function over to Swift Concurrency.
Having the old and new API side by side allows you to refactor slowly but surely without
refactoring business logic all at once. When you choose to replace code immediately I’ve often
found that the scope of the refactor grows rapidly and the likelihood of introducing subtle
bugs increases.
During a major migration I’ve more often than not found that slow and steady wins the race;
make sure you have a plan, and make sure you execute on it.
Once all code in a given layer only calls the async version of a method you can of course safely
remove the old version of the method and begin refactoring its contents over into your async
version of the method. From there, you can start adding async versions of methods in layers
that are used by the layer you're refactoring, in the same way you would have done for your view layer.
Alternatively, you could decide to refactor feature by feature instead of layer by layer. This
can absolutely work fine too depending on how well your features are isolated from other
features.
If you’re starting your migration right now, using the new tools that we have in Swift 6.2 is
really helpful. Running your code on the main actor by default means that everything about
your codebase will be simpler than it is when everything can run anywhere by default. Leave
the main actor only when needed, and only introduce concurrency when it’s intentional. This
model, in my opinion, is very strong. And you truly do need less concurrency than you might
think. Profile your current codebase, see how much work is done on the main thread. It’s
likely more than you expected.
Opting into features like Approachable Concurrency to turn on NonisolatedNonsendingByDefault will make your code more flexible and easier to understand, and you will encounter fewer compiler errors in the Swift 6 language mode without sacrificing safety and correctness.
You will learn more about this in Chapter 12 - Migrating to Swift 6.2 so make sure to give that
chapter a read too before you start refactoring your code.
Your key takeaway for this section should be that it’s important to learn and understand Swift
Concurrency before refactoring, and that it’s important to make sure you have a good sense
of how you want to tackle your refactor in a way that won’t make you spiral out of control.
Slow and steady wins the race...
In Summary
In this chapter, you’ve learned a lot about how Swift Concurrency interacts with existing
systems. You’ve learned about continuations and how they allow you to take any bit of
callback based asynchronous code and move it over to an async / await friendly wrapper.
After that, we did a brief exploration of how we can use both Combine and async / await in a
codebase. You’ve seen the basics of calling async functions in a Combine operator, and we’ve
looked at how we can iterate over values that are published by a Combine publisher using an
async for loop.
Lastly, we took a deep dive into establishing a migration path for an existing codebase so
it can be refactored to be fully async / await compatible. This section provided a high level
overview of how you can approach such a migration, and what some of the challenges are
that you could face.
In the next chapter, you will learn a lot about how Swift Concurrency helps prevent data races
in our code with a concept called actors.
We will explore these topics by building some neat concurrency related components that you might want to use in your own projects. To be specific, we'll build an in-memory date formatter cache and an image loader that are thread-safe and handle concurrency nicely.
class DateFormatters {
    private var formatters: [String: DateFormatter] = [:]

    func formatter(using dateFormat: String) -> DateFormatter {
        // sketch: the cache body is reconstructed from the surrounding text
        if let formatter = formatters[dateFormat] {
            return formatter
        }

        let newFormatter = DateFormatter()
        newFormatter.dateFormat = dateFormat
        formatters[dateFormat] = newFormatter

        return newFormatter
    }
}
This code isn’t particularly clever, optimized, or groundbreaking in any way. In fact, it has
an enormous potential problem because this object is in no way safe to be used in a multi-
threaded environment.
Imagine that we’re calling our date formatter concurrently from various different threads
with a handful of different formatters. Because we’d be accessing formatter(using:) at
the exact same time and we’d either create a new formatter and put it in the formatters
dictionary or return a cached formatter. Since we don’t have any data race protection in place,
mutating the formatters dictionary while other threads are trying to read from it will crash
our app.
To see our crash in action, run this chapter’s sample app and click the button to run the unsafe
date formatters example. This will run the following code:
DispatchQueue.concurrentPerform(iterations: 10) { _ in
    let formatters = ["YYYY", "YYYY-MM", "YYYY-MM-dd"]

    // sketch: each concurrent iteration can insert into the cache, mutating shared state
    let _ = cache.formatter(using: formatters.randomElement()!)
}
Most of the time the program will crash with an EXC_BAD_ACCESS error when you click
the button but every once in a while you’ll get a very obscure error related to things like
NSTaggedPointerString and other objects. There’s even a very, very small chance that
the app doesn’t crash on some runs although it’s highly unlikely you’ll see that happen.
The reason we’re seeing crashes with obscure messages is that we’re reading mutable state
concurrently from multiple places while also changing the state in some situations. If we
would only read from the cache we’d be fine. For example, if we change the sample above to
look as follows, it wouldn’t crash anymore:
let formatters = ["YYYY", "YYYY-MM", "YYYY-MM-dd"]

// sketch: warming the cache up front means the concurrent loop only reads
for format in formatters {
    let _ = cache.formatter(using: format)
}

DispatchQueue.concurrentPerform(iterations: 10) { _ in
    let _ = cache.formatter(using: formatters.randomElement()!)
}
The reason we no longer crash is that we no longer mutate our state in the concurrent loop. We're still accessing formatter(using:) concurrently, but none of the concurrent accesses are mutating the dictionary.
Of course, doing what I did above isn’t a solution to the underlying data race. We can’t always
know every possible formatter we might need to create. And even if we did know all values
it might not be cheap to create and cache all date formatters especially if we might never
need them. Luckily, Swift Concurrency provides a much better mechanism for fixing our data
race.
By the way, notice that I'm using DispatchQueue instead of Swift Concurrency to run code concurrently. I did this because Swift Concurrency schedules work differently from GCD, and I've found that data races are a bit harder to trigger when you're using Swift Concurrency's mechanisms to run work in parallel.
Back to the subject at hand...
To fix our code in a pre-concurrency world we could make use of tools like NSLock or a serial
dispatch queue amongst other tools. I won’t argue which tool is best and instead I’ll just show
you how we can fix the code using NSLock.
class DateFormatters {
    private var formatters: [String: DateFormatter] = [:]
    private var lock = NSLock()

    func formatter(using dateFormat: String) -> DateFormatter {
        lock.lock()
        defer { lock.unlock() }

        if let formatter = formatters[dateFormat] {
            return formatter
        }

        let newFormatter = DateFormatter()
        newFormatter.dateFormat = dateFormat
        formatters[dateFormat] = newFormatter

        return newFormatter
    }
}
Whenever somebody calls formatter(using:) I grab my lock and I call lock on it. This
means that any subsequent calls to formatter(using:) will need to wait for my lock to
be unlocked before the code can proceed.
In the defer block right after locking my lock, I call unlock() on the lock to unlock when-
ever I exit out of my function. Once I unlock, the next caller of formatter(using:) can
obtain the lock and run the function.
Effectively this means that no matter how many calls we make to our formatter(using:) function at once, only one caller will actually run at any given time. Which in turn means that only one caller will be accessing and mutating our mutable state.
There are more efficient ways to achieve our goal like a dispatch queue with a barrier but like I
said, the point right now isn’t to explain the best way to fix data races in a world without Swift
Concurrency. We have Swift Concurrency so we can leverage it to fix our data race instead!
actor DateFormatters {
    private var formatters: [String: DateFormatter] = [:]

    func formatter(using dateFormat: String) -> DateFormatter {
        if let formatter = formatters[dateFormat] {
            return formatter
        }

        let newFormatter = DateFormatter()
        newFormatter.dateFormat = dateFormat
        formatters[dateFormat] = newFormatter

        return newFormatter
    }
}
You might have to look really closely to see what's different from the example we started with...
All we did compared to the original code is change the class keyword to actor.
If you were to make this change from class to actor in your code and then try to run your app, you'd see a compiler error on lines where you interact with the DateFormatters object. The error will look a bit as follows: Actor-isolated instance method 'formatter(using:)' can not be referenced from a non-isolated context. This error essentially means a similar thing to the error that's presented when we try to call an async method from a non-async context. By changing our class to an actor, the code is no longer fully synchronous.
By changing our class into an actor, we made all interaction with the actor asynchronous. The reason for this is that our actor serializes access to its mutable state. Let's talk a little bit about what that means.
In the lock based version of DateFormatters, all callers of formatter(using:) would have to wait for the lock to become available before they could run. This waiting happens in a blocking way, which means that the thread we're calling formatter(using:) from is blocked until we've had a chance to obtain the lock and get a result.
As you know, Swift Concurrency avoids these thread blocking mechanisms in favor of having
an async call that we can await instead. Or, in other words, Swift Concurrency suspends
tasks instead of blocking threads.
Actors follow this principle too.
All state in an actor is “isolated” to that actor. This means that only the actor itself is allowed
to access that state directly, and it means that the actor decides how and when state is
accessed. Isolation is a complex and large topic within the world of Swift Concurrency, and
you’ll encounter the term isolation in different contexts. Understanding what isolation means
in the context of an actor is, in my opinion, the best way to understand isolation as a whole.
So whenever you think of actors, their state, and how that state is protected from data races, I
want you to think of actor isolation.
Actor isolation is what makes actors work. It's how actors ensure that access to their state and
functions is synchronized and free of data races.
Whenever we try to interact with an actor, the actor receives a message in its so-called mailbox. This mailbox acts like a queue for interactions with the actor's functions and state, and it's a key component in actor isolation. The actor takes the first message from the mailbox, processes it, and then picks up the next message in its mailbox; much like a FIFO (first-in, first-out) queue.
For our example this means that we put a number of calls to formatter(using:) in the actor's mailbox and the actor will process these function calls one by one. However, if we're the last message in the actor's mailbox we might have to wait a short while before the actor processes our function call.
The following graphic makes this principle clear in a visual manner:
Notice how when we call a function, we put our function call in the actor's mailbox. The actor processes items one by one in the order the calls were received. There are some caveats here that we'll get into in this chapter, but for now, let's consider actors a simple mechanism to do "one thing at a time".
Some folks will immediately think of an actor as a serial queue when they see this image and the explanation that goes along with it. The thought makes sense: an actor only processes one item at a time, which is exactly what a serial queue does too. There's danger in the details though. To understand why, we need to take a look at a concept called actor reentrancy, which we'll cover in the next section. Before we move on to that topic, there are a couple of other actor-related things we should cover first.
Earlier I mentioned that whenever you interact with an actor, the actor receives a message in
its mailbox and that an actor processes these messages one by one. I also mentioned that
interacting with actors is asynchronous which means that we must await our interactions.
Consider the following code:
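The snippet isn't reproduced here, but the call the text describes would look something like this sketch (the cache instance is an assumption):

let cache = DateFormatters()
// note: there's no await on this call
let formatter = cache.formatter(using: "YYYY-MM-dd")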
At first glance this code looks fine but when you try to compile it, you see the following error:
This error tells us that we’re trying to interact with our actor without an await and we’re not
part of the actor’s isolation context. In other words, the actor might not be able to handle
our call to formatter(using:) instantly since we’re not running as a part of the date
formatters actor. To fix this we need to await the method call as follows:
let formatter = await cache.formatter(using: "YYYY-MM-dd")

// for reference, the actor itself is unchanged:
actor DateFormatters {
    private var formatters: [String: DateFormatter] = [:]

    func formatter(using dateFormat: String) -> DateFormatter {
        if let formatter = formatters[dateFormat] {
            return formatter
        }

        let newFormatter = DateFormatter()
        newFormatter.dateFormat = dateFormat
        formatters[dateFormat] = newFormatter

        return newFormatter
    }
}
Notice how we don’t need to have an await whenever we check if a formatter with the given
format exists in the formatters dictionary. Or how we don’t await on the line where we
cache a freshly created date formatter.
The reason for this is that formatter(using:) runs as part of our DateFormatters
actor isolation context already; it’s defined on the actor after all. Another way to explain this
is to say that an actor will not put messages in its own mailbox.
Sometimes though, you might have a function that’s defined on an actor where the function
doesn’t actually interact with any mutable state that’s owned by the actor at all. When that’s
the case, having to await every call to that function when it’s called outside of the actor can
be tedious. Or if you have a let property that’s defined on your actor, you might want to be
able to access that property without having to await it; it’s immutable after all.
In these cases you would know that accessing that specific property or method on your actor
will never cause data races.
Luckily, let constants can freely be accessed from anywhere without an await. The compiler knows that your let is immutable and it will happily allow you to read it from as many threads as you'd like; there's no risk of data races, so a let doesn't benefit from actor isolation because it will always hold the same value due to its immutability.
A function that doesn’t interact with mutable state is not considered safe to call without actor
isolation though. The compiler cannot guarantee that a function is safe to run without isolation.
If we as developers are absolutely certain that a given function is safe to call concurrently
without isolation we can mark that function as nonisolated to opt that function out of
actor isolation. This looks as follows:
actor DateFormatters {
    private var formatters: [String: DateFormatter] = [:]
    private let _defaultFormatter = ISO8601DateFormatter()

    nonisolated func defaultFormatter() -> ISO8601DateFormatter {
        return _defaultFormatter
    }

    // ...
}
This example is a little silly; since the default date formatter is a let we might as well interact
with it directly instead of through our defaultFormatter() function. The point isn’t to
show you something smart; it’s to show you how to opt out of actor isolation for a function.
In fact, when you attempt to compile the above code you’ll see an interesting warning related
to sendability. It’s a good thing you’re reading this book because we’ll look at sendable types
later in this chapter. So let’s not worry about that warning for now and talk about nonisolated
methods a bit more.
By writing nonisolated func, we tell the compiler that our defaultFormatter()
function can be called from a nonisolated context without problems; after all, we know that this
function does not access any mutable state on our actor, so it doesn’t benefit from actor isolation.
We can call nonisolated methods on actors without awaiting them because a nonisolated method
isn’t processed as a message in the actor’s mailbox. Instead, it will be run immediately.
You’ll want to apply nonisolated very carefully and ideally you don’t apply it at all unless
it’s really needed. Applying nonisolated to functions and properties that actually do end
up reading or mutating mutable state on your actor will typically mean your code won’t
compile.
Earlier in this section I promised that you’d learn a bit about actor reentrancy and how it’s a
tricky concept. Let’s take a closer look at actor reentrancy now.
Let’s imagine for a moment that our formatter(using:) function from before
was an async function that had an await somewhere in its body. For now it doesn’t matter
what we might be awaiting; I’ll introduce a new example soon that makes a little bit more
sense (but it’s also more complex).
The following diagram illustrates a situation where we have an await in an async
actor function.
In this diagram you can see a dark gray call to actor.formatter(using:). This call
is suspended while it’s awaiting something else. The actor has picked up a new item from
its mailbox in the meantime, and it will run the remainder of the suspended call to
actor.formatter(using:) once the awaited code has returned and the actor has finished
the other work it was doing. The remainder of our function is essentially put back in the
mailbox once its awaited work is done, except it gets to skip the queue all the way to the front,
allowing the actor to finish the work as soon as possible.
Consider the following code:
actor ImageLoader {
    private var imageData: [UUID: Data] = [:]

    func loadImageData(using id: UUID) async throws -> Data {
        if let data = imageData[id] {
            return data
        }

        // The endpoint below is a placeholder for wherever your images live
        let url = URL(string: "https://example.com/images/\(id.uuidString)")!
        let (data, _) = try await URLSession.shared.data(from: url)
        imageData[id] = data
        return data
    }
}
At first glance, this code doesn’t look too bad at all. We use an actor to maintain an in-memory
cache of image data so that whenever we call loadImageData(using:), we either get
cached image data or we load new data from the network.
The execution path of this code is exactly what you might expect. We check whether we have
cached image data; if we don’t, we go to the network to grab the data, we cache it, and we
return it. Of course there’s also an error path that we should think about in the real world, but
for now we just throw an error to the caller of loadImageData(using:) and that’s good
enough.
Now imagine that we call loadImageData(using:) twice, concurrently, for the exact
same UUID. The actor will pick up the first call, see we don’t have cached data, and then our
function will await data from the network.
If we visualize this, here’s what that looks like:
Because our actor knows there’s other work to do, it will pick up the next call to
loadImageData(using:) and perform the same check. We don’t have image data for this UUID in the
cache so we fetch it from the network.
If we expand the diagram you saw earlier, here’s how we can visualize what’s happening:
At this point we have two network calls for the exact same resource in flight. This isn’t ideal
because the images we’re loading could be pretty huge and it’s a waste of data to be loading
the same image twice at the exact same time.
The cause for our problem is called actor reentrancy.
Actor reentrancy essentially means that whenever an actor is awaiting something, it can go
ahead and pick up other messages from its mailbox until the awaited thing completes and
the function that got suspended can resume.
The result of actor reentrancy is that any assumptions we make before an await should always
be re-validated after an await. Additionally it means that in cases like ours we need to introduce
an explicit loading state for our images.
To fix this, we need to rebuild our ImageLoader to account for actor reentrancy by keeping
track of resources that we’re already loading, and reusing the in-flight fetch operations for
concurrent requests to load the same resource.
We want the flow of our code to be as follows:
• If we don’t have a cached state, create a new loading operation for the requested resource
and cache it
• Await the result of the loading operation
• Cache the loaded resource
• Return the loaded resource
The first thing we should do to implement the flow above is to introduce a LoadingState
enum so that we can cache in progress loading operations as well as operations that have
already completed:
actor ImageLoader {
    private var imageData: [UUID: LoadingState] = [:]

    // ...
}

extension ImageLoader {
    enum LoadingState {
        case loading(Task<Data, Error>)
        case completed(Data)
    }
}
With this setup, we can change the if statement that checked for the existence of cached
image data to one that checks whether a loading state exists for a given UUID and then takes
the appropriate action based on the cached state. If we have a cached state we know that
we’re either loading the requested image, or we have loaded the image before.
Notice that I’m using a Task<Data, Error> for the enum’s loading case. We can wrap
the work of retrieving our resource in a Task so that we can await the value of the task for
multiple callers of loadImageData(using:). What’s nice about this approach is that a
Task<Data, Error> can safely have its value property awaited by multiple callers. This
means that a single task to load our image can communicate its result back to lots of callers
of loadImageData(using:) without making any extra tasks.
actor ImageLoader {
    private var imageData: [UUID: LoadingState] = [:]

    func loadImageData(using id: UUID) async throws -> Data {
        if let state = imageData[id] {
            switch state {
            case .loading(let task):
                return try await task.value
            case .completed(let data):
                return data
            }
        }

        // To implement..
    }
}

extension ImageLoader {
    enum LoadingState {
        case loading(Task<Data, Error>)
        case completed(Data)
    }
}
If a state exists in our in-memory cache, the code checks whether the state is loading, which
means that we can try await the outcome of the already in-progress task. If we encounter
a state of completed, we can directly grab the data from the associated value and return
it.
This part of the code is what will prevent us from having multiple data tasks in progress for
the same resource.
In the case that we don’t have an existing state, we should create a task, cache it, and then
await its value after caching it to make sure this all plays nicely with actor reentrancy:
let task = Task<Data, Error> {
    // The endpoint below is a placeholder for wherever your images live
    let url = URL(string: "https://example.com/images/\(id.uuidString)")!
    let (data, _) = try await URLSession.shared.data(from: url)
    return data
}

imageData[id] = .loading(task)

do {
    let data = try await task.value
    imageData[id] = .completed(data)
    return data
} catch {
    imageData[id] = nil
    throw error
}
}
By creating our task first, then caching it, and then awaiting the task’s value, we’ll have added
the in progress task to our cache before we suspend. This means that subsequent concurrent
calls to loadImageData(using:) will see our up to date cache and know that they can
go on ahead and reuse the task we’ve already created by awaiting that task’s value.
After we obtain our task’s value, we set the state in our cache to be .completed with the
data we just loaded. Note that I only need to do this in the code path where we’ve just created
the task ourselves.
One way to make sure a piece of work runs on the main actor is to use MainActor.run:
await MainActor.run {
    // update UI
}
However, there are more ways to run code on the main thread. Imagine that you have some
function in your code that should always run on the main thread. We can achieve this by
annotating that function with the @MainActor annotation:
@MainActor
func updateUI() {
    // update UI
}
Again, the above is actually the default for new projects created with Xcode 26 and for projects
that opted in to the “main actor by default” compiler setting. That said, I think it makes sense for
us to explore global actors through the @MainActor declaration anyway since it’s a built-in
global actor with a very clear purpose.
We added a special annotation to the updateUI function to make sure it always runs on the
main actor. This annotation is written as @MainActor and it’s used to refer to a global actor.
We can apply a global actor annotation to the following declarations:
• Functions
• Objects (classes, structs)
• Closures
• Properties
Applying the @MainActor annotation to a class declaration ensures that accessing any
property or method on that class is done through the main actor. In other words, it allows us
to make sure that any interactions with the annotated object occur on the main thread.
Common advice is to, for example, annotate view models and observable objects for SwiftUI
with the main actor because these objects often end up triggering UI updates, which should
be done through the main actor. It’s advice like this that led to Apple introducing the
main actor by default configuration in Xcode 26; most apps really don’t need to have tons of
concurrency by default.
Annotating an entire object as @MainActor will make it seem as if that entire object is
itself defined as an actor because access to all state on that object will be subject to actor
isolation, just like it would be if you had defined your object as an actor. The key difference is
that with @MainActor you don’t actually define your object as an actor; your object
will synchronize all of its method calls and property accesses using the main actor, which
effectively means that accessing properties and calling methods on the annotated object is
always done on the main thread.
If all you’re looking for is synchronization around mutable state and you don’t care about
making sure your code runs on the main thread, you should define your object as an actor
instead of using the @MainActor annotation. The @MainActor annotation is only useful
on an entire object when you require all property access and method calls on that object
to occur on the main actor. I can’t stress enough that this should be your default mode of
operation. Less concurrency and fewer actors make your code more predictable and robust.
Concurrency introduces complexity and more often than not, this complexity isn’t required.
That said, all rules surrounding how you should interact with your object will be pretty much
the same as they are when you would have defined your object as an actor. There is one
major exception though. If you’re interacting with an @MainActor annotated object from a
context that already runs on the main actor, you don’t have to await your method calls and
property access like you would otherwise. The reason for this is that you’re already running
on the main actor which means that the same rules apply as when you call a method on an
actor from within that actor.
While the main actor is currently the most useful example of a global actor, we can define our
own global actors by annotating an existing actor with @globalActor and conforming it to
the GlobalActor protocol:
@globalActor
actor MyActor: GlobalActor {
    static let shared = MyActor()

    // ...
}
The GlobalActor protocol requires us to implement a static shared property that is used
as the instance to defer executing code to whenever we annotate something with our global
actor. In other words, when we annotate a function with @MyActor the instance of MyActor
that we created as the shared instance is the actor instance that will receive the call to our
annotated function in its mailbox.
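For example, annotating a function with our custom global actor could look like this; the function name is just an illustration:

@MyActor
func updateSharedState() {
    // this function's body runs on MyActor.shared's isolation context
}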
While actors are a fantastic tool to help us protect mutable state, they introduce concurrency
in our apps. When all you need from an actor is to protect a relatively small dictionary, making
every interaction with that dictionary asynchronous is a pretty large cost to pay. All functions
that interact with your actor must be async functions. This means that you can only call them
from tasks or other async functions. Because of this, adding actors to an existing codebase often
comes with a significant amount of refactoring.
It’s worth considering a different data race protection mechanism when you want to avoid
introducing concurrency around the state you need to protect. Let’s talk about Mutex.
import Synchronization

class DateFormatters {
    private let formatters: Mutex<[String: DateFormatter]> = Mutex([:])
}
The Mutex object is defined in the Synchronization framework so we must import that
if we’re using Mutex.
While we’re protecting mutable state, the formatters property must be declared as let
because we’re using it to hold a Mutex that wraps our dictionary. Properties of type Mutex
must always be defined as let.
Now let’s see how we can read values from our formatters property by partially implementing
a mutex-based version of the formatter(using:) function that you’ve seen before:

func formatter(using format: String) -> DateFormatter {
    let formatter = formatters.withLock { dict in
        return dict[format]
    }

    if let formatter {
        return formatter
    } else {
        // ...
    }
}
To access the dictionary that’s protected by our Mutex, we call withLock on it. This will
attempt to acquire a lock through our Mutex, and it will automatically release our lock when
our closure ends. Since withLock is the only way to access our state, we know that we’re
always going to be free of data races and can’t forget to acquire a lock before accessing the
dictionary.
If you return a value from the closure passed to withLock, the returned value becomes the
return value of your call to withLock. In this case I’m returning dict[format] which
means that I’ll return a formatter if one exists, and nil is returned otherwise.
Now let’s see how we can update our dictionary if an existing formatter isn’t found:
if let formatter {
    return formatter
} else {
    let newFormatter = DateFormatter()
    newFormatter.dateFormat = format

    formatters.withLock { dict in
        dict[format] = newFormatter
    }

    return newFormatter
}
}
We can create a new date formatter and mutate our dict from inside of the withLock
closure. The dict that’s passed to the closure is passed as an inout value which means
that mutating the dict we receive will mutate the original dictionary.
Working with mutexes is a little bit more involved than working with actors, but it does mean
that all of the extra work is done in one place. If we use an actor, we’re suddenly introducing a lot
of concurrency into our codebase, which is quite wasteful when all we need is to protect a
single property.
If you benchmark this mutex-based date formatter against an actor-based solution, the actor-based
solution is much slower due to the system having to suspend functions, hopping from
one isolation context to the other, and then going back. The mutex approach blocks the
calling task while we wait for a lock, but in practice reading or assigning a property is so fast
that in all tests I’ve done there was no measurable downside to blocking while waiting for a
lock.
In the end, it’s up to you to decide which solution to pick. The tradeoff is always going
to be between performance, ease of use, and the amount of concurrency you want to introduce
in your code. I highly recommend you run experiments and measure performance using
Instruments to figure out the best solution for your use case.
When you use a class with a Mutex to protect mutable state, you might find that Swift starts
complaining about your class being non-Sendable. You didn’t get this complaint when you
were using actors, so let’s dig into Sendable and see what’s up with that.
// swift-tools-version: 6.2
// The swift-tools-version declares the minimum version of Swift required to build this package.
import PackageDescription
Now let’s see an example of an older package that will use the Swift 5 language mode:
// swift-tools-version: 5.10
// The swift-tools-version declares the minimum version of Swift required to build this package.
import PackageDescription
Because we’re defining our package with the 5.10 tools version, the compiler will use the Swift
5 language mode for this package.
New packages should always be created using the latest tools version since using an older
tools version means there will be SPM features you can’t use if they were made available in a
newer toolchain.
It’s much better to explicitly set your Swift language version in your package description
instead:
// swift-tools-version: 6.2
// The swift-tools-version declares the minimum version of Swift required to build this package.
import PackageDescription

let package = Package(
    name: "MyPackage", // hypothetical package name
    targets: [
        .target(
            name: "MyPackage",
            swiftSettings: [.swiftLanguageMode(.v5)]
        ),
    ]
)
If you’re creating a new package, I recommend that you don’t drop your swift language version
down to Swift 5 unless you’re running into problems you can’t solve in the Swift 6 language
mode. Writing new code with Swift 6 is much easier than migrating from Swift 5’s language
mode to the Swift 6 language mode.
In any event, if you have a package or project that uses the Swift 5 language mode, you can
opt-in to stricter concurrency checks in preparation for the Swift 6 language version.
To enable strict concurrency checks in an SPM package, you need to pass the
enableExperimentalFeature flag to your target settings with a value of StrictConcurrency:

// swift-tools-version: 6.2
// The swift-tools-version declares the minimum version of Swift required to build this package.
import PackageDescription

let package = Package(
    name: "MyPackage", // hypothetical package name
    targets: [
        .target(
            name: "MyPackage",
            swiftSettings: [
                .swiftLanguageMode(.v5),
                .enableExperimentalFeature("StrictConcurrency")
            ]
        ),
    ]
)
Note that setting this flag in a project that uses the Swift 6 language mode has no effect since
you’d already have the full suite of sendability checks available under Swift 6.
To enable strict concurrency checking in an Xcode project, navigate to your project’s build
settings tab and search for “strict concurrency”. By default, you will find this build setting
to have a value of Minimal. As you can imagine, that will perform a very minimal set of
constraints like explicit Sendable annotations for example (you will learn about Sendable
annotations shortly).
You could bump the concurrency checks to be Targeted which will enable sendability and
actor isolation checks for your project using the same constraints that will be used in Swift 6.0.
I would recommend you set this setting to be at least Targeted for existing projects where
you want to ensure that your code is as thread-safe as possible.
The third setting is Complete. This will enable the full set of concurrency checks that exists
in Swift 6.x. This includes sendability checking, actor isolation checks, and more. For new
projects I would recommend you jump straight to setting your checking settings to Complete
which will make sure that your code is compatible with the Swift 6 language mode right
away.
Refer to the screenshot below to see how you can find and set your strict concurrency checking
settings.
Once you have your concurrency checks set to Targeted or Complete you will be able
to follow along with the sections that follow, and see the compiler errors I will mention and
resolve. If you’re not seeing the same errors, make sure that your concurrency checking
settings are configured correctly.
In addition to enabling strict concurrency checking, you can take a look at Chapter 12 - Migrat-
ing to Swift 6.2 to learn more about migrating to Swift 6.2.
class Movie {
    // ...
}

struct MovieViewData {
    let movie = Movie()
}
Given a setup where we have a struct that has a reference to a class as one of its properties,
we know that copying the struct will not copy the instance of the class that the struct points
to. We only copy the pointer to the instance itself, resulting in two different structs that point
to the same class instance.
This means that when we pass instances of MovieViewData around in an application
we will create copies of that MovieViewData instance. Because the movie property on
MovieViewData is a pointer to a class instance, the pointer is copied. This results in both
copies of MovieViewData pointing to the exact same Movie instance.
When we try to use an instance of MovieViewData in a way that would have us pass that
instance across concurrency boundaries Xcode will show us a warning:
func example1() {
    let data = MovieViewData()

    Task {
        // Capture of 'data' with non-sendable type 'MovieViewData' in a `@Sendable` closure
        print(data.movie.isFavorite)
    }
}
For now, don’t worry about what an @Sendable closure is. The point is that the Swift
compiler inferred that our MovieViewData instance cannot be passed across concurrency
boundaries because one or more of its properties is not sendable.
If we’d change our Movie object from a class to a struct, we’d see that the example1()
function shown above suddenly no longer presents warnings:
struct Movie {
    // ...
}

struct MovieViewData {
    let movie = Movie()
}
The reason for this is that MovieViewData now implicitly conforms to the Sendable
protocol because all of its members are also sendable.
Let’s take a closer look at the rules that make a value type implicitly conform to Sendable:
• All members of the struct (or associated values of the enum) must be sendable
• The struct or enum must not be public unless it’s marked @frozen; for public types, the
compiler won’t infer the conformance because that conformance would become part of your
public API
If your struct or enum does not meet all of the above requirements, you can manually add
Sendable conformance to tell the compiler to check the sendability for your object.
For example, when I make the MovieViewData a public struct, the warning that was
resolved earlier shows up again:
func example() {
    let data = MovieViewData()

    Task {
        // Capture of 'data' with non-sendable type 'MovieViewData' in a `@Sendable` closure
        print(data.movie.isFavorite)
    }
}
Go ahead and try the above in this chapter’s code examples to see the warning for yourself.
Regardless of what the compiler is telling us, we know that all of the members on our struct
are sendable so we can tell the compiler that we want our struct to be considered Sendable
anyway:
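// Declaring the conformance explicitly; the compiler still verifies it
public struct MovieViewData: Sendable {
    let movie = Movie()
}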
By manually adding conformance to Sendable, we tell the compiler that our object should
be sendable and the compiler will actually verify this at compile time. This means that
breaking the first rule of sendable structs (all members of the struct or enum must be sendable)
will still result in a warning. For example, we can change the Movie struct back to being a
class and our warning re-appears:
class Movie {
    // ...
}
The compiler will proactively tell us that we should be careful with our MovieViewData
object because the movie property is not sendable which means that our entire object is not
correctly implementing the Sendable protocol which can lead to data races if we pass our
MovieViewData instances across concurrency contexts.
All the rules you saw just now apply equally to enums. When you have a plain enum with no
associated values you’ll find that your enums are pretty much always sendable.
However, when you add associated values to your cases, you should make sure that these
associated values conform to Sendable to ensure that your enum is also sendable.
We saw how using a class as a property on our structs makes the struct non-sendable. Luckily
this doesn’t have to be the case. We can make reference types conform to the Sendable
protocol with a little bit of care, attention, and manual work.
Actors are sendable by default; they synchronize all access to their mutable state
through their actor mailbox, which allows actor instances to be passed across concurrency
boundaries safely.
We’ll talk about sendability for functions and closures separately so in this section I would
like to focus attention on classes and the Sendable protocol.
Classes can be manually marked to conform to the Sendable protocol, but there are a few
requirements that I’ll dig a bit deeper into in just a moment. Let’s list out the requirements
first:
• The class must be marked final so that it can’t be subclassed
• All of the class’s stored properties must be immutable (declared as let) and sendable themselves
• The class must not inherit from another class (inheriting from NSObject is allowed)
The Movie class object you saw before can be made Sendable by marking the class as fi-
nal, making its isFavorite property a let and making the class conform to Sendable
explicitly:
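final class Movie: Sendable {
    let isFavorite: Bool

    init(isFavorite: Bool) {
        self.isFavorite = isFavorite
    }
}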
Not marking the class as final or keeping isFavorite a var will result in compiler warn-
ings that tell us exactly what we should do to satisfy the Sendable protocol.
In some legacy codebases you might have classes where you manually ensure that a class is
thread safe. For example, you might have used NSLock to serialize access like we did at the
start of this chapter. Or maybe you’re using a serial dispatch queue to synchronize access to
mutable state. Or maybe you’ve set up a class that implements some convenience methods
for subclasses that are otherwise fully immutable.
In an ideal world, you would refactor this legacy code to properly meet the requirements that
the Sendable protocol imposes. You might want to switch to actors or flatten your class
hierarchy eventually. Unfortunately, refactoring can take time and in a sufficiently complex
codebase you might not be able to perform all refactoring in one go.
When you’re certain that you’ve taken the needed steps to ensure that your class is thread-safe
and fully free of data races you can force the compiler to accept your Sendable conformance
without actually verifying your conformance by marking your class as @unchecked Sendable:
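import Foundation

// A sketch; with @unchecked Sendable, keeping access to mutable state
// thread-safe (here via NSLock) is entirely our own responsibility
final class Movie: @unchecked Sendable {
    private let lock = NSLock()
    private var _isFavorite = false

    var isFavorite: Bool {
        lock.withLock { _isFavorite }
    }
}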
The compiler will see that our Movie object is @unchecked Sendable and it will blindly
accept that our class is sendable. Note that this does not guarantee that our Movie instance
can safely be passed across concurrency boundaries. It simply means that we told the compiler
that even though the class doesn’t look like it conforms to Sendable, we want it to pretend
that it does because we’re taking full responsibility for ensuring that our class is free of data
races.
If that sounds dangerous to you, then you’re absolutely right. Using @unchecked Sendable
at the wrong time just to get the compiler to stop complaining is not a very good idea.
The concept of sendability in Swift is intended to help you write code that is free of data races.
It’s not intended to be a roadblock that you should find your way around using whatever tools
you can find.
Now that you know about sendability for classes, actors, and value types it’s time to take a
look at sendability for functions and closures.
Unlike structs and classes, which can expose methods and properties, functions and closures
do not “own” any state in a way that allows us to access or interact with that state by accessing
properties or calling methods directly on that closure or function.
So what do we mean when we say that a certain closure for example is sendable?
When a closure or function is sendable, the closure or function in question does not capture
any non-sendable objects. And because functions and closures do not conform to protocols
we declare sendability for these types using the @Sendable annotation.
For example, we can define a sendable closure as follows:
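// A sketch; the closure's name and body are illustrative
let onComplete: @Sendable () -> Void = {
    print("all done")
}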
Similar to how we define an @escaping closure, a closure that is sendable will have an
@Sendable annotation before we write the closure’s actual type. Note that closures that
are both @Sendable and @escaping would be marked as @Sendable @escaping ()
-> Void.
For functions this looks as follows:
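// A sketch of marking a function declaration as @Sendable
@Sendable func logCompletion() {
    print("all done")
}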
class Movie {
    // ...
}

Our closure that’s supposed to be sendable captures and interacts with a non-sendable instance
of the Movie class. This isn’t allowed because it breaks the sendability of our closure. We
can fix our issue by making sure that our Movie class is sendable, or if we’re only reading
properties on that instance, we could capture those in a capture list instead of using (and
implicitly capturing) the movie as a whole:
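let movie = Movie()

// Capture just the (sendable) Bool value instead of the non-sendable movie instance
let onComplete: @Sendable () -> Void = { [isFavorite = movie.isFavorite] in
    print(isFavorite)
}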
The second point is especially interesting: if we capture isFavorite, we capture its
value at the time we create the closure. So if isFavorite is false when we make the
closure, we capture the property as being false. If the isFavorite property on the movie
changes between creating and executing the closure, we’d still see the old value in our closure.
The only way to properly fix the problem at hand is to either make our Movie conform to
Sendable, or to remove the @Sendable annotation from our closure if it’s a closure we
wrote ourselves.
Annotating a function or closure with @Sendable carries a heavy semantic meaning. It
means you intend to use that closure or function in code that is concurrent, and it means that
you want to make sure that calling your closure or function is completely thread-safe.
When you don’t intend to be calling your function or closure from async methods you
shouldn’t default to applying @Sendable all over the place. Doing this will make it much
harder to work with your own code, sometimes for no good reason at all. Of course, if you do
think you have a good reason it makes sense to ensure that certain functions can safely be
called from multiple threads; but in my opinion your efforts are better spent on making sure
your classes and structs are thread-safe than it is to make sure that all your functions only
operate on sendable objects.
The @Sendable declaration for closures is mostly useful when you’re passing state into a
closure, and you want to be able to use that state from other places at the same time. That’s
why a sendable closure can’t capture any non-Sendable state at all. However, sometimes the
compiler can prove that transferring non-Sendable state from one isolation context to another
is safe because the original isolation context transfers the state to your closure and then it
never accesses it again. This situation is solved by marking a closure as sending instead of
@Sendable.
Let’s take a closer look at sending next.
class NotSendable {}

func runWork() {
    let notSendable = NotSendable()

    Task {
        print(notSendable)
    }
}
While the compiler is correct that our class isn’t sendable, and that it’s therefore not safe to use
it from multiple tasks at once, the compiler should be able to prove that this code is perfectly fine.
We create an instance of our NotSendable class inside of runWork and assign it to a
local constant. This means that we know that after runWork returns, we do not have any
references to our NotSendable instance anymore. We only use notSendable inside of
our Task, so once we’ve transferred our instance from the isolation region in runWork to the
one owned by our Task, we could say that we transferred ownership of our instance from one
isolation region to the next.
Because @Sendable places a hard constraint on what we’re allowed to capture (only sendable
state), we can use the new sending keyword to indicate that we’re okay with capturing
non-Sendable state as long as it’s safe.
So in Swift 6 you’ll see that the code shown above works perfectly fine. The compiler can
prove safety, so we’re okay to capture notSendable in our Task.
The following code is fine too:
func runWork() {
    let notSendable = NotSendable()
    print(notSendable)

    Task {
        print(notSendable)
    }
}
While we do interact with notSendable inside of runWork, the compiler can prove that
we don’t interact with notSendable anymore after we’ve sent it into our Task.
The compiler will flag an issue if we access notSendable after our task:
func runWork() {
    let notSendable = NotSendable()

    Task {
        // Sending value of non-Sendable type '() async -> ()' risks
        // causing data races; this is an error in the Swift 6 language mode
        print(notSendable)
    }

    print(notSendable)
}
The compiler can’t prove that our Task and our access to notSendable won’t race, so it flags
an issue.
What’s interesting is that the following is perfectly fine (assuming we have main actor by
default turned on):
// runWork is implicitly isolated to the main actor with "main actor by default" enabled
func runWork() {
    let notSendable = NotSendable()

    Task {
        print(notSendable)
    }

    print(notSendable)
}
Because everything in the code above is isolated to the main actor, the compiler knows that
our access to notSendable after the task is safe. The task can’t run in parallel with runWork
since the main actor will only be doing one thing at a time.
Usually when you define functions that take a closure that needs to be run in a thread-safe
manner, you’ll want to mark the closure as sending first, and only switch to @Sendable
when you find that sending sends you down a path of compiler errors that are hard to solve. Here’s
how you define a sending argument on a function:
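// A sketch of a function that takes a sending closure
func runWork(_ work: sending @escaping () async -> Void) {
    // ...
}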
This is quite similar to defining a @Sendable closure except you use sending instead.
Features like sending make writing concurrent code much easier than it was with just
Swift 5.10. We’re now able to let the compiler verify that our code is safe even when we’re
technically passing non-Sendable state across isolation boundaries. Swift 6 will allow us to
do that as long as the compiler can ensure exclusive access. In practice this means that we
see fewer Sendability related warnings which makes adopting Swift Concurrency much more
straightforward than it was in Swift 5.10. Combined with features like @MainActor by default
and nonisolated(nonsending) to inherit actor isolation, writing concurrency-friendly
code is becoming more and more straightforward with every Swift release.
In Summary
In this chapter, you have been introduced to the basics and more advanced uses of actors.
You learned what data races are in programming, why they are hard to debug, and how they
would traditionally be solved in a pre-Swift concurrency world.
Next, you learned that we can leverage actors in modern applications to ensure exclusive
access to mutable state that’s owned by an actor. You learned that actors leverage a so-called
actor mailbox to receive messages that are handled one at a time. You also learned that
whenever an actor is processing a message and it hits an await, the actor suspends the
current function (or message) and starts processing the next item in its queue. You learned
that this can lead to unexpected changes to assumptions you made before the await, and
you’ve learned that this principle is called actor reentrancy.
After that, you learned about global actors and how they can be used to build a single actor
that can be leveraged to synchronize code all throughout your codebase onto a single actor.
You’ve learned that this principle is mostly useful for the main actor, and that it will become
more useful once Swift Concurrency officially supports custom executors.
We wrapped this chapter up by looking at a concept called sendability, the Sendable protocol
and the @Sendable annotation. You’ve seen how these concepts help us to make sure that
our code is thread-safe and how we can use these concepts to make sure that our objects can
be passed across concurrency boundaries safely.
After that, you also learned about the sending keyword which allows us to safely transfer
non-Sendable state from one isolation context to the next as long as we do so with exclusive
access.
In the next chapter, we will take a look at a completely different part of Swift Concurrency that
I haven’t mentioned much about just yet; async sequences.
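// A sketch of the kind of code this section starts from; the array contents are illustrative
let carNames = ["Acura ILX", "Acura MDX", "Alfa Romeo Giulia"]

for carName in carNames {
    print(carName)
}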
Hopefully, this code doesn’t look unfamiliar to you at all. All we do here is iterate over each
element in an array using a standard Swift for loop.
Now imagine that we’d like to receive all data from a URL where it’s possible (or even required)
to process each line on its own. For example, we could be dealing with a document that looks
like this:
year,make,model,body_styles
2023,Acura,ILX,"[""Sedan""]"
2023,Acura,Integra,"[""Hatchback""]"
2023,Acura,MDX,"[""SUV""]"
2023,Acura,RDX,"[""SUV""]"
2023,Acura,TLX,"[""Sedan""]"
2023,Alfa Romeo,Giulia,"[""Sedan""]"
2023,Alfa Romeo,Stelvio,"[""SUV""]"
...
This kind of document is called a CSV (Comma Separated Values) document where each
line in the document represents an entry in a spreadsheet or database. For each entry, the
information fields are separated by commas. It’s a common kind of file to use when you want
to exchange tabular data between two people that might not use the same spreadsheet editor,
for example.
A logical way to process a file like the above would be to go line by line, extract the fields into
a model object, and essentially keep doing this until the end of the file.
I’m not saying this is the best or most efficient way to process a CSV file; it’s just an example that
happens to fit nicely with what I’d like to show you in this section. In fact, the implementation
you’re about to see is incredibly naive, to the point that it will certainly have some bugs when
applied to a different data set.
We could write the following to parse the file you just saw:
import Foundation

struct Car {
    let year: String
    let make: String
    let model: String
    let body_styles: String
}

// Synchronously load the CSV from the chapter's local sample server
let csvURL = URL(string: "http://127.0.0.1:8080/cars.csv")!
let contents = try String(contentsOf: csvURL)

var cars = [Car]()

for line in contents.components(separatedBy: "\n") {
    let components = line.components(separatedBy: ",")

    guard components.count == 4 else {
        continue
    }

    let car = Car(year: components[0],
                  make: components[1],
                  model: components[2],
                  body_styles: components[3])
    cars.append(car)
}

print(cars)
Again, the code above is far from a completely valid implementation of a proper CSV parser.
The point of this example is that we have a file where each line in the file represents an instance
of our data model. So what we end up doing is parsing our file line by line, extracting what we
need, and we create our model objects using that input.
Tip:
Follow along with the examples in this section by running the local server in the code
bundle for this chapter.
If you run the sample app for this chapter you can give the code you just saw a spin and play
around with it. You’ll see a warning about synchronously loading data from a URL which we
shouldn’t be doing. That’s fine; we’ll fix that in just a moment.
Now imagine that this file is very large, thousands of lines, and we need to load it from the
internet before we can parse it. We’d probably spend quite some time fetching Data and
transforming that data into a String before we could begin splitting the large string into an
array of strings based on newline characters.
A transfer of data over the internet (or reading a file from disk for that matter) is usually done
by sending chunks of bytes from the server to a client. The client can parse and buffer these
bytes and merge all received chunks together as the result of the work that’s performed. This
ability to send chunks separately is the foundation of the TCP protocol, which is the most
commonly used protocol for network communication.
When we parse bytes as soon as they are loaded, we can actually start passing the data that
we’ve received to a piece of code that’s waiting for the data we’re loading.
For example, we could process lines one by one if we are given them by the network or file
system.
A setup like this would allow us to receive and process every line in the CSV file as soon as the
line is received from the server. That means that we might not have the entire file yet because
the server is still sending data but that’s okay. We’re only interested in parsing one line at a
time until we’ve parsed all lines anyway. The sooner we can begin parsing the first line, the
better.
We can do this by leveraging an asynchronous for loop and a very neat property on URL.
When we have a URL that contains our data, we can access the lines property on it to
asynchronously loop over all of the lines that the URL will load.
Here’s how we would adapt the code you saw earlier to do exactly what I just described:
let csvURL = URL(string: "http://127.0.0.1:8080/cars.csv")!

var cars = [Car]()

for try await line in csvURL.lines {
    let components = line.components(separatedBy: ",")

    guard components.count == 4 else {
        continue
    }

    let car = Car(year: components[0],
                  make: components[1],
                  model: components[2],
                  body_styles: components[3])
    cars.append(car)
}

print(cars)
Notice how little code I changed. The key difference between the before and after code is in
the following line:
A new property on URL called lines allows us to process the contents of the provided URL
line by line, asynchronously. It’s almost like it gets us an array of lines that are present in the
file that’s hosted at the given URL. The key difference? Not all values are loaded by the time
we want to iterate over the sequence of lines.
In other words, as soon as we’ve fetched a full line from the network we can process that line.
After that, we are suspended and waiting for the next line to be loaded. This process repeats
until all lines are loaded and processed by our for loop.
Usually when you write a for loop, you iterate over an object that conforms to Swift’s Se-
quence protocol.
In this case, we’re iterating over an object that conforms to AsyncSequence. This means
that we’re still dealing with a sequence-like object, but not all values in the sequence are
known up front. This is the key difference between AsyncSequence and Sequence.
We can iterate over an asynchronous sequence with a for loop, but we must await each
value as it becomes available.
Notice how there’s also a try written before the await. Certain sequences might encounter
errors while obtaining or producing values. These errors are then surfaced to us using a
throw. Whenever a sequence throws an error, we know that the sequence has ended (with a
failure). The for loop ends, and our program continues.
Handling a for loop’s error looks a little bit as follows:
do {
    for try await line in csvURL.lines {
        // ...
    }
} catch {
    print("something went wrong", error)
}
In essence, handling an error that is thrown by an async sequence is no different from handling
an error that is thrown by a function that you’ve called.
It’s important that you understand that an asynchronous for loop’s control flow is the same
as that of a synchronous one. We can exit out of our loop using a break, if we’d like to skip
processing a given element in the loop we can use the continue keyword, and when we
want to return something from a for loop that’s written within a function we can use return
just like we normally would.
With that knowledge in mind, I’d like you to look at the following code and try to reason about
what happens when this code runs:
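// The two part-files are hypothetical stand-ins for a CSV split into two halves
let csvURLPartOne = URL(string: "http://127.0.0.1:8080/cars_part1.csv")!
let csvURLPartTwo = URL(string: "http://127.0.0.1:8080/cars_part2.csv")!

for try await line in csvURLPartOne.lines {
    // ...
}

for try await line in csvURLPartTwo.lines {
    // ...
}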
In this example, our CSV file is split up into two parts, and we want to asynchronously load and
process both parts in parallel. When we run the code above, your expectation
might be that both for loops run simultaneously because they leverage async sequences.
Sadly, this is not the case.
Because asynchronous for loops have the same rules as normal for loops, the first loop must
complete before the second loop can start. We can add some print statements to make this
concept more clear:
print("about to process...")
Adding these print statements paints a somewhat clearer picture. If I had shown you
this a couple of paragraphs sooner, you would probably have fully expected these print
statements to be printed in order. And that’s entirely correct.
Knowing that, it also makes sense that these for loops do not run in parallel.
So how can we run both loops in parallel then? Well, there’s a few options.
The most obvious solution that only leverages Swift Concurrency features that you’ve seen
before is to give each loop its own task:
// ...

Task {
    for try await line in csvURLPartOne.lines {
        // ...
    }
}

Task {
    for try await line in csvURLPartTwo.lines {
        // ...
    }
}
While this works to run code in parallel, it does prevent us from knowing when both tasks are
completed in a nice way. We can fix this by assigning these two tasks to their own variables
and awaiting their values:
Task {
    let taskOne = Task {
        for try await line in csvURLPartOne.lines {
            // ...
        }
    }

    let taskTwo = Task {
        for try await line in csvURLPartTwo.lines {
            // ...
        }
    }

    try await taskOne.value
    try await taskTwo.value
}
This solution isn’t very clean at all, and we can do much better using tools like async let
and TaskGroup which you will learn about in the Chapter 9 - Performing and awaiting work
in parallel when we talk about parallelizing work and structured concurrency.
For now, your key takeaway should be that two for loops written right underneath each other
act like any other for loop would; the first loop must complete before the second loop can
start.
Oh, one more thing I want to show you. We can put our asynchronous for loops inside of
asynchronous functions. These functions will not return until the for loop that’s inside of
them is completed. As you might expect, the example below is effectively the same as having
two for loops right underneath each other:
func loadOne() async {
    do {
        for try await line in csvURLPartOne.lines {
            // ...
        }
    } catch {
        // handle the error
    }
}

func loadTwo() async {
    do {
        for try await line in csvURLPartTwo.lines {
            // ...
        }
    } catch {
        // handle the error
    }
}

await loadOne()
await loadTwo()
I wanted to include this sample for you because I could see how you might be wondering if
async functions could help you out here.
In the example we started out with, we asynchronously fetched a csv file and we used this file
to populate an array of car objects. Here’s what that looked like:
for try await line in csvURL.lines {
    let components = line.components(separatedBy: ",")

    guard components.count == 4 else {
        continue
    }

    let car = Car(year: components[0],
                  make: components[1],
                  model: components[2],
                  body_styles: components[3])
    cars.append(car)
}

print(cars)
If you look closely, you’ll see that each line object that we receive is expected to be a String
and we transform each string into a Car. If this were all synchronous, you might say we take
an object of type [String] and we transform it to [Car] and you might be thinking “ha I
could do that with a map!”.
What’s nice about async sequences is that they too can be mapped. We could refactor our
code a little bit so that we build an async sequence that includes a mapping step that’s
performed for every value emitted by the sequence. Here’s a sketch of how that could look,
using compactMap so that we can skip malformed lines:
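let cars = csvURL.lines.compactMap { line -> Car? in
    let components = line.components(separatedBy: ",")
    guard components.count == 4 else { return nil }

    return Car(year: components[0], make: components[1],
               model: components[2], body_styles: components[3])
}

for try await car in cars {
    print(car)
}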
That’s pretty neat, right? If you’re used to Combine, this code will look very familiar because
Combine’s map works in a similar way.
For each element that is produced by our async sequence, we can transform that element and
return something new. Applying a map to an async sequence produces a new async sequence
that we must iterate over using await to obtain all of its values.
Other than map, we can use filter, flatMap, and more on AsyncSequence. For a full
overview, I would recommend you take a look at the official documentation for
AsyncSequence.
While it’s great that Apple has added a lines property to URL, and it served as a very neat way
for me to introduce async sequences to you, it’s probably not something that you’ll actually
use much.
What’s far more useful to explore is how we can implement our own async sequences, or to
see how we can have Combine and async sequences interoperate to allow us to get the best
of both worlds.
That’s exactly what we’ll shift our attention to next.
In this section, we’ll look at a different mechanism that also leverages continuations but in
a different way, allowing us to take any asynchronous work that produces values over time
(location updates, incoming websocket messages, and more), and transform that work into
something that produces an async sequence.
To do this, we can leverage something called an AsyncStream. Async streams are a contin-
uation based mechanism that allows us to have our continuation produce multiple values
which is great for tasks that have progress, or can produce a large number of values over
time.
An AsyncStream instance conforms to the AsyncSequence protocol which means that
anybody that is given our async stream will be able to iterate over our stream asynchronously.
This makes AsyncStream very useful if you’re interested in building your own objects that
produce values over time.
In this section, we’ll take a look at two objects that can be built using AsyncStreams. Of
course, these aren’t the only two kinds of objects you can build, but I’ve personally found these
to be interesting exercises in using async streams where the first example is a nice introduction,
and the second is more complex in terms of responding to events like cancellation.
The first object we’ll build is a location provider object that uses an async stream to provide
updates on the user’s location. The second object is an object that receives incoming messages
from a websocket, and nicely closes the connection whenever the async stream is cancelled
or otherwise goes out of scope.
Before we jump in and start building things with streams, let’s explore the basics of async
streams first.
We’ll start by exploring the first kind of async stream; the one with an unfolding closure that is
called repeatedly:

// produceValue(shouldTerminate:) stands in for any async function that returns
// the next value for the stream, or nil to indicate that the stream has ended
func produceValue(shouldTerminate: Bool) async -> String? {
    if shouldTerminate {
        return nil
    }

    return UUID().uuidString
}

let stream = AsyncStream(unfolding: {
    return await produceValue(shouldTerminate: Bool.random())
})
When we create our async stream, nothing will happen initially. The stream is created, and we
can start iterating over it using an async for loop whenever we’d like.
Once we start our iteration and we start awaiting our first value, the async stream’s unfolding
closure is called. This closure is marked async, which means that we’re allowed to do
asynchronous work from within our unfolding closure. In the example code, we await a call
to a function called produceValue(shouldTerminate:).
We expect that function to fetch or process some data to eventually produce a value that’s
returned. This returned value is then returned from the unfolding closure. In turn, the value
that we return from the unfolding closure is passed to our for loop so that we can use it.
Once our for loop body completes and we ask the stream for its next value, the unfolding
closure is called again. We then call our async function again to produce our next value, a
value is returned, and that value is then provided to our for loop.
This process repeats until we return nil from our unfolding closure. Once we return nil, we
indicate that the stream has ended and we cannot produce a next value anymore; all work is
complete.
If we want to be able to perform a task that might throw an error in our unfolding closure, we
can use an AsyncThrowingStream instead of an AsyncStream. They function identi-
cally except a throwing stream can throw errors and it must be iterated over using for try
await value in stream instead of a plain for await value in stream.
In addition to providing values, we can also respond to cancellation for our stream. This can
be done by passing an onCancel closure to the AsyncStream’s initializer:
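let stream = AsyncStream(unfolding: {
    return await produceValue(shouldTerminate: Bool.random())
}, onCancel: {
    print("stream was cancelled")
})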
The onCancel closure is called whenever the task that owns your async stream is can-
celled.
Note that breaking out of the for loop that consumes your async stream does not count as
a cancellation event. If you stop your loop, you will find that the current iteration of your
unfolding closure will continue to run, but it won’t be called again after that because your for
loop is no longer asking the stream for values.
The unfolding closure approach is particularly useful when you want to perform and await
some async work that can be contained within your unfolding closure. It’s not particularly
useful when you’re bridging a delegate based API like CLLocationManagerDelegate
because there’s no way for us to yield a new value for our stream from outside of the unfolding
closure.
Luckily, there’s a second way to build an AsyncStream that provides us more flexibility and
control (but is also a little bit more complex to manage). Let’s look at an example:
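// A sketch; the print statements illustrate when the closure runs
let stream = AsyncStream<Int> { continuation in
    print("stream closure started")

    continuation.yield(1)
    continuation.yield(2)
    continuation.yield(3)
    continuation.finish()

    print("stream closure finished")
}

print("starting iteration")

for await value in stream {
    print(value)
}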
A key difference between the unfolding closure and the continuation based approach is
that the continuation gives us full control over how and when we produce values for our
continuation. In this example, we immediately send the values 1, 2, and 3 over the stream
and then we finish the stream. We do this by telling the continuation to yield values and
once all values are yielded we call finish on the continuation to complete the stream.
This shows that we can produce and send values whenever we decide we want to produce
and send values rather than waiting for a closure to be called.
By default, all values that we yield are buffered. This means that even though we
immediately yield all values one after the other and then complete the stream, anybody
that chooses to iterate over our stream will still receive all of the values that were yielded
before the iteration started.
If we run this code, the output will look like this:
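stream closure started
stream closure finished
starting iteration
1
2
3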
You can tell that the closure that we passed to the AsyncStream initializer was invoked
immediately, and that our calls to yield and finish all executed before we started our for
loop to iterate over the values yielded by our stream.
In some cases, this behavior is exactly what you want; you might be interested in all the
results that our stream ever yielded. For example, when you’re iterating over messages from
a websocket where each message you receive is a chat message that should be presented to
the user.
When you’re leveraging a continuation based AsyncStream to implement a location
provider, you’re likely not interested in all the locations that you’ve recorded for the user
before you started iterating over your stream. It’s more likely that you’re interested in the last
known location only.
When you’re only interested in the most recent n items that were sent by your AsyncStream,
you can give it a buffering policy through its initializer:
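let stream = AsyncStream<Int>(bufferingPolicy: .bufferingNewest(1)) { continuation in
    continuation.yield(1)
    continuation.yield(2)
    continuation.yield(3)
    continuation.finish()
}

for await value in stream {
    print(value) // only prints 3; the earlier values were discarded
}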
In the example above, I’ve provided a buffering policy of bufferingNewest(1). This will
make it so that the async stream buffers exactly one element and discards any values that
were yielded before the buffered value.
A buffering policy like this is useful when you want to make sure that you always receive the
last yielded value (if any) before receiving any new values. This is a very reasonable policy for
the location provider that we’ll build later.
You can also provide a buffering policy of bufferingNewest(0) which would discard any
values that weren’t received by a for loop immediately. This could be a useful buffering policy
for an async stream that yields values for events like when the user rotates their device or
taps on a button. You’re usually only interested in these kinds of events as soon as they occur,
but once they’ve occurred they lose all relevance; you wouldn’t want to start iterating over
a stream only to be told that the user rotated their device 5 minutes ago; you’ve probably
already handled that rotation event somehow.
It’s also possible to provide numbers other than zero or one for your buffering policy. You
might be interested in getting the last four or five values from your stream instead. You can
simply provide a number that fits your requirements and you’re good to go.
In addition to buffering the newest values received by your stream, you can also use a
bufferingOldest policy. This will keep the first n values that were not yet received by a
for loop, and discard any new values that are received until space opens up in the buffer.
The sample code for this chapter contains an AsyncStreams object that you can play around
with to see the impact of using different buffering policies on what ends up in your async for
loop.
In addition to the ability to buffer values, an AsyncStream allows us to keep a reference to
our continuation outside of the closure that we pass to our AsyncStream initializer. This
allows us to yield values for our stream from outside of the initializer. For example, we can
yield values in response to certain delegate methods being called on an object that created
an AsyncStream and stored the stream’s continuation in a property.
To demonstrate how this works, let’s move on to our first example of an AsyncStream in
action and build an async stream based location provider.
import CoreLocation

class LocationProvider: NSObject, CLLocationManagerDelegate {
    private let locationManager = CLLocationManager()

    override init() {
        super.init()
        locationManager.delegate = self
    }

    func requestPermissionIfNeeded() {
        if locationManager.authorizationStatus == .notDetermined {
            locationManager.requestWhenInUseAuthorization()
        }
    }

    func startUpdatingLocation() {
        requestPermissionIfNeeded()
        locationManager.startUpdatingLocation()
    }
}
So far so good, this code shouldn’t contain any surprises; we’re only setting up the very basics
of what we need.
When we want to make the user’s current location available through an async stream, we’ll
need to send locations from our locationManager(_:didUpdateLocations:) delegate
method. We can leverage a continuation-based AsyncStream that we set up in our
startUpdatingLocation() method by storing the continuation in our class and using
it in locationManager(_:didUpdateLocations:).
The following code shows how we can do this:
private var continuation: AsyncStream<CLLocation>.Continuation?

func startUpdatingLocation() -> AsyncStream<CLLocation> {
    return AsyncStream { continuation in
        self.continuation = continuation

        // ...
        requestPermissionIfNeeded()
        locationManager.startUpdatingLocation()
    }
}

func locationManager(_ manager: CLLocationManager,
                     didUpdateLocations locations: [CLLocation]) {
    for location in locations {
        continuation?.yield(location)
    }
}
As usual, you’ll need to run this from a new Task or SwiftUI’s task view modifier since we’re
doing async work by awaiting values in the for loop. The sample app for this chapter leverages
a Task that is started in response to a button tap.
There are a couple of things we don’t handle currently:
• We don’t stop updating locations when the task that wraps our for loop is cancelled; for
example when we start the iteration in a SwiftUI task view modifier and the view goes
away.
• We can only call startUpdatingLocation() once. Calling it twice will result in
the stream that we created first never ending and never receiving values.
The first item on the list can be implemented using a built-in property on continuations that
allows us to run a closure whenever the task that encloses the work we’re doing is cancelled.
The second item on the list requires us to make some changes to the way the location provider
object is implemented.
We’ll tackle the second point first because it changes the structure of our location provider a
bit. By doing that work first, we can implement a solution for the first issue in a nicer way. But
before we start fixing problems, let’s take a look at the issue at hand in a demo:
let provider = LocationProvider()
let seq1 = provider.startUpdatingLocation()
let seq2 = provider.startUpdatingLocation()

Task {
    for await location in seq1 {
        print("seq1", location)
    }
}

Task {
    for await location in seq2 {
        print("seq2", location)
    }
}
The code snippet above illustrates problem two from the list above; we can’t call startUp-
datingLocation() more than once.
The code above is included in the sample app from the code bundle that’s available alongside
this book. When you run this example on an iOS device, you’ll find that only seq2 receives
values; seq1 never receives any values.
When we examine the implementation of the startUpdatingLocation() method, that
makes a lot of sense; every call creates a brand new stream and overwrites the stored
continuation, so only the most recently created stream receives values. We can work around
this by creating the stream once, caching it, and returning the cached stream for every
subsequent call:

private var stream: AsyncStream<CLLocation>?

func startUpdatingLocation() -> AsyncStream<CLLocation> {
    if let stream {
        return stream
    }

    stream = AsyncStream { continuation in
        self.continuation = continuation

        requestPermissionIfNeeded()
        locationManager.startUpdatingLocation()
    }

    return stream!
}
The location provider is now an actor to make sure that we can call our startUpdatingLocation() method concurrently without any problems. I've added a property stream to my actor so I can cache the stream I've created for reuse later.
The rest of the startUpdatingLocation() method should speak more or less for itself. After creating a stream we assign it to the stream property, and we return it.
Because the LocationProvider object is now an actor, we also need to change the way
that we conform to CLLocationManagerDelegate a bit:
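The full conformance isn't reproduced here. Because CLLocationManagerDelegate requires NSObjectProtocol, which an actor can't conform to directly, one possible shape is a small NSObject based proxy that forwards locations into the actor; note that this is a sketch of my own rather than the book's exact sample code:

// Hypothetical proxy that forwards delegate callbacks into the actor.
final class LocationDelegateProxy: NSObject, CLLocationManagerDelegate {
    var onLocation: (@Sendable (CLLocation) -> Void)?

    func locationManager(
        _ manager: CLLocationManager,
        didUpdateLocations locations: [CLLocation]
    ) {
        guard let location = locations.last else { return }
        onLocation?(location)
    }
}

The actor installs this proxy as the CLLocationManager's delegate and yields every forwarded location on its stored continuation. With the conformance sorted, using the provider looks like this: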
Task {
    let seq1 = await locationProvider.startUpdatingLocation()

    for await location in seq1 {
        print("seq1", location)
    }
}

Task {
    let seq2 = await locationProvider.startUpdatingLocation()

    for await location in seq2 {
        print("seq2", location)
    }
}
The change we’ve had to make is that we should now await the call to startUpdatin-
gLocation() because calling that function is now a message in the LocationProvider
actor’s mailbox.
When you run this code, you’ll find that we are now receiving locations in both of our for
loops. This is really nice, but when we closely inspect what’s happening it seems that there
might be an issue with our code...
Notice how the output from our AsyncStream alternates between seq1 and seq2. That's great, exactly what we want. But take a look at both the GPS coordinates and the time in each pair of outputs. You can ignore the xxxx part in the coordinates; I've redacted a part of the output, but you should still be able to see that the location that's passed to both sequences isn't the same within each pair. More importantly, notice how the time also isn't the same for each output. They're about a second apart every time.
It looks like both of our for loops do receive values, but they don't share the output from the stream. Instead, they consume values from the stream by taking turns somehow, and once a value is received by one loop, the other loop will never receive that value.
This is neither a bug on our end nor unexpected. Async sequences like AsyncStream do not support sending values to multiple iterators. In a framework like Combine, you could leverage an operator like .share() to allow multiple subscribers to receive output from a single publisher.
Unfortunately, async sequences do not have a similar mechanism at this time. There is some work being done in the Swift async-algorithms package to add support for an equivalent of Combine's share operator, but at the time of writing this book that work is not yet completed, and there are a few PRs open that implement different versions of sharing mechanisms.
We can leverage Combine to work around this limitation of async sequences for our location
provider as you’ll see in the next chapter. I will not cover other workarounds like trying to
maintain a collection of continuations, implementing your own share operator or similar
solutions because a correct implementation of a sharing feature is not trivial to achieve, and I
do not want you to rely on incomplete or incorrect workarounds for a problem that’s being
solved in one of Apple’s packages (like async-algorithms in this case).
Now that we know that supporting multiple iterators for a single AsyncStream doesn’t
work, we can revert our code back to what it was before; a simple implementation that will
overwrite the cached continuation for every call to startUpdatingLocation().
We’ll accept that we can’t have multiple iterators for now.
If at any point the task that owns our async for loop is cancelled, we’d like to stop monitoring for
location updates. To do this, we can assign a closure to our continuation’s onTermination
property:
func startUpdatingLocation() -> AsyncStream<CLLocation> {
    AsyncStream { continuation in
        continuation.onTermination = { [weak self] _ in
            // Called when the consuming task is cancelled or the stream ends.
            self?.locationManager.stopUpdatingLocation()
        }

        requestPermissionIfNeeded()
        locationManager.startUpdatingLocation()
        self.continuation = continuation
    }
}
Whenever the task that owns our for loop is cancelled or otherwise ends the onTermination
closure is called. We can perform cleanup, like stopping our location updates, from within
this closure.
One last thing we should take into account is to end our async stream when our LocationProvider is deinitialized. To do this, we can implement a deinit method on our provider and call finish() on our continuation:
deinit {
// ...
continuation?.finish()
}
}
This will end the stream for anybody that's iterating over it. If we didn't do this, our LocationProvider could be deinitialized while the for loop that's iterating over our async stream would never end. This would essentially cause the task that's iterating over the stream to be stuck forever since nothing is telling it the stream ended.
With this in place, we’re able to start using our LocationProvider and know that it handles
all the important bits and pieces other than having multiple iterators for the same stream.
We’ll look at a Combine based solution that’s bridged into Swift Concurrency in the next
chapter, but for now I’d like to move on to our next example; connecting to a websocket and
listening for incoming messages using an async stream. For this example, we’ll leverage the
third approach to creating async streams using the makeStream(of: bufferingPol-
icy:) method. Using this method relies on the exact same mechanisms that the second
approach uses, but it’s slightly more convenient to use in comparison. For that reason, I won’t
be explaining all the details unless they’re different from what you’ve just learned.
let url = URL(string: "ws://127.0.0.1:9090")!
let socketConnection = URLSession.shared.webSocketTask(with: url)
socketConnection.resume()

func setReceiveHandler() {
    socketConnection.receive { result in
defer { setReceiveHandler() }
do {
let message = try result.get()
switch message {
case let .string(string):
print("incoming message", string)
case let .data(data):
print("incoming message", data)
@unknown default:
print("unkown message received")
}
} catch {
// handle the error
print(error)
}
}
}
setReceiveHandler()
In the code above, I create a URL that points to a websocket that’s running locally. In this
book’s sample code, you’ll find a folder named local-server/simple_socket that
contains a small node.js script that starts a websocket server when you run it. More detailed
instructions for running the code can be found in the README.md file.
Run the script locally by navigating to the simple_socket folder in your terminal and
typing the following command:
node index.mjs
The websocket server this creates will accept incoming connections and respond by sending
a message to the connected client on a regular interval. This is nothing fancy but sufficient for
our testing purposes.
To connect to our local websocket server, we can use URLSession and its webSocketTask
method. Just like with a regular URLSession data task, we need to call resume() on the
created websocket task to actually connect to our server.
Once connected, we need to actually start receiving messages from our websocket. We do this by calling receive(_:) on the websocket task. Whenever the websocket receives an incoming message from the server, our closure is called, and we need to register a new
receive closure to receive the next message.
If we don’t register a new closure, we’d receive a single message and ignore any further
incoming messages.
While Apple didn’t add an ability to iterate an async sequence to receive websocket messages,
they did provide an async version of the receive(_:) message. If we refactor the code
from before to leverage the async receive(_:) method, the code would look as follows:
func setReceiveHandler() async {
    do {
        let message = try await socketConnection.receive()

        switch message {
case let .string(string):
print(string)
case let .data(data):
print(data)
@unknown default:
print("unkown message received")
}
} catch {
print(error)
}
await setReceiveHandler()
}
The code isn’t very different, but at least it uses a little bit of Swift Concurrency.
The code above could be improved in two ways:
1. We should only start awaiting a new message if the socket connection is still open
2. We should only start awaiting a new message if we didn’t receive an error from the
socket
The current implementation will recursively call setReceiveHandler() even if the socket
connection is closed; that’s not ideal.
We can leverage a while loop and two checks: one to see if the connection should still be considered active, and one to see if the connection with the server has been closed:
var isActive = true

while isActive && socketConnection.closeCode == .invalid {
    do {
let message = try await socketConnection.receive()
switch message {
case let .string(string):
print(string)
case let .data(data):
print(data)
@unknown default:
print("unkown message received")
}
} catch {
print(error)
isActive = false
}
}
}
Whenever we encounter an error, we set isActive to false to stop our while loop. If the
socket connection’s closeCode changes from .invalid (not closed) to anything else, we
also exit out of our while loop.
Converting the process of receiving incoming messages to be driven by an async sequence
means that in an ideal situation we’d be able to write something like the following to receive
incoming websocket messages:
do {
    for try await message in socketConnection.stream {
        // handle each incoming message
    }
} catch {
    // handle errors
}
We can leverage AsyncStream to approach a solution that works quite nicely for a use case
like this.
The simplest thing we can attempt to write is a version of setReceiveHandler that’s
driven by a while loop wrapped in a computed property on URLSessionWebSocketTask
by extending it:
typealias WebSocketStream = AsyncThrowingStream<URLSessionWebSocketTask.Message, Error>

extension URLSessionWebSocketTask {
    var stream: WebSocketStream {
        let (stream, continuation) = WebSocketStream.makeStream(
            of: URLSessionWebSocketTask.Message.self)

        Task {
            var isAlive = true

            while isAlive && self.closeCode == .invalid {
                do {
                    let value = try await self.receive()
                    continuation.yield(value)
                } catch {
                    continuation.finish(throwing: error)
                    isAlive = false
                }
            }
        }

        return stream
    }
}
Notice how this code is quite similar to the code we wrote earlier for our location provider.
The key difference here is that the makeStream(of:) method creates both a continuation
and a stream for us which makes managing our async stream easier.
The code above would allow us to await incoming messages through the websocket task’s
stream property as follows:
do {
for try await message in socketConnection.stream {
// handle incoming messages
}
} catch {
// handle errors
}
This actually works quite well; it's not perfect, but we can receive messages from our websocket connection. You can verify this using the following code or by running the relevant example from this chapter's sample app.
let url = URL(string: "ws://127.0.0.1:9090")!
let socketConnection = URLSession.shared.webSocketTask(with: url)
socketConnection.resume()

Task {
    try await Task.sleep(for: .seconds(5))
    socketConnection.cancel(with: .goingAway, reason: nil)
}

Task {
    do {
        for try await message in socketConnection.stream {
            // handle incoming messages
        }
    } catch {
        // handle error
    }

    print("all messages have been received")
}
After running the code above, you'll notice that a couple of messages are printed to the console, and then our final message is printed to indicate that all messages have been received and handled.
If we abstract this code into a class, we can make it more reusable and easier to work with.
It will also help surface an issue with the code that we’ll need to solve in order to correctly
handle closing our websocket connection when an instance of our class is deinitialized.
For example, we could use something like the following class as a simulation of how you might
use a websocket connection in a real app:
class SocketConsumer {
func setupConnection() {
let url = URL(string: "ws://127.0.0.1:9090")!
let socketConnection = URLSession.shared.webSocketTask(with: url)
socketConnection.resume()
Task {
do {
for try await message in socketConnection.stream {
print(message)
}
} catch {
// handle errors
}
}
}
}
First, this example is highly simplified, but it does do a good job of showing you how we would
set up a socket connection and start iterating its stream for messages. The problem here,
though, is that when we deallocate our socket consumer, the socket connection won’t close.
Here’s what deallocating the socket consumer could look like.
What you can see in the code above is a simple view that has a button. When we tap the button, we start iterating our websocket for about two seconds and then we leave our function, which means that nothing is holding on to the consumer anymore.
At that point, we would expect the socket connection to close and to no longer receive mes-
sages. Instead, we keep receiving messages because the task that actually grabs the socket
connection stream and iterates the messages, which is defined in SocketConsumer, does
not actually end. So what we really need to do is find a way to either stop the task when
SocketConsumer goes away or to send a connection closing message to the web socket
when SocketConsumer is deallocated. So let’s go ahead and add that functionality to our
SocketConsumer right now.
class SocketConsumer {
    // The socket connection is now stored in a property so we can reach it in deinit.
    var socketConnection: URLSessionWebSocketTask?

    deinit {
        socketConnection?.cancel(with: .goingAway, reason: nil)
    }
When you add this one piece of code to the SocketConsumer, we can make sure that whenever the SocketConsumer instance is no longer used by anybody, we actually clean up after ourselves and close our socket connection. So if you add this code to the SocketConsumer definition that you've seen before, what you'll find is that we actually close our connection at the right time.
It’s this kind of clean up that makes working with things like asynchronous sequences kind of
complicated because the Task objects that we use to iterate our asynchronous sequences
don’t automatically get cleaned up because tasks only end when the code that they run ends.
In this case, that means our task would only end when it’s either cancelled or if the async
stream we’re iterating ends.
So unless we either end the stream or end the task by cancelling it, depending on whether we
have access to the source of the sequence or not, our tasks would run forever resulting in a
memory leak.
In Summary
In this chapter, you’ve learned a lot about Swift Concurrency’s async sequences. You’ve
seen that async sequences provide a very powerful and straightforward to read API to asyn-
chronously receive and process values that are generated by an async process. We started of
by looking at the URL.lines method which provides an interesting and powerful way to
begin exploring async sequences. You learned that async sequences are used a lot like regular
for loops with the main difference being that an async loop requires you to await values in
the async sequence.
After your introduction to async sequences, you saw how you can use an AsyncStream
object to build a simple wrapper around a location manager and a websocket connection so
you can asynchronously process values from these objects using an async for loop.
You also learned that the location provider cannot share its output to multiple for loops. In the
next chapter you’ll learn how to fix this by mixing Combine and async sequences seamlessly.
6.2. That said, I usually try a pure concurrency approach first. And with every Swift release I
rely less and less on Combine. I want you to keep this in mind while reading this chapter; it’s
about providing a full picture of the tools that are available to you.
As a refresher, here's the AsyncStream based location provider that we ended up with in the previous chapter:

import CoreLocation

class LocationProvider: NSObject, CLLocationManagerDelegate {
    let locationManager = CLLocationManager()
    var continuation: AsyncStream<CLLocation>.Continuation?

    override init() {
        super.init()
        locationManager.delegate = self
    }
deinit {
continuation?.finish()
}
func requestPermissionIfNeeded() {
if locationManager.authorizationStatus == .notDetermined {
locationManager.requestWhenInUseAuthorization()
}
}
    func startUpdatingLocation() -> AsyncStream<CLLocation> {
        AsyncStream { continuation in
            requestPermissionIfNeeded()
            locationManager.startUpdatingLocation()
            self.continuation = continuation
        }
    }

    func locationManager(
        _ manager: CLLocationManager,
        didUpdateLocations locations: [CLLocation]
    ) {
        guard let location = locations.last else { return }
        continuation?.yield(location)
    }
}
We’re leveraging an AsyncStream to send the user’s location to whoever iterates over that
stream. We saw that an async stream can only have one object iterating over it at a given
time, so this solution isn’t great if we want a single instance of LocationProvider to be
the main source of truth for the user’s location.
Combine has a construct called a Subject which is used for similar reasons as an Async-
Stream. It allows us to send values to subscribers of our Subject as needed. For example,
a Subject can deliver a user’s location to subscribers whenever a new location becomes
available.
A key difference between an AsyncStream and a Subject is that a Subject can have
multiple subscribers. This means that we can leverage a single Combine subject as a source
of truth for our application and it will deliver values to anybody that’s interested.
Before I show you how to bridge a Combine publisher like Subject into Swift Concurrency
as an AsyncSequence I want to show you what our location provider would look like if it
was written using a Combine Subject.
First, we’d replace the continuation property with a property called subject which will
be Combine CurrentValueSubject:
let subject = CurrentValueSubject<CLLocation?, Never>(nil)

// ...
}
A CurrentValueSubject in Combine takes two generic arguments. One for the type of
objects that it will emit, and another one for the type of error that it can produce. In our case,
the CurrentValueSubject cannot produce errors so we use Never as our failure type.
We must also provide an initial value for our CurrentValueSubject so we provide nil
since we don’t immediately have a location available.
The init and requestPermissionIfNeeded methods don't change compared to what we had before. The startUpdatingLocation() method will update to return our CurrentValueSubject instead of an AsyncStream for the time being. We'll strip out any nil values using compactMap, and we'll hide the resulting type from callers of startUpdatingLocation() by erasing our subject to AnyPublisher:
func startUpdatingLocation() -> AnyPublisher<CLLocation, Never> {
    locationManager.startUpdatingLocation()
return subject
.compactMap({ $0 })
.eraseToAnyPublisher()
}
And to send our user’s location over the subject, we should to update our didUpdateLo-
cations delegate method as follows:
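A minimal sketch of that update, assuming we forward the most recent location:

func locationManager(
    _ manager: CLLocationManager,
    didUpdateLocations locations: [CLLocation]
) {
    guard let location = locations.last else { return }

    // Deliver the latest location to every subscriber of the subject.
    subject.send(location)
}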
The resulting code can now be used as follows. Remember that we’re still looking at the
Combine version of this code before we bridge it over to an async sequence. This means that
we’ll subscribe to our publisher to receive values instead of using an async for loop.
let locationProvider = LocationProvider()
var cancellables = Set<AnyCancellable>()
let seq1 = locationProvider.startUpdatingLocation()

seq1.sink(receiveValue: { location in
print(location)
}).store(in: &cancellables)
In Combine we receive values from a publisher by subscribing to it. One way to do this is to
call sink on a publisher which will allow us to start receiving values from a publisher. The
sink method produces an AnyCancellable object that we must retain somewhere for as
long as we want to keep our subscription alive. Usually this will be done in a property owned
by the object that starts the subscription, like a view model or view controller. Whenever that
object is deallocated we want the cancellable to be deallocated as well because that will tear
down the Combine subscription so we’re no longer observing our publisher’s output.
This is very different from how Swift Concurrency and AsyncSequence work with regards
to lifecycle management but more on that later.
You’ve now seen a one on one translation of the async stream based location provider into the
world of Combine with one key difference. We have a single Combine subject that anybody
can subscribe to. This is something that we couldn’t achieve with ant built-in mechanisms on
AsyncStream.
We can now turn this Combine publisher into an async sequence with just one extra line of
code and a minor change in our startUpdatingLocation method’s return type:
func startUpdatingLocation() -> AsyncPublisher<AnyPublisher<CLLocation, Never>> {
    locationManager.startUpdatingLocation()
return subject
.compactMap({ $0 })
.eraseToAnyPublisher()
.values
}
Bridging also works in the other direction. Sometimes you'll want to call an async function from within a Combine pipeline, for example when a debounced search query should result in a network call. Consider the following setup:

class Networking {
    func getResults(forQuery query: String) async throws -> [SearchResult] {
// make a network call...
}
}
class SearchService {
@Published var query: String = ""
@Published var results = [SearchResult]()
func setup() {
$query
.debounce(for: 0.3, scheduler: DispatchQueue.main)
.map({ query in
// transform the query into a network call?
The tricky part here is the map. What should we be doing in there? Your initial thought might
be to write something like the following:
.map({ query in
return try await network.getResults(forQuery: query)
})
Unfortunately, this doesn't compile; Combine's map expects a synchronous, non-throwing closure, so we can't await our network call in it directly. One option is to have the networking layer expose a publisher and use switchToLatest:

$query
.debounce(for: 0.3, scheduler: DispatchQueue.main)
.map({ query in
return self.network.getResultsPublisher(forQuery: query)
})
.switchToLatest()
.assign(to: &$results)
The map returns a publisher object and we leverage switchToLatest so we take the output
from the most recently mapped publisher and use that output as the output we’re working
with. So in this case, that means that switchToLatest() creates a publisher that outputs
[SearchResult] objects.
Since we’d like to be able to call async functions from our map, we have to find a way to turn
an async function call into a publisher. To do this, we can leverage Combine’s Future and
Swift Concurrency’s Task:
$query
.debounce(for: 0.3, scheduler: DispatchQueue.main)
.map({ query in
Future { promise in
Task {
do {
                    let results = try await self.network.getResults(forQuery: query)
promise(.success(results))
} catch {
promise(.success([]))
}
}
}
})
.switchToLatest()
.assign(to: &$results)
A Future in Combine takes a closure that receives a Promise closure. We can perform our
async work inside of a Task since a Future doesn’t create an async context for us, and we
can then fulfill the Promise by calling it with the result of our work.
The Future will then emit the result that we've given it as its value, and after that it completes. This makes it a perfect tool to build a bridge between our async function in the networking layer and Combine's Publisher focused world.
It’s also possible to use this pattern inside of a flatMap if you’re always interested in the
result of every change in the search query.
A little bit earlier I briefly mentioned a major difference between the lifecycle of a Combine
subscription and that of an AsyncSequence iteration. Let’s explore this difference next.
func nonStoredCancellable() {
    let cancellable = URLSession.shared.dataTaskPublisher(
        for: URL(string: "https://practicalswiftconcurrency.com")!
    )
    .sink(receiveCompletion: { _ in
        print("received completion")
    }, receiveValue: { _ in
        print("received response")
    })
}
If you call this function, you'll notice that no values are ever printed because the AnyCancellable that's stored in the local cancellable constant is deallocated once the nonStoredCancellable() function ends.
If you move the storing of the AnyCancellable to be outside of the function and make it a
property of a class or struct, you’ll find that the network call suddenly works:
func storedCancellable() {
    cancellable = URLSession.shared.dataTaskPublisher(
        for: URL(string: "https://practicalswiftconcurrency.com")!
    )
    .sink(receiveCompletion: { _ in
        print("received completion")
    }, receiveValue: { _ in
        print("received response")
    })
}
}
Notice that the only thing that’s different compared to what we had before is where we store
the AnyCancellable.
I tend to call this behavior that Combine has "safe by default". What I mean by that is that a Combine subscription will be torn down unless it is actively being kept alive through an AnyCancellable. In practice, this means that as long as your code is free from retain cycles, you won't have any lingering Combine subscriptions that are active even though nobody is interested in them anymore.
We can demonstrate this principle more in-depth by building an object that allows us to publish values at arbitrary times. This lets us see that as soon as the object that holds our cancellable is deallocated, we don't receive any more values.
The following code defines a simple sample object that takes a Combine Subject from an external source, and it has a method to subscribe to the provided Subject:
class SubjectDrivenExample {
    let subject: CurrentValueSubject<Int, Never>
    var cancellable: AnyCancellable?

    init(subject: CurrentValueSubject<Int, Never>) {
        self.subject = subject
    }

    deinit {
        print("subject driven example deinit")
    }

    func subscribe() {
        cancellable = subject.sink(receiveValue: { value in
            print("received \(value)")
        })
    }
}
We can then use this code from an object that owns our Subject and the instance of SubjectDrivenExample. This allows us to subscribe to the subject, send values, and deallocate SubjectDrivenExample as needed.
To do that, we can use an ExampleRunner object in a view so we can call start, end, and sendValue in response to button presses. The ExampleRunner is a thin wrapper that only interacts with our SubjectDrivenExample, and it allows us to perform various operations on our subject:
class ExampleRunner {
let subject = CurrentValueSubject<Int, Never>(0)
var example: SubjectDrivenExample?
func start() {
example = SubjectDrivenExample(subject: subject)
example?.subscribe()
}
func end() {
example = nil
}
func sendValue() {
subject.send(Int.random(in: 0..<Int.max))
}
}
Button("End") {
runner.end()
}
Button("Send Value") {
runner.sendValue()
}
}
}
}
Once we call start on the example runner, we can call sendValue and see in the Xcode
console that a value was received. When you call end on the runner, you’ll see a message
in the console that says that the subject driven example was deinitialized and when you
call sendValue after that nothing is shown in the console. After all, there’s no more active
subscription to receive values. Call start again and you’ll be able to see received values
again.
As you can see, we don’t need to do anything to make sure we don’t have lingering subscrip-
tions; the AnyCancellable takes care of that for us.
Let’s compare this setup to one that leverages an async sequence instead. I’ll start by defining
an object we’ll use to iterate over an async stream that’s derived from our subject. It’s similar
to the SubjectDrivenExample from before:
class SequenceDrivenExample {
    let subject: CurrentValueSubject<Int, Never>

    init(subject: CurrentValueSubject<Int, Never>) {
        self.subject = subject
    }

    deinit {
        print("sequence driven example deinit")
    }

    func subscribe() {
        Task { [weak self] in
            guard let self else {
                return
            }

            for await value in self.subject.values {
                print("received \(value)")
            }
        }
    }
}
The flow of this code is the same as it was before. The key difference is that in the subscribe() method we don't subscribe to our publisher but we start iterating its values sequence. This acts and behaves identically to any other sequence we can iterate over, but we still have the convenience of sending values over a subject, which makes it easy to drive our test.
We can use this SequenceDrivenExample from the ExampleRunner that we made
earlier by changing the example property to be our new example instead of the subject
driven one:
class ExampleRunner {
let subject = CurrentValueSubject<Int, Never>(0)
var example: SequenceDrivenExample?
func start() {
example = SequenceDrivenExample(subject: subject)
example?.subscribe()
}
func end() {
example = nil
}
func sendValue() {
subject.send(Int.random(in: 0..<Int.max))
}
}
If you run the example now and you call start, sendValue, and end you’ll notice that we
start the observation just fine, we can receive values just fine, but when we call end we don’t
see the deinitialization message printed. And when we call sendValue after calling end we
still see values printed to the console.
It appears we’re stuck with a leak of sorts and that’s unfortunate.
The reason we have this leak is that even though we have a weak self capture on the Task, we make the reference strong again in our guard. Since the Task closure runs immediately, self will never be nil, and after that we perform our loop with a strong self instance captured.
We can resolve this retain cycle by capturing our subject instead of self:
func subscribe() {
Task { [subject] in
for await value in subject.values {
print("received \(value)")
}
}
}
Now try calling start and sendValue, followed by a call to end. You'll notice that you now do see the deinitialization message printed in the console. Great! Now call sendValue again.
You'll notice that a value is printed in the console even though our example was deinitialized. When you call start again, send more values, call end, and repeat this a few times, you'll notice that you get a lot of duplicate values.
The reason we have this issue is that the Task we create in the subscribe method does not have its lifecycle bound to anything. We start our task from within a synchronous function, the function ends, and the task keeps running until its closure completes. As you know, an async for loop doesn't complete until the sequence it's iterating over completes. And the sequence we're using as an example here never ends.
The fact that a task does not have its lifecycle bound to anything by default is what I would call
the opposite of the “safe by default” behavior we get from Combine subscriptions. Instead
of actively making sure that our subscription stays alive, we now have to make sure that our
Task ends at an appropriate time.
The best way I’ve found to do this is to keep a reference to the Task in the object that you wish
to tie its lifecycle to, similar to how Combine works. You can then cancel the Task reference
in your deinit to end your for loop and in turn end your task:
class SequenceDrivenExample {
let subject: CurrentValueSubject<Int, Never>
var task: Task<Void, Never>?
deinit {
print("sequence driven example deinit")
task?.cancel()
}
func subscribe() {
task = Task { [subject] in
for await value in subject.values {
print("received \(value)")
}
}
}
}
If you have several of these iterating tasks running inside of one object, assigning each to
its own property can be cumbersome. You could create an array of type [Task<Void,
Never>] to contain all of your tasks and cancel them all in your deinit which will work fine.
Alternatively, you can leverage AnyCancellable and a small extension on Task to make
this all a little bit more convenient:
extension Task {
    func store(in cancellables: inout Set<AnyCancellable>) {
        // Wrap this task's cancel() in an AnyCancellable so the task is
        // cancelled as soon as the AnyCancellable is cancelled or deallocated.
        cancellables.insert(AnyCancellable {
            self.cancel()
        })
    }
}
This extension allows you to store your Task objects in a Set<AnyCancellable> in the
exact same way Combine does it:
func subscribe() {
Task { [subject] in
for await value in subject.values {
print("received \(value)")
}
}.store(in: &cancellables)
}
I personally like this approach because it feels familiar and it’s a little bit of a nod to something
I really like about how Combine manages subscriptions.
One important thing to note on Task lifecycles is that normally a Task won't need a mechanism like this if it performs a small amount of work that will (eventually) end. You should not riddle your codebase with Task objects and store(in:) calls just to make sure that every single task is cancelled as soon as possible. Only do this in situations where your task might otherwise never end, or if keeping the task alive until it's completed is actually problematic.
If you’re kicking off tasks with SwiftUI’s .task view modifier, the task that SwiftUI creates
will automatically be cancelled when the view goes away. This means that any async for loops
that you start from within your task view modifier will be cancelled appropriately. Note
that you would need to use a separate call to the task view modifier for every async for
loop you have. If you create multiple unstructured tasks using Task {} within the task
view modifier these unstructured tasks are not cancelled when the view modifier’s task is
cancelled.
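As a small sketch, assuming the class based LocationProvider from earlier in this chapter:

import SwiftUI

struct LocationsView: View {
    let provider = LocationProvider()

    var body: some View {
        Text("Tracking your location...")
            .task {
                // SwiftUI cancels this task when the view goes away,
                // which also ends the async for loop below.
                for await location in provider.startUpdatingLocation() {
                    print(location)
                }
            }
    }
}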
Before we wrap up this chapter, I want to take a brief look at a package that Apple has published
to expand the capabilities of AsyncSequence tremendously. This package is called async-
algorithms.
For example, here's how we can debounce a search query and remove consecutive duplicates using operators from the async-algorithms package:

import AsyncAlgorithms
import Combine

class SearchExample {
    @Published var query = ""
    var cancellables = Set<AnyCancellable>()

    func subscribe() {
        let sequence = $query.values
.debounce(for: .seconds(0.3))
.removeDuplicates()
Task {
for await value in sequence {
// perform search with a try await
}
}.store(in: &cancellables)
}
}
Notice how I can just chain these operators one after the other, just like how you would
otherwise be able to chain calls like map, filter, and sorted on a normal sequence. Or if
you’re more familiar with Combine, this looks a lot like how you’d build a Combine pipeline.
Again, the purpose of this section is not to go over everything that you can possibly find in the
async-algorithms package. Instead, I want to make sure you’re aware of this package and of
the fact that it provides several time-based and Combine-like operators to use on your async
sequences to build more elaborate sequences that, in some cases, can replace your Combine
pipelines.
Having said that, I don’t think you should go and replace all of your Combine pipelines with
async sequences as soon as possible. Combine is a fine framework, it’s stable, and it works well.
Swift’s async sequences are very nice but as you’ve seen in the previous sections and chapter
they lack some features (some of those are covered by async-algorithms) and managing
the lifecycle of your async sequences is a little bit more involved than managing a Combine
subscription.
Of course, in the end it's entirely up to you to decide how much you want to replace, and how many of the missing features that prevent you from going all-in on async sequences are filled in by async-algorithms.
In Summary
In this chapter we expanded on the previous chapter and you saw how async sequences
compare to Combine, we looked at differences and similarities as well as some pitfalls that
currently exist with async sequences. These pitfalls are especially relevant if you’re currently
used to how Combine does things.
To wrap up this chapter you took a brief look at the async-algorithms package which provides
a lot of Combine-like tools that can help you use async sequences instead of Combine for
certain tasks in your code.
In the next chapter, we’ll take everything you know about concurrency, structured concurrency,
and async sequences to learn more about running multiple asynchronous tasks in parallel as
part of a single parent task.
Swift Concurrency offers two tools for running work in parallel as part of a single parent task:
• Async let
• Task groups
In this chapter, we will look at both of these tools. You will learn how they can be used
and when they should be used. Additionally, we will finally explore structured concurrency
alongside these topics.
By the end of this chapter you will have built a complex data importer that leverages Task Groups and async let to fetch data from many API endpoints, group it together, and enrich objects with more data.
Let’s get started by taking a good look at async let.
Tip:
If you’re following along with the coding samples in this chapter, make sure you start your
local webserver by navigating to the code bundle’s movies folder in your terminal and
starting the server by running python3 -m http.server 8080. For more lengthy
instructions please refer to the README.md file in this book’s code bundle.
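For reference, the implementation we're about to discuss fetches a movie's crew and cast members one after the other. A sketch of it, using the fetch functions that appear throughout this chapter, looks like this:

func fetchDetailsFor(_ movie: Movie) async throws -> (Movie, [CrewMember], [CastMember]) {
    // Each call is awaited before the next one starts.
    let crew = try await fetchCrewForMovie(movie)
    let cast = try await fetchCastForMovie(movie)

    return (movie, crew, cast)
}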
The code above will work perfectly fine; there's nothing wrong with it. We can make some improvements though. Crew members and cast members for our movie object can theoretically be fetched in parallel because these calls can be made independently. They don't depend on each other, so there's no reason for us to be fetching crew members first and cast members second. However, that's exactly how the code as it stands right now is written. We can't start fetching cast members until the await for crew members completes. This means we make the two network calls one by one.
We could leverage two separate Task objects to achieve parallel execution of these network calls by creating our tasks and then awaiting each task's value to obtain the result for that task:
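A sketch of this approach, under the same assumptions as before:

func fetchDetailsFor(_ movie: Movie) async throws -> (Movie, [CrewMember], [CastMember]) {
    // Both unstructured tasks start running as soon as they're created.
    let crewMembersTask = Task {
        try await fetchCrewForMovie(movie)
    }

    let castMembersTask = Task {
        try await fetchCastForMovie(movie)
    }

    let crew = try await crewMembersTask.value
    let cast = try await castMembersTask.value

    return (movie, crew, cast)
}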
With this approach we create two unstructured tasks that will start running immediately. This means that we'll make our network calls as soon as each task is created. After creating both tasks we await the value property on each of the tasks to obtain the task's result. First we wait for crewMembersTask.value, and then we wait for castMembersTask.value.
This is quite efficient and allows our networking code to run in parallel but there’s an issue
with the code above.
If the task that called fetchDetailsFor(_:) is cancelled, then the two tasks we create
inside of that function will not be cancelled. Other than inheriting the current actor and any
task local values, the unstructured tasks we created are in no way related or tied to the task
they were created from.
This means that if the async functions we call inside of our tasks are cancellable we miss out
on cancelling that work even though we won’t be using the result from these tasks.
In addition to this, there’s no compile- or runtime guarantee that ensures our network calls
complete before we return from fetchDetailsFor(_:). We know that in this specific
case we await the outcome of both tasks so the tasks will complete before we return but if we
refactor our code at any point we could easily forget to await both tasks and end up with a
task that’s still running when fetchDetailsFor(_:) returns.
At this point you might be wondering what the big deal is. All code can have bugs, and this is just something to look out for. You're not entirely wrong; we should always pay attention to the code we write and make sure that it's correct. However, if we can get a hand from the runtime and the compiler, that's way better than trusting ourselves to not make mistakes. Especially when the language provides mechanisms that were meant to solve exactly these kinds of bugs.
The way we should be going about writing the fetchDetailsFor(_:) method is to leverage child tasks that we create through an async let declaration. This will make sure that the child tasks we create complete before our function returns, even if we don't explicitly await the result of our network calls. Using an async let will also make sure that cancellation is properly propagated from the parent task to the child tasks that we create.
First, I’d like to show you how we can rewrite the unstructured tasks example that you just
saw with async let and then I’ll talk about how it works exactly:
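A sketch of the async let version:

func fetchDetailsFor(_ movie: Movie) async throws -> (Movie, [CrewMember], [CastMember]) {
    // Both child tasks start running immediately, in parallel.
    async let crewMembersTask = fetchCrewForMovie(movie)
    async let castMembersTask = fetchCastForMovie(movie)

    let crew = try await crewMembersTask
    let cast = try await castMembersTask

    return (movie, crew, cast)
}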
When examining this code, you can see how the structure of this example is similar to the one
that leveraged two unstructured tasks. The key difference is in how we create our tasks.
When you define a property as async let, you can call an async function without awaiting
that function immediately. Instead, you create a child task that will run the function you’re
calling asynchronously. The function you’re calling with async let will start running im-
mediately when the async let is created. Instead of awaiting the result of the function call,
execution resumes to the next line. On the next line, we create another async let to kick
off a second bit of async work.
Once we’re interested in using the results of the async function we called, we must await the
child task that we created. We can then assign the result of our async let to a property that
will hold the result of our child task. In the code you just saw we did this by writing let crew = try await crewMembersTask. Note that while we're awaiting our crewMembersTask, both tasks are making progress. Once crewMembersTask is finished, its result is assigned to crew and we start waiting for the next task to complete.
Because both tasks have been running in parallel, there's a chance that the castMembersTask has already finished or will finish soon. That's the nice thing about how async let allows us to run multiple child tasks in parallel.
Up until now you have worked with unstructured and detached tasks. You already know that
an unstructured task is created with Task {} and that an unstructured task inherits things
like actors and task local values. You also know that you can create a detached task using
Task.detached and that a detached task inherits nothing from its creation context.
Neither of these tasks are child tasks of their creation context. The most important differences between child tasks and unstructured or detached tasks relate to cancellation and completion: cancelling a parent task will mark its children as cancelled, and a parent task cannot complete unless its child tasks are completed. This is fully unique to child tasks, and it's what makes child tasks structured. We'll dig into this a bit more once we get to the section on structured concurrency.
The key reason to use an async let is almost never "I want a child task". Having a child task is more of a result of wanting to perform work in a specific way than a result of literally wanting a child task to exist. In the example you saw earlier, an async let made sense because the two network calls were independent of each other and we wanted them to run in parallel.
By using async let we were able to perform work in parallel which sped up the execution of
our fetchDetailsFor(_:) method. Instead of having to wait for each part to be loaded
sequentially, the function is now essentially only as slow as the slowest async function we’re
calling.
An async let is very nice when you have a handful of tasks you want to perform in parallel.
However, there are situations where we might want to perform a lot of work in parallel. For
example, let’s say that for each of the movies’s cast members we’d like to make an extra
network call that would fetch some more metadata about the cast member. We don’t know
how many members there are up front, and we know that there could be lots of them.
We can’t create our async let tasks in a for loop unfortunately, and we don’t know ex-
actly how many we’ll need. Luckily, there’s a second way to create child tasks that supports
And now let’s implement a function that fetches movies and for each fetched movie we also
fetch the associated crew and cast members.
Here’s what that code would look like if we process movie objects one by one:
func fetchEnrichedMovies() async throws -> [(Movie, [CrewMember], [CastMember])] {
    let movies = try await fetchMovies()

    var enrichedMovies = [(Movie, [CrewMember], [CastMember])]()

    for movie in movies {
        let enrichedMovie = try await fetchDetailsFor(movie)
        enrichedMovies.append(enrichedMovie)
    }

    return enrichedMovies
}
We iterate over our movies one by one and we call our fetchDetailsFor(_:) method
for each movie in our list. This code works and gets the job done but I’d like to speed things
up and process as many movies at once as I possibly can.
The key to doing this is a Task Group.
With a Task Group it's possible to spawn any number of child tasks that run as part of our Task Group. The Task Group will run as many tasks in parallel as it possibly can, which is great because that means things should speed up by a lot when processing our list of movies.
Before I go in-depth on Task Group, how it handles errors, cancellation, and more, I want to
show you how a basic Task Group is created, and how we can add tasks to a Task Group:
func fetchEnrichedMovies() async throws -> [(Movie, [CrewMember], [CastMember])] {
    let movies = try await fetchMovies()

    return try await withThrowingTaskGroup(
        of: (Movie, [CrewMember], [CastMember]).self
    ) { group in
        for movie in movies {
            group.addTask {
                return try await self.fetchDetailsFor(movie)
            }
        }

        // We're not collecting the child tasks' results yet.
        return []
    }
}
The code above only shows a part of what we need in order to properly set up our fetchEn-
richedMovies() function.
The first line to pay attention to is the one that creates the Task Group:

return try await withThrowingTaskGroup(
    of: (Movie, [CrewMember], [CastMember]).self
) { group in

The first argument describes the type of value that every child task in the group will produce.
Note that the type that you use for the Task Group’s output must be Sendable. You will be
creating and obtaining your output in a highly concurrent fashion so it must be safe for your
output to be passed across concurrency boundaries. As you learned in Chapter 6 - Preventing
data races with Swift Concurrency, Sendable is how Swift Concurrency makes sure that an
object can be safely passed around in a concurrent environment.
The second argument that’s passed to a Task Group is a closure that’s defined as a trailing
closure in the example you just saw:
// The first part of this line is cut off so you can focus on the closure
...).self) { group in
for movie in movies {
group.addTask {
return try await self.fetchDetailsFor(movie)
}
}
}
The closure that you pass to a Task Group will be called with one argument; a group. We add
work to our group by calling addTask on the group. In our case, we iterate over the array of
Movie objects that we obtained by calling fetchMovies(). I use a regular synchronous
for loop to perform this iteration. Nothing fancy is happening there.
For each movie in the array of movies, I call group.addTask with a closure that wraps the
work I want to do in my child task.
In this case, I want to obtain and return the result of calling fetchDetailsFor(_:) because that's how I can obtain a movie object along with its crew and cast members.
At this point, I’m adding work to my Task Group but I’m not doing anything with the result of
the work that I add to the group. Every child task we create obtains and returns a value but
the results of each child task are kind of lost in the ether right now.
We’ll get to fixing that later.
First, I want to talk a little bit about the implications of having a throwing Task Group and having an error occur inside of a child task. After that, we'll go over how we can use the output from our child tasks so we can collect the results from all tasks into a single array, just like we did earlier in the example where I fetched movie details one at a time.
If we were to fix this code, we could leverage a do/catch block to catch and handle errors
that might be thrown by our call to performSomeWork():
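A sketch of that fix, assuming the throwing performSomeWork() function mentioned in the text:

await withTaskGroup(of: Void.self) { group in
    for _ in 0..<10 {
        group.addTask {
            do {
                try await performSomeWork()
            } catch {
                // Child tasks in a non-throwing group can't rethrow,
                // so we handle (or at least log) the error right here.
                print(error)
            }
        }
    }
}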
We don’t really handle our error in a meaningful way here, but the do/catch block at least
makes sure that we don’t attempt to potentially throw errors from a child task that’s not
supposed to be throwing errors.
The flow of the code in a non-throwing Task Group is relatively straightforward. We add tasks to the group and we await the call to withTaskGroup(of:_:). That await completes once all of the tasks in our Task Group have completed, or once all tasks have stopped their work in response to the Task Group's task being cancelled.
If we ignore cancellation for a moment, this means that we know that every task in the Task Group will have fully completed its work once the await on our call to withTaskGroup(of:_:) is completed and our code continues to run. Some of our tasks might not have completed successfully, and we would have printed something to the console for those tasks. Either way, every task has completed in one way or another.
Notice that nowhere in our Task Group code do we have to await the child tasks that are created. When you add tasks to a Task Group using group.addTask, the Task Group will implicitly await any child tasks that we added before completing the call to withTaskGroup(of:_:). This guarantees that all work that's added to the Task Group is completed before our code resumes from the await on withTaskGroup(of:_:).
For throwing Task Groups this is slightly different. A throwing Task Group at its core operates under similar rules as a non-throwing Task Group. This means that once the await for withThrowingTaskGroup(of:_:) is completed, we know that no tasks in the throwing Task Group are running anymore. They either completed successfully, completed with an error, or they stopped their work in response to cancellation.
To refresh your mind a little, here’s what creating a throwing Task Group looks like:
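A sketch, using the movie example from before:

try await withThrowingTaskGroup(
    of: (Movie, [CrewMember], [CastMember]).self
) { group in
    for movie in movies {
        group.addTask {
            // We can write try await here without a do/catch block.
            return try await self.fetchDetailsFor(movie)
        }
    }
}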
In the closure that we pass to our throwing Task Group we add tasks to the Task Group and we
don’t have to handle or catch our errors. We can write try await in the addTask closure
and it’s all good.
So what happens when one of our tasks throws an error, then?
Well, the short answer is quite straightforward, and the long one is quite complex. So let's start with the simple answer.
If a child task in a throwing Task Group throws an error and we never consume that task's result, the Task Group itself will implicitly swallow that error. In other words, the try await withThrowingTaskGroup(of:_:) call would never receive the error that's thrown by our child task.
But what if we do run into a situation where we throw an error from the Task Group’s body?
There’s a hard rule that once our await completes, all child tasks must be complete. In other
words, all child tasks must be completed and no longer running by the time we receive our
error.
What happens is that as soon as an error is thrown from our Task Group, all child tasks get
cancelled immediately. This means that every child task must stop what it’s doing as soon
as it can to respect the cancellation action. Once all child tasks have stopped their work, the
thrown error from the child task is rethrown by the Task Group, allowing us to receive and act
on the error.
Regardless of whether we’re working with a throwing Task Group or a non-throwing Task
Group, once our call to with(Throwing)TaskGroup(of:_:) completes, we know that
no child tasks are active anymore.
In the example for a throwing Task Group, notice how our Task Group produces a result of (Movie, [CrewMember], [CastMember]).self while the tasks we add to the group return the result of calling fetchDetailsFor(movie). We don't attempt to grab the results of our work, which means that we never need to try await the outcomes of our child tasks. This, in turn, means that we never throw any errors from our throwing Task Group's body.
In the next section you'll find out how to begin receiving the results from your child tasks.
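Before we do, here's a small self-contained sketch (my own, modeled on the sample described below) of the implicit await: the group's body returns a random number while a slow child task is still running:

let randomNumber = await withTaskGroup(of: Void.self) { group -> Int in
    group.addTask {
        // This child task takes about two seconds to complete.
        try? await Task.sleep(for: .seconds(2))
    }

    print("returning a random number")
    return Int.random(in: 0..<10)
}

// This line only runs after the group has implicitly awaited its child task.
print("the group produced \(randomNumber)")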
If everything goes well and none of our tasks fail, the Task Group will return a random num-
ber.
Note that even though we don’t await every task in the group which means that we’ll hit
our return before all of our child tasks are completed, all tasks in the group have completed
once the random number is assigned to the randomNumber constant.
If you run the code sample in this chapter's code bundle, you'll see that the first print statement appears two seconds before the second print statement. That's because the group implicitly awaits all active child tasks before allowing our code to continue running.
Of course, examples like the one you saw aren’t very useful because usually what you’re
actually interested in is running child tasks that produce a result, and then returning the
results from all child tasks from your Task Group.
For example, it makes more sense to write something like this, where we'd replace the ??? in the code with something useful:
return // ???
}
The group argument that's passed to our Task Group closure serves two purposes: it's how we add child tasks to the group, and it's also an async sequence that we can iterate to receive child task results. After adding all of the work we'd like to perform to our Task Group, we can start iterating over the group to obtain the results that are produced by our child tasks:
return try await withThrowingTaskGroup(
    of: (Movie, [CrewMember], [CastMember]).self
) { group in
    for movie in movies {
        group.addTask {
            return try await self.fetchDetailsFor(movie)
        }
    }

    var results = [(Movie, [CrewMember], [CastMember])]()

    for try await result in group {
        results.append(result)
    }

    return results
}
The code above will receive the result from tasks in the group in the order in which they
complete; we can not force the group’s async sequence to yield results in the order in which we
added tasks. Results are always yielded in the exact order in which the child tasks complete.
Since we’re dealing with a throwing Task Group, our child tasks can fail. When they do,
their error is rethrown by the Task Group in this case. In other words, if we don’t make the
Task Group implicitly wait for all child tasks to complete, we can receive and handle errors
that are thrown by our child tasks before they are thrown to the caller of withThrowing-
TaskGroup(of:_:).
In the code sample from earlier, this means that we can make sure that any tasks that threw an error are silently ignored, for example. Alternatively, you could return an array of Result<YourOutput, Error> instead of an array of YourOutput from your Task Group closure.
The type of object that you return from your Task Group closure does not have to match the
type of objects that are produced by your child tasks.
Let’s say you’d want to go down the path of leveraging Result in the example we looked at
earlier to make sure that all child tasks can run to completion and produce a result regardless
of how many child tasks failed. Here’s what the code for that would look like:
return try await withThrowingTaskGroup(
    of: (Movie, [CrewMember], [CastMember]).self
) { group in
    for movie in movies {
        group.addTask {
            return try await self.fetchDetailsFor(movie)
        }
    }

    var results = [Result<(Movie, [CrewMember], [CastMember]), Error>]()

    while let result = await group.nextResult() {
        results.append(result)
    }

    return results
}
Because a regular for loop would stop as soon as an error is thrown from the sequence, we
have to use a slightly different mechanism to obtain results from our Task Group. For example,
we can use the nextResult() method on our Task Group to ask it for a Result object
that will either hold a value or an error depending on whether the child task that completed
to produce this result succeeded or not.
Note that my child task still produces the same type as before. The Task Group itself will
transform the outcome of our child task to a Result.
Instead of a for loop, I use a while here because that allows me to capture and use the
result of nextResult() until nextResult() returns nil which indicates the end of the
sequence of results. Or in other words, when nextResult() returns nil we know that all
child tasks have completed.
Note that because we don’t throw any errors from the throwing Task Group, a single child
task failure does not impact other child tasks. As long as we do not throw an error from our
Task Group, the group will not cancel itself. This means that if a child task fails and we handle
the thrown error inside of our Task Group closure, all other child tasks continue running as
normal.
However, if we throw an error, any error, from our Task Group closure all pending child
tasks will be marked as cancelled, and the thrown error becomes the result of the call to
withThrowingTaskGroup(of:_:).
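As a small self-contained sketch (not from the book's sample code) of that last point:

enum ExampleError: Error {
    case valueTooHigh
}

let values = try await withThrowingTaskGroup(of: Int.self) { group -> [Int] in
    for i in 1...5 {
        group.addTask {
            try await Task.sleep(for: .seconds(i))
            return i
        }
    }

    var values = [Int]()

    while let value = try await group.next() {
        if value == 3 {
            // Throwing from the group's body marks all pending child tasks as
            // cancelled; this error becomes the result of withThrowingTaskGroup(of:_:).
            throw ExampleError.valueTooHigh
        }

        values.append(value)
    }

    return values
}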
With the code that we have in place right now, we can actually build a full blown movie details
scraper that scrapes movie details from the local server that’s included in this chapter’s code
bundle.
In the Xcode project for this chapter, I’ve put together Task Groups and async let to build
a highly concurrent scraper that will retrieve data as fast as possible. I won’t paste all the
code here since it follows the pattern from code snippets you saw earlier in the chapter, I do
encourage you to take a look at the Xcode project and run it at least once to see it in action.
It’s pretty cool.
One approach to limit the number of tasks in our Task Group is to track the number of tasks
that have been added to the task group up to a threshold. Once we hit that threshold we wait
for a task to complete before adding the next task. This allows us to make sure that we never
have more tasks running than we consider appropriate, and for every task that completes we
can add one new task to the group until all work has been added.
An implementation of this would look as follows:
await withTaskGroup(of: Void.self) { group in
    // The range and the threshold of five tasks are arbitrary values for this example.
    for i in 0..<20 {
        if i >= 5 {
            // Wait for a running task to finish before adding a new one.
            _ = await group.next()
        }

        group.addTask {
do {
try await Task.sleep(for: .seconds(Int.random(in: 0..<3)))
} catch {
print(error)
}
}
}
}
This is great when we’re adding tasks based on a range. We can check the current index that
we’re performing our iteration for, and based on that we can go ahead and decide whether
we should add our task immediately or not. But how do we adapt this example to work with
something like the movie fetcher that we’ve built earlier? In that movie fetcher we iterated
over a list of Movie objects using for movie in movies.
One approach would be to keep track of the number of added tasks with a simple counter that's incremented whenever we add a new task:
var addedTasks = 0

for movie in movies {
    if addedTasks >= 5 {
        // Wait for a task to complete before adding the next one.
        _ = await group.next()
    }

    addedTasks += 1
    group.addTask {
        // fetch movie information
    }
}
The nice thing about this approach is that our counter is always zero based, even if we're working with an ArraySlice of our movies array whose indices don't start at zero.
One thing to be careful about is the fact that we call group.next() in this for loop. If you’re
looking to collect all of the results of the tasks that you’ve added to your Task Group you’ll
want to make sure that you add the return value for group.next() to your output array
because once a value has been returned by the group’s iterator it won’t be made available
again.
You will also need to write an async for loop after adding all work to your Task Group, like you did in the previous section, to collect the results for the tasks that didn't complete yet.
If we put all of this together and update the Task Group code from the previous section to only
fetch details for 5 movies at a time we’d end up with the following code:
return try await withThrowingTaskGroup(
    of: (Movie, [CrewMember], [CastMember]).self
) { group in
    var results = [(Movie, [CrewMember], [CastMember])]()
    var addedTasks = 0

    for movie in movies {
        if addedTasks >= 5 {
            // From the 6th task onward, take a result before adding more work
            // so that no more than five child tasks run at the same time.
            if let result = await group.nextResult(),
               case .success(let movieDetails) = result {
                results.append(movieDetails)
            }
        }

        addedTasks += 1
        group.addTask {
            return try await self.fetchDetailsFor(movie)
        }
    }

    // Wait for the remaining tasks and collect their results.
    while let result = await group.nextResult() {
        if case .success(let movieDetails) = result {
            results.append(movieDetails)
        }
    }

    return results
}
Notice how I’ve moved the results variable to the top of my Task Group closure while I
defined that after the for loop in earlier examples. That’s so that I can start adding results
from within my for loop.
Inside of the for loop, when we add the 6th task and any task after that, I use nextResult()
instead of next() to obtain a Result object and avoid any errors from being thrown out of
my Task Group closure since that would cancel the running tasks.
After my initial for loop completes I wait for the remaining work to complete, append the
results to my results array, and then I return the results from the Task Group closure.
Even though it would have been nice if we could just tell the Task Group to not work on
more than five tasks at a time, implementing our own limiter isn’t too bad and it works well
enough.
To wrap up our exploration of Task Groups, let's look at one more common scenario: child tasks that need to produce different kinds of values. Imagine we're building a social media feed where every item is a photo, a text post, or a video:

enum FeedItemType {
case photo, text, video
}
struct FeedItem {
let id: UUID
let type: FeedItemType
}
For each FeedItem object we might have to make a separate network call to fetch details for
that specific FeedItem. I know it’s not efficient to do that, and ideally you would not model
your backend like this but for now, let’s just roll with this idea for the sake of the example.
The models that we’ll eventually receive from the server look like this:
struct PhotoItem {
let caption: String
let imageUrl: URL
}
struct TextItem {
let text: String
}
struct VideoItem {
let caption: String
let videoUrl: URL
let duration: Double
}
So based on an array of FeedItem objects, we want to run a Task Group to fetch information
for each item and return the corresponding struct for each item.
Unfortunately, we can’t create a Task Group where we have all three structs as the output.
You could make three Task Groups, one for each feed item type, but that's not ideal either. A more acceptable solution is to define an enum that has a case with an associated value for each type of item:
enum PopulatedFeedItem {
case photo(PhotoItem)
case text(TextItem)
case video(VideoItem)
}
Now you can create a Task Group that produces PopulatedFeedItem objects by fetching
different data depending on the type property of each FeedItem. In the following sketch,
fetchPhotoItem(_:), fetchTextItem(_:), and fetchVideoItem(_:) stand in for the relevant
networking calls:

let populatedItems = await withThrowingTaskGroup(of: PopulatedFeedItem.self) { group in
    for item in feedItems {
        group.addTask {
            switch item.type {
            case .photo:
                return .photo(try await fetchPhotoItem(item.id))
            case .text:
                return .text(try await fetchTextItem(item.id))
            case .video:
                return .video(try await fetchVideoItem(item.id))
            }
        }
    }

    var results = [PopulatedFeedItem]()
    while let result = await group.nextResult() {
        if case .success(let populatedItem) = result {
            results.append(populatedItem)
        }
    }

    return results
}
The code above switches on the type property on FeedItem to determine which function
should be called to obtain data. Once the data is obtained, the relevant PopulatedFeed-
Item case is returned with an associated value.
Finally, we use group.nextResult() to accumulate all of the results from our API calls
together and return an array of PopulatedFeedItem objects from our Task Group.
This approach to having a Task Group that has child tasks with varying results is the way that
Apple actually recommends in some of their documentation and WWDC videos. It’s really also
the most solid and flexible way I have found to solve this problem.
Generally speaking you shouldn’t need this pattern often. But when you do need it, it’s nice to
know how.
Before we wrap up this chapter, there’s an important Swift Concurrency concept that you’ve
seen in action in this chapter as well as in the previous chapter. We haven’t explicitly named
the concept, but now that you’re familiar with its principles I think it’s time we take a moment
to learn about Structured Concurrency.
The fork-join model, a model for parallel work that dates back to the 1960s, is usually depicted
as a diagram. It shows how we can call a function, kick off a bunch of other work in parallel,
and our function isn't completed until the work we've kicked off is completed.
This is very similar to how a Task Group works.
Note that this is not how calling an async function from another async function works: awaiting
suspends the calling function while the called function does its job, and the calling function
can't resume until the called function completes. Logically that's close to what the fork-join
model describes, but the key thing that's missing is the parallelism that's shown in the image
above.
However, when we change some of our async function calls to use async let to have these
functions run in parallel, we're suddenly back to something that looks like the fork-join model,
as depicted in the way fetchCrewForMovie(_:) and fetchCastForMovie(_:) are
called in the graphic you just looked at.
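To make the fork-join shape concrete in code, here's a minimal sketch that uses async let. It
assumes fetchCastForMovie(_:) and fetchCrewForMovie(_:) are the throwing async functions
from the earlier movie examples:

func fetchMovieDetails(for movie: Movie) async throws -> (cast: [CastMember], crew: [CrewMember]) {
    // fork: both child tasks start running in parallel
    async let cast = fetchCastForMovie(movie)
    async let crew = fetchCrewForMovie(movie)

    // join: this function can only complete once both child tasks have completed
    return try await (cast, crew)
}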
The fork-join model is very, very similar to what Structured Concurrency describes. Both are
systems that describe how work items relate to each other, and how any task in the system
can only complete once all of the child tasks it spawned have completed.
That is, in just one sentence, what structured concurrency is all about: describing the
relationship between parent and child tasks.
The key difference between the fork-join model and Structured Concurrency is that Structured
Concurrency can enforce that all child tasks complete before the parent does. In the fork-
join model, there are no guarantees or ways to enforce that the join part of the concept is
implemented and honored correctly.
Structured Concurrency on the other hand provides a model where the program is aware of
how child tasks are completed, and how they relate back to their parent task. This means
that the program can ensure correctness around child tasks completing, and around how
errors are propagated.
For example, when a child task in a Task Group throws an error, and we allow this error to be
thrown from our Task Group closure, the system knows to cancel all of the child tasks, wait for
all child tasks to honor their cancellation and complete their work, before the error is actually
thrown from the Task Group so we can handle it. If the error were thrown before all child
tasks had completed, that would be a breach of structured concurrency, since the parent (the
Task Group) would throw an error (and complete) while it still has running child tasks.
At this point in time, the only places in Swift Concurrency that implement Structured
Concurrency, or rather the only places where you will interact with Structured Concurrency
through child tasks right now, are Task Groups and async let. Everything else you do, like
spawning unstructured tasks with Task or detached tasks with Task.detached, does not
involve structured concurrency.
When you spawn an unstructured task with Task, you don’t create a child task, and the
function that you spawned that task from can complete just fine without the unstructured
task completing. That’s why it’s named an unstructured task; it doesn’t follow the rules of
Structured Concurrency.
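As a quick sketch of what that means in practice (doSomeExpensiveWork() is a made-up
async function):

func kickOffWork() {
    // this Task is not a child task of kickOffWork(), so kickOffWork()
    // returns immediately, possibly long before the task's work finishes
    Task {
        await doSomeExpensiveWork()
    }
}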
When you await an async function from within another async function you’re not creating a
child task so if you want to get super strict, awaiting something is not structured concurrency.
However, I do feel like awaiting an async function involves principles from structured
concurrency. For example, you know that the function you awaited has fully completed by the
time it returns.
Following the principles of Structured Concurrency is not enough for something to be Struc-
tured Concurrency though. So at this point in time, you're really only actively using Structured
Concurrency when you're working with child tasks produced through async let or Task
Groups, even though its rules and principles can be seen throughout everything related to
Swift Concurrency.
In Summary
In this chapter, you’ve learned everything you need to know about child tasks and structured
concurrency.
You’ve seen how you can leverage async let as a tool to concurrency run multiple asyn-
chronous functions in parallel before awaiting their results. You saw that this can be extremely
useful when you want to fetch data from multiple resources at once.
After that, you were introduced to Task Groups. You saw how a Task Group can run many,
many tasks at once as child tasks of itself. You learned that tasks in a Task Group can produce
results, and that it's possible to iterate over these results using an async for loop. We need to
do this if we want our Task Group to produce an array of individual child task results.
After that, you learned how errors propagate from within a Task Group, how throwing an error
from a Task Group cancels all of its child tasks, and how you can obtain all results from child
tasks in a group using nextResult to avoid any errors from being thrown out of your Task
Group closure.
Lastly, you learned how async let and Task Group relate to Structured Concurrency. You
learned about the fork-join model from the sixties and how it’s a foundation for Structured
Concurrency that ensures all child tasks spawned by a given parent task must complete
before the parent task itself completes. You also learned that Structured Concurrency is only
applicable in Swift when you’re working with child tasks produced by async lets and Task
Groups.
At this point in your Swift Concurrency journey, you have learned about all of the important
concepts that you need to know to effectively use Swift Concurrency. You know how you
can write and call async functions, you know what happens when you await a function,
you've learned about actors, you've seen Task Groups, async sequences, and much more.
In the next two chapters we’ll take a look at some key concepts that don’t involve learning
about Swift Concurrency as a topic but instead we’ll focus on being more effective with Swift
Concurrency. First, you’ll learn about testing, and how Swift Concurrency impacts your testing
code. After that we’ll wrap up the book with a chapter on profiling and debugging to help you
find and eliminate concurrency related bugs in your code.
Let’s take a quick look at what testing a callback-based asynchronous API looks like using
Swift testing.
struct SongGeneratorTests {
let generator = SongGenerator()
The SongGenerator object is a class that exists in the sample app for this chapter in the
code bundle for this book. The unit test is intended to test the song generator method on
this class which, if I had implemented this method fully, would probably perform some heavy
work to generate a chord sequence for songs. The exact nature and details of this work are
not relevant for what we’re trying to learn in this chapter though.
In the example test above, our test will wait for our callback to be called and for the confirm
object to be invoked.
The code isn’t terrible and I’d say in a small example like this it’s not too bad to read or
understand this code.
However, we can refactor our code and test to leverage Swift Concurrency and the exact same
test would look a little bit like this:
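A sketch of what that could look like, assuming generate() has been refactored into an async
throwing method:

struct SongGeneratorTests {
    let generator = SongGenerator()

    @Test func generateSong() async throws {
        let song = try await generator.generate()

        // assertions on the generated song would go here
        _ = song
    }
}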
Of course, this code doesn’t have any assertions to make sure that the song was generated
correctly and that everything went as expected but just obtaining the result of calling gen-
erate() was much simpler than before.
We can mark our test methods as async as well as throws and we can await any async work
we’re doing right inside of our test.
This means that you’re no longer dealing with test expectations and you can write your test
code almost like every test case you write is synchronous.
If any of the async work you're awaiting throws an error that you don't catch or inspect inside
of your test, your test will fail with the thrown error as a failure message. This can be useful
when you're testing methods that you expect to succeed. When you're testing a method
call that you expect to fail, you can omit the throws from your test case and catch your error
in a do { } catch { } for inspection, as shown in the sketch below.
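Here's a minimal sketch of that pattern; SongGeneratorError is a made-up error type for the
sake of the example:

@Test func generateFailsWithoutInput() async {
    do {
        _ = try await generator.generate()
        Issue.record("Expected generate() to throw")
    } catch {
        // inspect the error here, for example:
        #expect(error is SongGeneratorError)
    }
}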
I could write a couple more paragraphs on the fact that you can await async methods inside
of test cases because this section would be quite short otherwise, but honestly I would just
be doing that to add filler. The reality of the situation is that Swift Testing has fantastic support
for awaiting async methods, which can really help you write more readable test cases.
In the next section I’d like to take a closer look at testing the output of async sequences.
Consider a sequence that reports progress as a Double: every emitted value should be between
0 and 1, every value should be larger than or equal to the previously emitted value, and the
sequence should complete once progress reaches (roughly) 1. We can test these three
assumptions using a relatively simple test because we know that the sequence will keep going
for a while and then it will complete. The fact that we can mark our test methods as async is a
huge help here.
We can keep track of the last emitted value outside of our for loop to assert that every emitted
value is larger than or equal to the previously emitted value, and inside of the for loop we can
check that the value is within our bounds of 0 and 1:
struct ProgressSequenceTests {
    let generator = ProgressSequence()

    @Test func progressGoesUpToOne() async {
        var lastValue: Double?

        for await value in generator.run() {
            if let lastValue {
                #expect(value >= lastValue)
            }

            #expect(value >= 0)
            #expect(value <= 1)

            lastValue = value
        }

        #expect(lastValue != nil)
        // not comparing to 1 exactly due to floating point rounding
        #expect(lastValue ?? 0 >= 0.99)
    }
}
The run() method on my generator creates an AsyncStream that generates value for the
purpose of this example. If you’re curious about its implementation I encourage you to take a
look at the code samples for this chapter in the book’s code bundle.
The test in this example reads a lot like it would if you were testing a synchronous method. We
iterate over all values emitted by my async stream and, if we've received a previous value, we
make sure that the new value is larger than or equal to the previously received value.
We also check that the value we received is between 0 and 1, and then we update our last
received value property so we can use it for our next value.
After the for loop I make sure that lastValue was set to a non-nil value and that this value
is close to 1. I'm not comparing to 1 exactly because of floating point rounding errors that can
occur in programs.
If our stream emits values that don’t fit our bounds or if a value smaller than the previous
value is emitted, our test fails. Our test will never complete if our stream never completes and
if we end the stream before all progress was reported (before lastValue is set to 1) the test
fails.
There’s an important detail in the previous sentence that we need to keep in mind when
testing async sequences...
In other words, the technique above won’t work if we expect our stream to be running for a
much, much longer time.
Luckily, we can still write tests for streams that never end. We can keep track of the values we
expect to receive and then check whether we’ve received all values on every loop iteration.
For example, if we expect an async sequence to emit the values 1, 2, 3, 4, and 5 in that order
we can write a test that looks as follows:
struct NumbersSequenceTests {
    let emitter = NumbersSequence()

    @Test func emitsExpectedValues() async {
        var expectedValues = [1, 2, 3, 4, 5]

        // `values` is an assumed name for the emitter's async sequence
        for await value in emitter.values {
            let expected = expectedValues.removeFirst()
            #expect(value == expected)

            if expectedValues.isEmpty || value != expected {
                break
            }
        }

        #expect(expectedValues.isEmpty)
    }
}
By creating an array with all expected output and removing the first element from the array
on every loop iteration, we can check that every emitted value from our sequence matches
its corresponding value in our array of expected values.
Once we’ve removed and compared all values or if we find a value that doesn’t match our
expected value we can break out of our loop to end the iteration.
The async sequence might still be able to produce more values than expected, but we’re not
interested in those values. All we’re interested in is the handful of values that we wanted to
see. If the sequence can generate more values that’s fine, but we don’t want to assert anything
about them so we break out of the loop once we’ve seen everything we want to see.
If your sequence should emit the values that you’ve placed in your array of expected values
and complete immediately after emitting the last value, omit the break that I had in the
sample code. This would be very similar to the first async sequence related test you saw where
we were interested in making sure that our sequence completed.
For example, you might be interested in writing a test that ensures that calling a specific
function on one of your classes kicks off a synchronization process in your app where you’ll
download and upload some data to make sure that your user’s app is in sync with data that
exists on the server.
The details of this synchronization process should not matter in your test. However, there are
a couple of things that you might want to test.
Depending on your app's architecture and design philosophy, there are lots and lots of things
around this synchronization process that you may or may not want to test.
I want to reiterate that this chapter is in no way intended to teach you unit testing, best
practices, or to tell you how to write testable code. Instead, this chapter (hopefully) follows
universal unit testing best practices while focusing on teaching you how to solve problems
that follow certain patterns, so you can apply my solutions to problems you might encounter
while unit testing your own code.
Or in other words, do as I say not as I do. Now let’s get back to the problem at hand.
When you want to test that some piece of your system gets to a specific state asynchronously,
without callbacks, without awaiting anything, and without iterating over an async sequence,
there are a couple of options available to us.
In the simplest case, your code updates a property that’s marked as @Published at the end
of your asynchronous process. You can observe that published property and assert that the
property’s new value is the value you expected. To write a test that does this, we can leverage
an async for loop as follows:
struct SynchronizerTests {
    let synchronizer = Synchronizer()
    var expectedCount = 2

    @Test func synchronizeLoadsNewsItems() async {
        synchronizer.synchronize()

        // $newsItems.values bridges the @Published property into an async sequence
        for await newsItems in synchronizer.$newsItems.values {
            if newsItems.count == expectedCount {
                break
            }
        }

        #expect(synchronizer.newsItems.count == expectedCount)
    }
}
Looking at this example, you can see that I don’t do much here. I essentially call a synchro-
nize() function on my synchronizer, and I expect that a property called newsItems will
change and eventually have a value of two.
Note that I don’t expect the value being set from zero to two or for it to happen in one go. All
I’m expecting is that there will at some point be two news items available in my synchronizer.
In a real test you would also see some mocks being set up here, and the mock would be
responsible for providing the two items to the synchronizer. My test is a lot simpler than that
and I’ll leave the mocking and stubbing up to you.
If our newsItems property weren't @Published, we'd have to leverage some
different techniques to still observe and assert the number of loaded news items in our test.
You might be asking, if our test is struggling to observe this value then how would our app
observe it? That’s a great question! And I don’t have an answer for that. Usually when you’re
testing you expect some part of your codebase to update. And usually that part of the codebase
has some way of telling other parts of the codebase about its updates.
One approach could be through a closure that's called after a synchronization is completed.
If that's how our synchronizer works, we could leverage Swift Testing and a continuation to
have our tests wait for the completion handler to be called:
@Test func synchronizeCallsCompletionHandler() async {
    await withCheckedContinuation { continuation in
        // onSynchronized is an assumed name for the completion closure
        synchronizer.onSynchronized = {
            continuation.resume()
        }

        synchronizer.synchronize()
    }
}
Unfortunately, in the scenario I outlined earlier we don't have a closure that's called. We want
to somehow poll the newsItems property regularly to check whether its value has changed,
and then complete our test once we have the expected number of news item objects.
With Swift Testing I have not yet found a way to do this without having to write anything
custom. So instead of showing you how to write a custom solution, I think it makes sense to
take a look at how we do this in XCTest so that if you would like to write a custom solution,
you more or less know which API you would like to mimic.
XCTest has support for a predicate based expectation that will evaluate a given predicate every
second until the predicate returns true. Once the predicate is true, our expectation fulfills and
the test can complete.
The best way to test our code in this case is to use a closure based version of NSPredicate.
We can evaluate our conditions in the closure and return true if all of our conditions are met.
Here’s what that looks like:
func test_predicateExpectation() {
    let synchronizer = Synchronizer()

    let predicate = NSPredicate(block: { _, _ in
        return synchronizer.newsItems.count == 2
    })

    let expect = expectation(for: predicate, evaluatedWith: nil)

    synchronizer.synchronize()

    waitForExpectations(timeout: 2)
}
We still verify that the synchronizer has acquired two news items, and we check this in a
predicate. This predicate is evaluated every second, so we'll need to make sure that our test
waits for more than one second for our expectation to fulfill. Setting the timeout too low, to
one second for example, can result in a failing test because the predicate isn't evaluated in
time.
In Summary
In this chapter, you’ve seen that Swift Testing has a lot of built in support for testing Swift
Concurrency code. You saw that you no longer have to rely on XCTestExpectation to wait
for your async code to produce results when you write async tests in Swift Testing. Instead,
you can now write async test functions and await any async work you need to wait for.
After that, we moved on to seeing how we can test our async sequences. You saw how we can
keep our test code relatively simple which is nice because that means we don’t have to invest
a lot of effort into workarounds and trying to untangle our test code when we come back to it
weeks or months after writing it.
Lastly, we looked at patterns that are useful when you’re testing code that’s not marked as
async but still uses Swift Concurrency under the hood. You saw an example where we resorted
to using a continuation to wait for our async code to complete.
All in all you should have a much better sense of how you can start writing unit tests for your
async code. There's a lot more to cover about Swift Testing that I chose to not put into this
book, because Swift Testing is a pretty new framework with a lot of different features, and
writing a comprehensive testing guide that explores Swift Testing fully was never a goal of
this book.
In the next and final chapter of this book we’ll discuss some of the available profiling and
debugging techniques that can be useful when you’re looking to gain more insights into what
your code is doing.
But how can we be sure that we don’t accidentally spawn some unstructured work that’s still
running in the background? And how can we double check that the hierarchy that we expect
to be created is created correctly?
We can answer both of these questions and more with Instruments.
If you want to follow along with the screenshots and examples in this chapter, grab the finished
Chapter 9 project and make a copy of it since it contains everything we need to explore the
Task related features of the Swift Concurrency Instrument. The Chapter 11 folder in the code
bundle only contains the final product of this chapter.
When following along, don’t forget to start your local webserver by navigating to the movies
folder in your terminal and start an http server by running python3 -m http.server
8080, For more detailed instructions on running the local server, make sure to take a look at
the README.md file in the book’s code bundle.
To run your project with Instruments, press cmd+I or choose Product → Profile… from the
menu in your Mac's top bar.
After doing this, a window will appear that allows you to pick a predefined Instruments
template. In our case, we’re interested in the Swift Concurrency Instrument so we can record
and inspect concurrency related information.
The Instruments window that appears after that contains several lanes worth of information.
For now, we'll focus on the top lane, which contains information about tasks and structured
concurrency.
When you hit the record button and execute the app’s movie fetch function you’ll see that a
lot of tasks are created. The graphs in the top lanes fill up rapidly and then the middle lane
goes back down gradually as shown in the following image.
The top lane shows us the tasks in our program that Instruments believes to be currently
active and making progress. Due to the very high volume of tasks we create in our example
we immediately stumble upon somewhat of a limitation of Instruments. As indicated by the
yellow warning symbols shown on the timeline, Instruments drops certain measurements
due to a very high volume of signpost data being generated.
The second lane which is labelled Active tasks shows the number of tasks that are registered
within our system. This includes both tasks that are running as well as tasks that are awaiting
other work to be completed. What’s interesting is that this lane seems to have dropped fewer
data points and we can clearly see the number of tasks go up and then come back down.
The third task related lane is labelled Total tasks and it shows us the total number of tasks
that have been created while we were collecting data for our app.
Let’s try to run our experiment again but instead of kicking of a couple of thousand network
calls we only fetch and process a single page worth of movie data. Update the button action
Button(action: {
    Task {
        let fetcher = MovieFetcher()
        //let movies = try await fetcher.fetchAllMovies()
        let movies = try await fetcher.fetchEnrichedMovies(page: 1)
    }
}, label: {
    Text("Fetch movies")
})
Running Instruments with this code in place shows us a much nicer picture:
If you’re following along and your graph looks a lot less detailed, try zooming in with cmd + to
If we look at the second lane to see the number of tasks that are in the Alive state you can see
that we have many more tasks in that lane than we have in the Running lane. A task that is
awaiting other work to complete is considered alive, but not running.
And of course the total number of tasks only goes up because that graph shows the total
number of tasks that have ever existed in our app.
When everything is set up correctly, you should see that, at a time when you don't expect to
be doing any async work, both the alive and running task lanes show zero tasks.
If you have one or more tasks that are awaiting values from a long-running async sequence,
those tasks will be alive throughout the entire lifetime of the sequence, but they will only be
running when your async sequence produces a value that you can process.
You can use the combination of these two lanes in Instruments to verify that your app isn’t
doing work when you don’t expect it to (and to verify that it’s doing exactly the work that you
are expecting to be doing).
Let’s update the code in our button handler one more time to introduce a bug:
Button(action: {
Task {
let fetcher = MovieFetcher()
//let movies = try await fetcher.fetchAllMovies()
let movies = try await fetcher.fetchEnrichedMovies(page: 1)
Every time we tap our button we create a new unstructured task, and that task starts iterating
over an infinite async sequence after fetching a bunch of movie data.
When we run instruments and tap the button a couple of times, the graphs will look as fol-
lows:
The Total tasks lane looks pretty much as expected. Every time we tap our button more tasks
are created. The running tasks lane looks pretty good too. We can see that our running tasks
spike every time we kick off work, and then they come back down to zero when the work is
done.
The alive tasks lane tells us that we have an issue though. We have four tasks that are alive
right now even though that might not be what we expected.
The reason these tasks are still alive is of course that I’ve introduced a bug on purpose. But
let’s say it wasn’t on purpose. Let’s say I want to figure out where I’m creating these four tasks
that are alive when I expected no tasks to be alive.
There are a couple of things we can do to start figuring out what’s happening. First, we’ll
want to narrow down the scope of what we’re looking at. Instrument’s bottom panel shows
all kinds of interesting information for ranges of time in the timeline. By default, the entire
timeline is selected.
To select a smaller timeframe, drag over the section you want to inspect so it becomes high-
lighted. The bottom panel will adapt accordingly. In the divider between the bottom panel
and the timeline there are various ways to look at the data that Instruments collected.
If we choose to look at the task state summary, we can already see that there are a couple of
tasks there. To be precise, there are four tasks shown in the Continuation state. This means
that these tasks are waiting for their continuations to be called so they can be resumed. In
other words, these tasks are awaiting something. Note that these continuations aren’t the
kind of continuations you used in earlier chapters when you bridged existing code to Swift
Concurrency. These are continuations that are internal to Swift.
When you hover over one of the tasks and you click the little arrow button that appears, you’ll
see the following information in the bottom panel:
This tells us a lot. Apparently there’s a closure inside of our ContentView that created a
task that’s currently active. We can start exploring with this and figure out that the only task
we create is the one we kick off in the button’s action handler.
Another interesting view to explore is the task forest. This view shows the same four tasks
except we also see all the tasks that have been associated with this task. It can be useful to
use the task forest when you want to gain insights into how different tasks in your codebase
relate to each other.
The neat thing about the task lanes is that they provide lots of insights into the structure of our
code and the relationships that exist between our tasks. It does this down to a level of detail
that we can’t really get with the Time Profiler since our code isn’t actually doing anything
when we have tasks that are alive when they should have completed. These tasks exist, and
they are often an indication of memory leaks, but we can’t see them in the Time Profiler. For
that reason alone, you’ll want to look at the concurrency instrument every once in a while.
The second section in the Instruments template we just used is labelled Swift Actors. Let's go
ahead and explore that section next.
On the other hand, when you do some expensive processing on an actor in the background,
it might be far less obvious that you're not using your actor optimally. Your main sign that
something's wrong will usually be that your work appears to take longer than it has to, or
that something you wrote to run asynchronously in the background seems to process tasks
one by one instead of in parallel.
To help you detect and fix these kinds of issues, it's highly recommended that you regularly
use Instruments on your app to keep an eye on how your app performs work. You already
know how to read the Task related lanes in the Swift Concurrency Instrument, but when we
combine the data from those lanes with the actors lane we can really start to understand why
our app might not perform as well as we'd hoped.
As an example, we can take a look at the Chapter 11 sample project in the book’s code bundle.
This project contains a version of the movie fetcher object that we used in Chapter 9 except
I’ve rewritten it to be quite inefficient. We can use this inefficient example to explore how the
actors lane instruments can be leveraged to find suboptimal tasks in our codebase.
Tip:
When following along with the steps in this section, make sure to run a local http server
in the movies folder from the code bundle. In your terminal, navigate to the movies
folder and run python3 -m http.server 8080 to start your server. Check out the
README.md file in the book's code bundle for more detailed instructions.
Before we look at any code, let's go ahead and profile the sample app. Use cmd+I to launch
Instruments and choose the Swift Concurrency template. Once you're recording in Instruments,
click the Fetch movies button in the app to start fetching movies. You should see the label in
the app change to say 20 movies were fetched relatively quickly.
However, when we inspect the Instruments trace we see an interesting piece of information in
our tasks lane.
In the top lane you can see that we have only one task running at a time while we have over
sixty tasks alive. That’s quite unexpected. The movie fetcher object was built to fetch as many
movies in parallel as possible so we would expect at least a couple of tasks to be running at
once.
The movie fetcher object itself was written as an actor because it holds the fetched movies as
mutable state. That means it only works on a single task at a time. We can actually see
that the movie fetcher being an actor is the entire reason for our slowdown when we dig a bit
deeper.
With the tasks lane selected, we can look at the bottom of Instruments and see a list of
Enqueued tasks. A task in the enqueued state is a task that isn’t running because it’s waiting
for an actor to process its messages. We have quite a few tasks in this state and we can explore
these tasks more in depth by pinning them to the timeline.
Once you’ve pinned a task or actor to the timeline you can learn a lot more about what’s
happening. For example, after pinning one of our tasks to the timeline we can look the
so-called narrative view to learn more about everything this task has been up to so far:
This is pretty cool because we can see when our task was created, when it was in an enqueued
state (indicated by the red color on the timeline), when it was waiting for other work to be
completed, and more.
We can even see which actor has our task in the enqueued state. Right clicking on the actor
name allows us to pin that actor to the timeline so we can inspect the actor’s narrative view:
After zooming in on the timeline a bit we can read the bottom two lanes to see that the actor
is performing lots of small pieces of work. The bottom most lane shows the actor’s mailbox
queue. We can see that it floods with messages rather quickly and then it slowly but surely
starts chugging away and clearing those messages.
If we look at the bottom panel in Instruments, we can see more information about what the
actor is working on at any given time. This allows us to see which functions or properties on
our actor are called often, and how long each call takes.
What’s interesting in the image above is that we see lots of calls to fetchDetailsFor(_:).
This method is a method that uses two async lets to read data from the network so it should
really be running asynchronously without blocking our actor.
When we look at the two functions called via async let, we would expect those functions to be
asynchronous. They are responsible for reading data from the network after all:
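To give you an idea of what we'd find, here's a sketch of the kind of implementation that
causes this problem. The names and URL are assumptions; the key point is that
Data(contentsOf:) performs a blocking, synchronous network read:

func fetchCast(for movie: Movie) -> [CastMember] {
    // this blocks the calling thread (and therefore the actor) until the read finishes
    let url = URL(string: "http://localhost:8080/movies/\(movie.id)/cast")!
    let data = try! Data(contentsOf: url)
    return try! JSONDecoder().decode([CastMember].self, from: data)
}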
Writing code this way is highly undesirable so I don’t recommend you do it. This code is purely
intended to illustrate how we can use Instruments to begin tracking down slow and blocking
code in our actors.
One way to start resolving the problem we have with our code is to mark all of the data fetching
methods as async. So that would mean marking the two methods you just saw as well as
the fetchMovies(page:) method as async. We can do that as follows:
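Sketching that change with the assumed signature from above, the body stays the same and
the declaration simply gains the async keyword (fetchMovies(page:) and the crew variant get
the same treatment):

func fetchCast(for movie: Movie) async -> [CastMember] {
    // the body is unchanged; the method is merely marked async now
    let url = URL(string: "http://localhost:8080/movies/\(movie.id)/cast")!
    let data = try! Data(contentsOf: url)
    return try! JSONDecoder().decode([CastMember].self, from: data)
}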
Marking these functions as async will allow the system to run these functions asynchronously
if possible. Let's see what the Instruments timeline looks like when we run the code now:
In this image you can see one of the many tasks that we’ve created along with our movie
fetcher actor. This image still looks a lot like the situation we had earlier so clearly marking
our functions as async is not the full solution to our performance issue.
While it’s true that the system can run an async method asynchronously on the global thread
pool, an async method that’s constrained to an actor will still need to be enqueued by the
actor.
What we’re looking for is to break free from the actor and run our code on the global thread
pool. To do this, we must mark the three data fetching methods as nonisolated
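Again as a sketch with the assumed signature, the change looks like this:

nonisolated func fetchCast(for movie: Movie) async -> [CastMember] {
    // nonisolated opts this method out of actor isolation so it can run
    // on the global thread pool instead of being enqueued on the actor
    let url = URL(string: "http://localhost:8080/movies/\(movie.id)/cast")!
    let data = try! Data(contentsOf: url)
    return try! JSONDecoder().decode([CastMember].self, from: data)
}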
Since none of these three methods interact with our actor’s mutable state it’s perfectly safe
for these methods to run on the global thread pool.
Running our app with Instruments again yields the following graphs:
This is already much better. We see that we’re enqueued for a little while but the actor is
chewing through way fewer tasks. I had to zoom in my Instruments timeline a bit more to
capture this graph than I had for the previous one.
However, we can push this to be even better. To understand how, we need to take a look at
the code in MovieFetcher. The fetchDetailsFor(_:) method is actor isolated even
though it never accesses any mutable state on the actor. This means that we can opt out of
actor isolation for this method so it can run asynchronously on the global thread pool without
being constrained to the actor:
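A sketch of that change, assuming fetchDetailsFor(_:) combines the cast and crew calls with
async let as described earlier:

nonisolated func fetchDetailsFor(_ movie: Movie) async -> (cast: [CastMember], crew: [CrewMember]) {
    async let cast = fetchCast(for: movie)
    async let crew = fetchCrew(for: movie)

    return await (cast, crew)
}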
When running the app again the graph looks much, much better. In fact, there wasn't much
of a graph to show, so I decided to show you the enqueued tasks list instead.
We no longer have a long list of tasks in the app that needs to be enqueued on the actor.
We only have a single task enqueued, and that's our button handler that calls
fetchEnrichedMovies(page:).
In Summary
In this chapter, we took a deep dive into the Swift Concurrency Instruments template that
Apple provides.
We started off by exploring the task related lanes to see how we can inspect the number of
tasks we created, which tasks are active, and which tasks are running. You learned about
different task states, and how you can leverage the task lanes to verify whether your code
runs as efficiently as you'd like. You also learned about the Task Forest, which can help you
inspect and understand Structured Concurrency relationships between tasks in your app.
After that, we took a look at the actors lane in Instruments. You learned how the tasks lane
can highlight interesting actor related holdups in your code. I showed you how you can use
the narrative view for a task to learn more about the actor that your task is waiting on, and
then you saw that you can dig deeper by pinning that actor to the timeline and inspecting
the actor's narrative.
With this information we solved an example of a performance issue in this chapter’s sample
app and we verified our solutions by constantly measuring and verifying the improvements
we’ve made.
In the next and final chapter of this book, you’ll learn more about migrating existing code
from Swift 5.10 over to Swift 6.
The changes that Apple made to Swift Concurrency in Xcode 26 make it so that you’ll need to
check various project settings before you can fully reason about a codebase that uses Swift
Concurrency. In this chapter I will try and cover as much of this as possible, but the main focus
will be on default settings rather than different variants and flavors of these settings. While
working on this chapter I had to make decisions about what to include, and what to omit.
Because not all readers of this book will have worked with Swift 5.x, 6.0, and 6.2, I will avoid
comparing every version of the language in this chapter. It would simply become too confusing
to explain every Swift version, especially for those that are just learning about concurrency
for the first time.
Our core focus in this chapter will be to learn about what it takes to enable the Swift 6 language
mode rather than updating code from Swift 5.x to Swift 6.0, and to Swift 6.2.
In other words, we’ll spend most of this chapter learning about enabling the Swift 6 language
mode for the Swift 6.2 compiler.
The Swift 6 language mode is not enabled by default when you open an older project in Xcode
26. You won't even see concurrency warnings unless you've opted in using the instructions
from Chapter 6 - Preventing data races with Swift Concurrency.
If you do want to enable the Swift 6 language mode for your existing or your new project you’re
going to want to go to your project in Xcode’s navigator and navigate to build settings.
On the build settings screen you’re going to look for language version. You’re going to see a
screen that looks a little bit like this.
Once you’ve enabled strict concurrency and you’ve resolved most or all of your warnings is
when you want to turn on the Swift 6 language mode. Later in this chapter we’re going to take
a look at some strategies that you might employ to migrate your project over but the high
level overview is that: you want to enable the warnings first, resolve them and then go to the
Swift 6 language mode.
For new projects that you start from scratch I think it does make a lot of sense to try and enable
the Swift 6 language mode from the get-go. Most people will want to move there eventually
anyway so it makes sense for you to start using it as soon as possible. However, if you find
that you're jumping through a lot of hoops, or that you're having trouble coding the way you'd
like to because Swift 6 is so strict about data safety, you might want to drop down to the Swift
5 language mode and revisit Swift 6 later. If you've tried the Swift 6 mode in Xcode 16 and gave
up, you might want to try again with Xcode 26. Swift 6.2 introduces a lot of features that make
the compiler smarter about data race protection, which means it'll complain less frequently
because it can detect that certain patterns of (technically) unsafe code are actually very safe.
Now that you’ve seen how to enable the Swift 6 language mode for Xcode projects let’s take a
look at how SPM packages can be migrated next.
// swift-tools-version: 5.10
Notice how the tools version specifies a specific Swift version. If you create a project or an
SPM package using an older version of Xcode or using command line tools before Xcode 26,
the comment will contain the tools version that you used to create your SPM package.
For example, that might be Swift 5.9 if you created your SPM package with Xcode 15.3.
When Xcode 16 or newer is your primary Xcode version, any new SPM packages that you create
will use the Swift 6 language mode, because a Swift 6 tools version is going to be written in
the comment at the top of your Package.swift file.
This means that, as opposed to Xcode projects where the default is still Swift 5, the default
for new SPM packages is actually Swift 6.
If you’re opening an existing SPM module in Xcode the Swift tools version will not be updated
which means that, if your tools version is 5.x, you’ll be using the Swift 6 compiler with the
Swift 5 language mode.
All in all, this makes SPM packages a little bit more confusing than Xcode projects. For an
Xcode project it's pretty clear: Xcode won't use Swift 6 by default, and it won't alter existing
projects to use the Swift 6 language mode.
For an SPM package, Xcode will not alter the package to use the new language version either.
However, new SPM packages will use the Swift 6 language mode. If you want to drop down to
the Swift 5 language mode you have several options. One approach is to edit the comment at
the top of the file and say that you're using Swift 5.9 or Swift 5.10 as your tools version. This
works, but it limits the SPM features that you can use; if you want to use an SPM feature that
became available with the Swift 6 compiler, that feature won't be available if your selected
tools version is lower than Swift 6.
Instead, you can actually use the Swift 6 tools and set the Swift language mode on either a
package level or on a target level. Let’s take a look at how to do both. We’ll look at how to do
it for your entire SPM module first:
// swift-tools-version: 6.2
// The swift-tools-version declares the minimum version of Swift required to build this package.

import PackageDescription

// package and target names are placeholders for the sake of the example
let package = Package(
    name: "MyPackage",
    targets: [
        .target(name: "Models"),
        .target(name: "Networking"),
    ],
    swiftLanguageModes: [.v5]
)
Every target in the Swift package will automatically use the Swift 5 language mode with the
setup shown above. If we want to have a specific target that uses the Swift 5 language mode
but have other targets that use the Swift 6 language mode, we can opt into Swift 5 using our
package definition like this:
// swift-tools-version: 6.2
// The swift-tools-version declares the minimum version of Swift required to build this package.

import PackageDescription

let package = Package(
    name: "MyPackage",
    targets: [
        // this target opts in to the Swift 5 language mode
        .target(
            name: "Models",
            swiftSettings: [.swiftLanguageMode(.v5)]
        ),
        // other targets use the Swift 6 language mode by default
        .target(name: "Networking"),
    ]
)
My recommendation for new packages is to have them use the Swift 6 language mode. The
reason for that is similar to why I would recommend using Swift 6 in a new project: you’re
starting from scratch anyway so you’re not held down by existing code you might have to
rewrite in order to adopt Swift 6.
It’s totally possible and valid to mix Swift 6 packages with Swift 5 packages so your new and
old code can coexist inside of your apps. This means that you could actually start writing new
features in Swift 6 if your codebase is modularized and you can have your existing features
continue to use the Swift 5 language mode.
Once time is on your side, and you’ve resolved your strict concurrency warnings in your other
targets, you can switch those over to Swift 6. In the next section we’ll take a closer look at
what that process looks like.
1. An app target
2. A networking package
3. A models package
4. A package with UI components
The app target is going to depend on all three of our packages. The networking package will
depend on the models package only, and the package with UI components will also only
depend on the models package. All of the packages will be at Swift 5 when we start this
refactor.
As we refactor, I will highlight a couple of things that you'll run into, to help you understand
the problems you might encounter in your own projects. The bottom line with any migration
from Swift 5 to 6 is that you need a good understanding of how actors and Sendable work,
as well as the essentials of how Swift Concurrency and structured concurrency work.
If actors and Sendable are not something that you're somewhat comfortable with, I highly
recommend reviewing the relevant chapters in this book before you start your migration.
Even though you should be looking at the models package first in this case, I would like to
turn on the Swift 6 mode just momentarily for our application target so that you can see what
the effect of that is, and how broad the concurrency warnings are going to be.
Navigate to the build settings of the app target and change the language mode to Swift 6:
After doing this, you’ll notice that the project no longer compiles. The first two errors that
show up for me are in the following files:
• CastViewModel.swift
• CrewViewModel.swift
Both of these files contain a function that creates a new Task. Inside of that task we interact
with the viewmodel itself as well as with the networking object. At the time of writing this,
the error message in both cases looks a little bit like this:
Task-isolated value of type ‘() async throws -> ()’ passed as a strongly transferred parame-
ter; later accesses could race
This error message is a little bit strange, and in my opinion you might consider it misleading.
It looks like there's something wrong with the way we're creating our task, but in reality the
problem is that we're capturing non-Sendable types inside of our task.
We can fix this in part by making our view models sendable by annotating them with the
@MainActor attribute. We can safely do this because we expect interactions with our view
models to happen on the main actor anyway. In modernized projects, you will have opted-in
to global actor isolation and your code will run on the main actor by default which would
make this a non-issue.
Since we decided to try things the hard way to understand some of the complexities of
migrating code, we can update the declaration of CastViewModel and CrewViewModel
to the following:
@Observable @MainActor
class CastViewModel {
// ...
}
// and...
@Observable @MainActor
class CrewViewModel {
// ...
}
With these changes in place, the project will still not compile but we’ll actually be able to see
what it’s like to migrate a module that has dependencies on other modules.
Xcode will show you an error in both the fetchCast and fetchCrew functions. Sketched
out, fetchCast looks something like this, with the compiler error shown as a comment
(the exact call inside the task is an assumption based on the networking code we'll see later):

func fetchCast() {
    Task {
        // Non-sendable type '[CastMember]' returned by implicitly
        // asynchronous call to nonisolated function cannot cross actor boundary
        cast = try await networking.loadCast(for: movieId)
    }
}
I’m only showing you the error for the fetchCast function because the fetchCrew func-
tion is identical in structure.
Xcode also shows a warning to suggest a (temporary) fix for this problem. The warning looks
like this:
Add ‘@preconcurrency’ to treat ‘Sendable’-related errors from module ‘Models’ as warn-
ings
This warning suggests that we suppress warnings about sendability or Swift 6 from modules
that have not yet been updated to the Swift 6 language mode.
So in this case it’s saying you can add the preconcurrency declaration to your import of
Models so that whenever you interact with Models, and there is a problem related to
sendability you will not be prevented from compiling your application.
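In code, that suggestion amounts to a single attribute on the import:

@preconcurrency import Models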
This is really useful when you’re migrating an application and you don’t own all of your
dependencies. This will allow you to perform the migration without having to wait for third-
party vendors. However in this case we do own all of the dependencies so it makes a lot of
sense to abandon the migration of our app target and instead start focusing on migrating our
dependencies.
As I mentioned earlier, Models is the place to start because it has no dependencies. Let's
take a look at what we need to do to make Models work with our app.
Turn off the Swift 6 language mode for the app target and turn on the Swift 6 language mode
for the models package by updating the Models target in the Package.swift file like this:
.target(
name: "Models",
swiftSettings: [.swiftLanguageMode(.v6)]
),
When you try and compile the models package now, you will get a single error on the mock
property of the Movie type. The error looks a little bit like this:
Static property ‘mock’ is not concurrency-safe because non-‘Sendable’ type ‘Movie’ may
have shared mutable state
This error tells us that our Movie is not Sendable. However, it is a struct and it only has im-
mutable state, so in theory the compiler should know that our Movie is supposed to be
Sendable. But because our Movie is a public struct, the compiler will no longer auto-
matically infer its sendability; public structs are never automatically Sendable. We always
have to declare the conformance by hand, so let's go ahead and update Movie to be
Sendable:
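As a sketch (Movie's stored properties and other conformances are assumptions):

public struct Movie: Codable, Sendable {
    public let id: Int
    public let title: String
}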
All I had to do was add the Sendable conformance to the end of my declaration. The project
compiles just fine, but we do know that the app target had an issue with our cast and crew
models being non-Sendable.
We should update those to be Sendable in the exact same way as we did for our Movie. I
won't show you the code here; I trust that you're able to figure that one out on your own.
With these changes in place, our models package now builds in the Swift 6 language mode.
The next package that I think we should update is our networking package since that only has
a dependency on models. Start by turning on the Swift 6 language mode for the networking
package by updating the Networking target in the Package.swift file like this:
.target(
name: "Networking",
swiftSettings: [.swiftLanguageMode(.v6)]
),
What’s quite interesting about the networking package is that even though we’ve just turned
on the Swift 6 language mode, there are no problems to resolve. There are issues that I know of
that we will need to tackle at some point but for now let’s just move on to the UI Components
module to see if that has any problems when we turn on the Swift 6 language mode.
By now I trust that you know how to update the Swift settings for the UIComponents package.
If you're not sure how to do it, take a look at how we did Networking and Models and I'm
sure you'll be able to find the correct line to update.
Just like the networking module the UI components module does not have any problems that
we need to resolve right now. That’s really nice! The next step for us is to move our app over
to strict concurrency checking so that we can actually continue building our project while we
resolve concurrency warnings.
Navigate to the project settings for the Chapter 12 project and look for the strict concurrency
checking settings. Make sure to set that to complete as shown in the image below.
When you build the project with strict concurrency checking set to complete you will find that
there are a couple of compiler warnings that we need to resolve. The first one that I want to
take a look at is in the CastViewModel file:
Sending ‘self.networking’ risks causing data races; this is an error in the Swift 6 language
mode Sending main actor-isolated ‘self.networking’ to nonisolated instance method
‘loadCast(for:)’ risks causing data races between nonisolated and main actor-isolated
uses
This warning is telling us that we’re sending our networking object from the main actor. which
is what our view model is isolated to, over to a non-isolated instance method loadCast. The
loadCast instance method is defined on networking and it’s an asynchronous function that
is not isolated to anything.
In other words we’re taking our main actor isolated networking object and we’re performing
non isolated work on it. There’s two ways that we could fix this.
One is to make the entire Networking class main actor isolated. This would put the networking
object on the main actor, and it should fix our problems. However, I don't particularly like that
solution.
I think we should make our networking object itself an actor. We can do this by updating the
Networking object as follows:
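A minimal sketch of that change; the cache property is based on the description below, and
the rest of the class body can stay as it was:

actor Networking {
    // mutable state like this cache is now protected by the actor
    private var cache: [URL: Data] = [:]

    // the existing loadCast(for:) and friends stay the same
}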
Note that if we had opted Networking in to global actor isolation, we wouldn’t have
seen this problem. Networking would have been isolated to the main actor which
means it would be sendable by default. A networking module is, in my opinion, a good
example of a module that should not opt in to main actor isolation by default due to the
fact that it might be decoding heavy payloads for example.
By making Networking an actor, we also gain some data race protection that we didn't have
before. The networking object has a cache object that we were reading and updating from
within our asynchronous functions. This could have resulted in data races.
By making Networking an actor, we ensure that the cache is protected from being accessed
from multiple asynchronous functions at the same time.
After making the Networking class into an actor, we only have a single warning left.
The following warning is shown in the MovieListViewModel file:
Passing closure as a ‘sending’ parameter risks causing data races between code in the
current task and concurrent execution of the closure
We’ve seen this before. It means that we’re capturing something that’s not sendable inside of
our task while also being able to access it outside of the task. Because we’re seeing this in
a view model, we can make the view model @MainActor annotated which should fix our
problem.
@Observable @MainActor
class MovieListViewModel {
// ...
}
Having our view model main actor isolated makes a lot of sense to me because we’re going to
be interacting with MovieListViewModel from the main actor most of the time anyway.
By making it @MainActor isolated, we know that we’re always going to be making state
changes on the correct thread and we won’t run into any warnings about accidentally manip-
ulating UI data on a background thread.
Now that all warnings are gone we can set our app’s language mode back to Swift 6. You’ll
notice that we get no new errors which means we successfully migrated the sample app from
Swift 5 to Swift 6.
In a larger project this process is going to be a lot more involved. You'll mainly want to look
out for logic changes that could be the result of migrating some piece of code over from being
completion handler based to being async.
Resolving certain warnings is also going to be a lot more complicated because you have way
more moving parts.
The process is always going to be the same.
Start by migrating your packages if you have them. Look for packages that have no depen-
dencies or as few dependencies as possible. Use @preconcurrency whenever it’s relevant
to import modules into your app without being slowed down by any warnings that these
modules produce.
Once you’ve migrated your packages you can migrate your app and hopefully by going step
by step you’ll be able to complete the refactor relatively quickly.
Migrating to Swift 6 should be done slowly, carefully and only when you feel like you have a
good grasp of everything that Swift Concurrency has to offer. If you’re just not comfortable
yet with Actors, Sendability or if you’ve attempted to migrate but you were overwhelmed by
all the work that you would have to do, that’s completely okay.
The Swift 5 language mode is not going to go anywhere anytime soon.
This means that you’re going to be able to use Swift 5 and Strict Concurrency Checking if you
want to for a really long time before you have to move on with Swift 6. The fact that you can
mix packages that are written with Swift 6 into a project that still uses the Swift 5 language
mode makes it so that you can actually start using Swift 6 without ever migrating your existing
code over. There might be some compatibility issues here and there. You’ll have to be on the
lookout for those but those can usually be resolved.
One final tip before we wrap up this chapter is that you should always read your warnings and
errors really carefully. They can be quite cryptic and it might take you a couple of reads to
actually get the information that you need in order to understand what is wrong.
I know that Apple will be working on updating these error messages in future Swift versions
so hopefully by the time you’re reading this the situation has improved a little bit and a book
update will be on its way soon.
That doesn’t change the fact that you should always read errors very carefully because it’ll
make your migration a lot smoother.
In Summary
This final chapter of Practical Swift Concurrency has provided you with the final tools that you
need to start your journey into Swift 6.
You already knew a lot about how Swift Concurrency worked, and now you also know how
Swift 5.10 and Swift 6 are related to each other and which path you can take to migrate your
code over. You've also learned that you don't have to migrate any time soon.
If you prefer to stay with Swift 5 language mode just a little bit longer you absolutely can.
There’s no plan from Apple to force you to update from the Swift 5 language mode to Swift 6
so if you don’t have the time to start your migration to Swift 6 just yet it doesn’t have to be
your top priority.
I think for most projects, opting in to Swift 6.2's Approachable Concurrency and main actor
isolation should be a first step. After that you'll want to turn on concurrency checks, and only
then should you move to the Swift 6 language mode. Swift 6.2 makes this much, much easier
than it was before.
With this final set of skills and examples our journey together has come to an end. But your
journey is just getting started! There’s still much to learn about Swift Concurrency, and there is
still lots of practice you can do before you consider yourself an expert at Swift Concurrency.
This book should have given you a good sense of the important parts of Swift Concurrency.
And you hopefully feel confident that you know enough about the topic to start using it today.
Or maybe you already started using it in some bits of your app while reading this book.
I would like to take this moment of your time to sincerely thank you for reading this book all
the way to the end. Writing this book took a lot of time and hard work and you just made it
worth every minute I spent on it. So thank you.
Cheers,
Donny