
Contents

Practical Swift Concurrency: Make the most of Concurrency in Swift 6.2 and beyond

Chapter overview
  Chapter 1 - Understanding concurrency in programming
  Chapter 2 - Looking at Asynchronous programming in Swift pre-Concurrency
  Chapter 3 - Awaiting your first async methods
  Chapter 4 - Understanding Swift Concurrency’s tasks
  Chapter 5 - Existing code and Swift Concurrency
  Chapter 6 - Preventing data races with Swift Concurrency
  Chapter 7 - Working with asynchronous sequences
  Chapter 8 - Async algorithms and Combine
  Chapter 9 - Performing and awaiting work in parallel
  Chapter 10 - Swift Concurrency and your unit tests
  Chapter 11 - Debugging and profiling your asynchronous code
  Chapter 12 - Adopting the Swift 6 language mode

Chapter 1 - Understanding concurrency in programming
  Understanding single-threaded code
  Understanding multi-threading
  Understanding thread safety
  In Summary

Chapter 2 - Looking at Asynchronous programming in Swift pre-Concurrency
  Exploring DispatchQueue
  Understanding what a DispatchGroup is
  Exploring semaphores and locks
  In Summary

Chapter 3 - Awaiting your first async methods
  Setting up the sample app
  Calling an asynchronous function
  Understanding how and when we’re allowed to await a function
  Understanding what happens when you write await
  Defining your own asynchronous functions
    Performing expensive processing in an async function
  Determining where your async functions run
    Defining different async functions
    Swift 6.1’s semantics explored
    Understanding where code will run in Swift 6.2
  Adding networking to MovieWatch
  In Summary

Chapter 4 - Understanding Swift Concurrency’s tasks
  Knowing how and when to create tasks
  Exploring different ways to create tasks
    Can we create too many tasks in Swift Concurrency?
  Understanding task priority
  Understanding a task’s lifecycle and capture semantics
  Tasks and error handling
  In Summary

Chapter 5 - Existing code and Swift Concurrency
  Moving from callbacks to async / await
    Making sure you use continuations correctly
  Mixing Combine and Swift Concurrency
    Calling async code from a Combine operator
    Bringing the results of a publisher into the Swift Concurrency world
    An introduction to async sequences
  Safely migrating towards Swift Concurrency
  In Summary

Chapter 6 - Preventing data races with Swift Concurrency
  Understanding what a data race is
  Using your first actor
    Understanding actor reentrancy
  Understanding Global Actors
  Protecting state with mutexes
  Understanding Sendability in Swift
    Enabling Sendability checks for the Swift compiler
    Sendability for value types
    Sendability and reference types
    Sendability, functions, and closures
    Using sending instead of @Sendable
  In Summary

Chapter 7 - Working with asynchronous sequences
  Exploring your first async sequences
  Producing your own async sequences with AsyncStream
    Understanding the basics of async stream
  Building an async stream based location provider
  Using async streams to listen for incoming websocket messages
  In Summary

Chapter 8 - Async algorithms and Combine
  Turning a Combine publisher into an AsyncSequence
  Calling async functions from within Combine operators
  Comparing Combine subscription and async iteration lifecycles
  Expanding your options with async-algorithms
  In Summary

Chapter 9 - Performing and awaiting work in parallel
  Creating child tasks with async let
  Using Task Groups to perform work in parallel
    Task groups and error handling
    Using the results from a Task Group
  Limiting the number of tasks that a Task Group performs in parallel
  Using Task Groups for tasks with varying types
  Understanding structured concurrency
  In Summary

Chapter 10 - Swift Concurrency and your unit tests
  Writing async test cases
  Testing async sequences
  Testing code that uses Swift Concurrency as an implementation detail
  In Summary

Chapter 11 - Debugging and profiling your asynchronous code
  Investigating task activity with Instruments
  Tracking actors with Instruments
  In Summary

Chapter 12 - Adopting the Swift 6 language mode
  Enabling the Swift 6 language mode in Xcode
  Using the Swift 6 language mode in SPM packages
  Migrating from Swift 5 to Swift 6
    Exploring the sample code
  In Summary


Practical Swift Concurrency: Make the most of Concurrency in Swift 6.2 and beyond
When Apple showed us Swift Concurrency at WWDC 2021, it didn’t take long for me to get
excited about it. What surprised me was that it didn’t take long for people to start messaging
me (jokingly) “Practical async / await, when?”. I’m sure people knew I would jump on
Concurrency sooner rather than later.
What I didn’t tell people at the time was that I registered the domain name for the book
pretty much straight away. Looking at my domain registrar’s info, I registered the domain
for Practical Swift Concurrency on June 9th 2021. That means it took me almost two years
to get this book finished. Of course, I didn’t start on the book immediately (too much was
still happening around Swift Concurrency initially) and I took a couple of breaks (becoming a
father seems like a good excuse to pause writing for a while) but still, it took me a while to
finish this book.
If it weren’t for some nudging from my friends, people in the community, and people
that seemed eager to get their hands on this book, I probably wouldn’t have finished it yet.
This book is very much written for everybody that’s interested in it. It’s for everybody
that wants to learn more about Swift Concurrency without any needless complication. It’s for
everybody that has been waiting for what I have to say about Swift Concurrency.
I’d like to extend a huge thank you to everybody that helped me write this book. First and
foremost I’d like to thank my wife, Dorien, for her endless patience with me bouncing around
between working, writing, traveling, and being a father. I’d like to thank Oliver, my son, for
being the best baby ever. I’d be lying if I said I didn’t write a couple of paragraphs here and
there with him sleeping in the carrying bag as I was rocking back and forth to make sure he
wouldn’t wake up.
And of course, a huge thank you goes out to the people that have helped me proofread and
improve the book’s contents and topics. Thomas, Adam, Brett, Arthur, Zgjim, Michiel, Tristan,
Tjeerd, and everybody else that I had conversations with about my book. I owe you all a lot.
Without your support this book wouldn’t have ended up as good as it has. I’m very happy
with the result, and I can only hope that you, the reader, will agree with me.
If you find any mistakes, errors, or inconsistencies in this book, don’t hesitate to send me an
email at feedback@donnywals.com. I’ve put a lot of care and attention into this book, but
I’m only human and I need your feedback to make it the best resource it can be. Also make
sure to reach out if you have questions that this book doesn’t answer even though you hoped
it would, so I can answer them directly and, if needed, update the book.
Cheers,
Donny


Chapter overview

Chapter 1 - Understanding concurrency in programming
In the first chapter of this book, we’ll establish a baseline understanding of what it means to
deal with concurrency in code. What does it mean to run code concurrently with other code?
What does it mean to run code in parallel? And what does asynchronous mean?
This first chapter is not intended to make you master everything related to concurrency. It’s
intended to help you establish a baseline that we’ll build on throughout this book.

Chapter 2 - Looking at Asynchronous programming in Swift pre-Concurrency
Even though Swift Concurrency deploys all the way back to iOS 13, I think it makes sense to
show you the mechanisms that were commonly used in Swift before async/await and other
Swift Concurrency features came along.
We’ll take a brief look at Grand Central Dispatch. GCD was the go-to concurrency utility for iOS
developers until Swift Concurrency came along. You won’t learn all the intricate details of
GCD; there simply are too many. Instead, we’ll focus on some of the problems GCD had, and
how Swift Concurrency solves them.

Chapter 3 - Awaiting your first async methods
This chapter’s purpose is to introduce you to Swift Concurrency. We’ll take a look at the basic
syntax of async/await, throwing async, and non-throwing async. You will learn about async
and sync contexts, and how you can get yourself into an async context.
We’ll also dig in and take a brief look at what happens when you await something; this ties
back into chapter one. You will also learn how you can write your own async functions, and
how these functions don’t have to suspend.
By the end of this chapter you will be able to start writing your very first async code, and you
will have a sense of confidence when doing this!

Chapter 4 - Understanding Swift Concurrency’s tasks
Tasks are the essential building blocks for concurrency in Swift. In this chapter, you will learn
everything you need to know about tasks, and how they relate to each other. By the end of
this chapter, you should have a good sense of how your code works when it spawns tasks, and
you will understand that:

• We don’t talk about threads in Swift Concurrency
• Tasks take on a lot of the role that threads used to have
• Swift Concurrency avoids thread explosion by never running more threads than there are CPU cores

Chapter 5 - Existing code and Swift Concurrency
This chapter acknowledges that developers won’t always work on a fresh codebase. We’ll
take a look at methods to bridge existing callback-based code into the world of async/await.
You will learn about continuations and how they are used to convert “regular” code to
async/await code.
You will also take a first look at bridging Combine code into async/await by taking code
that obtains a single value (like a network call) and awaiting the first emitted value. In the
next chapters you will get a proper introduction to AsyncSequence so you can iterate over
multiple emitted values.

Chapter 6 - Preventing data races with Swift Concurrency
A huge feature of Swift Concurrency is the existence of actors. In this chapter we’ll explore
actors in depth. You will learn how actors ensure that access to their state is free of data
races.
We’ll also look at global actors, and in particular the main actor. You will learn how the global
main actor works, and why it’s an important part of the Swift Concurrency system.
We will also explore the Sendable protocol as well as @Sendable closures. You will learn
what sendability is, and why it’s important. This chapter also explores how Swift Concurrency
is dedicated to making sure that our code is thread-safe at the compiler level by leveraging
concepts like isolation.

Chapter 7 - Working with asynchronous sequences
In this chapter, you will learn about Swift Concurrency’s AsyncSequence protocol and how
we can iterate over a collection that produces or obtains its data in an asynchronous fashion.
We will start by exploring a built-in mechanism that allows us to receive and process contents
of a URL line by line as soon as each line is fetched from the network.
After that, we’ll look at using an AsyncStream to build your own stream of values by building
a wrapper for a websocket connection.
By the end of this chapter you will have a solid understanding of how Swift Concurrency allows
us to iterate over an asynchronous collection, and how you can leverage built-in mechanisms
to build your own asynchronous sequences of values.


Chapter 8 - Async algorithms and Combine
In Chapter 7, we took a look at async sequences and how they can be used in Swift. There
are lots of parallels between Apple’s Combine framework and the things we can do with
Swift Concurrency. In this chapter, you will learn more about the similarities and differences
between async sequences and Combine.
We will also take a look at the async algorithms package that Apple has released as a Swift
package that we can optionally use in our projects. This package provides certain functionality
that we have in Combine but that’s not available for async sequences out of the box. With this
package we can come a lot closer to what Combine does.

Chapter 9 - Performing and awaiting work in parallel
Awaiting asynchronous work is great because it lets us do things like making a network call
while the user is scrolling through a list of content. In reality the true power of concurrency
isn’t to perform one task asynchronously, but to perform many tasks in parallel.
In this chapter you will learn about async let and task groups, and how these two tools
allow you to spawn tasks that all run in parallel as child tasks of another task. You will see how
we can leverage these tools to build a highly concurrent data fetcher that makes hundreds of
network calls in parallel.
We will also use the information in this chapter as a bridge into fully understanding structured
concurrency and its implications.


Chapter 10 - Swift Concurrency and your unit tests
In this chapter, you will take a look at XCTest and to what extent it supports testing
asynchronous code. You will learn how you can write asynchronous tests that await your
async functions as well as tests for code that uses async sequences.
By the end of this chapter you will know exactly how you can test your async code, and where
it makes sense to fall back to more old school testing methods.

Chapter 11 - Debugging and profiling your asynchronous code
In this chapter we focus entirely on Instruments. Knowing how you can leverage Instruments
to profile and debug your code is essential to maintaining good performance, and to improving
performance when things go wrong.
We’ll start this chapter by inspecting how Instruments visualizes our tasks. You will learn more
about the different states tasks can be in, and you will learn how you can see the structured
concurrency relationships between certain tasks in your code.
After that you will learn how you can gain insights into how your actors are scheduling work,
and how that scheduling impacts your tasks. We will work on a sample app that has a few
glaring performance issues, and you’ll fix these issues while verifying your work
through Instruments.


Chapter 12 - Adopting the Swift 6 language mode
Now that you’ve learned a ton about Swift Concurrency, how it works, what it does, and how
it’s used, it’s time to take a look at Swift 6.
In this chapter our main focus will be to understand what the Swift 6 rollout looks like for
Concurrency. We will take a look at how Xcode handles Swift Concurrency and the Swift 6
language mode in new projects, and how Swift 6 works in Swift Package Manager as well.
You will also learn about the ways in which you can make your Swift 6 code cooperate with
non-Swift 6 code, and we will look at different migration strategies that you might apply
when you want to enable the Swift 6 language mode in your projects.


Chapter 1 - Understanding concurrency in programming
When you write code, most of it will usually be fairly straightforward. Your code is executed
line by line, functions run one at a time, and it’s relatively easy to predict which paths in your
code will be executed when. It doesn’t matter whether you write Swift, JavaScript, Python,
or any other language. Most code you write will be non-concurrent, single-threaded code.
In this chapter, we’ll explore concurrency from a high level. You will learn everything you
need to know about concurrency to understand the contents of this book and the basics of
concurrency as a topic.
It’s important to understand that concurrency is a highly complex topic, and I want to avoid
writing a book that approaches concurrency as a theoretical challenge. Instead, you will
learn the bits and pieces that you need to effectively use, understand, write, and debug Swift
Concurrency code.
By the end of this chapter you will understand what concurrency and parallelism are, why
they’re useful, and when you might want to introduce concurrency in your code.

Understanding single-threaded code


In Swift, all of the code that interacts with your user interface, either by updating it or
responding to user input, runs on the main thread. In other words, it’s run in a non-concurrent
fashion, one line at a time, one function call at a time.
For example, let’s take a look at a code snippet for a macOS command line tool that defines a
few functions that are called in order, each awaiting input from the user:

func getFullName() {
    let givenName = getGivenName()
    let familyName = getFamilyName()
    print("\(givenName) \(familyName)")
}

func getGivenName() -> String {
    print("What's your given name?")
    return readLine() ?? "--"
}

func getFamilyName() -> String {
    print("What's your family name?")
    return readLine() ?? "--"
}

getFullName()

Regardless of whether you understand exactly what readLine() is or does, I’m sure you
can reason about what the execution of this code looks like.
Tip:
The sample code above is included in the code bundle for this chapter and it builds as a
command line tool. If you open the project and run it you will see each question printed
in the Xcode console. You can then type an answer right in the console and press enter to
resume the program.

First, getFullName() is called. In turn, that function calls getGivenName() which will
print some text to the standard output (usually Xcode’s console when you’re running your
app through Xcode) and then the function waits for input (that’s what readLine() does).
This user input is returned and then getFamilyName() is called. The getFamilyName()
function also prints text to the console and awaits user input again. This input is then returned
as well and we use the provided given name and family name to print the user’s full name to
the console.
Even though our code has to wait a while for the user to provide input, our code runs
synchronously. This means that while we’re waiting for input when readLine() is called,
nothing else is happening in our program.
It’s a single-threaded, non-concurrent program.
If you were to visualize this example in a diagram it would look a bit like this:


Figure 1: The flow of our program as a diagram

We can see that we move from block to block in a linear fashion. It’s relatively straightforward
to reason about what our program is doing, what it will do next, and what it did previously.
When we think about this example in complete isolation, you can imagine that our CPU is
doing one thing at a time. This means that our simple program only requires a single CPU
core to run, and that CPU core is entirely dedicated to our program. This in turn means that
while our program is running no other programs are doing background work. It also means
that if the CPU only had a single core our OS would be pretty much unresponsive while our
program is running.
This is obviously not how computers work because while any program is running we can have
other programs running at the same time, and our OS should always be responsive. A single
program taking over the entire CPU is undesirable and hard to imagine these days.
You might be thinking “Ah! So that’s why a CPU has multiple cores. That means it can do
multiple things at once” and you would be partially correct. However, CPUs didn’t have more
than a single core for a very long time. Until quite recently, affordable consumer CPUs most
certainly didn’t have the eight or ten cores they have today.
To allow a computer to run smoothly without programs taking over the entire CPU, we make
use of concurrency. A CPU can do multiple things concurrently. The way this works is that
the CPU will give each process, or application, some time to run before it switches to another
process until it eventually comes back to the original program that the CPU was running.
If we visualize what our actual CPU usage looks like in the simple example you saw earlier, it
would look a little like this:


Figure 2: The flow of our program in a concurrent environment

Notice how our app is still super predictable in terms of what it does and when. But in between
every block of our program, the CPU will briefly switch to doing some other task.
The ability to run multiple tasks in this fashion is called concurrency. We perform work on
multiple tasks concurrently. This doesn’t mean we perform this work at the same instant. It
just means that we’re making progress on multiple tasks over the same period of time.
Now, imagine that in one of these “Run other app” blocks an app is performing a very lengthy
task that can’t be interrupted. Operations that run in this manner where they must run from
start to finish without interruption are sometimes called atomic operations.
In other words, imagine that “Run other app” is performing a very slow atomic operation. This
means that the CPU would be stuck crunching numbers for this other app, and our app is now
frozen until the CPU finds time to perform some work for us again.
This is less than ideal, and to fix that we need to have multiple CPU cores, and multiple threads.


Understanding multi-threading
A thread in computing can be thought of as a single execution context that’s often used to
group related tasks or jobs together. For example, all of our application code would be grouped
into a single thread. Each of the other apps running on our system will have one or more
threads too. The OS will spawn several threads to deal with user input, UI rendering, and
performing background tasks too.
This is true regardless of the number of CPUs that are available. You can have more than one
thread of execution on a single CPU, and the CPU will run these threads concurrently. In other
words, the CPU will allow each thread to run for a little while and then schedule the next
thread, and so on.
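To make the idea of multiple threads concrete, here’s a minimal sketch (not part of the book’s sample code) that uses Foundation’s Thread class to run work alongside the main thread. The printed interleaving is not deterministic; the point is only that both loops make progress independently.

```swift
import Foundation

// A second thread of execution. Its closure runs concurrently with
// whatever the main thread does after calling start().
let worker = Thread {
    for i in 1...3 {
        print("background thread: \(i)")
    }
}
worker.start()

// Meanwhile, the main thread keeps doing its own work.
for i in 1...3 {
    print("main thread: \(i)")
}

// Give the worker a moment to finish before the program exits.
while !worker.isFinished {
    Thread.sleep(forTimeInterval: 0.01)
}
```

On a machine with multiple cores, the two loops may genuinely run in parallel; on a single core, the CPU alternates between them, exactly as described above.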
When we introduce multiple CPU cores, we can allow these cores to run multiple threads in
parallel. The difference between concurrency and parallelism is subtle but important. You
already know that concurrency means to work on multiple tasks at the same time. Working
on multiple tasks in parallel means that you’re not alternating between doing multiple tasks
like you are in concurrency, but you’re literally doing multiple things at once.
Let’s update our graph once more to see what it looks like if our program would run in its own
thread on a dedicated CPU core:

Figure 3: Our program on a dedicated CPU core

Notice how we have three different colored blocks, each color represents a thread that is run
on a CPU core. I’ve simplified the image a little bit compared to before because I’m sure that
you get the point without showing everything that our program does again.
You can see that our getFullName() function can run in parallel with an OS task, and even
with the system handling mouse movement. If the OS task were to run a very slow operation,
our program wouldn’t have to wait for it like it would if we only had concurrency on a single
CPU core. Now that the OS task’s thread runs on its own CPU core, our program’s thread can’t
be blocked by it.
In this example, our code is still single-threaded but the CPU is leveraging multiple cores
to run multiple threads in parallel. As an application grows, odds are that it might have to
perform a lengthy task that shouldn’t prevent your user from interacting with your app, and
that shouldn’t prevent your app from performing other work on behalf of the user.
A common example of using multi-threading in an application is performing network calls.
Let’s look at a diagram that’s intended to provide an overview of how a network call is made
in response to a button tap on iOS.

Figure 4: Running a network call in response to a button tap

Notice how the dark colored boxes represent our app’s UI thread. On Apple platforms, we
refer to the UI thread as the main thread. This thread is responsible for rendering UI and
handling user interactions like taps and swipes. When the user taps the screen, we call
fetchFeedData(). This method kicks off a URLSession data task.
If we performed this task on the main thread, we would be waiting for the response for a while,
and we wouldn’t be able to animate the loading spinner at the same time because the main
thread would be blocked while waiting for our data to be fetched.
By default, URLSession will run its network calls on a separate thread (the lighter boxes).
While that thread waits for a response from the network, receives it, and decodes the
received data, the main thread will animate our spinner. Eventually the data is fully decoded
and we can pass it back to the main thread and update the UI.
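In code, the flow in Figure 4 might look roughly like the sketch below. The URL and the updateUI(with:) function are placeholders made up for illustration; they’re not part of the book’s sample app.

```swift
import Foundation

// Placeholder for whatever actually renders the feed on screen.
func updateUI(with data: Data) {
    print("received \(data.count) bytes")
}

func fetchFeedData() {
    let url = URL(string: "https://example.com/feed.json")!

    // URLSession runs this work on one of its own background threads,
    // so the main thread stays free to animate the loading spinner.
    URLSession.shared.dataTask(with: url) { data, response, error in
        guard let data = data else { return }

        // Any decoding would also happen off the main thread here.
        // Hop back to the main thread before touching the UI.
        DispatchQueue.main.async {
            updateUI(with: data)
        }
    }.resume()
}
```

Calling fetchFeedData() from a button handler kicks off the request without ever blocking the main thread.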
This example clearly demonstrates how our main thread is free to do other things while a
different thread performs a lengthy operation. If we ran this example on a single-core
CPU, we would be running these two threads concurrently. Each thread would get some time
to do work before being paused so the next thread can do some work. Imagine our CPU
rapidly alternating between animating the spinner for the next UI draw cycle and receiving
and parsing the next packet of data from the server, until all the work is done.
If we run this example on modern hardware, we’ll get parallelism. Our application can leverage
multiple CPU cores on all of Apple’s platforms, which means that our CPU won’t have to
alternate between tasks. In theory, it can simply leverage one CPU core for each thread, which
is far more efficient.
Notice that I mentioned that this is the case “in theory”. In practice, our app won’t be the
only thing running at all times, especially in a desktop environment like the Mac. In that
case the main thread and a background thread might share a CPU core while other processes
leverage the other cores. This is a detail that we, as programmers, typically shouldn’t concern
ourselves with. We shouldn’t focus on which thread is run on which CPU core; the system
should handle this.
In this networking example, our setup was relatively simple and we probably would never
encounter issues. However, one last topic I want to cover before we move on to more Swift-
specific code in the next chapter is the topic of data races and thread safety.

Donny Wals 21
Practical Swift Concurrency

Understanding thread safety


When you’re dealing with multi-threading you open yourself up to a whole range of possible
issues. Threading-related bugs are particularly common when you’re dealing with data access
in your code.
Imagine that we’re building a utility that will scan files in a folder for a given keyword. Each
matching line for a file will be written to an array. We’ll use multithreading to process as many
files in parallel as possible. Here’s what that might look like in a simple diagram:

Figure 5: Processing files in parallel

In this diagram you can see that each file is being read on its own thread. The work that needs
to be done for each file is the same. Loop over the lines in the file, check if there’s a match
with our keyword, and write any matches to the matchesArray. On first sight, this looks
perfectly fine and incredibly efficient. We’re parsing three files in parallel and writing matches
as we find them.
Unfortunately, there’s a huge potential issue in this approach. Multiple threads might want to
write a match at the exact same time. If this happens, we’ll encounter what’s called a data
race (unless we protect ourselves against data races).
When a data race occurs, multiple threads attempt to access the same memory resource (our
matchesArray) at the same time where at least one of these accesses is a write operation.
This is a data race because readers of the data will get an inconsistent representation of our
matchesArray. Things could get worse when multiple threads try to write to this resource.
Data races can lead to memory corruption and can be notoriously hard to debug because
they’re not guaranteed to occur.
For example, if all threads involved in the diagram above ran concurrently on the same
CPU core, we’re probably fine; only one thread is active at the same time. If we’re running
in parallel, we’d have an issue when two threads access matchesArray at the exact same
time.
To fix this, we need to synchronize access to matchesArray. You will learn a little bit about
this in the next chapter, and you’ll learn about data races and access synchronization in-depth
in the chapter on actors.
For now, it’s important that you know data races exist, and why they can occur in a program
that leverages multi-threading.
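To make the fix concrete, here is a minimal sketch of the keyword-matching example with synchronized writes. The file-scanning work is faked, and funneling writes through a private serial queue is just one of several possible solutions (actors, covered later in the book, are another):

```swift
import Foundation

// Hypothetical stand-in for the file-scanning example: 100 "files" are
// processed in parallel, but every write to the shared matchesArray
// goes through a single serial queue, so no two writes can overlap.
let syncQueue = DispatchQueue(label: "matches.sync")
var matchesArray: [String] = []

DispatchQueue.concurrentPerform(iterations: 100) { index in
    // pretend this thread found a match in file number `index`
    let match = "match in file \(index)"
    syncQueue.sync {
        matchesArray.append(match)
    }
}

// All 100 writes made it in; without the serial queue this count could
// be wrong, or the program could crash with corrupted memory.
print(matchesArray.count)
```

Removing the `syncQueue.sync` wrapper turns this back into the racy version from the diagram; it may appear to work for many runs before it fails, which is exactly why data races are so hard to debug.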

In Summary
In this first chapter of the book, you’ve learned a lot about multi-threading fundamentals. You
should have a solid understanding of the differences between concurrency and parallelism,
and what threads are. You learned that multi-threading can be done on systems with one or
more CPU cores, and that parallelism is achieved by running two or more threads in parallel
on two or more CPU cores. When a single CPU core runs multiple threads, it runs the threads
concurrently; not in parallel.
You also learned that multi-threaded environments allow for a UI to remain responsive while
expensive and slow work is done in the background. After learning this, you learned that
multi-threading comes with the risk of data races because threads can run in parallel. As a
result we open ourselves up to multiple threads accessing the same memory address at the
exact same time. This isn’t a problem until one or more of these access operations attempts
to write to the memory address.
In the next chapter, you’ll learn about concurrency in a pre-Swift Concurrency world. If you
are already familiar with the basics of Grand Central Dispatch, feel free to jump right over to
chapter three. If you feel like you might need a bit of a refresher on GCD, please go ahead and
read the next chapter to refresh your mind. I won’t talk about GCD much in this book, but
having basic GCD knowledge will be incredibly helpful to understand some of the comparisons
made in this book.
Chapter 2 - Looking at Asynchronous programming in Swift pre-Concurrency
In order to have a good sense of how Swift Concurrency works, and how it changes the way
that we write our code in a major way, it’s important to understand where we’re coming from.
This is backed up by the fact that for the time being, most of us will be writing code in an
environment where we can’t 100% rely on Swift Concurrency alone. Sometimes that’s due
to the existing codebase you’re working on. Other times it could be because you’re using
existing Apple frameworks that haven’t been rewritten to fully leverage Swift concurrency.
Luckily, Swift Concurrency and the old way of doing things aren’t completely incompatible;
we can bridge from existing into Swift Concurrency if and when needed.
In this chapter, you won’t learn about this interoperability. Instead, I would like to show you
some of the current tools that we use when we write concurrency related code in Swift with
Grand Central Dispatch.
Of course this won’t be a complete introduction to Grand Central Dispatch, how it works,
and how we use it. Instead, it’s a really quick and brief introduction to concepts like
DispatchQueue, DispatchGroup, semaphores, and locks. Or to be more specific, this
chapter is intended to provide you with just enough context and information to allow you to
understand what I mean when I reference these concepts in later chapters.
Feel free to skip this chapter if you’re already familiar and comfortable with the tools that
I mentioned earlier. Otherwise, let’s dig into the most foundational part of concurrency in
Grand Central Dispatch, DispatchQueue.

Exploring DispatchQueue
In GCD (Grand Central Dispatch), we don’t interact with threads directly. Instead, we create
queues that we schedule work on. A dispatch queue will receive work items in the form of
closures, and the queue will schedule them to run as soon as possible. Often this might mean
that the work item is executed immediately, but other times it could mean that the work is
performed later depending on the queue’s configuration, and how busy the queue is.
For example, a dispatch queue can be configured to run work serially. In this case, the dispatch
queue will perform one work item at a time; in other words, the work is performed in a serial
manner. If we schedule multiple work items on such a queue, they will be executed in a
first-come-first-served manner.
You could also configure a queue to execute multiple work items concurrently. This would
allow us to schedule several work items at once, and have them all be performed at the
same time.
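As a quick sketch, a private serial queue (the label is arbitrary) demonstrates the first-come-first-served behavior:

```swift
import Dispatch

// Work items on a serial queue run one at a time, in FIFO order.
let serialQueue = DispatchQueue(label: "example.serial")
var order: [Int] = []

for i in 1...3 {
    serialQueue.async {
        order.append(i)
    }
}

// sync enqueues a final work item and waits for it to finish; because
// the queue is serial, items 1 through 3 have already run by then.
serialQueue.sync {
    print(order) // [1, 2, 3]
}
```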
By default, GCD provides us with two dispatch queues that we commonly use. The first (and
most important queue) is DispatchQueue.main. This queue is directly linked to our app’s
main thread which is responsible for rendering our UI. The main queue is a serial queue which
means that it only runs one body of work at a given time. This means that it’s extremely
important that we don’t schedule work that takes a long time to complete because that would
block our main thread from performing any other work.
The second queue is DispatchQueue.global(). This queue is a queue that can perform
multiple work items in parallel, and it’s commonly used to kick off expensive processes that
shouldn’t block the main queue. Since this queue is configured as a concurrent queue, it will
run many work items concurrently.
The actual work that’s scheduled by our queues is run on threads and threads are run by CPU
cores. When our global dispatch queue runs a work item, it will do this as soon as possible. If
no available threads exist, GCD might spin up a new thread for us to perform work on. These
threads are created without taking into consideration how many CPU cores we have. The
result is that we can have way more threads than CPU cores.
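For reference, you can ask the system how many cores it currently reports; the number of threads GCD creates can far exceed this:

```swift
import Foundation

// The number of CPU cores the scheduler can currently use. GCD can
// easily end up running more threads than this number.
let cores = ProcessInfo.processInfo.activeProcessorCount
print(cores)
```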
Whenever we have more threads running than we have CPU cores in our processor, the CPU
will have to switch between threads so that we can run these threads concurrently. This
process of thread switching is quite expensive and relatively slow, and as the name for this
phenomenon — thread explosion — suggests, that’s not good.
Sadly, when we’re working with dispatch queues, we can’t really prevent thread explosion
from happening. It’s part of how GCD was designed, and it’s something that we have to accept.
As you’ll learn in later chapters, this problem is no longer relevant in Swift Concurrency.
Whenever we want to run work on a dispatch queue, there are two ways for us to do this. We
can either use the sync method on DispatchQueue, or we can use the async method.
It’s important to understand that these two methods do not determine how the dispatch queue
will schedule the work. Instead, they determine whether the call site (where we schedule the
work from) will wait for the work item we passed to our queue to be completed or not.
Consider the following example:

func performSync() {
    print("sync: before")
    DispatchQueue.global().sync {
        print("sync: inside")
    }
    print("sync: after")
}

func performAsync() {
    print("async: before")
    DispatchQueue.global().async {
        print("async: inside")
    }
    print("async: after")
}

The first function in this example produces the following output:

sync: before
sync: inside
sync: after

Because we scheduled our work using sync, our performSync function will block until the
global dispatch queue has performed our work. Sometimes this is exactly what you want to
happen, but the way the second function works is what you’d more commonly expect. Let’s
see what output a call to performAsync produces.

async: before
async: after
async: inside

You can see that in the second situation, our function did not block until the work item was
executed. Instead, the work got scheduled and we didn’t wait for the outcome; the last print
statement was executed immediately.
Very often, this is the exact behavior that you’re looking for. You don’t want to block execution
of your code since it might take a while for your work item to be executed.

Tip: The code bundle for this chapter contains a sample app that allows you to run all of
the examples shown in this chapter. You can run the app on your iPhone, Mac, or iPad to
see the results for yourself.

At this point, I think you should have a decent enough understanding of dispatch queues and
how they work. Of course, there’s still a lot to learn if you’re truly interested in the nitty gritty
details of GCD, but since this is a book on Swift Concurrency, and my goal is to provide you
with just enough information to understand the world pre-concurrency, I’d like to move on to
the next tool in our GCD toolbox: DispatchGroup.

Understanding what a DispatchGroup is


In the previous section, you learned that dispatch queues perform work either serially or
in parallel, and that we can choose to wait for the work to be completed by scheduling our
work with sync, or for the work to run without blocking our current execution context by
scheduling work with async.
These tools are great when we want to schedule work to be run explicitly on, or away from
the main thread, but it’s much harder to use these tools to run a bunch of tasks in parallel and
wait for all work to be completed before doing something else.
In these situations, we can use a dispatch group to have various work items execute, and
to perform another work item once all of our work items are completed. A dispatch group
doesn’t actually execute work; it’s up to us to decide where and how all of our work runs. The
dispatch group only tracks the number of tasks that we’ve started, and the number of tasks
that we’ve completed. Once these numbers are the same, our final work item is executed.
Let’s look at an example.

func fetchWebsites() {
    let group = DispatchGroup() // 1
    var results = [Data]()
    let urls = [
        "https://practicalcoredata.com",
        "https://practicalcombine.com",
        "https://practicalswiftconcurrency.com"
    ].compactMap(URL.init)

    for url in urls {
        group.enter() // 3

        URLSession.shared.dataTask(with: url) { data, response, error in
            if let data = data {
                results.append(data)
            }
            group.leave() // 4
        }.resume()
    }

    // 2
    group.notify(queue: DispatchQueue.main) {
        print(results)
    }
}

The code above constructs several urls, kicks off a network call for each url, appends the
fetched data to an array, and once all network calls are complete the data that we’ve fetched
is printed.
Of course, this example on its own isn’t particularly useful; but the pattern of fetching data
from various sources before using this data is useful and not too uncommon.
Let’s look at the code in more detail by going through the numbered comments.

1. First, we create a new dispatch group. We don’t need to pass this group anything. The
group, as mentioned, will only track the number of times we start work, and the number
of times we complete work.


2. Second, we schedule a work item that will be executed on the specified queue once all
work in the group is completed. In this case, I’m printing the result of my network calls
on the main thread.
3. For each url that was created, we need to enter the dispatch group. This will increment
the counter for the number of in-flight pieces of work every time we enter the group.
4. Once the network call completes, either successfully or with an error, we leave the
dispatch group, which allows the dispatch group to increment the number of completed
tasks. Once the last task is completed, the work item that we’ve scheduled in our call to
notify will be executed.

Dispatch groups are a very useful tool, and as you’ll learn in this book — we can replace them
with a Swift Concurrency version. For now I would like to move on to a brief discussion on
semaphores and locks before we actually start digging into Swift Concurrency properly.
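As a small preview of that replacement, here is the same start-many-tasks-then-collect shape expressed with a task group (covered in depth later in the book). The network calls are swapped for trivial computations so the sketch stands on its own, and the semaphore at the bottom exists only to run this as a plain script:

```swift
import Foundation

// A non-network sketch of the fan-out/fan-in shape, using a task group:
// Swift Concurrency's counterpart to DispatchGroup.
func collectResults() async -> [Int] {
    await withTaskGroup(of: Int.self) { group in
        for value in [1, 2, 3] {
            group.addTask { value * 10 } // "enter": start a child task
        }
        var results: [Int] = []
        for await result in group { // "notify": collect as tasks finish
            results.append(result)
        }
        return results
    }
}

// Scaffolding to bridge into an async context from top-level script code.
let done = DispatchSemaphore(value: 0)
var collected: [Int] = []
Task {
    collected = await collectResults()
    done.signal()
}
done.wait()
print(collected.sorted()) // [10, 20, 30]
```

Note that, unlike the DispatchGroup version, there is no manual enter/leave bookkeeping to get wrong; the group tracks its child tasks for us.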

Exploring semaphores and locks


When you’re building apps that perform work concurrently, it’s very easy to run into situations
where two or more processes in your app require access to the same resource. For example,
multiple operations might be trying to read or write some data to a caching object, which
results in a potential data race. This, in turn, can lead to memory corruption and several
kinds of crashes, such as EXC_BAD_ACCESS.
If a resource, like a cache, can’t safely handle concurrent access we say that this object is not
thread-safe. In other words, we can’t safely access and pass this object across threads.
To fix this, there are several tools available to us. They all operate on the same principle
which is to ensure that only one process at a time can access or mutate a given property on an
object. This ensures that we don’t run into data races, and prevents memory from getting
corrupted.
One tool we could use is a DispatchQueue that we configure to run work items serially,
and we can use queue.sync to wrap access to some mutable state to ensure that we don’t
have multiple threads accessing a resource at a given time, without making access to that
state asynchronous.

Let’s take a quick look at what this looks like before we move on to locks and semaphores.

class SimpleCache<Key: Hashable, T> {
    private var cache: [Key: T] = [:]
    private let queue = DispatchQueue(label: "SimpleCache.\(UUID().uuidString)")

    func getValue(forKey key: Key) -> T? {
        return queue.sync {
            return cache[key]
        }
    }

    func setValue(_ value: T, forKey key: Key) {
        queue.sync {
            cache[key] = value
        }
    }
}

The code above will work fine, but it uses a dispatch queue which means that we’re (potentially)
creating threads whenever we want our queue to perform some work. This might lead to
thread explosion and we know from the previous section that we don’t want that.
More importantly, the section you’re reading right now is supposed to teach you about locks
and semaphores, so let’s switch gears and see how locks and semaphores relate to
synchronizing access to a resource like we did in the code snippet you just saw.
A semaphore is an object that can keep track of how much of a given resource is available, and
it can force code to wait for resources to become available before that code can proceed.
This description is pretty technical, so let’s use an analogy to clarify what a semaphore does.
Imagine that you’re going to a restaurant. There’s only a certain number of tables available
(let’s say ten), and there’s a waiter at the door that will either point you to your table, or they
will make you wait at the door for a table to become available.
Once the first dinner party comes in, the waiter will note down that instead of ten tables, there
are now nine tables available, and they’ll take you to your table. The next party comes in, the
number of available tables goes down, and the people are pointed towards their table. This
keeps happening until no more tables are available.
Once all tables are taken, the people that show up at the door must wait for somebody to
free up their table. When that happens, the waiter increments the number of available tables,
back to one. And if somebody is waiting, they will decrement the available table count back
to zero and point the people to their table.
In this example, the tables in the restaurant are the resource, and we have ten resources
available. The waiter is our semaphore that keeps track of how much of the resource is
available, and whether or not new guests will need to wait for a table to become available.
When we talk about semaphores, there are essentially two kinds of semaphores that we can
distinguish. One is a counting semaphore. The waiter example is an example of this. We have
n resources and the semaphore counts how many resources are taken and / or available.
A semaphore like this can be useful in certain situations, but we generally don’t use a counting
semaphore to prevent data races where we only want at most one thread to have access to a
given resource.
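The restaurant analogy maps directly onto DispatchSemaphore. Here is a sketch with two tables and ten diners (the numbers, and the lock used purely for bookkeeping, are illustrative):

```swift
import Foundation

// Ten diners, two tables: the semaphore admits at most two at a time.
let tables = DispatchSemaphore(value: 2)
let bookkeeping = NSLock()
var seated = 0
var peakSeated = 0

DispatchQueue.concurrentPerform(iterations: 10) { _ in
    tables.wait() // wait at the door for a free table

    bookkeeping.lock()
    seated += 1
    peakSeated = max(peakSeated, seated)
    bookkeeping.unlock()

    Thread.sleep(forTimeInterval: 0.005) // "dinner"

    bookkeeping.lock()
    seated -= 1
    bookkeeping.unlock()

    tables.signal() // free the table for the next party
}

// peakSeated can never exceed 2, no matter how the threads interleave.
print(peakSeated)
```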
To achieve the kind of exclusive access we’re after, we use a so-called binary semaphore, also
known as a lock. This kind of semaphore will only have one resource available at maximum.
A binary semaphore can, for this reason, ensure that we only access a resource in a serial
manner. Just like we did with the dispatch queue earlier.
Let’s take a quick look at what this looks like with a semaphore.

class SimpleCache2<Key: Hashable, T> {
    private var cache: [Key: T] = [:]
    private let semaphore = DispatchSemaphore(value: 1)

    func getValue(forKey key: Key) -> T? {
        semaphore.wait()
        let value = cache[key]
        semaphore.signal()
        return value
    }

    func setValue(_ value: T, forKey key: Key) {
        semaphore.wait()
        cache[key] = value
        semaphore.signal()
    }
}

Every time we call wait on the semaphore, the number of available resources either de-
creases, or we wait for a resource to become available for us. After we’ve accessed the cache
dictionary, the semaphore’s signal method is called to increment the number of available
resources so that whoever is waiting to access our cache dictionary can eventually gain
access.
When running this code you might see some warnings pop up about priority inversions, caused
by a thread with a priority (QoS) waiting for a thread that has no priority set. This is
fine; it doesn’t impact the point that we’re trying to make in this example. It mainly shows us
that a semaphore is maybe not the most straightforward way to achieve synchronized data
access.
Another way to achieve the same effect without the warnings is to use an object that was built
for this purpose: an NSLock.
The principles behind a lock are exactly the same as those for a binary semaphore so let’s
jump right into a code sample of how it’s used.

class SimpleCache3<Key: Hashable, T> {
    private var cache: [Key: T] = [:]
    private let lock = NSLock()

    func getValue(forKey key: Key) -> T? {
        lock.lock()
        let value = cache[key]
        lock.unlock()
        return value
    }

    func setValue(_ value: T, forKey key: Key) {
        lock.lock()
        cache[key] = value
        lock.unlock()
    }
}

Looks familiar, right? It’s exactly the same as how I used the semaphore, except it’s called a
lock and we lock and unlock instead of wait and signal.
There are other kinds of locks available to us, but for the purposes of understanding how and
when a lock is useful, it’s not really relevant for us to explore these options.

In Summary
In this chapter, you’ve learned about some of the fundamental tools and principles that
were available to developers pre Swift Concurrency. Knowing about these fundamentals is
important because you’ll sometimes encounter code that uses these principles, you might
have to refactor existing code based on GCD, or you might just stumble across something on
the internet that refers back to GCD.
More importantly, as I’ve mentioned, I will sometimes refer back to GCD in the book and
assume you know the basics of what was outlined in this chapter.
In this chapter, you’ve learned about DispatchQueues and how they are used. You’ve learned
about the main dispatch queue that’s used to run our UI code, and you’ve learned that you
should never block this queue. You’ve also learned that we have a global dispatch queue that
will run code in parallel, away from the main thread.
After dispatch queues you’ve learned a little bit about data races, and how we can solve them.
After seeing an initial dispatch queue based solution, you learned about semaphores, and
that we have counting and binary semaphores. You saw how a semaphore is used, and then
you saw how we can replace a binary semaphore with an NSLock.
Now that you have a basic understanding of concurrency, parallelism, and concurrency with
GCD, it’s finally time for you to start learning about Swift Concurrency and async / await. In
the next chapter, you will be taking your first steps into a whole new world where you’ll await
your first asynchronous method call!

Chapter 3 - Awaiting your first async methods
In the previous two chapters we’ve focused on setting the stage. You’ve learned about some
important terminology, and you’ve learned about the old way of doing things. In this chapter,
we will finally look at Swift Concurrency. You will take your first steps in a whole new world,
and if you’ve been working with completion handlers up until now, this chapter will help you
change the way you write and reason about your asynchronous code entirely.
By the end of this chapter you will have learned the following:

• How to call asynchronous functions in Swift Concurrency, and how to handle potential
errors
• What happens when you call an asynchronous function
• How to define an asynchronous function of your own

Before we dig into async / await, let’s make sure that you’re all set up with the sample app for
this chapter so you’re able to follow along with the code-along parts in this chapter.

Setting up the sample app


The sample app for this chapter doesn’t do much. In fact, it doesn’t do anything at all just yet.
In order to fix that, you will be adding code to the project to retrieve data from a remote server
and more. Okay, I lied a little in the previous sentence... we’re not going to use a remote
server. Instead, we’ll use a local server that’s used to serve movie data to the app. This local
server will use simple static JSON files so it’s really nothing too special but you will have to
run a local server to make this all work.
The code bundle for this book contains a dedicated folder for each chapter and a folder called
movies. Use your terminal to navigate to the movies folder. For example, if you have the
book in your computer’s home directory, you would navigate to
~/PracticalSwiftConcurrency/code/movies. In this folder, you can start your local server by
typing the
following command:

python3 -m http.server 8080

All Macs should come with Python pre-installed so you should have no issues running this
command. If you do run into issues because you don’t have python available, you can install
Python yourself via the official Python website or you can run a local server on port 8080
through any other means you have available. For example, if you have node and npm installed
you could make use of the http-server package.
When you have your local server running, you have a static file server in the current working
directory. In other words, it allows us to access URLs like http://127.0.0.1:8080/1.json
to access the json files in the movies directory.
With the server running, you’re all good to work on the sample app for this chapter and for
other chapters too. Whenever a chapter requires you to leverage our local server I will make
sure to remind you to start your local server.
Note that the local server will only be available when you’re running the sample code on your
Mac or if you’re running code on the iOS simulator. If you want to run the sample code on your
iOS devices you should make sure that:

• Your iPad/phone and Mac are on the same network.


• You replace all occurrences of 127.0.0.1 in the sample code with your Mac’s IP address.

The simplest way to find your Mac’s IP address is to open the System Settings app on your
Mac, navigate to the Network tab, and then click on your active network connection (usually
WiFi or Ethernet; it will have a green dot alongside it). This will show you your Mac’s local IP
address that allows you to connect your phone to your Mac as long as they’re both on the
same network. Note that your Mac’s IP address will be different on a different network and
some networks rotate IP addresses so if something’s not working, always double check that
you’re using the right IP address.
You can find the same instructions that are listed in this section in the README.md file for
the book’s code bundle.

Calling an asynchronous function


To call an asynchronous function in Swift, we don’t have to do much. Really, all we need to do
is call it like a normal function but prefix the call with await.
Let’s jump right in and look at an example of how we can await the result of a network call
in Swift concurrency.

let url = URL(string: "https://practicalswiftconcurrency.com")!

do {
    let (data, response) = try await URLSession.shared.data(from: url)
    let htmlBody = String(data: data, encoding: .utf8)!
    print(htmlBody)
} catch {
    print(error)
}

In just a few lines of code, we can fetch data from the network using URLSession. In case
you’re wondering what the same code would roughly look like without async / await, here’s a
small code sample for you:

let url = URL(string: "https://practicalswiftconcurrency.com")!

URLSession.shared.dataTask(with: url) { data, response, error in
    if let error = error {
        print(error)
        return
    }

    guard let data = data else {
        return
    }

    let htmlBody = String(data: data, encoding: .utf8)!

    print(htmlBody)
}.resume()

It doesn’t take much effort to argue that the async / await version of this code is a lot easier to
write, read, and reason about.
The async / await based version of the code reads a lot like synchronous code while still being
non-blocking for the duration of the network call.
Let’s talk a little bit about the syntax and semantics of awaiting a function.
We had to call the data(from:) method on URLSession as follows:

try await URLSession.shared.data(from: url)

First, we must use the try keyword when calling the data(from:) function because
it’s a throwing function. This means that if anything goes wrong with the network call,
data(from:) will throw an error. This is quite convenient because we don’t have to un-
pack a Result object like we commonly do when we’re calling asynchronous functions with
completion handlers.
After the try keyword, we write await. This keyword is mandatory when calling any asyn-
chronous function that’s marked with the async keyword. We’ll explore why that’s the case a
bit later in the chapter. For now, it’s important that you know that an async function should
always be called with an await.
It’s important that we always put try and await in the correct order. For example, you can’t
write await try. It’s either await myFunction() or try await myFunction().
If you do end up writing await try, Xcode will help you correct this which is quite nice.
We’ll take a close look at what happens when you await something in the next section, but
for now all you need to know is that our function is essentially “paused” (suspended) until
the function we’re awaiting is completed and then our initial function can resume.
The following image visualizes this idea of being paused until the work we were waiting on
has completed.

Figure 6: Awaiting an async method

Note that the someAsyncWork() function has a different color than the myFunction()
block. This is to indicate that awaiting the async function pauses execution of
myFunction(), and frees up the thread that myFunction() was running on. Once
someAsyncWork() completes, our original function can continue executing where it left off.
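The figure’s idea can be sketched in code. The function names mirror the diagram, Task.sleep stands in for real async work, and the semaphore at the bottom is only scaffolding to run the sketch as a plain script:

```swift
import Foundation

func someAsyncWork() async -> String {
    // Simulate slow work; Task.sleep suspends without blocking a thread.
    try? await Task.sleep(nanoseconds: 100_000_000)
    return "done"
}

func myFunction() async {
    // Execution suspends at the await; the thread myFunction was
    // running on is free to do other work until someAsyncWork() completes.
    let result = await someAsyncWork()
    print(result) // "done"
}

// Scaffolding so this runs from top-level script code; not part of the idea.
let finished = DispatchSemaphore(value: 0)
Task {
    await myFunction()
    finished.signal()
}
finished.wait()
```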
The explanation above paints a somewhat simplified picture of what it’s actually like to call
an async function. There are some rules and requirements that we must respect whenever
we call an asynchronous function.

Understanding how and when we’re allowed to await a function
While we always have to call an async function with the await keyword, we can’t just write
await someFunction() wherever we want. We can only call an async function from a
context that is already asynchronous.
The reason this limitation exists is that the Swift compiler has to compile asynchronous
code slightly differently from your regular non-asynchronous code. It does this so
that your code is compiled in a way that allows for suspending and resuming of a function.
There are several ways for you to create an asynchronous context.
For example, in SwiftUI we can run asynchronous code when a view appears by using the
task view modifier:

struct FetchingView: View {
    var body: some View {
        Text("A simple example view...")
            .task {
                let url = URL(string: "https://practicalswiftconcurrency.com")!
                do {
                    let (data, response) = try await URLSession.shared.data(from: url)
                    let htmlBody = String(data: data, encoding: .utf8)!
                    print(htmlBody)
                } catch {
                    print(error)
                }
            }
    }
}

The body of our task will be executed by SwiftUI using the same constraints and rules as
onAppear. Essentially this means that our task will be run whenever our view will be shown
to the user. If SwiftUI removes the view from the view hierarchy it will automatically mark the
task as cancelled.
A Task in Swift Concurrency is the main unit of concurrency that we reason about. We’ll take
a deep dive into tasks and how they work in the next chapter, so I won’t explain them too
deeply for now. Just know that a Task is your basic unit of concurrency, and that any async
work that you do in Swift Concurrency is part of a Task in one way or another.
While it’s great that SwiftUI has a convenient way to create a Task for us, we probably want to
go async outside of SwiftUI too. We can do this by creating our own Task object as follows:

Task {
    let url = URL(string: "https://practicalswiftconcurrency.com")!
    do {
        let (data, response) = try await URLSession.shared.data(from: url)
        let htmlBody = String(data: data, encoding: .utf8)!
        print(htmlBody)
    } catch {
        print(error)
    }
}

You can put this code wherever you want; for example, you could use it in a viewDidLoad
function if you're using UIKit:

func viewDidLoad() {
    Task {
        // ...
    }
}

Note that the body of your task is run asynchronously, so any code that you put after your
task will not wait for the work inside of your task to be completed. For example:

func viewDidLoad() {
    print("one")

    Task {
        // ...
        print("two")
    }

    print("three")
}

The code above will print the following:


one
three
two

That’s because the code that we write inside of the Task is scheduled to run as soon as
possible. This usually means that the function we were already in has to finish first, and after
that the contents of the Task can start running.
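If you do need the task's result, you can hold on to the Task and await its value from another asynchronous context; awaiting value suspends until the task's body has finished. A small sketch, with a made-up function name:

```swift
func runExample() async {
    print("one")

    let task = Task {
        print("two")
        return 42
    }

    // Awaiting `value` suspends runExample until the task completes,
    // so "two" is guaranteed to print before "three".
    let result = await task.value
    print("three, got \(result)")
}
```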
Earlier in this chapter, you saw the await keyword for the very first time. We know that we
need to write await in front of any calls to async functions, and we know that we can wait
for that function to complete in a non-blocking way, but let’s take a closer look at that so we
can understand why using await does not block whatever function you’re in.

Understanding what happens when you write await
One of the questions I get asked most whenever I explain async / await to people is “How do I
make sure this function call doesn’t block the main thread?”. And to answer that question, we
need to understand a lot of different things about concurrency in Swift. One of these things
is that we need to understand how Swift handles both our async functions and our await
calls.
Functions that are marked with async inform the Swift compiler that we’re dealing with
a function that might suspend. In other words, it tells the Swift compiler that this function
might, at some point, wait for something to complete before it can resume its execution.
When we call such a function with the await keyword, we create a suspension point for Swift.
Essentially, every await is a point at which Swift can suspend our function by taking the
current call stack, and put it aside. As a result, the thread that we were running on is freed up
to perform other work until our awaited function completes and our function (or task) can
resume.
Let’s explore this with a bit of a more visual guide.


Figure 7: Example of a synchronous call stack

In the image above, you can see the call stack for a synchronous function. As a function calls
other functions we build up a stack of function calls. Once a function completes, it’s popped
off of the stack and the function that came before that function continues running. Once that
function completes, it too is popped off of the stack and the function that called it resumes.
This continues until the outermost function is completed.
In a function that runs synchronously, once a function starts running, a thread must continue
running this function until it’s completed. When one of the functions in the call stack is
extremely slow, that means that the thread is stuck running that function until the slow work
is completed.
An asynchronous function however, is slightly different. Where a regular function must be
executed in one go, building up and unwinding its call stack uninterrupted, an asynchronous
function does not have this limitation. It has several predefined places (where we write await)
where it’s possible for Swift to take the call stack for a function and put it aside for a while.
Let’s see what this looks like in an image.

Figure 8: Example of an asynchronous call stack


The image above shows that we call a function and that function has an await in its body.
The system then takes the call stack for that function call, and it puts it aside. Once this has
happened, we can run other work. Notice how the other work is run on a different thread as
indicated by the background color of these blocks. At the point of calling our async function
the original call stack is moved aside as indicated by its background becoming transparent.
Once the work by processFile(_:) is completed, the original call stack is restored and
our code continues to run. In this image, the original function continues running on the thread
we started off on. This is not guaranteed to happen though. If the system determines that
it’s safe (and allowed) for another thread to resume a function then the system will place the
relevant call stack on an available thread.
Because of how a suspended task has its call stack put aside, using an await does not block
a thread. Instead, it does the exact opposite; it allows the system to take your function and
temporarily set it aside until it can be resumed. This frees up the original thread to do other
work until our original function can continue running.
Note that when an awaited function completes, all of the work that is awaited within that
function must also have completed. This might sound obvious to you, but it’s actually a
specific feature of Swift Concurrency that has a name. It’s called structured concurrency
and we’ll learn more about it in Chapter 9 - Performing and awaiting work in parallel.
Now that you know more about the await part in async / await, let’s dig into the async
part a bit more.

Defining your own asynchronous functions
When we talk about await, we must also talk about async. You already know what happens
when you await a function call, and in passing I mentioned that we must call functions that
are annotated with async from an asynchronous context (like a Task) using the await
keyword.
So what does an async function look like? And what makes an async function asyn-
chronous?


That’s what you’ll learn in this section.


In its simplest form, we can define an asynchronous function as follows:

func doSomethingAsync() async {
    // ... perform some work
}

What we’ve defined here is a function that will be run asynchronously. It doesn’t do anything
if you would literally copy this into a project, but it would do so asynchronously.
An async function does not always run in parallel with other functions or in a non-blocking
fashion if it is somehow tied to an actor. You will learn more about what this means, and how
it works in Chapter 6 - Preventing data races with Swift Concurrency. So for now, let’s just
assume that our function will run asynchronously.

Determining where an async function will run can be a bit of a puzzle depending on the
Swift version you’re using. Throughout the rest of this section, you will learn about the
default behavior that Swift 6.2 and earlier have for async functions. In the next section,
you will learn more about the different configuration options that are available in Swift
6.2 and how they can change the way your code runs.

Note that in a plain Swift 6.2 program that has none of the newer build settings enabled, it
doesn’t matter where we call our asynchronous function from. What matters is whether or
not the function itself is somehow tied to an actor. You’ll learn more about isolating functions
to actors in Chapter 6 - Preventing data races with Swift Concurrency but it’s important
that I mention this now so that you more or less understand when an async function runs
asynchronously.
Generally speaking, you’ll only mark a function as async if:

• Your function has to await other async functions


• Your function performs a costly operation that might take a while to complete

In the first case, your function would end up looking a little bit as follows:


func doSomethingAsync() async {
    await doSomethingElse()
    await doMoreWork()
}

A more sensible example would be making a network call that you want to await the results
of. We can do this as follows:

func loadData() async {
    do {
        let url = URL(string: "https://practicalswiftconcurrency.com")!
        let (data, response) = try await URLSession.shared.data(from: url)
        let decoder = JSONDecoder()
        let model = try decoder.decode(Model.self, from: data)
        // use model...
    } catch {
        // something went wrong, handle the error
    }
}

This example is a lot more useful than the previous one. We load data and decode the loaded
data into a model. Since the network call can fail by throwing an error, we must use a do {}
catch {} block to handle any thrown errors.
Sometimes, you might not want to handle the error immediately and instead have callers of
your function handle the errors. In this case that means that instead of handling errors inside
of the loadData function we expect callers of loadData to handle any thrown errors.
If we want to do this, we can add a throws to our method declaration as follows:

func loadData() async throws {
    let url = URL(string: "https://practicalswiftconcurrency.com")!
    let (data, response) = try await URLSession.shared.data(from: url)
    let decoder = JSONDecoder()
    let model = try decoder.decode(Model.self, from: data)

    // use model...
}

If we want to write a throwing asynchronous function, we must mark the method as async
throws. The async keyword must always come before the throws keyword. Similarly, if
we call a throwing asynchronous function we must write try await. Writing await try
would be a compiler error, but you already knew that of course.
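To make this concrete, here's a sketch of a caller of an async throws function; the stand-in loadData and error type are placeholders for a real implementation:

```swift
struct LoadingError: Error {}

// A stand-in for the throwing async loadData from above.
func loadData() async throws {
    throw LoadingError()
}

func refresh() async {
    do {
        // try await, never await try.
        try await loadData()
    } catch {
        // Errors thrown inside loadData surface at the call site.
        print("Loading failed: \(error)")
    }
}
```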

Performing expensive processing in an async function


If your function doesn’t call other async functions but instead performs heavy processing it’s
wise to take a moment and consider whether you need to perform your work atomically, or if
it’s okay for the work to be interrupted intermittently.
An operation that must run atomically is an operation that cannot be interrupted while it's
active. Common examples of operations that you usually want to perform atomically are
writing to a file on disk, mutating a dictionary, or writing to a database.
Consider the following scenario. You might be analyzing some data in a for loop where
it would be okay for small pauses to exist in every loop iteration. If you don’t absolutely
have to perform all of your work in one go, you can tell Swift that it’s okay to interleave your
asynchronous function with other work; this would allow your function to temporarily give
up its thread to allow other tasks to make some progress before your loop continues.
It’s important to consider this because if you don’t there’s a chance that you end up claiming
one or more threads for a long time, making it harder for other tasks to perform work and
make progress.
In a situation where you feel like it makes sense for your function to temporarily free up the
thread by suspending you can yield your task. This would look as follows:

func veryLongLoop() async {
    for item in veryLongList {
        // process the item

        await Task.yield()
    }
}

Adding calls to Task.yield in strategic places allows for your code to run in a highly con-
current manner while you make sure that you don’t claim a thread for longer than needed.
Adding a call to Task.yield does not mean that your function will always yield and be
suspended. If no other work needs to be performed, your loop would continue as if you never
yielded. If other work does need to be performed, your loop would be suspended until the
system ends your yield.
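Because yielding on every single iteration adds a little overhead, a common variation is to yield periodically. This sketch yields once every 1,000 iterations; the batch size is arbitrary and should be tuned for your workload:

```swift
func processItems(_ items: [Int]) async {
    for (index, item) in items.enumerated() {
        // ... process the item ...
        _ = item

        // Offer a suspension point every 1,000 iterations so other
        // tasks can make progress without yielding constantly.
        if index.isMultiple(of: 1_000) {
            await Task.yield()
        }
    }
}
```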
While it’s important that you’re aware of Task.yield and how it allows you to optimize
your code for maximum concurrency, it’s not a construct you’ll have to add to your code often.
In fact, I would argue that Task.yield is quite obscure and will mostly be used by people
writing code that interacts with low-level components that don’t have support for concurrency
yet.
Note that just like any other await, calling Task.yield() makes it so that there’s a chance
that your loop continues on a different thread than the thread you were on before your yield.
This might sound scary to folks familiar with thread safety and data races but it’s really not
that scary. As you’ll learn in later chapters, Swift does a really good job of helping us write
code that is safe, even when crossing thread boundaries.

Determining where your async functions run
When you write async code, you typically want that code to run concurrently with other code.
If all of your code runs serially, then there might not be a point in writing async functions at
all because you wouldn’t be leveraging any concurrency features.
In this section, I’d like to explore some of Swift 6.2’s settings and features that impact the way
your code runs. I won’t be digging into them too deeply because to paint the full picture we
need to understand more about topics that I cover in later chapters. However, I think it makes
sense to start establishing knowledge about the way concurrency works now.


In Swift Concurrency, a function can either be isolated or nonisolated. In more practical
terms, this means that a function either runs on a specific actor, or it runs on the global
executor without belonging to any actor at all. A very common practical example of this is
that functions can either be isolated to the main actor, or to no actor at all. Yes, it's true that
functions can be isolated to other global actors and actor instances too, but I think for now we
should keep things simple, so let's take a simplified view and explore the differences between
being isolated to the main actor and not being isolated at all.
We’ll talk about actor isolation and global actors in Chapter 6 - Preventing data races with
Swift Concurrency.
To paint the full picture of how things work, let's look at a handful of function definitions first.
I'll explain how these functions would behave in Swift 6.1 and earlier, and then I'll introduce
two of Swift 6.2's new features that change the way your code runs (when you opt in to these
changes).

Defining different async functions

class MovieRepository {
    @MainActor
    func loadMovies() async throws -> [Movie] {
        // ...
    }

    @MainActor
    func makeRequest() -> URLRequest {
        // ...
    }

    func perform<T: Decodable>(_ request: URLRequest) async throws -> T {
        // ...
    }

    func decode<T: Decodable>(_ data: Data) -> T {
        // ...
    }
}

The code above contains four different kinds of functions. There are two functions that are
isolated to the main actor through an explicit @MainActor annotation. One is async, the
other isn’t.
We also have two nonisolated functions. These functions are just written as plain func-
tions, and because MovieRepository isn’t an actor or annotated with @MainActor, we
consider both perform and decode to be nonisolated.
Let’s take a look at where these functions will run using Swift 6.1’s semantics.

Swift 6.1’s semantics explored


When you’re used to code that doesn’t use Swift Concurrency, you know that every function
you call will always run on the dispatch queue that you called it from:

func testFunction() {
    print(Thread.current)
}

DispatchQueue.main.async {
    // testFunction will run on main
    testFunction()
}

DispatchQueue.global().async {
    // testFunction will run on a background thread
    testFunction()
}

In Swift 6.1 things work a bit differently for functions that are isolated to an actor and for
nonisolated async functions. Let’s go through the functions we defined earlier one by
one to explain what they are, and how they run when they get called from different places.


@MainActor
func loadMovies() async throws -> [Movie] {
    // This function will _always_ run on the main actor
}

The first function we’ll look at is an isolated async function. It’s isolated to the main actor
which means that when this function is called it will run on the main actor. It’s also an async
function which means that the function can suspend when calling other async functions. It
doesn’t matter where we call this function from. It’s isolated to the main actor so it runs on
the main actor.
Next, let’s look at an isolated non-async function:

@MainActor
func makeRequest() -> URLRequest {
    // This function will _always_ run on the main actor
}

Similarly to the previous function, this one is isolated to the main actor which means it’s
always going to run on the main actor. No matter where it’s called from. The function is not
async which means that it cannot suspend to call any async functions.
Next up, let’s talk about a nonisolated async function:

func perform<T: Decodable>(_ request: URLRequest) async throws -> T {
    // This function will never run on the main actor in Swift 6.1
}

In Swift 6.1, a nonisolated async function will always run on the global executor, no matter
where it’s called from. In terms of running on the main actor or not running on the main actor,
this means that perform will never run on the main actor. It doesn’t matter whether you
call this function from a main actor isolated function or from another nonisolated function. A
nonisolated async function in Swift 6.1 never runs on main.
Now, let’s look at a nonisolated non-async function:


func decode<T: Decodable>(_ data: Data) -> T {
    // This function will run on the actor/isolation context it was called from
}

A nonisolated non-async function is really just a normal function that you would have written
prior to adopting Swift Concurrency. The nice thing is that these functions will behave exactly
like you’re used to. Call it from the main actor, it will run on the main actor. Call it from
elsewhere, it will run where you called it from.
That said, it does introduce some confusion around how nonisolated behaves in Swift.
Depending on whether a function is async, in Swift 6.1 and earlier, nonisolated can mean
“will inherit the actor that we called it from” or “will never run on an actor no matter where it
was called from”. That’s why Swift 6.2 contains some changes to where your code runs, and
how you can control where code runs.
Time to explore default actor isolation and actor inheritance in Swift 6.2.

Understanding where code will run in Swift 6.2


If you open an existing project in Xcode 26, nothing will change in terms of how your code
runs. Swift 6.1’s rules will still apply to your code, and what you just learned still works.
That said, there are two interesting features in Swift 6.2 that we should explore to understand
where async functions will run.
First, let’s talk about default actor isolation.

Default Actor Isolation

Generally speaking, less concurrency in your app will make your app more stable and your
code easier to work on. It's rare for apps to perform tons of work that truly benefits from
being async throughout. That's why, in Swift 6.2, the new default for all of your code is to run
on the main actor unless you specifically opt out of running on the main actor. This might
sound scary at first, but in reality it means that most of your code will behave the same as it
did without Swift Concurrency. Try to remember how frequently you explicitly decided to run
code on a global dispatch queue versus just running your code "wherever". For most apps,
"wherever" most frequently would have been the main dispatch queue.
By making running on the main actor the new default, you would basically change the follow-
ing code:

@MainActor class MovieRepository {
    // ...
}

To look like this:

class MovieRepository {
    // ...
}

You simply remove your @MainActor annotations because in Swift 6.2 you will receive these
annotations by default.
When you create a new Xcode project in Xcode 26, global isolation is set to be the MainActor
by default. You can look for the “Default actor isolation” build setting to turn it on or off for
your project.
To leverage global isolation in a Swift Package, you can add the following configuration:

swiftSettings: [
    .defaultIsolation(MainActor.self)
]

This will isolate all code in your package to the main actor by default (unless you explicitly
opt out for a type or function). Because the setting applies per package, you can mix different
isolation defaults throughout your project. Your app should benefit from being main actor by
default in most cases, but if your project contains a package that does loads of heavy processing
it might make sense to use a nonisolated default for that package instead. To do so, set
the defaultIsolation setting to nil, which makes nonisolated your default isolation.
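As a sketch, the opt-out version of that package manifest setting could look like this:

```swift
// Package.swift target settings for a package that should default
// to nonisolated instead of the main actor:
swiftSettings: [
    .defaultIsolation(nil)
]
```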


In addition to this "main actor by default" mode, Swift 6.2 comes with a feature that changes
the behavior of nonisolated async functions to be more in line with how nonisolated
non-async functions work.

Understanding nonisolated(nonsending) and @concurrent

Don't be alarmed by the two pretty complex-sounding keywords in this section's title. You
really don't have to understand exactly what nonisolated(nonsending) means to understand
the impact it has on your code. And luckily, @concurrent more or less does what you might
expect it to do.
One of the issues that you frequently run into with Swift 6.1 and older is that you’re introduc-
ing concurrency through nonisolated async functions even when you didn’t explicitly
intend to do so. The consequence of this is that when you pass data from one nonisolated
async function to another with the Swift 6 language mode or strict concurrency turned on,
you must make sure that whatever you’re passing around is sendable. You’ll learn more about
sendability later, so for now just know that sendable objects are safe to pass from one isolation
context to another.
Furthermore, the behavior of isolated functions is consistent between async and sync func-
tions. For a nonisolated function, the behavior for a non-async function is different than
an async function.
In Swift 6.2 this means that the following code won’t behave the same as it did in Swift 6.1:

nonisolated func perform<T: Decodable>(_ request: URLRequest) async throws -> T {
    // This function will never run on the main actor in Swift 6.1
    // This function will run on the caller's actor (if any) in Swift 6.2
}

Our function is now marked nonisolated explicitly because it would otherwise receive
an implicit @MainActor annotation when we use global isolation. And with the new
nonisolated(nonsending) feature, the function will inherit the caller's isolation context.
This means that the function will run on the main actor when it's called from the main actor,
or elsewhere if that's where it was called from. This makes the behavior for nonisolated
async consistent with nonisolated and non-async.
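Assuming a project with Approachable Concurrency (and therefore nonisolated(nonsending)) enabled, this sketch illustrates the inheritance; the function names are made up:

```swift
// With nonisolated(nonsending), a nonisolated async function runs
// on whatever actor it was called from.
nonisolated func inheritsCallerIsolation() async {
    // Called from the main actor below, so this assertion passes
    // because the function inherited its caller's isolation.
    MainActor.assertIsolated()
}

@MainActor
func callFromMainActor() async {
    await inheritsCallerIsolation()
}
```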


This feature is automatically turned on when you create a new project in Xcode 26 through
the “Approachable Concurrency” setting. For existing projects you can opt-in by setting the
“Approachable Concurrency” build setting to “YES”.
Or if you’re using SPM, you must add the following feature flag:

swiftSettings: [
    .enableExperimentalFeature("NonisolatedNonsendingByDefault")
]

To gain the same features as approachable concurrency would get you, you should also enable
a second feature:

swiftSettings: [
    .enableExperimentalFeature("NonisolatedNonsendingByDefault"),
    .enableUpcomingFeature("InferIsolatedConformances")
]

While inheriting a caller’s isolation makes it much easier to write code that doesn’t require
everything to be sendable, there are times where you want to make 100% sure that a function
does not inherit its caller’s isolation. To do this, you can mark a nonisolated function as
@concurrent:

@concurrent
nonisolated func perform<T: Decodable>(_ request: URLRequest) async throws -> T {
    // This function will never run on the main actor in Swift 6.2
    // (it will run on the global executor)
}

Note that only nonisolated async functions can be marked with @concurrent. For
example, the following is not allowed:


@MainActor @concurrent func perform<T: Decodable>(_ request: URLRequest) async throws -> T

However, a function that isn’t explicitly isolated can be marked @concurrent and it will be
nonisolated automatically:

@MainActor class MovieRepository {
    @concurrent
    func perform<T: Decodable>(_ request: URLRequest) async throws -> T {
        // This function will be nonisolated implicitly because it's
        // @concurrent
    }
}

So while nonisolated in Swift 6.2 will inherit the caller's isolation if you opt in to
nonisolated(nonsending) by default, an @concurrent function will not:

@MainActor class MovieRepository {
    @concurrent
    func perform<T: Decodable>(_ request: URLRequest) async throws -> T {
        // This function will _never_ run on the main actor
    }

    nonisolated func fetch<T: Decodable>(_ request: URLRequest) async throws -> T {
        // This function will inherit the caller's isolation
        // for example, it runs on the main actor when called from the main actor
    }
}

To summarize: when you want code to just run wherever while also being able to
call other async functions, you write a nonisolated async function. If you want code to
never run on the main actor, you make it nonisolated @concurrent so it always runs
on the global executor.
Using @concurrent should not be your “default”. You should think carefully about intro-
ducing concurrency because while it might sound like a good idea to run as little work on the
main actor as possible, context switching isn’t free, and writing good concurrent code is hard.
You need far less concurrency in your app than you might initially think.
Before we move on, here’s a list of every way that a function can be declared now, and where
that function will (or might) run. Given the following constraints:

• Default actor isolation is enabled


• nonisolated(nonsending) is turned on through Approachable Concurrency

class MovieRepository {
    func loadMovies() async throws -> [Movie] {
        // runs on the main actor due to implicit main actor isolation
    }

    func makeRequest() -> URLRequest {
        // runs on the main actor due to implicit main actor isolation
    }

    nonisolated func perform<T: Decodable>(_ request: URLRequest) async throws -> T {
        // runs on the caller's actor (if any) due to
        // nonisolated(nonsending)
    }

    nonisolated func decode<T: Decodable>(_ data: Data) -> T {
        // runs on the caller's actor (if any) due to
        // nonisolated(nonsending)
    }

    @concurrent func performConcurrently<T: Decodable>(_ request: URLRequest) async throws -> T {
        // runs on the global executor due to @concurrent
    }

    @concurrent func decodeConcurrently<T: Decodable>(_ data: Data) -> T {
        // runs on the global executor due to @concurrent
    }
}

This list shows where every function will run, and it shows that with Swift 6.2, default actor
isolation, and nonisolated(nonsending), behavior between async and non-async
functions is consistent again.
For the rest of the book, I will assume that you're using Swift 6.2 with default actor isolation
enabled, but without nonisolated(nonsending), since these are opt-in features. Here's
how that changes the list of functions you just saw:

class MovieRepository {
    func loadMovies() async throws -> [Movie] {
        // runs on the main actor due to implicit main actor isolation
    }

    func makeRequest() -> URLRequest {
        // runs on the main actor due to implicit main actor isolation
    }

    nonisolated func perform<T: Decodable>(_ request: URLRequest) async throws -> T {
        // runs on the global executor since it's a nonisolated async
        // function without nonisolated(nonsending) enabled
    }

    nonisolated func decode<T: Decodable>(_ data: Data) -> T {
        // runs on the caller's actor because it's not an async function
    }

    @concurrent func performConcurrently<T: Decodable>(_ request: URLRequest) async throws -> T {
        // runs on the global executor due to @concurrent
    }

    @concurrent func decodeConcurrently<T: Decodable>(_ data: Data) -> T {
        // runs on the global executor due to @concurrent
    }
}

I understand this might be a little confusing and it will take time to develop an intuition for
where things run. The sample code for this chapter contains a lot of examples that show
where different functions will run depending on project settings. I highly recommend that you
explore these samples to get an understanding of different ways your code can be written,
and what the impact of Swift 6.2’s settings is on where your code will run.
Let’s put everything you’ve learned so far to the test by adding some network calls to this
chapter’s sample app!

Adding networking to MovieWatch


Now that we know some of the theory behind async and await, it's time to put our
knowledge into practice.
Go ahead and open the MovieWatch.xcodeproj project in this chapter’s code bundle.
You’ll find that the app doesn’t do much when you compile and run it right now. That makes
sense because the entire networking portion hasn’t been implemented yet. To fix this, we’ll
start by adding some code to MovieDataSource.swift. It’s currently an empty class so
let’s change that.
The app needs to be able to make three network calls in total:

• Fetch a certain page of movies


• Fetch crew for a movie
• Fetch cast for a movie

Start by defining the following three async methods:


class MovieDataSource {
    func fetchMovies(_ page: Int) async throws -> [Movie] {

    }

    func fetchCastMembers(for movieID: Int) async throws -> [CastMember] {

    }

    func fetchCrewMembers(for movieID: Int) async throws -> [CrewMember] {

    }
}

Each method will follow the same pattern which is to fetch data, decode the data into the
expected model object, and then return the decoded array of models.
Here’s what the implementation for fetchMovies(_:) should look like:

func fetchMovies(_ page: Int) async throws -> [Movie] {
    let url = URL(string: "http://127.0.0.1:8080/\(page).json")!
    let (data, response) = try await URLSession.shared.data(from: url)

    let movies = try JSONDecoder().decode([Movie].self, from: data)

    return movies
}

Notice how we can fit all of this logic in just four lines of code. It's really quite cool. If anything
goes wrong with either our network call or the JSON decoding, an error will be thrown from
fetchMovies, and the caller of this method can handle the error as needed (and possibly
retry the network call).
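For example, a caller could catch errors thrown by fetchMovies(_:) and retry once. The retry policy, stand-in error, and simplified return type here are illustrative:

```swift
struct FetchError: Error {}

// A stand-in for the data source method from the text.
func fetchMovies(_ page: Int) async throws -> [String] {
    throw FetchError()
}

func loadFirstPage() async -> [String] {
    for attempt in 1...2 {
        do {
            return try await fetchMovies(1)
        } catch {
            print("Attempt \(attempt) failed: \(error)")
        }
    }
    // Give up after the final attempt and fall back to an empty list.
    return []
}
```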
Before looking at the implementation for the other two methods, try writing them yourself.
The URLs that you should use are:


• http://127.0.0.1:8080/cast-\(movieID).json
• http://127.0.0.1:8080/crew-\(movieID).json

Once you’re done implementing these methods, the solution is available below:

func fetchCastMembers(for movieID: Int) async throws -> [CastMember] {
    let url = URL(string: "http://127.0.0.1:8080/cast-\(movieID).json")!
    let (data, response) = try await URLSession.shared.data(from: url)

    let castMembers = try JSONDecoder().decode([CastMember].self, from: data)
    return castMembers
}

func fetchCrewMembers(for movieID: Int) async throws -> [CrewMember] {
    let url = URL(string: "http://127.0.0.1:8080/crew-\(movieID).json")!
    let (data, response) = try await URLSession.shared.data(from: url)

    let crewMembers = try JSONDecoder().decode([CrewMember].self, from: data)
    return crewMembers
}

As I mentioned earlier, the implementation for all three methods is very similar.
Note that since we've enabled default actor isolation, these functions all run
on the main actor. Calling await from the main actor to suspend our functions is fine; an
await is not blocking. That said, decoding JSON might take a while. With moderately small
responses JSON decoding is super fast, and you won't have any issues. But if you fetch a JSON
body that's many megabytes large, decoding is going to cost time. Calling decode on the
main actor is potentially an expensive operation.
To avoid paying a decoding cost on the main actor, you have several options. The simplest
would be to make a change to how the MovieDataSource class is declared. We can opt out
of global actor isolation for this type by marking it as nonisolated as follows:

nonisolated class MovieDataSource

This removes the inferred main actor annotation from MovieDataSource and makes it
so all members of MovieDataSource are nonisolated too. However, if you’re using the
default settings for a new project in Xcode 26, “Approachable Concurrency” is turned on, which
means your nonisolated async functions run on the caller’s actor. This means that calling your
functions from the main actor would result in them running there. To avoid this, you can mark
the functions as @concurrent so they always run on a background thread:

@concurrent
func fetchMovies(_ page: Int) async throws -> [Movie] {
    let url = URL(string: "http://127.0.0.1:8080/\(page).json")!
    let (data, _) = try await URLSession.shared.data(from: url)

    let movies = try JSONDecoder().decode([Movie].self, from: data)
    return movies
}

Note that @concurrent can be used to offload work from the main actor even if you don’t
mark the data source as nonisolated, which is what I’ve done in the sample project for this
chapter.
Now that these methods are defined, we can call them from the relevant places in our code.
Start by opening PopularMoviesViewModel.swift. This is where we’ll add the code
that calls fetchMovies(_:) to retrieve the pages of popular movies.
The pattern that we’ll follow is to have our SwiftUI view call a regular synchronous method.
Inside of that method, we’ll start a new Task to call our async method to retrieve movies. After
that, we’ll update our @Published property movies with the newly retrieved movies.
We’ll start by implementing fetchPage(_:) and then we’ll implement the other two meth-
ods.
We really only need to change a single line in fetchPage(_:) to call our data source’s
fetchMovies(_:) method instead of returning an empty array:


private func fetchPage(_ page: Int) async throws -> [Movie] {
    guard isLoading == false else { return [] }

    isLoading = true
    defer { isLoading = false }

    // this used to return an empty array
    return try await movieDataSource.fetchMovies(page)
}

Next, let’s go ahead and implement fetchMovies(). Based on what you’ve seen already,
you might have some idea of how to do that. The key is to start a new Task and call the
fetchPage(_:) method from inside of your task:

func fetchMovies() {
currentPage = 1

Task {
do {
let fetchedMovies = try await fetchPage(currentPage)
movies = fetchedMovies
} catch {
// handle the error in some way
}
}
}

After adding this code, you can go ahead and run the project. You should see a list of movies
appear in the app.
To handle our error, there are several options available. One option is to just ignore any errors
thrown like we do now. We could also add an @Published var error: Error? to our
PopularMoviesViewModel and have our SwiftUI view present an alert with a relevant
message to the user when the error is set to a non-nil value. It’s really up to you and the
requirements of your app to decide what to do with certain errors. Just know you can catch
errors like you would normally and decide what to do from there.
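To sketch that second option, the view model could expose the error as published state. Note that the fetchError name and this exact wiring are my own illustration, not code from the sample project; currentPage and fetchPage(_:) are the members defined earlier in this chapter:

```swift
@MainActor
class PopularMoviesViewModel: ObservableObject {
    @Published var movies = [Movie]()
    // Hypothetical property for surfacing failures to the view:
    @Published var fetchError: Error?

    func fetchMovies() {
        currentPage = 1

        Task {
            do {
                movies = try await fetchPage(currentPage)
            } catch {
                // The view can observe this and present an alert
                // whenever it becomes non-nil.
                fetchError = error
            }
        }
    }
}
```

A SwiftUI view could then drive an alert off of `fetchError != nil` and reset the property once the alert is dismissed.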


The fetchNextpage() method in our PopularMoviesViewModel should follow a
fairly similar pattern to what we did before; try to fetch movies and append them to the
current list of movies. Here’s what the implementation should look like:

func fetchNextpage() {
currentPage += 1

Task {
do {
let fetchedMovies = try await self.fetchPage(currentPage)
await MainActor.run {
movies += fetchedMovies
}
} catch {
// handle the error in some way
}
}
}

Note that a Task in Swift Concurrency has an interesting quirk. The closure that we pass to
a Task can be a throwing closure which means that we don’t have to catch errors that are
thrown from within the closure.
Usually, this is quite frustrating because the compiler doesn’t inform us when we accidentally
forget to handle errors. But in this case, we might choose to ignore errors anyway which
means that it’s kind of nice that we’re allowed to write the following instead:

func fetchNextpage() {
    currentPage += 1

    Task {
        let fetchedMovies = try await self.fetchPage(currentPage)
        await MainActor.run {
            movies += fetchedMovies
        }
    }
}

The errors thrown by us are now essentially ignored because the Task doesn’t require us
to handle the errors. In essence, we let our task throw an error but since we’re not explicitly
waiting for any results from our Task object, we don’t have to handle the error. Again, usually
this is not what we want and it’s quite frustrating that Task makes it so easy to not handle
errors.
In the code snippet above it’s kind of nice that we don’t have to deal with errors since we’ve
decided we’re ignoring errors anyway. At the same time I would probably argue that having
an empty catch is better than this because it’s easy to overlook that errors could occur here,
and an empty catch would make the decision to ignore errors more explicit.
Doing that would look as follows:

func fetchNextpage() {
currentPage += 1

Task {
do {
let fetchedMovies = try await self.fetchPage(currentPage)
await MainActor.run {
movies += fetchedMovies
}
} catch {
// we're ignoring the error.
}
}
}

Now that we have our main page done, let’s add some more network calls to fetch cast and
crew for a movie’s detail page.
Open CastList.swift and look at the code that’s already there:


struct CastList: View {
    @Environment(\.movie) var movie

    @State var cast = [CastMember]()

    @Environment(\.movieDataSource) var movieDataSource

    var body: some View {
        let _ = Self._printChanges()

        VStack {
            ForEach(cast, id: \.uniqueId) { castMember in
                PersonCell(person: castMember)
            }
        }
    }
}

You can see that this view has an @State property for the list of cast members, and that
the view has access to our movieDataSource object. This means that we can directly use
the movie data source in our view to update the list of cast members with the task view
modifier:

var body: some View {
    let _ = Self._printChanges()

    VStack {
        ForEach(cast, id: \.uniqueId) { castMember in
            PersonCell(person: castMember)
        }
    }.task {
        do {
            cast = try await movieDataSource.fetchCastMembers(for: movie.id)
        } catch {
            cast = []
        }
    }
}

Notice that I have to use a do { } catch { } here because unlike the Task initializer,
the task view modifier does not take a throwing closure. All errors thrown by our async work
must be handled inside of our closure. In this case, I’ve decided to ignore errors but you could
leverage a second @State property to keep track of any errors that might have occurred and
present an alert or render text if needed.
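That alternative could look roughly like this; the fetchError property and the message text are mine, not part of the sample app:

```swift
@State var cast = [CastMember]()
@State var fetchError: Error?

var body: some View {
    VStack {
        if fetchError != nil {
            Text("Something went wrong while loading the cast.")
        }

        ForEach(cast, id: \.uniqueId) { castMember in
            PersonCell(person: castMember)
        }
    }.task {
        do {
            cast = try await movieDataSource.fetchCastMembers(for: movie.id)
        } catch {
            // Keep the error around so the view can render a message.
            fetchError = error
        }
    }
}
```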
Next, go to CrewList.swift and apply the same pattern you just saw there to fetch the
movie’s list of crew members:

var body: some View {
    VStack {
        ForEach(crew, id: \.uniqueId) { crewMember in
            PersonCell(person: crewMember)
        }
    }.task {
        do {
            crew = try await movieDataSource.fetchCrewMembers(for: movie.id)
        } catch {
            crew = []
        }
    }
}

With this code in place, you can run the MovieWatch app and see that all screens are now
populated with data. Use the segmented control on the movie detail page to activate the crew
or cast tabs and notice how the data for each section appears nicely.
This is great! You’ve just completed your first super-simple networking feature built with async
/ await. Feels good, right?


In Summary
In this chapter you’ve made your first steps with Swift’s async / await features. You learned
how you can call asynchronous functions with the await keyword and when you’re allowed
to do so. You learned that you can only call asynchronous functions from an asynchronous
context like a function that is already async or from a new Task.
After establishing some of the calling basics you learned what happens when you await a
function and why using await does not block your current thread. After that, we moved on
to defining asynchronous functions.
As the final part of this chapter, you’ve written your very first basic networking layer for
this book’s MovieWatch sample application. You saw that you can get a lot of work done
with relatively little code, and that the code you write with Swift Concurrency is much more
straightforward than code you write with callbacks.
In the next chapter of this book, you will learn more about Swift’s Task objects that we can
use to jump from a synchronous context to an asynchronous context.


Chapter 4: Understanding Swift Concurrency’s tasks
Knowing how you can define and call asynchronous functions is a great start to learning Swift
Concurrency. However, Swift Concurrency is a lot bigger than just async / await. Behind
these two keywords the Swift team has built a whole new concurrency system that uses all
kinds of new building blocks that we use to build and structure our asynchronous code.
Before Swift Concurrency we modeled our code using DispatchQueue objects. A queue
will run bodies of code as we add them to the queue, and we can configure queues with
different kinds of priorities as well as configure whether a queue should run work in parallel
or serially.
With Swift Concurrency, we don’t use DispatchQueue anymore. Instead, we make use of
tasks to kick off and manage our asynchronous work.
In this chapter, you will learn about the following:

• How to create and configure Task objects
• The lifecycle of a Task

By the end of this chapter you will have a strong sense of how tasks fit in Swift’s Concurrency
system. It’s important to bear in mind that this chapter will not teach you everything there
is to know about tasks. For example, I won’t go into actor inheritance and child tasks very
deeply. These topics will be covered in their respective chapters. Covering them now wouldn’t
make too much sense since we’re still at the start of your journey into working with Swift
Concurrency, and you’ve already had to digest loads of information as-is.

Knowing how and when to create tasks


I’ve mentioned that tasks are Swift Concurrency’s main unit of concurrency. This means that
every piece of asynchronous work that you perform in Swift will run as part of a task.
Sometimes you will have created this task explicitly, and other times you might run as part of
a task that you can reasonably expect to have been created elsewhere.


The main reason for you to create your own tasks is when you wish to execute a piece of
asynchronous code from a place that otherwise doesn’t support concurrency. An example of
this is kicking off asynchronous code from within a UIViewController in its viewDidLoad
function. Or maybe you want to call an asynchronous function from a SwiftUI Button’s
action handler.
A common sign that tells you that you need to go async is when Xcode shows you the following
compiler error:

'async' call in a function that does not support concurrency

This tells you that you’re currently in a non-async context that doesn’t support being
suspended. Or in other words, the place we’re calling our function from isn’t part of a Task.
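For example, calling an async function directly from viewDidLoad produces exactly this error, and wrapping the call in a Task resolves it. In this sketch, fetchUserInfo is a placeholder standing in for any async function:

```swift
import UIKit

class ProfileViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()

        // This line would not compile:
        // let userInfo = try await fetchUserInfo()
        // error: 'async' call in a function that does not support concurrency

        // Wrapping the call in a Task provides the async context we need:
        Task {
            let userInfo = try await fetchUserInfo()
            print(userInfo)
        }
    }

    // Placeholder for real asynchronous work:
    func fetchUserInfo() async throws -> String {
        return "user info"
    }
}
```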
Another reason to create a new task is when you want to run a body of work concurrently with
other work. Every Task you create will begin running immediately after it’s created, and it
will run concurrently with other tasks.
Knowing exactly when it makes sense to create an extra Task to make a piece of code run
concurrently with other pieces of code can be a pretty complex decision to make. For example,
when you’re processing a relatively large amount of data you’ll want to make sure you perform
this operation efficiently.
Initially, you might think of Task as a perfect tool to split all the work out into pieces of
work that run concurrently, and that might actually work perfectly fine. On the other hand
introducing concurrency into a part of your codebase that doesn’t need concurrency increases
complexity without offering a significant benefit.
Sometimes the benefit of running lots of work concurrently is quite evident. For example, if
you’re writing code to fetch data from many different URLs and you want this operation to be
done as fast as possible, you can create a task for each request you’ll make so you’re making
requests in parallel.
On the other hand it’s less likely for you to gain benefits from asynchronously mapping over
an array of model objects when all you’re doing is transforming them into a different domain
model. This kind of operation will generally be quite fast even when you do it all on the main
thread. And if you do want to perform your work away from the main thread it’s usually more
desirable to create an async function that’s nonisolated (and @concurrent depending
on your project settings) and to perform your mapping in there. You’d map one object at a
time, but without blocking the main thread.
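A sketch of that approach could look as follows; APIModel and DomainModel are hypothetical types standing in for your own models:

```swift
struct APIModel { let name: String }
struct DomainModel { let displayName: String }

// nonisolated opts this function out of any inferred actor isolation, and
// @concurrent (relevant when "Approachable Concurrency" is enabled) makes
// sure it runs on a background thread rather than on the caller's actor.
@concurrent
nonisolated func makeDomainModels(from responses: [APIModel]) async -> [DomainModel] {
    // One transform at a time, no extra tasks, and the main thread stays free.
    responses.map { DomainModel(displayName: $0.name) }
}
```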
The bottom line here is that you should always try to stay away from solving performance
problems that you don’t have. Especially when you’re trying to solve your non-existent
problem with concurrency. If you’re unsure whether or not a piece of work is causing you
performance issues, you’ll want to run your code with Instruments and use the Time Profiler
instrument to find out which code is causing you trouble.

Exploring different ways to create tasks


In the previous chapter you’ve already seen that we can create a new Task object as follows:

Task {
let userInfo = try await fetchUserInfo()
}

This way of creating tasks is the recommended, and most commonly used way to create and
kick off asynchronous work in Swift Concurrency. You already know that creating a task as
shown above will run work concurrently with other tasks that are active.
A task created in this manner will be scheduled immediately but it’s important to know that
the task might not run immediately. This is fully dependent on where you create the task.
For example, if you create a new task from a main actor annotated function as shown below:

@MainActor func startATask() {
    print("Pre Task")
    Task {
        print("In Task")
    }
    print("After Task")
}

This code would start a new task that runs on the main actor but since the main actor is already
running our function, the printed output of this function would be:


Pre Task
After Task
In Task

It’s possible to create a task that does start running immediately as follows:

@MainActor func startATask() {
    print("Pre Task")
    Task.immediate {
        print("In Task")
    }
    print("After Task")
}

By using Task.immediate you ensure that the task starts execution immediately instead
of as soon as possible. In this case, that means that our task starts running on the main actor
immediately, interrupting execution of startATask entirely. Our new task will run until it
runs into a suspension point (an await or yield() call) or until the task completes. If our
task suspends, the main actor will continue running startATask until our newly created
task is resumed.
So while immediate forces our newly created task to start running immediately, it doesn’t
mean that our task must finish before the function that created it can finish.
My general recommendation is to just use Task and only reach for immediate when you
find that it would truly benefit your situation.
We call tasks that are created as shown above unstructured tasks. As the name suggests, an
unstructured task does not participate in structured concurrency. For now, you don’t need to
worry about what structured concurrency is; we’ll cover this in depth in Chapter 9 - Performing
and awaiting work in parallel.
Note that an unstructured task is not a child task of the context you created the task from.
Unstructured tasks inherit certain bits and pieces from the context that they are created in:

• Actor isolation
• Task local values


I won’t go into details on these two topics right now. For now, it’s important that you know
that an unstructured task inherits the two bits of context I just mentioned from the place that
the task is created from. In practical terms this means that a task that is started from a context
that runs on the main actor will mean that the newly created unstructured task will also be
running on the main actor.
Creating new tasks should be a rare occurrence in most of your code, especially when you’re
already in a function that’s async.
There are scenarios where you might want to make sure you don’t inherit anything from the
context you start out from. For example, you might be creating a new task from your main
actor (which runs code on the main thread) while you want to make sure that your task runs
its body anywhere except the main thread. In this kind of situation, a detached task might be
the tool you’re looking for.
Generally speaking it won’t be common for you to run into situations where you must use
a detached task. Offloading work from the main thread can be done by correctly defining
your async functions as nonisolated (and @concurrent) and spawning new tasks for
parallelism is less practical than using tools like a task group which we’ll cover later.
We can create a detached task using the Task.detached method:

Task.detached {
let userInfo = try await fetchUserInfo()
}

While a task that’s created with the plain Task initializer inherits the actor and task local values
from the context it was created in, Task.detached inherits neither of those attributes from
its context.
Like I mentioned earlier, this means that a Task that’s created from within a function or task
that runs on the main actor will, itself, run on the main actor. It inherits the actor from the
context it was created in. A detached task that’s created from within that same function will
not run on the main actor. It does not inherit the actor from the context it was created in.
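The difference is easy to see with a task-local value; the Trace type below is my own example, not something from the sample project:

```swift
enum Trace {
    @TaskLocal static var id: String?
}

func demonstrateInheritance() {
    Trace.$id.withValue("abc-123") {
        Task {
            // An unstructured Task copies task-local values
            // (and actor isolation) from its creation context.
            print(Trace.id ?? "none") // prints "abc-123"
        }

        Task.detached {
            // A detached task inherits neither.
            print(Trace.id ?? "none") // prints "none"
        }
    }
}
```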
Earlier, I mentioned that detached tasks should not be needed often, and you might be
wondering why you wouldn’t want to detach your tasks from the main actor.


The example you saw earlier is a perfect example of a detached task that probably should not
have been detached in the first place. Regardless of where the task runs exactly, and which
context it was created in, we’re not blocking any threads with the work in this task.
In other words, if we would use an unstructured task instead of a detached task to await
fetchUserInfo, the result would be the exact same.
When you look at the contents of the task, you will see that the task immediately suspends to
await the call to fetchUserInfo.
Unless fetchUserInfo was explicitly annotated or written to run on the main actor (or any
other actor), that method is considered nonisolated. And if you’re not using
nonisolated(nonsending) by default, all nonisolated async methods will run on a background
thread. If you are using nonisolated(nonsending) by default, annotating your function
with @concurrent ensures that your function never runs on the main actor.
And since an await is not a blocking operation but a suspension point, this means that if we
have an unstructured task that’s running on the main thread, an await inside of that task
does not block that thread. Instead it frees up the thread to do other work while our task is
suspended and awaiting the result of our method call. Of course, the method you’re calling
might block the main actor, but that’s not the point. The point here is that the await itself is
never blocking.
You will learn more about this reasoning in Chapter 6 - Preventing data races with Swift
Concurrency which is where we’ll take a deep dive into actors and isolation.
Your key takeaways from this section should be the following:

• Depending on your compiler settings and how your async function is defined, your async
function might already be running on a background thread.
• Awaiting an async function creates a suspension point; this allows the thread that was
running your task to make progress on other tasks while you’re suspended.
• In Swift Concurrency, an async function does not always run on the thread it was called
from. The actual run destination depends on the function definition and your compiler
settings.
A third way to create a new task that is quite common in SwiftUI apps is to use the task view
modifier. In the previous chapter you’ve leveraged this view modifier to kick off the fetching
of crew and cast members in the MovieWatch sample app. The task view modifier creates
a new task whenever your SwiftUI view is about to appear for the first time, very much like the
onAppear view modifier. The main reason to favor task over creating a new Task instance
yourself in onAppear is that the task created in the task view modifier is automatically
cancelled whenever your SwiftUI view disappears. This means that there’s less cleanup for
you to do, and it’s harder to make mistakes.
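To sketch why that’s convenient, here’s what managing the task by hand with onAppear would look like; loadData here is a placeholder for the actual fetching work:

```swift
import SwiftUI

struct MoviesScreen: View {
    @State private var loadTask: Task<Void, Never>?

    var body: some View {
        Text("Movies")
            // Doing it manually: create the task on appear and
            // remember to cancel it on disappear yourself.
            .onAppear { loadTask = Task { await loadData() } }
            .onDisappear { loadTask?.cancel() }

            // The equivalent with the task view modifier,
            // where cancellation happens automatically:
            // .task { await loadData() }
    }

    func loadData() async { /* fetch data and update state here */ }
}
```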

Can we create too many tasks in Swift Concurrency?


The short answer to this is no. We should, generally speaking, not worry about the number of
tasks we create. The Swift team itself says that it’s absolutely safe to create large numbers of
tasks and they can all run concurrently. And by large, think in the order of magnitude of tens
of thousands of tasks.
On the flip side, Swift Concurrency ensures that there’s only a certain number of threads
spawned in our application. It does this to prevent thread explosion. In theory, this means that
tasks that are actively blocking their thread without suspending can be hugely problematic.
On a device with six cores, having six tasks blocking at once would mean that our entire
application is now waiting for threads to become available.
In practice, it’s highly unlikely that you’ll be writing code that will perform long-running and
never-suspending work on all cores. One of the key rules is that we should always make
forward progress in our tasks and when we know that we’re going to be performing work that
takes a very long time to complete we should leverage the tools mentioned in this section to
allow our task to be suspended every once in a while.
One way to demonstrate this is to run a sample app that spawns a number of tasks and then
blocks them immediately for a few seconds. In the sample project for this chapter, you’ll find
the following code:

class TaskRunner {
    static func run(tasks: Int) {
        for i in 0..<tasks {
            spawnTaskAndSleep(for: 3)
            print("Spawned task for \(i)")
        }
    }

    static func spawnTaskAndSleep(for seconds: Int) {
        Task {
            let taskId = UUID()
            print("Task \(taskId) started at \(Date())")
            sleep(UInt32(seconds))
            print("Task \(taskId) ended at \(Date())")
        }
    }
}

When you call the run method that’s defined on TaskRunner with a nice high number like
10 and run this sample on the iOS simulator you’ll find that it produces an output that looks a
bit as follows:

Task 8466FD0C-1B86-48EC-9991-56C9E4182308 started at 2022-11-24 19:35:48 +0000
Task 8466FD0C-1B86-48EC-9991-56C9E4182308 ended at 2022-11-24 19:35:51 +0000
Task 8296A72C-4001-4EFF-A48F-5E110F1A3050 started at 2022-11-24 19:35:51 +0000
Task 8296A72C-4001-4EFF-A48F-5E110F1A3050 ended at 2022-11-24 19:35:54 +0000
Task 623DCA9C-71F0-4B97-90EA-733226067F86 started at 2022-11-24 19:35:54 +0000
Task 623DCA9C-71F0-4B97-90EA-733226067F86 ended at 2022-11-24 19:35:57 +0000
Task 696E43AD-F49F-4455-A6BE-FD9C883BF302 started at 2022-11-24 19:35:57 +0000
Task 696E43AD-F49F-4455-A6BE-FD9C883BF302 ended at 2022-11-24 19:36:00 +0000
// ...more output...

As you can see, every task starts and ends before the next task can start. In other words, we’re
only running one task at a time. This is due to a limitation in the number of threads that are
available on the simulator.
When you run the example’s macOS target you’ll find that you see a different output; you
should see that the console prints information about multiple tasks in parallel because there
are more threads available on your mac.
Try bumping the number of tasks you spawn in ContentView.swift to something very
high like 100 and you’ll see that we process batches of tasks instead of processing only one at
a time.
You’ll observe the same effect as you saw on macOS when you run the example app on an iOS
device since iOS devices have multiple CPU cores just like a mac does.
The reason we’re seeing this output is that sleep blocks our task for the duration of
the sleep by putting the underlying thread to sleep. The task does not give up its thread because
the thread is sleeping. This means that nothing else can leverage that thread for the duration
of the sleep.
If we refactor the spawnTaskAndSleep to use a non-blocking way of sleeping you’ll find
that this problem goes away:

static func spawnTaskAndSleep(for seconds: Int) {
    Task {
        let taskId = UUID()
        print("Task \(taskId) started at \(Date())")
        try await Task.sleep(for: .seconds(seconds))
        print("Task \(taskId) ended at \(Date())")
    }
}

Running this version of the code results in an output that looks a bit as follows:

Task 8D234A08-673F-42B8-A89E-DD814C211398 started at 2022-11-24 19:41:24 +0000
Task F2A27C48-16A6-45BC-B9D2-E6943227A5BA started at 2022-11-24 19:41:24 +0000
Task 793C6BA2-8881-4FE4-93F7-550ED982F6D0 started at 2022-11-24 19:41:24 +0000
Task D283C286-91E3-40BA-A070-837A4C216D28 started at 2022-11-24 19:41:24 +0000
Task F84129E2-BD98-4A52-9FD6-31BBADEA893E started at 2022-11-24 19:41:24 +0000
Task 577B5378-CA2E-47F9-8CB7-6C909ECAAA90 started at 2022-11-24 19:41:24 +0000
Task 9A1D62CF-43DD-49E1-9F62-E80F9E63D6D4 started at 2022-11-24 19:41:24 +0000
Task 1052CA37-0F5A-4567-BD83-22EE13FD5C85 started at 2022-11-24 19:41:24 +0000
Task F7D74581-94A9-491B-A383-FAE9B6B9EB7D started at 2022-11-24 19:41:24 +0000
Task A0AB9299-A09D-481E-BDE3-7E2EB3E3CEC0 started at 2022-11-24 19:41:24 +0000
Task 8D234A08-673F-42B8-A89E-DD814C211398 ended at 2022-11-24 19:41:27 +0000
Task F2A27C48-16A6-45BC-B9D2-E6943227A5BA ended at 2022-11-24 19:41:27 +0000
Task 793C6BA2-8881-4FE4-93F7-550ED982F6D0 ended at 2022-11-24 19:41:27 +0000
Task D283C286-91E3-40BA-A070-837A4C216D28 ended at 2022-11-24 19:41:27 +0000
Task F84129E2-BD98-4A52-9FD6-31BBADEA893E ended at 2022-11-24 19:41:27 +0000
Task 577B5378-CA2E-47F9-8CB7-6C909ECAAA90 ended at 2022-11-24 19:41:27 +0000
Task 9A1D62CF-43DD-49E1-9F62-E80F9E63D6D4 ended at 2022-11-24 19:41:27 +0000
Task 1052CA37-0F5A-4567-BD83-22EE13FD5C85 ended at 2022-11-24 19:41:27 +0000
Task F7D74581-94A9-491B-A383-FAE9B6B9EB7D ended at 2022-11-24 19:41:27 +0000
Task A0AB9299-A09D-481E-BDE3-7E2EB3E3CEC0 ended at 2022-11-24 19:41:27 +0000

You can see that all tasks start in rapid succession, then give up their thread while they are
suspended to await the Task.sleep call, and then resume once the sleep is over.


The lesson here is that you want to be mindful of the work you’re doing in your tasks. A key
rule for tasks is that they should always be making forward progress (or give up their thread).
If you decide that you want to perform blocking work on a task it’s a good idea to manually
allow Swift Concurrency to suspend your task so other tasks can make progress in between
the work you’re doing.
As you’ve learned in the previous chapter, you can voluntarily give up your thread by calling
Task.yield() in between performing heavy work. For example, if you’re processing a
large number of files in a single task it might make sense to allow the concurrency system to
suspend your task in between files as follows:

Task {
for file in files {
processFile(file)
await Task.yield()
}
}

As robust as Swift Concurrency aims to be, it relies a lot on developers being good
citizens. I’ve mentioned it before, but the key rule here is to ensure that your tasks are either
making progress or yield their thread like we do in the example above.
Generally speaking, it’s pretty safe to say that it’s not likely that you will be writing a lot of
code that is problematic. You can safely assume that any code that’s provided by lower level
systems will correctly suspend and resume as needed. An example of this is URLSession’s
async data method. When you’re calling this method from a Task you can be certain that
you’re not blocking anything.
As always, if you suspect that you might be performing a slow blocking task in a Task, measure
your code with Instruments using the Time Profiler and take it from there. You will learn more
about profiling your async work with Instruments in Chapter 11.

Understanding task priority


In addition to making sure that your tasks don’t contain long running pieces of blocking code,
it’s important to make sure that you help the system schedule your tasks in a correct manner.


To do this, we can mark our tasks with one of three priority levels:

• userInitiated
• utility
• background

These priorities range in importance from the user absolutely needs this to use the app all the way
down to we need to do this, but it’s okay if it’s done a bit later. The default task priority that’s
used by tasks is userInitiated. This will ensure that your task is run as soon as possible,
on processor cores that are as fast as possible (on systems with different types of cores), and
before tasks that have a lower priority.
While it sounds great that the default priority is to mark your task as extremely important,
it’s likely that a lot of the tasks that you want to perform should really be utility tasks or
possibly even background tasks.
Deciding the correct priority for your tasks can be pretty tough. The thing that you should
always ask yourself is how limited the user is within your app while you’re running a certain
task.
For example, if you’re fetching data that should be shown in your app it makes sense to run
this as a userInitiated task. You want the data to be shown as soon as possible because
until the data is shown the user is looking at a spinner.
On the other hand, if your app has a data exporting option where you convert a bunch of
model objects to a JSON file and write it to disk it might make sense to do this as a utility
level task. This task can run while the user is using the app so it doesn’t need to compete with
tasks that help the user actually use the UI. You could even run a task like this as background
because the user will most likely not be sitting and waiting for the export to complete.
A classic example of a task that can run as a background task is when you have a process in
place to synchronize data that was created locally over to a backend service. This sync can
happen at any time, doesn’t need to run with a super high priority, and the user doesn’t even
need to know when this process starts or finishes. They just need to be able to trust that the
sync is performed automatically at an appropriate time. That’s exactly what a background
priority task gets you.
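In code, scheduling these three kinds of work is just a matter of passing a priority to the Task initializer. The function names below are illustrative stubs I made up, not part of the sample app:

```swift
func exportLibraryAsJSON() async throws { /* illustrative stub */ }
func syncLocalChangesToBackend() async { /* illustrative stub */ }

func scheduleWork() {
    // The user is actively waiting on this (also the default priority):
    Task(priority: .userInitiated) {
        print("fetching data the UI is waiting on")
    }

    // The user kicked this off but can keep using the app meanwhile:
    Task(priority: .utility) {
        try await exportLibraryAsJSON()
    }

    // Can run whenever the system sees fit; the user never waits on it:
    Task(priority: .background) {
        await syncLocalChangesToBackend()
    }
}
```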
Being mindful of the task priorities that you use when scheduling your tasks will help the Swift
Concurrency system to optimally schedule and perform your work.


At the same time, you shouldn’t get too hung up on trying to decide whether some piece
of work should be userInitiated or utility. The system default is userInitiated
and more often than not this default is absolutely fine to use unless it’s crystal clear that
a lower priority level is more appropriate.

Understanding a task's lifecycle and capture semantics
When you create a new Task, there are a couple of interesting things to know. Study the
following code for a moment. Can you spot two issues in the code below?

class TaskLifecycle {
    let networking = Networking()
    var items = [Response]()

    func loadAllPages() {
        Task {
            var hasMorePages = true
            var currentPage = 0

            while hasMorePages {
                let page = await networking.fetchPage(currentPage)
                if page.hasMorePages {
                    currentPage += 1
                    items.append(page)
                } else {
                    hasMorePages = false
                }
            }
        }
    }
}


The first issue you might have noticed is that we use items instead of self.items.
Normally, this would result in a compiler error that informs you about having to explicitly use
self to make the capture semantics of self explicit.
There are various situations where you don’t need to capture self explicitly. The reason for
this is that some closures that capture self have a clearly defined lifecycle with a beginning
and end of the work. This means that these closures don’t create a retain cycle that will never
be broken. The idea is that even if self is captured for a little longer than we'd like,
eventually the retain cycle resolves itself.
In most cases in Swift Concurrency this is fine; a Task will usually end (eventually) so we’re
not keeping self alive forever.
However, in the code above we’re loading potentially lots of pages from the network. Imagine
that our API returns up to a hundred pages and a user dismisses the screen that initiates this
loading after the first couple of pages were fetched.
Our implicit self capture would stick around until after all pages are loaded even though we
already know we’re never going to present these pages. We can fix this with a [weak self]
which will work exactly like you’re used to. A weakly captured self will make sure that self
can be released when the user dismisses the screen they were looking at which is great.
However, this doesn’t fully solve our issue. We know self will be deallocated but unfortu-
nately, we’ll still be performing lots of work before our Task completes.
The reason our work doesn’t stop is that our Task’s lifecycle is not bound to anything. In
other words, our task begins running when it’s created and it doesn’t stop until all work is
done. If we want to make sure our task is cancelled when self goes out of scope we need to
explicitly check whether self exists, and we should cancel our task when self is gone.
The following code shows how we can do this:

class TaskLifecycle {
    let networking = Networking()
    var items = [Response]()

    func loadAllPages() {
        // we've added a weak self here
        Task { [weak self] in
            var hasMorePages = true
            var currentPage = 0

            while hasMorePages {
                guard let self = self else {
                    return
                }

                let page = await self.networking.fetchPage(currentPage)

                if page.hasMorePages {
                    currentPage += 1
                    self.items.append(page)
                } else {
                    hasMorePages = false
                }
            }
        }
    }
}

On every iteration of the while loop, we check if self still exists. If it doesn’t we return
from the Task which will end the task and stop any work that we would otherwise perform.
The way that Task captures self implicitly, and the fact that a Task doesn’t have its lifecycle
bound to anything means that we should always be very mindful of whether or not we should
be capturing self weakly, and whether or not we should cancel our tasks at certain points
when self is no longer around.
If you’re only performing a single networking call in a task, you might not have to check
whether self still exists, and a [weak self] might not make sense.
The reason for this is that your network call will most likely start while self still exists because
your Task begins running as soon as it’s created. If you capture self weakly and unwrap it
right after your task starts you’re still retaining self for the entire duration of your Task. In
other words, you didn’t gain anything from your weak self if you’re going to require self to
exist for the entire duration of your task anyway.
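
As a minimal sketch of that situation, consider a task that performs a single short-lived call; fetchUserInfo() and currentUserInfo are hypothetical members of self:

```swift
func refresh() {
    // self is needed for the entire (short) duration of this task anyway,
    // so a [weak self] capture wouldn't gain us anything here.
    Task {
        self.currentUserInfo = try? await fetchUserInfo()
    }
}
```
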
The above is essentially the reasoning behind why a Task has an implicit self capture. The
work you're doing will usually complete eventually so keeping self around for just a bit longer
usually doesn't cause any issues. In Chapter 8 we will see that tasks that can run for a
potentially infinite amount of time exist, and you will learn how to properly avoid retain
cycles for these kinds of tasks.
In Chapter 9 we will dig deeper into task cancellation, and more specifically we’ll take a look
at how cancellation propagates through our tasks using a concept called cooperative task
cancellation.

Tasks and error handling


In the previous section I mentioned that tasks have an implicit self capture which means
that you can freely use and access members of self inside of a task. This is a huge gotcha for
many developers because there are no compiler errors or warnings surrounding the behavior
so it’s easy to accidentally capture self strongly through the implicit capture.
Another behavior that can be slightly surprising when using Task is that a task created with
the Task or Task.detached initializer will swallow any errors thrown from inside of the
task’s body.
You’ve already read about this in the previous chapter but since this is quite a big deal in my
opinion, I wanted to make sure I covered this again in a dedicated section in this chapter
too.
You saw this example earlier in this chapter:

Task {
    let userInfo = try await fetchUserInfo()
}

Notice how we call a throwing method with the try keyword but there's no do or catch
in that code. The Task will happily swallow errors for you without any complaining. This
might seem convenient when we want to ignore errors on purpose, but ignoring errors should
be a conscious decision that we make clearly and obviously. Quietly swallowing errors isn't
great because it hides our intent; it's never clear whether we're ignoring errors on purpose
or not.


Unfortunately this is something you have to know and be aware of; the compiler can’t help
you here. Whenever you call a throwing method in a Task at the very least add a do {}
catch {} with an empty catch clause that holds a comment. Something like this would
do just fine:

Task {
    do {
        let userInfo = try await fetchUserInfo()
    } catch {
        // ignore errors...
    }
}

This makes our intent clear and will not leave our coworkers wondering whether we meant to
ignore errors or not.
When you create a task, you can assign the task itself to a property. Your task can then
eventually produce a value which we can extract through the task’s value property. Note
that even when our task returns nothing, we can access the value property to know when
our task completed.
If the task can throw an error, we have to await the value with the try keyword. We don't
need the try when our task can never throw an error. The following code demonstrates this
by showing you an incorrect and a correct example of awaiting a task's value property:

let taskOne = Task {
    return try await fetchUserInfo()
}

// Compiler error: Property access can throw, but it is not marked
// with 'try' and the error is not handled
let userInfoOne = await taskOne.value

let taskTwo = Task {
    do {
        return try await fetchUserInfo()
    } catch {
        return nil
    }
}

// this is fine because our task will never throw an error
let userInfoTwo = await taskTwo.value

The fact that it’s allowed for a task’s value to produce an error is the entire reason why your
task allows you to throw errors from within your task closure. The closure itself is allowed
to be throwing which means that it’s perfectly fine for us to “leak” our errors from the task
without any consequences.

In Summary
In this chapter, you’ve learned a lot about how Task can be used in Swift concurrency. We
started by looking at situations where you’ll want to (or have to) create new Task objects.
You learned that a Task in Swift encapsulates a body of asynchronous work and that a Task
runs concurrently with other Task object.
You also learned that there are three ways to create a Task. Two that inherit certain bits and
pieces from the task it’s created from, and one that runs completely detached from other
tasks. Generally speaking, you won’t be using detached tasks much; it’s preferred to use a
regular unstructured Task. And even then, you should always carefully consider whether a
new Task is what you need.
After that, we talked about Task lifecycles and capture semantics. You learned that a Task
implicitly captures a reference to self which means that we can access members of self
without explicitly referencing self and without capturing it in our Task closure’s capture
list. You also learned that a Task runs until all its work is completed, and that its lifecycle is
not bound to the place it’s created from. This means that we need to carefully consider if and
how we should be cancelling our tasks if the task’s creator is deallocated.
We wrapped this chapter up with a brief overview of how Task swallows errors without any
compiler errors or warnings. In my opinion this is an unfortunate gotcha of how Task works
and it's something you have to know and remember. Ignoring errors should always be a
conscious decision so it’s a good idea to make your intent clear at all times, even when your
code compiles when you don’t make your intent clear.
In the next chapter, we’ll take a look at bridging your existing code over to the world of Swift
Concurrency through continuations.


Chapter 5 - Existing code and Swift Concurrency
We’ve covered a lot of ground with Swift Concurrency, but you may still be wondering how
to transition your codebase from pre-concurrency to Swift Concurrency. In this chapter, I’ll
answer that question and help you find a sensible way to migrate. Note that we're going
to cover going from no concurrency to Swift 6.2 as our migration path. If you've already
(partially) migrated your codebase to Swift Concurrency before Xcode 26, you might have
done a bunch of work that's no longer needed with Swift 6.2, and things will almost certainly
work slightly differently when you compare your current codebase to the way we'll use Swift 6.2.
In this chapter I will operate under the assumption that your code uses all of Xcode 26’s default
settings. This means that your code will run on the main actor by default, and no other settings
are turned on.
Throughout this chapter, a lot of what you’ve learned comes together. We’ll look at:

• Moving from callbacks to async / await
• Mixing Combine and Swift Concurrency
• Safely migrating towards Swift Concurrency

If you’re interested in learning more about Swift 6.2, make sure to check out Chapter 12 -
Migrating to Swift 6.2. That chapter covers strategies to apply to projects that have already
adopted Swift Concurrency but have not yet migrated to Swift 6.2.

Moving from callbacks to async / await


More often than not, we learn and apply new technology within the context of an existing
codebase or project. Swift’s new concurrency features are no exception. It’s likely that you’re
currently working on a codebase that makes heavy use of callbacks to communicate the
results of an asynchronous piece of code back to the caller of a function (assuming you
haven’t adopted Swift Concurrency yet).
The most common example of this is making a network call by calling a function on an
object that wraps another object representing your networking layer. Depending on your
app's architecture, this object could have many names, but let's call it a view model.
Your view model might have a function that retrieves a certain page worth of content from a
service via a network call. It might also handle caching and enrich the data in some way.
The following code represents what this might look like when modeled as protocols. Note
that I’m not suggesting protocols are the best way to model networking interactions; I’m only
using them because they're a convenient way to have something that compiles, while also
allowing me to demonstrate the API for such a setup.

protocol Networking {
    func performRequest(_ request: URLRequest, completion: @escaping (Result<(Data, URLResponse), Error>) -> Void)
}

protocol PagesViewModel {
    var network: Networking { get }
    func fetchPage(_ index: Int, completion: @escaping (Result<Page, Error>) -> Void)
}

In the snippet above, you can see how the view model object has a fetchPage(_:completion:)
method. That method calls the networking service which in turn would perform a network
call using URLSession.
A great first step to migrating that code over to async / await would be to update the view
model to have an async version of fetchPage(_:completion:) alongside the existing
fetchPage(_:completion:) method:

protocol PagesViewModel {
    var network: Networking { get }
    func fetchPage(_ index: Int, completion: @escaping (Result<Page, Error>) -> Void)

    // async version of the fetchPage method
    func fetchPage(_ page: Int) async throws -> Page
}
At this point we could do two things:

1. We implement this function using async / await and we have duplicated business logic
between the two versions of fetchPage.
2. We create a bridge between the async / await world and the callback based world so we
can slowly migrate over and avoid duplicating business logic.

The second option is the one I would suggest. Reimplementing everything in one go is almost
never a good idea because you’d be making way too many changes all at once and you would
have no way to properly test your work while you’re migrating. Furthermore, having duplicated
business logic is a good way to increase your maintenance burden and introduce bugs over
time. You don’t want this.
In order to build a bridge between async / await and callbacks, you can leverage continuations.
A continuation is used as follows:

func fetchPage(_ page: Int) async throws -> Page {
    return try await withCheckedThrowingContinuation { cont in
        // ...
    }
}

The code snippet above uses Swift Concurrency's withCheckedThrowingContinuation
to create a suspension point that will remain suspended until we decide it's time to
resume and produce a result. In other words, we call a function that can be awaited and won't
complete (or rather, resume) until we tell it to.
By calling our callback based function inside of our closure, we can resume the continuation
when we’ve obtained our result. Once we resume the continuation, it can produce a result
which is what we return from our new async fetchPage(_:) function.
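
As a sketch, the closure's body could call the existing callback based method and resume the continuation with its result. This assumes the fetchPage(_:completion:) method shown earlier in this chapter:

```swift
func fetchPage(_ page: Int) async throws -> Page {
    return try await withCheckedThrowingContinuation { cont in
        // The trailing closure selects the callback based overload.
        fetchPage(page) { result in
            // Resume exactly once with the Result from the callback.
            cont.resume(with: result)
        }
    }
}
```
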
In total, there are three ways to resume a continuation:

• By resuming it with a Result<T, Error>
• By resuming it with some output
• By having the continuation throw an error

All three of these options are defined as overloads for the resume method on continuations.
While using a continuation in a situation like this doesn’t look too complex, there are a couple
of rules to keep in mind when working with continuations.
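
As a quick sketch, the three overloads look like this; somePage and someError are placeholders, and only one of these calls should ever run for a given continuation:

```swift
cont.resume(with: .success(somePage)) // resume with a Result<T, Error>
cont.resume(returning: somePage)      // resume with some output
cont.resume(throwing: someError)      // resume by throwing an error
```
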

Making sure you use continuations correctly


Continuations are an incredibly useful tool for developers to slowly but surely bridge callback
based code (or other concurrent code) into the world of Swift Concurrency; you've already
seen that. You've also seen that creating a continuation creates a suspension point that must
be awaited and resumed by us. If we never resume our continuation, our function never
completes because it would be suspended forever.
So the first rule to make sure that you use your continuations correctly is the following:

Always make sure that you complete your continuations at some point.

It doesn’t matter how long your continuation takes to complete, as long as it completes
eventually. You only need to ensure that every codepath within the code you’re bridging into
Swift Concurrency has a correctly implemented exit point that resumes your continuation.
This usually means that you should pay extra attention to making sure that every codepath in
a callback based function actually calls the completion handler instead of failing silently.
It’s also important that you only resume your continuations once. Resuming a continuation
more than once is an error and will result in a crash. Depending on what kind of continuation
you’ve used, this might be a very vague crash, or a pretty clear one.
There are two kinds of continuations that you can use:

• Checked
• Unsafe

A checked continuation is created by either calling withCheckedContinuation or
withCheckedThrowingContinuation. The difference between these two kinds
of continuations is that withCheckedContinuation does not allow you to resume
your continuation by throwing an error. The throwing variant of this method does allow
propagation of errors, which means that you have to try await your calls to
withCheckedThrowingContinuation.
A checked continuation performs runtime checks to ensure that you don't resume your
continuations more than once, and that your continuation doesn't get deallocated without
being resumed first. The latter would cause any code that awaits your continuation to hang
forever, and any resources held on to by the continuation to be retained, which is a leak.
When you accidentally break one of the rules while using a checked continuation, you will
get a nice runtime crash or warning that has a clear message and stack trace. This is exactly
what you want during your development phase because debugging a checked continuation is
much, much easier than debugging an unsafe one. The image below shows an example of
what a crash for a checked continuation looks like in Xcode:

Figure 9: A crash for a checked continuation

An unsafe continuation is used to solve the exact same problem as a checked continuation
except it doesn't eagerly perform runtime checks. This means that a checked continuation
has a little bit of overhead that an unsafe continuation doesn't have, but otherwise they work
the same and have the same rules and requirements.
When you accidentally misuse an unsafe continuation, your app will crash at runtime with a
much vaguer error message. Debugging this will be much harder so it’s recommended to only
use an unsafe continuation when you know that you’re using continuations correctly. The
image below shows an example of a crash for an unsafe continuation:

Figure 10: A crash for an unsafe continuation

The code bundle for this chapter contains a sample app that allows you to run examples of
continuation misuse in both a checked and unsafe continuation. I highly recommend that
you take a look at the code to see exactly what’s happening and to experience these crashes
for yourself.
Generally speaking, you should avoid using unsafe continuations. There's a very minor runtime
overhead associated with checked continuations, but in my experience avoiding that overhead
is not worth the hassle of dealing with the mistakes that can occur in unsafe continuations.
If you're concerned about the exact amount of overhead that you're incurring by using a
checked continuation, I can only recommend that you profile your code with Instruments and
then make an informed decision about whether or not you should switch to unsafe continuations.

Mixing Combine and Swift Concurrency


When Apple introduced Swift Concurrency, I'm pretty sure lots of folks were wondering if, how,
and when Swift Concurrency would replace Combine. With the introduction of the
AsyncAlgorithms package these questions have changed to how and when Combine will be
replaced by Swift Concurrency.
With this book, I would like to avoid the discussion of what will replace what entirely. Instead,
I’d like to look at the reality of the situation which is that it’s highly likely that you’ll run into
situations where you’ll either be calling an async function from within a Combine operator, or
where you’re looking at getting the output of a publisher into the world of Swift Concurrency.
In this section we’ll explore both of these options.
In Chapter 7 - Working with asynchronous streams you will learn more about using async
sequences and streams, and in Chapter 8 - Async algorithms and Combine we’ll talk more
about similarities and differences between Combine and Swift Concurrency. For now, we’ll
focus solely on the interoperability of the two paradigms.

Calling async code from a Combine operator


If you’re in the process of implementing Swift Concurrency in an existing codebase, there’s a
good chance that you’ll find yourself wanting to refactor some functions that would be called
from within a Combine operator to be async functions instead of publishers that wrap some
async code.
To some extent the wish to do this makes total sense. However, we should try to avoid making
our lives more complicated than they need to be. If you want to implement a search feature
where you have a published searchText property, and you want to make network calls
based on your user’s input you might write something like the following in a purely Combine
based approach:


class SearchFeature: ObservableObject {
    @Published var searchText = ""
    @Published var results: [String] = []

    init() {
        $searchText
            .debounce(for: 0.3, scheduler: DispatchQueue.main)
            .flatMap { query in
                let url = URL(string: "https://practicalswiftconcurrency.com?q=\(query)")!
                return URLSession.shared.dataTaskPublisher(for: url)
                    .map(\.data)
                    .decode(type: [String].self, decoder: JSONDecoder())
                    .replaceError(with: [])
            }
            .receive(on: DispatchQueue.main)
            .assign(to: &$results)
    }
}

The key component in this code is the fact that we use a flatMap to take the current search
query and create a new publisher that performs a search query using URLSession.
Using a flatMap to kick off a new asynchronous bit of work in a Combine pipeline is
the correct way to respond to receiving values, because it allows us to construct and return
a new publisher whose values will be sent downstream by the flatMap.
It's highly unlikely (and probably incorrect) that you'll encounter code that performs
asynchronous work in a regular Combine operator like map or filter. These operators take an
input and synchronously transform the received value into a new value rather than taking an
input and returning a new publisher that performs asynchronous work.
As a rule of thumb, if you're looking to kick off new work in response to receiving output
from a publisher, flatMap is the operator you're looking for. Note that flatMap is not an
asynchronous operator; it requires us to synchronously create and return a publisher. It's the
returned publisher that can perform work and emit values asynchronously.


With that knowledge in your head, we can start refactoring the code above to leverage an async
function to perform the search query. For example, we might have the following function
defined on the SearchFeature class:

func search(for query: String) async throws -> [String] {
    let url = URL(string: "https://practicalswiftconcurrency.com?q=\(query)")!
    let (data, _) = try await URLSession.shared.data(from: url)
    return try JSONDecoder().decode([String].self, from: data)
}

We can leverage a flatMap, Combine’s Future and a Task to call this function instead of
using a dataTaskPublisher:

// Combine + async/await mixed version of the search feature
$searchText
    .debounce(for: 0.3, scheduler: DispatchQueue.main)
    .flatMap { query in
        return Future { [weak self] promise in
            guard let self = self else {
                promise(.success([]))
                return
            }

            Task {
                do {
                    let results = try await self.search(for: query)
                    promise(.success(results))
                } catch {
                    // instead of failing, complete with empty result
                    promise(.success([]))
                }
            }
        }
    }
    .receive(on: DispatchQueue.main)
    .assign(to: &$results)

We didn’t necessarily make our code any shorter (nor do I consider this version of the code to
be easier to read) but it does work as a means to call async functions from within a Combine
publisher.
Regardless of the Combine operator you choose to use, you always provide that operator with
a closure that receives some input, synchronously transforms that input into something new,
and then returns that something new as the output of the closure.
So in our flatMap we take the output of a publisher and use that to create and return a
Future. That Future can perform asynchronous work by calling a callback based API, or by
creating a Task and using that to perform async work. Eventually the Future's promise is
called with a result, and that result then becomes the output of the Future. Subsequently,
that value is what's passed down the Combine pipeline.
Attempting to call and await an asynchronous function inside of a map or other operator is not
the way to go. None of Combine’s operators expect us to perform asynchronous work inside
of the operator’s closure so we have to resort to constructs like a Future and flatMap to
bridge our Swift Concurrency code into the world of Combine.

Bringing the results of a publisher into the Swift Concurrency world
When you’re integrating Swift Concurrency into your codebase, it’s quite likely that you’ll
encounter situations where you have a publisher but you want to somehow convert that
publisher into something that will work with Swift Concurrency.
Most commonly I’ve found that this might happen in a codebase that makes heavy use of
Combine for various one-off tasks like network calls. For example, consider the following
code:

class Networking {
    func fetchHomepage() -> AnyPublisher<String, Error> {
        let url = URL(string: "https://practicalswiftconcurrency.com")!

        return URLSession.shared.dataTaskPublisher(for: url)
            .map { data, _ in
                return String(data: data, encoding: .utf8) ?? ""
            }
            .mapError({ $0 as Error })
            .eraseToAnyPublisher()
    }
}

Calling the fetchHomepage function would look a bit as follows:

let network = Networking()

let homepageCancellable = network.fetchHomepage()
    .sink(receiveCompletion: { _ in
        // handle completion
    }, receiveValue: { htmlContent in
        print(htmlContent)
    })

Subscribing to a publisher that will only emit one value and then complete is a perfect example
of something that could be easier to work with when it can be called as an async function
instead since it would mean that we don’t have to go through the process of setting up a
Combine subscription and retaining our cancellables.
When you have an existing method that returns a publisher, you can bridge that function into
the world of Swift Concurrency by converting the returned publisher into an async sequence,
and using the very first value emitted by the sequence as your return value. The code below
shows how you can do this:

class Networking {
    func fetchHomepage() -> AnyPublisher<String, Error> {
        // unchanged
    }

    func fetchHomepage() async throws -> String {
        let asyncSequence = fetchHomepage().values

        for try await value in asyncSequence {
            return value
        }

        // we really shouldn't ever get here...
        return ""
    }
}

Every Combine publisher has a values property that we can access to obtain an async
sequence that emits all values that are emitted from the publisher as soon as the publisher
emits them. Since we know our publisher will only emit a single value, we can leverage an
async sequence and return the first emitted value from our asynchronous function. This is a
nice pattern to leverage when bridging single-value publishers from Combine over to async /
await.

An introduction to async sequences


Async sequences look and feel a lot like regular sequences in Swift with one major difference.
Async sequences do not necessarily have all of their values available at the time you start
your iteration. Instead, an async sequence will make values available as soon as they are
obtained.
In the code you saw earlier, we converted a publisher to an async sequence through the
publisher’s values property. This means that once we iterate over the sequence, we’ll receive
every value that’s emitted by the publisher as soon as it’s emitted. The for try await
loop that you saw in that code waits for our async sequence to make values available. Every
time our source publisher emits a value, the async sequence will make the value available to
us and the for loop’s body runs.
Because our publisher can complete with an error, we need to not just await values from the
sequence, we must also write try to make sure that we handle any errors that are produced
by the publisher. This is very similar to how you call async functions.


Because we know that our fetchHomepage publisher only emits a single value, I wrote
return value in the async for loop's body. Just like in a regular for loop, doing this will
return from the enclosing function. In other words, it will make the async version return the
first (and only) string that's emitted by the fetchHomepage publisher.
Calling this function is now as straightforward as calling any other async function:

let htmlContent = try await network.fetchHomepage()

Note that it would also be possible to accumulate all values emitted by a publisher and return
them as an array:

func fetchHomepage() async throws -> [String] {
    let asyncSequence = fetchHomepage().values
    var values = [String]()

    for try await value in asyncSequence {
        values.append(value)
    }

    return values
}

This would be useful if our publisher returns multiple values and we want to return them all
from our async function. Note that our async for loop will keep running until the publisher
whose values we're iterating over has completed. If the publisher never completes, our for
loop will also never complete. This can be an important detail to keep in mind in situations
where you’re bridging never ending publishers over to Swift Concurrency.
As mentioned before, we’ll take a much deeper dive into async sequences in Chapter 7 -
Working with asynchronous streams. For now, this is everything you should know about async
sequences and how they can be used to bridge Combine code over to async await.
The nice thing about what we’ve done above is that we can keep both the Combine version
of the code as well as have an async version of the code. This allows us to slowly but surely
migrate over to Swift Concurrency where it makes sense, without changing our entire codebase
all at once.


Safely migrating towards Swift Concurrency
A migration to new technology is never easy, and there is no single recipe that works for
everybody. However, there are just a couple of notes I want to leave you with to make sure
that you set yourself up for success when you decide you want to migrate your codebase over
to Swift Concurrency.
The first and most important lesson I've learned from going through various codebase
migrations over the years is that it always pays off to spend plenty of time learning the technology
you want to migrate to. Whether it’s Combine, SwiftUI, Swift Concurrency or anything else,
you’ll want to make sure you really understand how and where the new technology fits your
codebase, team, and your roadmap.
If you only scratch the surface of Swift Concurrency and then begin your migration there will
still be lots of fundamental knowledge to gain during the migration. You’ll want to try and
make sure that you at least understand everything that I cover in this book for example. That
should give you a good overview of good practices, which tools are available within Swift
Concurrency, and whether there are any pitfalls to be aware of during your migration.
On top of that, I've found that building proof-of-concept projects in small groups of
2-3 people where you actually pair program is a great way to level up your learning experience.
It’s far more effective to explore together than it is to explore on your own.
Once you feel confident that you’re ready to begin your migration towards Swift Concurrency
you’ll want to think about where you should start refactoring. It might sound tempting to
start by writing new features in Swift Concurrency. That should allow you to get started as
soon as possible without any major work up front. It really does sound perfect.
However, in practice this usually means that writing new features is more complex, you end up
with lots of glue code to connect your new feature to the old way of doing things and generally
speaking things tend to get messy and full of compromise.
In my experience, an effective way to refactor is to work your way from the outside of your
codebase to the core of your codebase. Usually this means that you’ll be working from your
views or view models down to your core services. For every layer you refactor, the layer that
sits below it should get an overloaded version of an existing function. You saw how to do

this earlier when you learned about continuations and when you saw how you can convert a
Combine based function over to Swift Concurrency.
Having the old and new API side by side allows you to refactor slowly but surely without
refactoring business logic all at once. When you choose to replace code immediately I’ve often
found that the scope of the refactor grows rapidly and the likelihood of introducing subtle
bugs increases.
During a major migration I’ve more often than not found that slow and steady wins the race;
make sure you have a plan, and make sure you execute on it.
Once all code in a given layer only calls the async version of a method you can of course safely
remove the old version of the method and begin refactoring its contents over into your async
version of the method. From there, you can start adding async versions of methods in layers
that are used by the layer you're refactoring, in the same way you did for your view
layer.
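As a sketch of that overloading pattern, assuming a hypothetical UserService with an existing completion-handler method, the async overload can simply wrap the old API with a continuation:

```swift
import Foundation

struct User {
    let name: String
}

final class UserService {
    // Existing callback-based API; left untouched during the migration.
    func fetchUser(completion: @escaping (Result<User, Error>) -> Void) {
        // Placeholder for the existing networking code.
        completion(.success(User(name: "Donny")))
    }

    // New async overload that wraps the old method in a continuation.
    func fetchUser() async throws -> User {
        try await withCheckedThrowingContinuation { continuation in
            fetchUser { result in
                continuation.resume(with: result)
            }
        }
    }
}
```

Once every caller in the layer above uses the async overload, the completion-based method can be removed and its body folded into the async version.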
Alternatively, you could decide to refactor feature by feature instead of layer by layer. This
can absolutely work fine too depending on how well your features are isolated from other
features.
If you’re starting your migration right now, using the new tools that we have in Swift 6.2 is
really helpful. Running your code on the main actor by default means that everything about
your codebase will be simpler than it is when everything can run anywhere by default. Leave
the main actor only when needed, and only introduce concurrency when it’s intentional. This
model, in my opinion, is very strong. And you truly do need less concurrency than you might
think. Profile your current codebase, see how much work is done on the main thread. It’s
likely more than you expected.
Opting into features like Approachable Concurrency to turn on NonisolatedNonsendingByDefault
will make your code more flexible and easier to understand, and you will encounter
fewer compiler errors in the Swift 6 language mode without sacrificing safety and correctness.
You will learn more about this in Chapter 12 - Adopting the Swift 6 language mode, so make
sure to give that chapter a read too before you start refactoring your code.
Your key takeaway for this section should be that it’s important to learn and understand Swift
Concurrency before refactoring, and that it’s important to make sure you have a good sense
of how you want to tackle your refactor in a way that won’t make you spiral out of control.
Slow and steady wins the race...

In Summary
In this chapter, you’ve learned a lot about how Swift Concurrency interacts with existing
systems. You’ve learned about continuations and how they allow you to take any bit of
callback based asynchronous code and move it over to an async / await friendly wrapper.
After that, we did a brief exploration of how we can use both Combine and async / await in a
codebase. You’ve seen the basics of calling async functions in a Combine operator, and we’ve
looked at how we can iterate over values that are published by a Combine publisher using an
async for loop.
Lastly, we took a deep dive into establishing a migration path for an existing codebase so
it can be refactored to be fully async / await compatible. This section provided a high level
overview of how you can approach such a migration, and what some of the challenges are
that you could face.
In the next chapter, you will learn a lot about how Swift Concurrency helps prevent data races
in our code with a concept called actors.

Chapter 6 - Preventing data races with Swift Concurrency
Data races and thread safety are complex and common problems in concurrent code. Swift
Concurrency tries to help us avoid these problems entirely by enforcing as much thread safety
as possible at compile time.
Compile time checks and type safety are some of Swift’s key values so it’s really great that
we’re finally getting some of Swift’s benefits for concurrency related code too.
In this chapter, we will take a close look at the following topics:

• Understanding what a data race is


• Exploring actors in Swift
• Protecting state with mutexes
• Understanding Sendability in Swift
• Thread safety and Swift’s language modes

We will explore these topics by building some neat concurrency related components that you
might want to use in your projects. To be specific, we’ll build an in memory date formatter
cache and an image loader that are thread-safe and handle concurrency nicely.

Understanding what a data race is


Before we can look at solutions to data races, it’s important that we understand what a data
race is, and why data races are problematic. In a formal sense, a data race occurs when multiple
threads access a specific resource at the same time, and at least one of these threads attempts
to mutate the resource. Or, in other words, we encounter a data race when we try to modify a
piece of state while simultaneously reading that state.
While the explanation above is very theoretical, it's important that you understand what a
data race is because the conditions for running into one are surprisingly common in apps
that leverage concurrency without any extra protection.
For example, consider the following helper object that allows us to easily reuse date format-
ters:

class DateFormatters {
    private var formatters: [String: DateFormatter] = [:]

    func formatter(using dateFormat: String) -> DateFormatter {
        if let formatter = formatters[dateFormat] {
            return formatter
        }

        let newFormatter = DateFormatter()
        newFormatter.dateFormat = dateFormat
        formatters[dateFormat] = newFormatter

        return newFormatter
    }
}

This code isn’t particularly clever, optimized, or groundbreaking in any way. In fact, it has
an enormous potential problem because this object is in no way safe to be used in a multi-
threaded environment.
Imagine that we’re calling our date formatter concurrently from various different threads
with a handful of different formatters. Because we’d be accessing formatter(using:) at
the exact same time and we’d either create a new formatter and put it in the formatters
dictionary or return a cached formatter. Since we don’t have any data race protection in place,
mutating the formatters dictionary while other threads are trying to read from it will crash
our app.
To see our crash in action, run this chapter’s sample app and click the button to run the unsafe
date formatters example. This will run the following code:

let cache = DateFormatters()

DispatchQueue.concurrentPerform(iterations: 10) { _ in
    let formatters = ["YYYY", "YYYY-MM", "YYYY-MM-dd"]

    let _ = cache.formatter(using: formatters.randomElement()!)
}

Most of the time the program will crash with an EXC_BAD_ACCESS error when you click
the button but every once in a while you’ll get a very obscure error related to things like
NSTaggedPointerString and other objects. There’s even a very, very small chance that
the app doesn’t crash on some runs although it’s highly unlikely you’ll see that happen.
The reason we’re seeing crashes with obscure messages is that we’re reading mutable state
concurrently from multiple places while also changing the state in some situations. If we
would only read from the cache we’d be fine. For example, if we change the sample above to
look as follows, it wouldn’t crash anymore:

let cache = DateFormatters()

let formatters = ["YYYY", "YYYY-MM", "YYYY-MM-dd"]

for format in formatters {
    let _ = cache.formatter(using: format)
}

DispatchQueue.concurrentPerform(iterations: 10) { _ in
    let _ = cache.formatter(using: formatters.randomElement()!)
}

The reason we no longer crash is that we don't mutate our state in the concurrent
loop. We're still accessing formatter(using:) concurrently, but none of the concurrent
accesses are mutating the dictionary.
Of course, doing what I did above isn’t a solution to the underlying data race. We can’t always
know every possible formatter we might need to create. And even if we did know all values
it might not be cheap to create and cache all date formatters especially if we might never
need them. Luckily, Swift Concurrency provides a much better mechanism for fixing our data
race.
By the way, notice that I'm using DispatchQueue instead of Swift Concurrency to run code
concurrently. I did this because Swift Concurrency schedules work differently from GCD, and
I've found that data races are a bit harder to trigger when you're using Swift Concurrency's
mechanisms to run work in parallel.
Back to the subject at hand...
To fix our code in a pre-concurrency world we could make use of tools like NSLock or a serial
dispatch queue amongst other tools. I won’t argue which tool is best and instead I’ll just show
you how we can fix the code using NSLock.

class DateFormatters {
    private var formatters: [String: DateFormatter] = [:]
    private var lock = NSLock()

    func formatter(using dateFormat: String) -> DateFormatter {
        lock.lock()
        defer { lock.unlock() }

        if let formatter = formatters[dateFormat] {
            return formatter
        }

        let newFormatter = DateFormatter()
        newFormatter.dateFormat = dateFormat
        formatters[dateFormat] = newFormatter

        return newFormatter
    }
}

Whenever somebody calls formatter(using:) I grab my lock and I call lock on it. This
means that any subsequent calls to formatter(using:) will need to wait for my lock to
be unlocked before the code can proceed.
In the defer block right after locking my lock, I call unlock() to release the lock whenever
I exit the function. Once I unlock, the next caller of formatter(using:) can
obtain the lock and run the function.
Effectively this means that no matter how many calls we make to our formatter(using:)
function at once, only one caller will actually run at any given time, which in turn means
that only one caller will be accessing and mutating our mutable state.
There are more efficient ways to achieve our goal like a dispatch queue with a barrier but like I
said, the point right now isn’t to explain the best way to fix data races in a world without Swift
Concurrency. We have Swift Concurrency so we can leverage it to fix our data race instead!

Using your first actor


In Swift Concurrency, actors are a tool that allows us to protect shared mutable state from
data races.
In practical terms this means that an actor can help us fix a data race like the one we had
before by serializing access to mutable state. This is essentially what we did manually with
the NSLock earlier except actors are a lot smarter about how they work than the naive lock
approach I took earlier.
Before we dig deeper into actors and focus on how they work and what their rough edges are
I’d like to show you how we can fix the DateFormatters class using an actor instead of a
lock:

actor DateFormatters {
    private var formatters: [String: DateFormatter] = [:]

    func formatter(using dateFormat: String) -> DateFormatter {
        if let formatter = formatters[dateFormat] {
            return formatter
        }

        let newFormatter = DateFormatter()
        newFormatter.dateFormat = dateFormat
        formatters[dateFormat] = newFormatter

        return newFormatter
    }
}

You might have to look really close to see what we did differently from the example we started
with...
All we did compared to the original code is change the class keyword to actor.
If you were to make this change from class to actor in your code and then try to
run your app, you'd see a compiler error on lines where you interact with the DateFormatters
object. The error will look a bit as follows: Actor-isolated instance method
'formatter(using:)' can not be referenced from a non-isolated
context. This error essentially means a similar thing to the error that's presented when we
try to call an async method from a non-async context. By changing our class to an actor,
the code is no longer fully synchronous.
By changing our class to an actor, we made all interaction with it asynchronous. The
reason for this is that our actor serializes access to its mutable state. Let's talk
a little bit about what that means.
In the lock based version of DateFormatters, all callers of formatter(using:)
would have to wait for the lock to become available before they could run. This waiting
happens in a blocking way, which means that the thread we're calling formatter(using:)
from is blocked until we've had a chance to obtain the lock and get a result.
As you know, Swift Concurrency avoids these thread blocking mechanisms in favor of having
an async call that we can await instead. Or, in other words, Swift Concurrency suspends
tasks instead of blocking threads.
Actors follow this principle too.
All state in an actor is “isolated” to that actor. This means that only the actor itself is allowed
to access that state directly, and it means that the actor decides how and when state is
accessed. Isolation is a complex and large topic within the world of Swift Concurrency, and
you’ll encounter the term isolation in different contexts. Understanding what isolation means
in the context of an actor is, in my opinion, the best way to understand isolation as a whole.
So whenever you think of actors, their state, and how that state is protected from data races, I
want you to think of actor isolation.
Actor isolation is what makes actors work. It's how actors ensure that access to their state and
functions is synchronized and free of data races.
Whenever we try to interact with an actor, the actor receives a message in its so-called mailbox.

This mailbox acts like a queue for interactions with the actor's functions and state, and it's a key
component in actor isolation. The actor will take the first message in the mailbox, process it,
and then take up the next message in its mailbox, much like a FIFO (first-in, first-out) queue.
For our example this means that we put a number of calls to formatter(using:) in the
actor's mailbox and the actor will process these function calls one by one. However, if we're
the last message in the actor’s mailbox we might have to wait a short while before the actor
processes our function call.
The following graphic makes this principle clear in a visual manner:

Figure 11: Sending messages to the actor mailbox

Notice how when we call a function, we put our function call in the actor mailbox. The actor
processes items one by one in the order of receiving the calls. There are some caveats here
that we’ll get into in this chapter but for now, let’s consider actors as a simple mechanism to
do “one thing at a time”.
Some folks will immediately think of an actor as a serial queue when they see this image and
the explanation that goes along with it. The thought makes sense; an actor only processes one
item at a time, which is exactly what a serial queue also does. There's danger in the details
though. To understand why, we need to take a look at a concept called actor reentrancy which

we’ll cover in the next section. Before we move on to this topic, there are a couple other
actor-related things we should cover first.
Earlier I mentioned that whenever you interact with an actor, the actor receives a message in
its mailbox and that an actor processes these messages one by one. I also mentioned that
interacting with actors is asynchronous which means that we must await our interactions.
Consider the following code:

class ItemViewModel: ObservableObject {
    let dateFormatters = DateFormatters()
    // ...

    func formattedDate() -> String {
        let date = Date()
        let formatter = dateFormatters.formatter(using: "yyyy/MM/dd")
        return formatter.string(from: date)
    }
}

At first glance this code looks fine but when you try to compile it, you see the following error:

Actor-isolated instance method 'formatter(using:)' can not be
referenced from a non-isolated context

This error tells us that we’re trying to interact with our actor without an await and we’re not
part of the actor’s isolation context. In other words, the actor might not be able to handle
our call to formatter(using:) instantly since we’re not running as a part of the date
formatters actor. To fix this we need to await the method call as follows:

func formattedDate() async -> String {
    let date = Date()
    let formatter = await dateFormatters.formatter(using: "yyyy/MM/dd")
    return formatter.string(from: date)
}

I’ve changed two things compared to the code from earlier.


First, formattedDate is now an async function. This is required because we're going to
have to asynchronously interact with our actor. The second change is the await keyword
before our call to formatter(using:). This await is needed because our formattedDate
function might suspend while we wait for the actor to process messages in its mailbox
and run our function call.
Any time we call a function or access a property on an actor we must await this operation.
The only exception to this rule is when the compiler knows we’re part of the actor’s isolation
context. Most commonly that will happen when you’re accessing state or functions from
within a function that’s defined on the actor.
Take another look at the DateFormatters actor you saw earlier:

actor DateFormatters {
    private var formatters: [String: DateFormatter] = [:]

    func formatter(using dateFormat: String) -> DateFormatter {
        if let formatter = formatters[dateFormat] {
            return formatter
        }

        let newFormatter = DateFormatter()
        newFormatter.dateFormat = dateFormat
        formatters[dateFormat] = newFormatter

        return newFormatter
    }
}

Notice how we don’t need to have an await whenever we check if a formatter with the given
format exists in the formatters dictionary. Or how we don’t await on the line where we
cache a freshly created date formatter.
The reason for this is that formatter(using:) already runs as part of our DateFormatters
actor's isolation context; it's defined on the actor after all. Another way to explain this
is to say that an actor will not put messages in its own mailbox.

Sometimes though, you might have a function that’s defined on an actor where the function
doesn’t actually interact with any mutable state that’s owned by the actor at all. When that’s
the case, having to await every call to that function when it’s called outside of the actor can
be tedious. Or if you have a let property that’s defined on your actor, you might want to be
able to access that property without having to await it; it’s immutable after all.
In these cases you would know that accessing that specific property or method on your actor
will never cause data races.
Luckily, let constants can freely be accessed from anywhere without an await. The compiler
knows that your let is immutable and it will happily allow you to read it from as many threads
as you'd like; there's no risk of data races, so a let doesn't benefit from actor isolation
because it will always hold the same value due to its immutability.
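For example, in the following sketch the cacheName constant is something I've added for illustration; it can be read synchronously from outside the actor, while the var remains isolated:

```swift
import Foundation

actor FormatterCache {
    // Immutable and Sendable, so it's safe to read without an await.
    let cacheName = "date-formatters"

    // Mutable state stays isolated; accessing it requires an await.
    private var formatters: [String: DateFormatter] = [:]
}

let cache = FormatterCache()
let name = cache.cacheName // no await needed; a let can't cause data races
```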
A function that doesn’t interact with mutable state is not considered safe to call without actor
isolation though. The compiler cannot guarantee that a function is safe to run without isolation.
If we as developers are absolutely certain that a given function is safe to call concurrently
without isolation we can mark that function as nonisolated to opt that function out of
actor isolation. This looks as follows:

actor DateFormatters {
    private var formatters: [String: DateFormatter] = [:]
    private let _defaultFormatter = ISO8601DateFormatter()

    nonisolated func defaultFormatter() -> ISO8601DateFormatter {
        return _defaultFormatter
    }

    // ...
}

This example is a little silly; since the default date formatter is a let, we might as well interact
with it directly instead of through our defaultFormatter() function. The point isn't to
show you something smart; it's to show you how you opt out of actor isolation for a function.
In fact, when you attempt to compile the above code you'll see an interesting warning:

Non-sendable type 'ISO8601DateFormatter' in asynchronous access to actor-isolated
property '_defaultFormatter' cannot cross actor boundary

It’s a good thing you’re reading this book because we’ll look at sendable types later in this
chapter. So let’s not worry about that warning for now and talk about non isolated methods a
bit more.
By writing nonisolated func, we tell the compiler that our defaultFormatter()
function can be called from a non-isolated context without problems; after all, we know that this
function does not access any mutable state on our actor, so it doesn't benefit from actor isolation.
We can call nonisolated methods on actors without awaiting them because a nonisolated method
isn't processed as a message in the actor's mailbox. Instead, it will run immediately.
You’ll want to apply nonisolated very carefully and ideally you don’t apply it at all unless
it’s really needed. Applying nonisolated to functions and properties that actually do end
up reading or mutating mutable state on your actor will typically mean your code won’t
compile.
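To show what this looks like at the call site, here's a condensed sketch of the actor from above; the nonisolated method can be called from synchronous, non-isolated code without an await (you would still see the sendability warning mentioned earlier):

```swift
import Foundation

actor DateFormatters {
    private let _defaultFormatter = ISO8601DateFormatter()

    nonisolated func defaultFormatter() -> ISO8601DateFormatter {
        return _defaultFormatter
    }
}

// No async context and no await required for the nonisolated method.
let formatters = DateFormatters()
let isoFormatter = formatters.defaultFormatter()
let timestamp = isoFormatter.string(from: Date())
```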
Earlier in this section I promised that you’d learn a bit about actor reentrancy and how it’s a
tricky concept. Let’s take a closer look at actor reentrancy now.

Understanding actor reentrancy


You know that actors protect their mutable state by ensuring exclusive access to members.
Only one caller can access and mutate our state at once. However, this doesn’t make an actor
analogous to a serial queue.
Whenever our actor hits an await, it suspends what it’s doing at the time and potentially
starts processing the next message in its mailbox. An actor doesn’t like to sit still and do
nothing when it knows that there’s more work to be done. And more importantly, when a
function call on an actor is suspended, we know that function isn’t going to read or write the
actor’s mutable state anyway; we can safely pick up some other work and resume the original
function call when a result comes in.
So to put that into a single sentence: actors do only one thing at a time, but they don't like
doing nothing.

Now. Let’s imagine for a moment that our formatters(using:) function from before
was an async function that had an await somewhere in its body. For now it doesn’t matter
what we might be awaiting; I’ll introduce a new example soon that will make a little bit more
sense (but it’s also more complex).
The following diagram illustrates what a situation with an await in an async
actor function looks like.

Figure 12: Actor reentrancy as a diagram

In this diagram you can see a dark gray call to actor.formatter(using:). This call
is suspended while it's awaiting something else. The actor has picked up a new item from
its mailbox in the meantime, and it will run the remainder of the suspended call to
actor.formatter(using:) once the awaited code has returned and the actor has finished
the other work it was doing. The remainder of our function is essentially put back in the
mailbox once its awaited work is done, except it gets to skip the queue all the way to the front,
allowing the actor to finish the work as soon as possible.
Consider the following code:

actor ImageLoader {
    private var imageData: [UUID: Data] = [:]

    func loadImageData(using id: UUID) async throws -> Data {
        if let data = imageData[id] {
            return data
        }

        // buildURL constructs a URL object based on a UUID
        let url = buildURL(using: id)
        let (data, _) = try await URLSession.shared.data(from: url)
        imageData[id] = data

        return data
    }
}

At first glance, this code doesn’t look too bad at all. We use an actor to maintain an in memory
cache of image data so that whenever we call loadImageData(using:), we either get
cached image data or we load new data from the network.
The execution path of this code is exactly what you might expect. We check whether we have
cached image data, if we don’t we go to the network to grab the data, we cache it, and we
return it. Of course there’s also an error path that we should think about in the real world but
for now we just throw an error to the caller of loadImageData(using:) and that’s good
enough.
Now imagine that we call loadImageData(using:) twice, concurrently, for the exact
same UUID. The actor will pick up the first call, see we don’t have cached data, and then our
function will await data from the network.
If we visualize this, here’s what that looks like:

Figure 13: Kicking off the first network call

Because our actor knows there's other work to do, it will pick up the next call to
loadImageData(using:) and perform the same check. We don't have image data for this UUID
in the cache, so we fetch it from the network.
If we expand the diagram you saw earlier, here’s how we can visualize what’s happening:

Figure 14: Kicking off the second network call

At this point we have two network calls for the exact same resource in flight. This isn’t ideal
because the images we’re loading could be pretty huge and it’s a waste of data to be loading
the same image twice at the exact same time.
The cause for our problem is called actor reentrancy.
Actor reentrancy essentially means that whenever an actor is awaiting something, it can go
ahead and pick up other messages from its mailbox until the awaited thing completes and
the function that got suspended can resume.
The result of actor reentrancy is that any assumptions we make before an await should always
be re-validated after an await. Additionally it means that in cases like ours we need to introduce
an explicit loading state for our images.
To fix this, we need to rebuild our ImageLoader to account for actor reentrancy by keeping
track of resources that we’re already loading, and reusing the in-flight fetch operations for
concurrent requests to load the same resource.
We want the flow of our code to be as follows:

• Do we have a cached state (loading or loaded) for the requested resource?

– If we're loading, return the result of the in-progress loading operation

– If the resource is already loaded, return the cached resource

• If we don’t have a cached state, create a new loading operation for the requested and
cache it
• Await the result of the loading operation
• Cache the loaded resource
• Return the loaded resource

The first thing we should do to implement the flow above is to introduce a LoadingState
enum so that we can cache in progress loading operations as well as operations that have
already completed:

actor ImageLoader {
    private var imageData: [UUID: LoadingState] = [:]

    func loadImageData(using id: UUID) async throws -> Data {
        // To implement..
    }
}

extension ImageLoader {
    enum LoadingState {
        case loading(Task<Data, Error>)
        case completed(Data)
    }
}

With this setup, we can change the if statement that checked for the existence of cached
image data to one that checks whether a loading state exists for a given UUID and then takes
the appropriate action based on the cached state. If we have a cached state we know that
we’re either loading the requested image, or we have loaded the image before.
Notice that I’m using a Task<Data, Error> for the enum’s loading case. We can wrap
the work of retrieving our resource in a Task so that we can await the value of the task for
multiple callers of loadImageData(using:). What’s nice about this approach is that a
Task<Data, Error> can safely have its value property awaited by multiple callers. This

means that a single task to load our image can communicate its result back to lots of callers
of loadImageData(using:) without making any extra tasks.

actor ImageLoader {
    private var imageData: [UUID: LoadingState] = [:]

    func loadImageData(using id: UUID) async throws -> Data {
        if let state = imageData[id] {
            switch state {
            case .loading(let task):
                return try await task.value
            case .completed(let data):
                return data
            }
        }

        // To implement..
    }
}

extension ImageLoader {
    enum LoadingState {
        case loading(Task<Data, Error>)
        case completed(Data)
    }
}

If a state exists in our in memory cache, the code checks whether the state is loading which
means that we can try await the outcome of the already in progress task. If we encounter
a state of completed we can directly grab the data from the associated value and return
it.
This part of the code is what will prevent us from having multiple data tasks in progress for
the same resource.
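The property that makes this work, a single task's value being awaitable from many places, can be shown in isolation with a small sketch:

```swift
// One task runs the (expensive) work exactly once...
let task = Task<Int, Error> {
    // Imagine a network call here.
    return 42
}

// ...but any number of callers can await the same result.
let first = try await task.value
let second = try await task.value
// first == second == 42; the task body only executed once.
```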
In the case that we don't have an existing state, we should create a task, cache it, and only
then await its value to make sure this all plays nicely with actor reentrancy:

func loadImageData(using id: UUID) async throws -> Data {
    if let state = imageData[id] {
        switch state {
        case .loading(let task):
            return try await task.value
        case .completed(let data):
            return data
        }
    }

    let task = Task<Data, Error> {
        // buildURL constructs a URL object based on a UUID
        let url = buildURL(using: id)
        let (data, _) = try await URLSession.shared.data(from: url)
        return data
    }

    imageData[id] = .loading(task)

    do {
        let data = try await task.value
        imageData[id] = .completed(data)
        return data
    } catch {
        imageData[id] = nil
        throw error
    }
}

By creating our task first, then caching it, and then awaiting the task’s value, we’ll have added
the in progress task to our cache before we suspend. This means that subsequent concurrent
calls to loadImageData(using:) will see our up to date cache and know that they can
go on ahead and reuse the task we’ve already created by awaiting that task’s value.
After we obtain our task’s value, we set the state in our cache to be .completed with the
data we just loaded. Note that I only need to do this in the code path where we’ve just created

Donny Wals 122


Practical Swift Concurrency

the task; there’s no point in doing this on both code paths.


Should our task fail, we want to make sure that we clear out the existing, failed task that we
have cached so that we can retry our loading operation later. This is a choice I’ve made for
this example, and you might want to opt for a third possible state (.failed, for example) to
represent that you have attempted to load this image, but that the operation failed.
After we’ve removed the failed operation from the cache, we throw the error we just received.
Again, this is a choice I’ve made to force callers of our function to deal with the error; a different
approach might be more appropriate for your use case.
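If you want to experiment with that alternative, here’s a minimal sketch of what a third state could look like. Note that the cachedData(for:) helper and the LoadFailed error are made-up names for illustration; they’re not part of this chapter’s actor.

```swift
import Foundation

enum LoadingState {
    case loading(Task<Data, Error>)
    case completed(Data)
    // New: remembers that a load was attempted and failed
    case failed(Error)
}

// Hypothetical helper: returns cached data when we have it, nil otherwise
func cachedData(for state: LoadingState?) -> Data? {
    switch state {
    case .completed(let data):
        return data
    case .loading, .failed, .none:
        return nil
    }
}
```

With this design, a caller that sees .failed can decide whether to surface the stored error or kick off a fresh load.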

Understanding Global Actors


When you’re working with Swift Concurrency using Xcode 26’s default settings, all your code
will be “isolated to the main actor by default”. This means that all your code, unless specified
otherwise, runs on the main actor (the main thread). This isn’t something magical that you
can’t do on your own. Prior to Swift 6.2 it was common to annotate certain classes (like view
models and views) with an @MainActor declaration to explicitly isolate those objects to the
main actor. That @MainActor declaration uses a global actor called the main actor.
Again, if you create a new project in Xcode 26 your code will be isolated to the main actor by
default. This means that your code gets @MainActor added to it implicitly. Exceptions are
your actors (those can’t be isolated to a global actor), classes that you’ve explicitly isolated to
another global actor, and declarations that you’ve marked as nonisolated.
In addition to using the @MainActor global actor, we have other ways to dispatch code to
the main thread.
For example, you can use MainActor.run:

await MainActor.run {
    // update UI
}

Similar to DispatchQueue.main.async, we can pass a closure that’s executed on the
main thread.


However, there are more ways to run code on the main thread. Imagine that you have some
function in your code that should always run on the main thread. We can achieve this by
annotating that function with an @MainActor annotation:

@MainActor
func updateUI() {
    // update UI
}

Again, the above is actually the default for new projects created with Xcode 26 and for projects
that opted in to the main actor by default compiler setting. That said, I think it makes sense for
us to explore global actors through the @MainActor declaration anyway since it’s a built-in
global actor with a very clear purpose.
We added a special annotation to the updateUI function to make sure it always runs on the
main actor. This annotation is written as @MainActor and it’s used to refer to a global actor.
We can apply a global actor annotation to the following declarations:

• Functions
• Objects (classes, structs)
• Closures
• Properties

Applying the @MainActor annotation to a class declaration ensures that accessing any
property or method on that class is done through the main actor. In other words, it allows us
to make sure that any interactions with the annotated object occur on the main thread.
Common advice is to, for example, annotate view models and observable objects for SwiftUI
with the main actor because these objects often end up triggering UI updates, which should
be done through the main actor. It’s advice like this that led Apple to introduce the
main actor by default configuration in Xcode 26; most apps really don’t need to have tons of
concurrency by default.
Annotating an entire object as @MainActor will make it seem as if that entire object is
itself defined as an actor because access to all state on that object will be subject to actor
isolation, just like it would be if you had defined your object as an actor. The key difference is
that with @MainActor you don’t actually define your object as an actor; instead, your object
will synchronize all of its method calls and property accesses using the main actor, which
effectively means that accessing properties and calling methods on the annotated object is
always done on the main thread.
If all you’re looking for is synchronization around mutable state and you don’t care about
making sure your code runs on the main thread, you should define your object as an actor
instead of using the @MainActor annotation. The @MainActor annotation is only useful
on an entire object when you require all property access and method calls on that object
to occur on the main actor. I can’t stress enough that this should be your default mode of
operation. Less concurrency and fewer actors make your code more predictable and robust.
Concurrency introduces complexity, and more often than not, this complexity isn’t required.
That said, all rules surrounding how you should interact with your object will be pretty much
the same as they would be if you had defined your object as an actor. There is one
major exception though. If you’re interacting with an @MainActor annotated object from a
context that already runs on the main actor, you don’t have to await your method calls and
property access like you would otherwise. The reason for this is that you’re already running
on the main actor, which means that the same rules apply as when you call a method on an
actor from within that actor.
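To make that difference concrete, here’s a small sketch. ProfileViewModel and both free functions are made-up names, not types from this chapter, and the sketch assumes a file where main actor by default is not enabled, so the explicit annotations are what create the isolation:

```swift
@MainActor
final class ProfileViewModel {
    private(set) var name = "initial"

    func update(name: String) {
        self.name = name
    }
}

@MainActor
func updateFromMainActor(_ viewModel: ProfileViewModel) {
    // Same isolation as ProfileViewModel: no await required
    viewModel.update(name: "Donny")
}

func updateFromElsewhere(_ viewModel: ProfileViewModel) async {
    // Nonisolated context: we must await the hop to the main actor
    await viewModel.update(name: "Donny")
}
```

The call site, not the method, determines whether await is needed: it marks the potential suspension while execution hops onto the main actor.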
While the main actor is currently the most useful example of a global actor, we can define our
own global actors by annotating an existing actor with @globalActor and conforming it to
the GlobalActor protocol:

@globalActor
actor MyActor: GlobalActor {
    static let shared = MyActor()

    // ...
}

The GlobalActor protocol requires us to implement a static shared property that is used
as the instance to defer executing code to whenever we annotate something with our global
actor. In other words, when we annotate a function with @MyActor the instance of MyActor
that we created as the shared instance is the actor instance that will receive the call to our
annotated function in its mailbox.
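To see what that looks like in use, here’s a sketch with made-up names (StorageActor and Counter); once the global actor exists, annotating a function or class with @StorageActor isolates it to the shared instance we defined:

```swift
// A custom global actor; annotated work runs on the shared instance
@globalActor
actor StorageActor: GlobalActor {
    static let shared = StorageActor()
}

// Everything on Counter is isolated to StorageActor.shared
@StorageActor
final class Counter {
    private(set) var value = 0

    func increment() {
        value += 1
    }
}

@StorageActor
func incrementTwice(_ counter: Counter) {
    // Same global actor as Counter: synchronous calls are allowed here
    counter.increment()
    counter.increment()
}
```

Because incrementTwice and Counter share the same global actor, the function body can call increment() without awaiting, just like code inside an actor calling its own methods.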


While actors are a fantastic tool to help us protect mutable state, they introduce concurrency
in our apps. When all you need from an actor is to protect a relatively small dictionary, making
every interaction with that dictionary asynchronous is a pretty large cost to pay. All functions that interact
with your actor must be async functions. This means that you can only call them from tasks or
other async functions. Adding actors to an existing codebase is a large undertaking because it often
comes with a significant amount of refactoring.
It’s worth considering a different data race protection mechanism when you want to avoid
introducing concurrency around the state you need to protect. Let’s talk about Mutex.

Protecting state with mutexes


At the beginning of this chapter I explained how you could leverage a lock to protect mutable
state without actors. Mechanisms like NSLock work, but they are considered to be somewhat
outdated and they don’t enforce data race protection in the way that we’d like. We still have
a var, and we need to remember to acquire and release our locks when accessing and/or
mutating that var.
A more modern and Swift Concurrency-friendly approach to protecting our property is to use
a mutex.
Mutexes in Swift are a cross-platform mechanism to safely and correctly protect mutable state
from data races.
Let’s see how we can define a property that’s protected by a Mutex:

import Synchronization

class DateFormatters {
    private let formatters: Mutex<[String: DateFormatter]> = Mutex([:])
}

The Mutex object is defined in the Synchronization framework, so we must import that
framework whenever we use Mutex.


While we’re protecting mutable state, the formatters property must be declared as let
because we’re using it to hold a Mutex that wraps our dictionary. Properties of type Mutex
must always be defined as let.
Now let’s see how we can read values from our formatters property by partially implementing
a mutex-based version of the formatter(for:) function that you’ve seen before.

func formatter(for format: String) -> DateFormatter {
    let formatter = formatters.withLock({ dict in
        return dict[format]
    })

    if let formatter {
        return formatter
    } else {
        // ...
    }
}

To access the dictionary that’s protected by our Mutex, we call withLock on it. This will
attempt to acquire a lock through our Mutex, and it will automatically release our lock when
our closure ends. Since withLock is the only way to access our state, we know that we’re
always going to be free of data races and can’t forget to acquire a lock before accessing the
dictionary.
If you return a value from the closure passed to withLock, the returned value becomes the
return value of your call to withLock. In this case I’m returning dict[format] which
means that I’ll return a formatter if one exists, and nil is returned otherwise.
Now let’s see how we can update our dictionary if an existing formatter isn’t found:

func formatter(for format: String) -> DateFormatter {
    let formatter = formatters.withLock({ dict in
        return dict[format]
    })

    if let formatter {
        return formatter
    } else {
        let newFormatter = DateFormatter()
        newFormatter.dateFormat = format

        formatters.withLock { dict in
            dict[format] = newFormatter
        }

        return newFormatter
    }
}

We can create a new date formatter and mutate our dict from inside of the withLock
closure. The dict that’s passed to the closure is passed as an inout value which means
that mutating the dict we receive will mutate the original dictionary.
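As a variation on the chapter’s example (my own sketch, not the book’s benchmarked code, with AtomicDateFormatters as a made-up name), you can also perform the lookup and the insert inside a single withLock call; this keeps the check-and-create step atomic, so two threads can never both create a formatter for the same format string:

```swift
import Foundation
import Synchronization

final class AtomicDateFormatters {
    private let formatters: Mutex<[String: DateFormatter]> = Mutex([:])

    func formatter(for format: String) -> DateFormatter {
        formatters.withLock { dict in
            // The check and the insert happen under one lock acquisition
            if let existing = dict[format] {
                return existing
            }

            let newFormatter = DateFormatter()
            newFormatter.dateFormat = format
            dict[format] = newFormatter
            return newFormatter
        }
    }
}
```

The tradeoff is that we now hold the lock while constructing the DateFormatter, so the lock is held slightly longer than in the two-step version above.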
Working with mutexes is a little bit more involved than working with actors, but it does mean
that all extra work is done in one place. If we use an actor we’re suddenly introducing a lot
of concurrency into our codebase, which is quite wasteful when all we need is to
protect a single property.
If you benchmark this mutex-based date formatter against an actor-based solution, the actor-based
solution is much slower due to the system having to suspend functions, hop from
one isolation context to the other, and then go back. The mutex approach blocks the
calling task while we wait for a lock, but in practice reading or assigning a property is so fast
that in all tests I’ve done there was no measurable downside to blocking while waiting for a
lock.
In the end, it’s up to you to decide which solution to pick. The tradeoff is always going
to be between performance, ease of use, and the amount of concurrency you want to introduce
in your code. I highly recommend you run experiments and measure performance using
Instruments to figure out the best solution for your use case.
When you use a class with a mutex to protect mutable state, you might find that Swift starts
complaining about your class being non-Sendable. You didn’t get this complaint when you
were using actors, so let’s dig into Sendable and see what’s up with that.


Understanding Sendability in Swift


The focus of this chapter is to understand how Swift Concurrency helps us prevent data races
at the compiler level. Actors are an invaluable tool in writing thread-safe code as you have
seen in this chapter so far.
However, Swift Concurrency does more than just introduce us to actors. It also checks whether
objects and closures that we pass around in our applications can safely be passed from one
isolation context to another, potentially crossing boundaries from one thread to the next. In
other words, Swift Concurrency can enforce thread safety at compile time. That’s precisely
what we’re seeing when we see warnings and errors about classes with mutexes being
non-Sendable.
Objects like actors can safely be passed around across isolation boundaries; they were made
for that. Other objects like structs, classes, closures, and enums are not always safe to pass
around. They might hold some non-thread-safe (non-Sendable) state that would make it
unsafe to pass an instance of a particular class from one place to the next. However, these
objects aren’t always unsafe to pass around. Our mutex-based class is a great example of such
a class.
Another example is a struct that doesn’t hold on to any reference types. A struct like that can
safely be passed around in our app; after all, structs are value types, so we’re passing around
copies of structs instead of references to a single instance of our struct. A class that only holds
immutable state, where each property defined on that class is a struct, can be safely passed
around because there’s no way to get into a data race when it’s impossible to mutate the data
that we’re passing around.
Objects that can be passed across concurrency contexts safely are referred to as Sendable
in Swift. Sendable is defined as a protocol in Swift and it has no required properties or
methods. It’s a so-called marker protocol that will mark our object as Sendable so that
the compiler can check whether our object actually meets all of the requirements for being
sendable.
So what are these requirements exactly, and what can we do to make sure that our objects
pass the compiler’s sendability checks? And most importantly, how can we make sure the
compiler shows us all of the errors we might have in our code surrounding sendability?


Enabling Sendability checks for the Swift compiler


Certain Swift Concurrency features are considered too much of a source code breaking
change to enable by default. Sendability checking is considered one of these features by
the Swift core team. To enable strict checking for Sendability, and to see the relevant compiler
errors and warnings, you should enable a concurrency feature called Strict Concurrency
Checking in your project.
This will make sure that the compiler flags any problems in your code in the exact same way
that it would when you’re using the Swift 6 language mode.
Note that when you’re using Swift 6.x you’re using the Swift 5 language mode by default when
you’re creating new Xcode projects. This means that you have access to Swift 6.x’s features,
but the compiler is enforcing Sendability checking like it would in Swift 5.10. Most of your
Xcode projects will be using the Swift 5 language mode. When you create new SPM packages
using Xcode 16 or newer, you will be creating new packages using the Swift 6.x toolchain. By
default that will mean that you’re using the Swift 6 language mode for your SPM packages
unless you set your language version to Swift 5 manually.
SPM packages that were created with a toolchain that’s older than Swift 6 will use the Swift 5
language mode. You can check this by looking at your Package.swift file. Here’s an example of
a package that will use the Swift 6 language mode since it was created with the Swift 6.2 tools
version. The comment on the first line of the file is used by the compiler to determine this.

// swift-tools-version: 6.2
// The swift-tools-version declares the minimum version of Swift required to build this package.

import PackageDescription

let package = Package(
    name: "Chapter6",
    targets: [
        .executableTarget(
            name: "Chapter6"),
    ]
)


Now let’s see an example of an older package that will use the Swift 5 language mode:

// swift-tools-version: 5.10
// The swift-tools-version declares the minimum version of Swift required to build this package.

import PackageDescription

let package = Package(
    name: "Chapter6",
    targets: [
        .executableTarget(
            name: "Chapter6"),
    ]
)

Because we’re defining our package with the 5.10 tools version, the compiler will use the Swift
5 language mode for this package.
New packages should always be created using the latest tools version since using an older
tools version means there will be SPM features you can’t use if they were made available in a
newer toolchain.
It’s much better to explicitly set your Swift language version in your package description
instead:

// swift-tools-version: 6.2
// The swift-tools-version declares the minimum version of Swift required to build this package.

import PackageDescription

let package = Package(
    name: "Chapter6",
    targets: [
        .executableTarget(
            name: "Chapter6",
            swiftSettings: [.swiftLanguageMode(.v5)]
        ),
    ]
)

If you’re creating a new package, I recommend that you don’t drop your Swift language version
down to Swift 5 unless you’re running into problems you can’t solve in the Swift 6 language
mode. Writing new code with Swift 6 is much easier than migrating from the Swift 5 language
mode to the Swift 6 language mode.
In any event, if you have a package or project that uses the Swift 5 language mode, you can
opt in to stricter concurrency checks in preparation for the Swift 6 language version.
To enable strict concurrency checks in an SPM package, you need to pass the
enableExperimentalFeature flag to your target settings with a value of StrictConcurrency:

// swift-tools-version: 6.2
// The swift-tools-version declares the minimum version of Swift required to build this package.

import PackageDescription

let package = Package(
    name: "Chapter6",
    targets: [
        .executableTarget(
            name: "Chapter6",
            swiftSettings: [
                .swiftLanguageMode(.v5),
                .enableExperimentalFeature("StrictConcurrency")
            ]
        ),
    ]
)

Note that setting this flag in a project that uses the Swift 6 language mode has no effect since
you’d already have the full suite of sendability checks available under Swift 6.


To enable strict concurrency checking in an Xcode project, navigate to your project’s build
settings tab and search for “strict concurrency”. By default, this build setting
has a value of Minimal. As you can imagine, that only enforces a minimal set of
constraints, like explicit Sendable annotations (you will learn about Sendable
annotations shortly).
You could bump the concurrency checks to Targeted, which will enable sendability and
actor isolation checks for your project using the same constraints that are used in Swift 6.0.
I would recommend setting this to at least Targeted for existing projects where
you want to ensure that your code is as thread-safe as possible.
The third setting is Complete. This will enable the full set of concurrency checks that exists
in Swift 6.x. This includes sendability checking, actor isolation checks, and more. For new
projects I would recommend you jump straight to setting your checks to Complete,
which will make sure that your code is compatible with the Swift 6 language mode right
away.
Refer to the screenshot below to see how you can find and set your strict concurrency checking
settings.

Figure 15: Enabling strict concurrency checking

Once you have your concurrency checks set to Targeted or Complete you will be able
to follow along with the sections that follow, and see the compiler errors I will mention and
resolve. If you’re not seeing the same errors, make sure that your concurrency checking
settings are configured correctly.
In addition to enabling strict concurrency checking, you can take a look at Chapter 12 - Migrating
to Swift 6.2 to learn more about migrating to Swift 6.2.


Sendability for value types


Value types in Swift are types like structs and enums. Instances of these objects are almost
always copied whenever they get passed around in your application. There are situations
where Swift does not immediately make a copy of your structs and instead it will make a copy
when you’re about to mutate the given struct instance. This is known as copy on write or
COW and it’s an optimization that has been added to objects like Dictionary and Array
amongst others.
The result of this is that value types can usually be passed around in applications that leverage
concurrency without any issues. After all, when we pass an instance of a value type from one
place to the other, or more concretely from one task or thread to another task or thread, the
receiving end of the instance will obtain its own copy of our type.
Swift Concurrency uses a protocol called Sendable to express whether or not an object can
be safely passed across concurrency boundaries. Or in other words, the Sendable protocol
indicates whether an object is thread safe.
A struct’s copying behavior makes structs quite safe by default, but not every struct is automatically
sendable. Let’s look at an example:

class Movie {
    // ...

    var isFavorite = false
}

struct MovieViewData {
    let movie = Movie()
}

Given a setup where we have a struct that has a reference to a class as one of its properties,
we know that copying the struct will not copy the instance of the class that the struct points
to. We only copy the pointer to the instance itself, resulting in two different structs that point
to the same class instance.
This means that when we pass instances of MovieViewData around in an application
we will create copies of that MovieViewData instance. Because the movie property on
MovieViewData is a pointer to a class instance, the pointer is copied. This results in both
copies of MovieViewData pointing to the exact same Movie instance.
When we try to use an instance of MovieViewData in a way that would have us pass that
instance across concurrency boundaries Xcode will show us a warning:

func example1() {
    let data = MovieViewData()

    Task {
        // Capture of 'data' with non-sendable type 'MovieViewData' in a `@Sendable` closure
        print(data.movie.isFavorite)
    }
}

For now, don’t worry about what an @Sendable closure is. The point is that the Swift
compiler inferred that our MovieViewData instance cannot be passed across concurrency
boundaries because one or more of its properties is not sendable.
If we change our Movie object from a class to a struct, we’ll see that the example1()
function shown above suddenly no longer produces warnings:

struct Movie {
    // ...

    var isFavorite = false
}

struct MovieViewData {
    let movie = Movie()
}

The reason for this is that MovieViewData now implicitly conforms to the Sendable
protocol because all of its members are also sendable.
Let’s take a closer look at all rules that make a value type implicitly conform to Sendable:


• All members of the struct or enum must be sendable and
  – The struct or enum is marked as frozen or
  – The struct or enum is not public or
  – The struct or enum is not marked as @usableFromInline

If your struct or enum does not meet all of the above requirements, you can manually add
Sendable conformance to tell the compiler to check the sendability for your object.
For example, when I make the MovieViewData a public struct, the warning that was
resolved earlier shows up again:

public struct MovieViewData {
    let movie = Movie()
}

func example() {
    let data = MovieViewData()

    Task {
        // Capture of 'data' with non-sendable type 'MovieViewData' in a `@Sendable` closure
        print(data.movie.isFavorite)
    }
}

Go ahead and try the above in this chapter’s code examples to see the warning for yourself.
Regardless of what the compiler is telling us, we know that all of the members on our struct
are sendable so we can tell the compiler that we want our struct to be considered Sendable
anyway:

public struct MovieViewData: Sendable {
    let movie = Movie()
}

By manually adding conformance to Sendable, we tell the compiler that our object should
be sendable, and the compiler will actually verify this at compile time. This means that
breaking the first rule of sendable structs (all members of the struct or enum must be sendable)
will still result in a warning. For example, we can change the Movie struct back to being a
class and our warning reappears.

class Movie {
    // ...

    var isFavorite = false
}

public struct MovieViewData: Sendable {
    // Stored property 'movie' of 'Sendable'-conforming struct 'MovieViewData' has non-sendable type 'Movie'
    let movie = Movie()
}

The compiler will proactively tell us that we should be careful with our MovieViewData
object because the movie property is not sendable, which means that our entire object is not
correctly implementing the Sendable protocol. This can lead to data races if we pass our
MovieViewData instances across concurrency contexts.
All the rules you saw just now apply equally to enums. When you have a plain enum with no
cases that have associated values you’ll find that your enums are pretty much always sendable.
However, when you add associated values to your cases, you should make sure that these
associated values conform to Sendable to ensure that your enum is also sendable.
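As a quick sketch of that rule (Film and FilmState are made-up names for this example):

```swift
struct Film: Sendable {
    let title: String
}

// This Sendable conformance compiles because every associated value
// (Film and String) is itself Sendable; swapping Film out for a
// non-sendable class would make the compiler reject it
enum FilmState: Sendable {
    case idle
    case loaded(Film)
    case failed(message: String)
}
```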
We saw how using a class as a property on our structs makes the struct non-sendable. Luckily
this doesn’t have to be the case. We can make reference types conform to the Sendable
protocol with a little bit of care, attention, and manual work.

Sendability and reference types


Reference types in Swift are classes, functions, closures, and of course actors. When talking
about sendability for reference types it should more or less speak for itself that actors are
always considered to conform to Sendable implicitly. They actively prevent data races
through their actor mailbox, which allows actor instances to be passed across concurrency
boundaries safely.
We’ll talk about sendability for functions and closures separately, so in this section I would
like to focus on classes and the Sendable protocol.
Classes can be manually marked to conform to the Sendable protocol, but there are a few
requirements that I’ll dig a bit deeper into in just a moment. Let’s list out the requirements
first:

• A sendable class must be final
• All properties on the class must be sendable and immutable (declared as let)
• The class cannot have any superclasses other than NSObject
• Classes annotated with @MainActor are implicitly sendable due to their synchronization through the main actor. This is true regardless of the class’ stored properties being mutable or sendable.

The Movie class object you saw before can be made Sendable by marking the class as final,
making its isFavorite property a let, and making the class conform to Sendable
explicitly:

final class Movie: Sendable {
    // ...

    let isFavorite = false
}

Not marking the class as final or keeping isFavorite a var will result in compiler warnings
that tell us exactly what we should do to satisfy the Sendable protocol.
In some legacy codebases you might have classes where you manually ensure that a class is
thread safe. For example, you might have used NSLock to serialize access like we did at the
start of this chapter. Or maybe you’re using a serial dispatch queue to synchronize access to
mutable state. Or maybe you’ve set up a class that implements some convenience methods
for subclasses that are otherwise fully immutable.
In an ideal world, you would refactor this legacy code to properly meet the requirements that
the Sendable protocol imposes. You might want to switch to actors or flatten your class
hierarchy eventually. Unfortunately, refactoring can take time, and in a sufficiently complex
codebase you might not be able to perform all refactoring in one go.
When you’re certain that you’ve taken the needed steps to ensure that your class is thread-safe
and fully free of data races, you can force the compiler to accept your Sendable conformance
without actually verifying your conformance by marking your class as @unchecked
Sendable:

final class Movie: @unchecked Sendable {
    // ...

    var isFavorite = false
}

The compiler will see that our Movie object is @unchecked Sendable and it will blindly
accept that our class is sendable. Note that this does not guarantee that our Movie instance
can safely be passed across concurrency boundaries. It simply means that we told the compiler
that even though the class doesn’t look like it conforms to Sendable, we want it to pretend
that it does because we’re taking full responsibility for ensuring that our class is free of data
races.
If that sounds dangerous to you, then you’re absolutely right. Using @unchecked
Sendable at the wrong time just to get the compiler to stop complaining is not a very good idea.
The concept of sendability in Swift is intended to help you write code that is free of data races.
It’s not intended to be a roadblock that you should find your way around using whatever tools
you can find.
Now that you know about sendability for classes, actors, and value types it’s time to take a
look at sendability for functions and closures.

Sendability, functions, and closures


In the previous sections, we focused on sendability as something that means as much as
“This object is thread-safe because my (mutable) state can be accessed safely”. For a function
or closure to be considered thread safe, we need to shift our thinking a little bit. After all,
functions and closures do not “own” any state in a way that allows us to access or interact
with that state by accessing properties or calling methods directly on that closure or function
like we can with structs and classes that can expose methods and properties.
So what do we mean when we say that a certain closure for example is sendable?
When a closure or function is sendable, the closure or function in question does not capture
any non-sendable objects. And because functions and closures do not conform to protocols
we declare sendability for these types using the @Sendable annotation.
For example, we can define a sendable closure as follows:

var sample: @Sendable () -> Void

Similar to how we define an @escaping closure, a closure that is sendable will have an
@Sendable annotation before we write the closure’s actual type. Note that closures that
are both @Sendable and @escaping would be marked as @Sendable @escaping ()
-> Void.
For functions this looks as follows:

@Sendable func sampleFunc() {
    // ...
}

The sendability of the function is declared before the func keyword.
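Here’s a tiny sketch combining both spellings; greet and makeGreeting are made-up names. Because neither captures any state, both are trivially sendable:

```swift
// A sendable function: the attribute goes before the func keyword
@Sendable func greet(_ name: String) -> String {
    "Hello, \(name)"
}

// A sendable closure: the attribute is part of the closure's type
let makeGreeting: @Sendable (String) -> String = { name in
    "Hi, \(name)"
}
```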


When you mark a closure or function as sendable you’re essentially saying “this closure (or
function) cannot capture or use any non-sendable state or objects”. If you do accidentally
capture something that’s not sendable in an @Sendable context like we do in the following
code snippet, you will see the compiler warning that’s written in a comment:

class Movie {
    // ...

    var isFavorite = false
}

let movie = Movie()

var sample: @Sendable () -> Void = {
    // Capture of 'movie' with non-sendable type 'Movie' in a `@Sendable` closure
    movie.isFavorite = false
}

Our closure that’s supposed to be sendable captures and interacts with a non-sendable instance
of the Movie class. This isn’t allowed because it breaks the sendability of our closure. We
can fix our issue by making sure that our Movie class is sendable, or, if we’re only reading
properties on that instance, we could capture those in a capture list instead of using (and
implicitly capturing) the movie as a whole:

let isFavorite = movie.isFavorite

var sample: @Sendable () -> Void = { [isFavorite] in
    print(isFavorite)
}

The code above doesn’t work if:

• You want to mutate the isFavorite property
• You want to always have the latest / current value for isFavorite

The second point is especially interesting, if we capture isFavorite, we will capture its
value at the time we create the closure. So if isFavorite is false when we make the
closure, we capture the property as being false. If the isFavorite property on the movie
changes between creating and executing the closure, we’d see the old value in our closure.
The only way to properly fix the problem at hand is to either make our Movie conform to
Sendable, or to remove the @Sendable annotation from our closure if it’s a closure we
wrote ourselves.
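As a sketch of the first fix, assuming the movie's data can be modeled immutably (the title property is invented for this example), a reworked Movie can be a Sendable value type, after which the @Sendable closure may capture it freely:

```swift
// A value type whose stored properties are immutable and Sendable can
// safely conform to Sendable itself.
struct Movie: Sendable {
    let title: String
    let isFavorite: Bool
}

let movie = Movie(title: "Some Movie", isFavorite: true)

// No compiler diagnostic: Movie is Sendable, so capturing it is allowed.
let sample: @Sendable () -> Bool = {
    movie.isFavorite
}

print(sample()) // prints true
```

This trades mutability for safety; if the movie genuinely needs mutable shared state, an actor would be the better fit, as discussed earlier in this chapter.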
Annotating a function or closure with @Sendable carries a heavy semantic meaning. It
means you intend to use that closure or function in code that is concurrent, and it means that
you want to make sure that calling your closure or function is completely thread-safe.


When you don’t intend to call your function or closure from async contexts, you
shouldn’t default to applying @Sendable all over the place. Doing so makes your own code much
harder to work with, sometimes for no good reason at all. Of course, it can
make sense to ensure that certain functions can safely be
called from multiple threads; but in my opinion your efforts are better spent on making
your classes and structs thread-safe than on making sure that all your functions only
operate on sendable objects.
The @Sendable declaration for closures is mostly useful when you’re passing state into a
closure, and you want to be able to use that state from other places at the same time. That’s
why a sendable closure can’t capture any non-Sendable state at all. However, sometimes the
compiler can prove that transferring non-Sendable state from one isolation context to another
is safe because the original isolation context transfers the state to your closure and then
never accesses it again. This situation is solved by marking a closure as sending instead of
@Sendable.
Let’s take a closer look at sending next.

Using sending instead of @Sendable


When Swift Concurrency was first introduced, tasks, task groups, continuations, and more
marked the closures they’d run as @Sendable. This made code that looks as follows incorrect
in the Swift 6 language mode (or Swift 5 mode with strict concurrency checking):

class NotSendable {}

// nonisolated because we're assuming @MainActor by default is on
nonisolated func runWork() {
    let notSendable = NotSendable()

    // Modern Swift versions use a Task with a sending closure so
    // you'll never see this error in recent Xcode versions.
    Task {
        // Capture of 'notSendable' with non-Sendable type 'NotSendable'
        // in a '@Sendable' closure; this is an error in the Swift 6
        // language mode
        print(notSendable)
    }
}

While the compiler is correct that our class isn’t sendable, and that it’s therefore not safe to use
from multiple tasks at once, it should be able to prove that this code is perfectly fine.
We create an instance of our NotSendable class inside of runWork and it’s assigned to a
local constant. This means that we know that after runWork returns, we do not have any
references to our NotSendable instance anymore. We only use notSendable inside of
our Task, so once we’ve transferred our instance from the isolation region in runWork to the
one owned by our Task, we could say that we transferred ownership of our instance from one
isolation region to the next.
Because @Sendable places a hard constraint on what we’re allowed to capture (only sendable
state), we can use the new sending keyword to indicate that we’re okay with capturing
non-Sendable state as long as it’s safe.
So in Swift 6 you’ll see that the code shown above works perfectly fine. The compiler can
prove safety, so we’re okay to capture notSendable in our Task.
The following code is fine too:

// nonisolated because we're assuming @MainActor by default is on
nonisolated func runWork() {
    let notSendable = NotSendable()

    print(notSendable)

    Task {
        print(notSendable)
    }
}

While we do interact with notSendable inside of runWork, the compiler can prove that
we don’t interact with notSendable anymore after we’ve sent it into our Task.
The compiler will flag an issue if we access notSendable after our task:


// nonisolated because we're assuming @MainActor by default is on
nonisolated func runWork() {
    let notSendable = NotSendable()

    Task {
        // Sending value of non-Sendable type '() async -> ()' risks
        // causing data races; this is an error in the Swift 6 language
        // mode
        print(notSendable)
    }

    print(notSendable)
}

The compiler can’t prove that our Task and our access to notSendable won’t race, so it flags
an issue.
What’s interesting is that the following is perfectly fine (assuming we have main actor by
default turned on):

// We're assuming @MainActor by default is on
func runWork() {
    let notSendable = NotSendable()

    Task {
        print(notSendable)
    }

    print(notSendable)
}

Because everything in the code above is isolated to the main actor, the compiler knows that
our access to notSendable after the task is safe. The task can’t run in parallel with runWork
since the main actor will only be doing one thing at a time.
Usually when you define functions that take a closure that needs to be run in a thread-safe
manner, you’ll want to mark the closure as sending first and then switch to @Sendable only


when you find that it sends you down a path of compiler errors that are hard to solve. Here’s
how you define a sending argument on a function:

// nonisolated because we're assuming @MainActor by default is on
nonisolated func myTask(work: sending () -> Void) {
    work()
}

This is quite similar to defining a @Sendable closure except you use sending instead.
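As a usage sketch (the runNow(_:) function and NotSendable's value property are invented names, not from the book), a sending closure may capture non-Sendable state; the compiler only checks that the caller gives up access to that state afterwards:

```swift
class NotSendable {
    var value = 41
}

// The closure is `sending`, so it may capture non-Sendable state as
// long as the caller never touches that state again.
func runNow(_ work: sending () -> Int) -> Int {
    work()
}

let notSendable = NotSendable()
let result = runNow { notSendable.value + 1 }
// `notSendable` must not be used from here on; only the Sendable
// Int result crosses back to us.
print(result) // prints 42
```

This mirrors what Task does under the hood in modern Swift versions: its closure is sending, which is why the earlier examples compile.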
Features like sending make writing concurrent code much easier than it was with just
Swift 5.10. We’re now able to let the compiler verify that our code is safe even when we’re
technically passing non-Sendable state across isolation boundaries. Swift 6 will allow us to
do that as long as the compiler can ensure exclusive access. In practice this means that we
see fewer sendability-related warnings, which makes adopting Swift Concurrency much more
straightforward than it was in Swift 5.10. Combined with features like @MainActor by default
and nonisolated(nonsending) to inherit actor isolation, writing concurrency-friendly
code is becoming more and more straightforward with every Swift release.

In Summary
In this chapter, you have been introduced to the basics and more advanced uses of actors.
You learned what data races are in programming, why they are hard to debug, and how they
would traditionally be solved in a pre-Swift concurrency world.
Next, you learned that we can leverage actors in modern applications to ensure exclusive
access to mutable state that’s owned by an actor. You learned that actors leverage a so-called
actor mailbox to receive messages that are handled one at a time. You also learned that
whenever an actor is processing a message and it hits an await, the actor suspends the
current function (or message) and starts processing the next item in its queue. You learned
that this can lead to unexpected changes to assumptions you made before the await, and
you learned that this principle is called actor reentrancy.
After that, you learned about global actors and how they can be used to synchronize code
from all throughout your codebase onto a single actor.


You’ve learned that this principle is mostly useful for the main actor, and that it will become
more useful once Swift Concurrency officially supports custom executors.
We wrapped this chapter up by looking at a concept called sendability, the Sendable protocol
and the @Sendable annotation. You’ve seen how these concepts help us to make sure that
our code is thread-safe and how we can use these concepts to make sure that our objects can
be passed across concurrency boundaries safely.
After that, you also learned about the sending keyword which allows us to safely transfer
non-Sendable state from one isolation context to the next as long as we do so with exclusive
access.
In the next chapter, we will take a look at a completely different part of Swift Concurrency that
I haven’t mentioned much just yet: async sequences.


Chapter 7 - Working with asynchronous sequences
In the past six chapters of this book we’ve explored Swift Concurrency and async / await as tools
that allow us to perform some body of work and wait for the result. In reality, asynchronous
or concurrent work does not always follow this pattern where we kick off work, wait a while,
and then receive a single result. Sometimes we want an asynchronous body of work to tell
us about the progress it’s making. Or maybe you want to read streaming data from a URL, or
process messages from a websocket as they become available.
These are just a few of the many examples where an asynchronously generated stream of
values, or stream of information, is useful. These asynchronously produced values can vary
wildly in terms of what they represent, but we know they’re all being produced as part of a
single “thing” or process that we want to perform.
In this chapter we will explore Swift Concurrency’s AsyncSequence which allows us to
asynchronously receive and process values. We’ll look at some of the built-in sequences that
we can use and how we can make our own sequences.
In the next chapter we’ll continue exploring async sequences and you’ll learn how async
sequences relate to Combine and we’ll look at a special Swift package that Apple made to
extend what we can do with asynchronous sequences in Swift.

Exploring your first async sequences


Almost anybody that has worked with Swift for a little while is familiar with writing simple,
straightforward for loops. For example, to iterate over all items in an array you can write the
following code:

let myArray = [1, 2, 3]

for int in myArray {
    print(int)
}


Hopefully, this code doesn’t look unfamiliar to you at all. All we do here is iterate over each
element in an array using a standard Swift for loop.
Now imagine that we’d like to receive all data from a URL where it’s possible (or even required)
to process each line on its own. For example, we could be dealing with a document that looks
like this:

year,make,model,body_styles
2023,Acura,ILX,"[""Sedan""]"
2023,Acura,Integra,"[""Hatchback""]"
2023,Acura,MDX,"[""SUV""]"
2023,Acura,RDX,"[""SUV""]"
2023,Acura,TLX,"[""Sedan""]"
2023,Alfa Romeo,Giulia,"[""Sedan""]"
2023,Alfa Romeo,Stelvio,"[""SUV""]"
...

This kind of document is called a CSV (Comma Separated Values) document, where each
line in the document represents an entry in a spreadsheet or database. For each entry the
information fields are separated by commas. It’s a common kind of file to use when you want
to exchange tabular data between two people that might not use the same spreadsheet editor,
for example.
A logical way to process a file like the above would be to go line by line, extract the fields into
a model object, and essentially keep doing this until the end of the file.
I’m not saying this is the best or most efficient way to process a CSV file; it’s just an example that
happens to fit nicely with what I’d like to show you in this section. In fact, the implementation
you’re about to see is incredibly naive, to the point that it will certainly have some bugs when
applied to a different data set.
We could write the following to parse the file you just saw:


struct Car {
    let year: String
    let make: String
    let model: String
    let body_styles: String
}

var cars = [Car]()

let csvData = try Data(contentsOf: url)
let csvString = String(data: csvData, encoding: .utf8)
let csvLines = csvString?.components(separatedBy: "\n") ?? []

for line in csvLines {
    let components = line.components(separatedBy: ",")
    guard components.count == 4 else {
        continue
    }
    let car = Car(year: components[0],
                  make: components[1],
                  model: components[2],
                  body_styles: components[3])
    cars.append(car)
}

print(cars)

Again, the code above is far from a completely valid implementation of a proper CSV parser.
The point of this example is that we have a file where each line in the file represents an instance
of our data model. So what we end up doing is parsing our file line by line, extracting what we
need, and we create our model objects using that input.

Tip:
Follow along with the examples in this section by running the local server in the code
bundle for this chapter.

If you run the sample app for this chapter you can give the code you just saw a spin and play
around with it. You’ll see a warning about synchronously loading data from a URL which we
shouldn’t be doing. That’s fine; we’ll fix that in just a moment.


Now imagine that this file is very large, thousands of lines, and we need to load it from the
internet before we can parse it. We’d probably spend quite some time fetching Data and
transforming it into a String before we could begin splitting the large string into an array
of strings based on newline characters.
A transfer of data over the internet (or reading a file from disk for that matter) is usually done
by sending chunks of bytes from the server to a client. The client can parse and buffer these
bytes and merge all received chunks together as the result of the work that’s performed. This
ability to send chunks separately is the foundation of the TCP protocol, which is the most
commonly used protocol for network communication.
When we parse bytes as soon as they are loaded, we can actually start passing the data that
we’ve received to a piece of code that’s waiting for the data we’re loading.
For example, we could process lines one by one as they are handed to us by the network or file
system.
A setup like this would allow us to receive and process every line in the CSV file as soon as the
line is received from the server. That means that we might not have the entire file yet because
the server is still sending data but that’s okay. We’re only interested in parsing one line at a
time until we’ve parsed all lines anyway. The sooner we can begin parsing the first line, the
better.
We can do this by leveraging an asynchronous for loop and a very neat property on URL.
When we have a URL that contains our data, we can access the lines property on it to
asynchronously loop over all of the lines that the URL will load.
Here’s how we would adapt the code you saw earlier to do exactly what I just described:

var cars = [Car]()

// We can await values to become available
for try await line in csvURL.lines {
    let components = line.components(separatedBy: ",")
    guard components.count == 4 else {
        continue
    }
    let car = Car(year: components[0],
                  make: components[1],
                  model: components[2],
                  body_styles: components[3])
    cars.append(car)
}

print(cars)

Notice how little code I changed. The key difference between the before and after code is in
the following line:

for try await line in csvURL.lines {

A new property on URL called lines allows us to process the contents of the provided URL
line by line, asynchronously. It’s almost like it gets us an array of lines that are present in the
file that’s hosted at the given URL. The key difference? Not all values are loaded by the time
we want to iterate over the sequence of lines.
In other words, as soon as we’ve fetched a full line from the network we can process that line.
After that, we are suspended and waiting for the next line to be loaded. This process repeats
until all lines are loaded and processed by our for loop.
Usually when you write a for loop, you iterate over an object that conforms to Swift’s
Sequence protocol.
In this case, we’re iterating over an object that conforms to AsyncSequence. This means
that we’re still dealing with a sequence-like object, but not all values in the sequence are
known up front. This is the key difference between AsyncSequence and Sequence.
We can iterate over an asynchronous sequence with a for loop, but we must await each value
as it becomes available.
Notice how there’s also a try written before the await. Certain sequences might encounter
errors while obtaining or producing values. These errors are then surfaced to us using a
throw. Whenever a sequence throws an error, we know that the sequence has ended (with a
failure). The for loop ends, and our program continues.
Handling a for loop’s error looks a little bit as follows:


do {
    for try await line in csvURL.lines {
        // ...
    }
} catch {
    print("something went wrong", error)
}

In essence, handling an error that is thrown by an async sequence is no different from handling
an error that is thrown by a function that you’ve called.
It’s important that you understand that an asynchronous for loop’s control flow is the same
as that of a synchronous one. We can exit out of our loop using a break, if we’d like to skip
processing a given element in the loop we can use the continue keyword, and when we
want to return something from a for loop that’s written within a function we can use return
just like we normally would.
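As a sketch of these control-flow rules in an asynchronous loop (the data is made up, and the AsyncStream used to produce it is covered later in this chapter):

```swift
// A stand-in async sequence of lines.
let lines = AsyncStream<String> { continuation in
    for line in ["a", "", "b", "END", "c"] {
        continuation.yield(line)
    }
    continuation.finish()
}

var collected = [String]()
for await line in lines {
    if line.isEmpty { continue } // skip this element
    if line == "END" { break }   // exit the loop early
    collected.append(line)
}

print(collected) // prints ["a", "b"]
```

The empty string is skipped, and the loop ends as soon as it sees "END", exactly as a synchronous for loop would behave.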
With that knowledge in mind, I’d like you to look at the following code and try to reason about
what happens when this code runs:

var employees = [Employee]()

let csvURLPartOne = URL(string: "path/to/csv/part-1.csv")!
let csvURLPartTwo = URL(string: "path/to/csv/part-2.csv")!

for try await line in csvURLPartOne.lines {
    // ...
}

for try await line in csvURLPartTwo.lines {
    // ...
}

In this example, our csv file is split up into two parts, and we want to asynchronously load and
process each file by loading them in parallel. When we run the code above, your expectation
might be that both for loops will run simultaneously because they leverage async sequences.
Sadly, this is not the case.


Because asynchronous for loops have the same rules as normal for loops, the first loop must
complete before the second loop can start. We can add some print statements to make this
concept more clear:

var employees = [Employee]()

let csvURLPartOne = URL(string: "path/to/csv/part-1.csv")!
let csvURLPartTwo = URL(string: "path/to/csv/part-2.csv")!

print("about to process...")

for try await line in csvURLPartOne.lines {
    // ...
}

print("processed first sequence...")

for try await line in csvURLPartTwo.lines {
    // ...
}

print("processed second sequence...")

Adding these print statements paints a somewhat clearer picture. If I had shown you
this a couple of paragraphs earlier, you would probably have fully expected these print
statements to be printed in order. And that’s entirely correct.
Knowing that, it also makes sense that these for loops do not run in parallel.
So how can we run both loops in parallel then? Well, there’s a few options.
The most obvious solution that only leverages Swift Concurrency features that you’ve seen
before is to give each loop its own task:

// ...

Task {
    for try await line in csvURLPartOne.lines {
        // ...
    }
}

Task {
    for try await line in csvURLPartTwo.lines {
        // ...
    }
}

While this works to run code in parallel, it does prevent us from knowing when both tasks are
completed in a nice way. We can fix this by assigning these two tasks to their own variables
and awaiting their values:

Task {
    let csvURL = URL(string: "http://127.0.0.1:8080/cars.csv")!

    let task1 = Task {
        for try await _ in csvURL.lines {
            print("Loop one received line...")
        }
    }

    let task2 = Task {
        for try await _ in csvURL.lines {
            print("Loop two received line...")
        }
    }

    try await (task1.value, task2.value)

    print("both sequences are processed")
}

This solution isn’t very clean at all, and we can do much better using tools like async let
and TaskGroup, which you will learn about in Chapter 9 - Performing and awaiting work
in parallel when we talk about parallelizing work and structured concurrency.


For now, your key takeaway should be that two for loops written right underneath each other
act like any other for loop would; the first loop must complete before the second loop can
start.
Oh, one more thing I want to show you. We can put our asynchronous for loops inside of
asynchronous functions. These functions will not return until the for loop that’s inside of
them is completed. As you might expect, the example below is effectively the same as having
two for loops right underneath each other:

func loadOne() async throws {
    for try await line in csvURLPartOne.lines {
        // ...
    }
}

func loadTwo() async throws {
    for try await line in csvURLPartTwo.lines {
        // ...
    }
}

try await loadOne()
try await loadTwo()

I wanted to include this sample for you because I could see how you might be wondering if
async functions could help you out here.
In the example we started out with, we asynchronously fetched a csv file and we used this file
to populate an array of car objects. Here’s what that looked like:

var cars = [Car]()

// We can await values to become available
for try await line in csvURL.lines {
    let components = line.components(separatedBy: ",")
    guard components.count == 4 else {
        continue
    }
    let car = Car(year: components[0],
                  make: components[1],
                  model: components[2],
                  body_styles: components[3])
    cars.append(car)
}

print(cars)

If you look closely, you’ll see that each line object that we receive is expected to be a String
and we transform each string into a Car. If this were all synchronous, you might say we take
an object of type [String] and we transform it to [Car] and you might be thinking “ha I
could do that with a map!”.
What’s nice about async sequences is that they too can be mapped. We could refactor our
code a little bit and make it so that we build an async sequence that includes a mapping step
that will be performed for every value that’s emitted by the sequence.

var cars = [Car]()

let csvURL = URL(string: "path/to/csv/")!

let sequence = csvURL.lines
    .map { line in
        let components = line.components(separatedBy: ",")
        guard components.count == 4 else {
            return Car(year: "", make: "", model: "", body_styles: "")
        }
        let car = Car(year: components[0],
                      make: components[1],
                      model: components[2],
                      body_styles: components[3])
        return car
    }

for try await car in sequence {
    cars.append(car)
}

That’s pretty neat, right? If you’re used to Combine, this code will look very familiar because
Combine’s map works in a similar way.
For each element that is produced by our async sequence, we can transform that element and
return something new. Applying a map to an async sequence produces a new async sequence
that we must iterate over using await to obtain all of its values.
Other than map, we can use filter, flatMap, and more on AsyncSequence. For a full
overview I would recommend you take a look at the official documentation for
AsyncSequence.
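As a small sketch of how these operators chain (the numbers are made up), each one returns a new async sequence that we iterate over with await:

```swift
// A simple async sequence of integers.
let numbers = AsyncStream<Int> { continuation in
    for number in 1...5 {
        continuation.yield(number)
    }
    continuation.finish()
}

// filter and map each produce a new async sequence.
let doubledEvens = numbers
    .filter { $0 % 2 == 0 } // keeps 2 and 4
    .map { $0 * 10 }        // transforms them to 20 and 40

var results = [Int]()
for await value in doubledEvens {
    results.append(value)
}

print(results) // prints [20, 40]
```

Just like with the csvURL.lines example, nothing runs until we start iterating; the operators describe the transformation, and the for loop drives it.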
While it’s great that Apple has added a lines attribute to URL and it served as a very neat way
for me to introduce async sequences to you, it’s probably not something that you’ll actually
use much.
What’s far more useful to explore is how we can implement our own async sequences, or to
see how we can have Combine and async sequences interoperate to allow us to get the best
of both worlds.
That’s exactly what we’ll shift our attention to next.

Producing your own async sequences with AsyncStream
Once you’re ready to jump in fully with Swift Concurrency, and you want to leverage async
sequences, async functions, actors, and everything else Swift Concurrency has to offer, you’ll
usually find that some of your existing code needs to be bridged into the world of Swift
Concurrency somehow.
You’ve already seen how you can bridge existing callback based methods and functions in
your codebase over to Swift Concurrency using continuations.


In this section, we’ll look at a different mechanism that also leverages continuations but in
a different way, allowing us to take any asynchronous work that produces values over time
(location updates, incoming websocket messages, and more), and transform that work into
something that produces an async sequence.
To do this, we can leverage something called an AsyncStream. Async streams are a
continuation-based mechanism that allows us to have our continuation produce multiple values,
which is great for tasks that report progress or produce a large number of values over
time.
An AsyncStream instance conforms to the AsyncSequence protocol which means that
anybody that is given our async stream will be able to iterate over our stream asynchronously.
This makes AsyncStream very useful if you’re interested in building your own objects that
produce values over time.
In this section, we’ll take a look at two objects that can be built using AsyncStreams. Of
course, these aren’t the only two kinds of objects you can build, but I’ve personally found these
to be interesting exercises in using async streams where the first example is a nice introduction,
and the second is more complex in terms of responding to events like cancellation.
The first object we’ll build is a location provider object that uses an async stream to provide
updates on the user’s location. The second object is an object that receives incoming messages
from a websocket, and nicely closes the connection whenever the async stream is cancelled
or otherwise goes out of scope.
Before we jump in and start building things with streams, let’s explore the basics of async
streams first.

Understanding the basics of async stream


We can create async streams in three different ways. The first is to create our stream with a
closure that is called every time the stream is ready to produce a new value. The second approach
uses a closure that is called once with a continuation. We can then tell this continuation to
yield values, finish, or throw an error. The third approach leverages the
makeStream(of:bufferingPolicy:) method which takes a type and a buffering policy and returns an
async stream and continuation as a tuple.


We’ll start by exploring the first kind of async stream; the one with a closure that is called
repeatedly:

func makeStream(values: Int) -> AsyncStream<String> {
    var valueCount = 0
    return AsyncStream(unfolding: {
        let value = await produceValue(shouldTerminate: valueCount == values)
        valueCount += 1
        return value
    })
}

func produceValue(shouldTerminate: Bool) async -> String? {
    guard !shouldTerminate else {
        return nil
    }

    return UUID().uuidString
}

When we create our async stream, nothing will happen initially. The stream is created, and we
can start iterating over it using an async for loop whenever we’d like.
Once we start our iteration and we start awaiting our first value, the async stream’s unfolding
closure is called. This closure is marked async, which means that we’re allowed to do
asynchronous work from within our unfolding closure. In the example code, we await a call
to a function called produceValue(shouldTerminate:).
We expect that function to fetch or process some data to eventually produce a value that’s
returned. This returned value is then returned from the unfolding closure. In turn, the value
that we return from the unfolding closure is passed to our for loop so that we can use it.
Once our for loop body completes and we ask the stream for its next value, the unfolding
closure is called again. We then call our async function again to produce our next value, a
value is returned, and that value is then provided to our for loop.
This process repeats until we return nil from our unfolding closure. Once we return nil, we
indicate that the stream has ended and we cannot produce a next value anymore; all work is
complete.
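Putting the unfolding stream to work looks like this; the helper functions from above are repeated so the snippet stands on its own:

```swift
import Foundation

func makeStream(values: Int) -> AsyncStream<String> {
    var valueCount = 0
    return AsyncStream(unfolding: {
        let value = await produceValue(shouldTerminate: valueCount == values)
        valueCount += 1
        return value
    })
}

func produceValue(shouldTerminate: Bool) async -> String? {
    guard !shouldTerminate else {
        return nil
    }

    return UUID().uuidString
}

var received = 0
for await value in makeStream(values: 3) {
    print("received:", value)
    received += 1
}

// The loop ends once the unfolding closure returns nil.
print(received) // prints 3
```

The stream produces exactly three UUID strings and then terminates, because the fourth call to the unfolding closure returns nil.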
If we want to be able to perform a task that might throw an error in our unfolding closure, we
can use an AsyncThrowingStream instead of an AsyncStream. They function identically,
except a throwing stream can throw errors and it must be iterated over using for try
await value in stream instead of a plain for await value in stream.
In addition to providing values, we can also respond to cancellation for our stream. This can
be done by passing an onCancel closure to the AsyncStream’s initializer:

let stream = AsyncStream(unfolding: { [weak self] in
    let nextValue = await self?.produceNextValue()
    return nextValue
}, onCancel: {
    // called upon cancellation
    // a nice spot to do cleanup
})

The onCancel closure is called whenever the task that owns your async stream is
cancelled.
Note that breaking out of the for loop that consumes your async stream does not count as
a cancellation event. If you stop your loop, you will find that the current iteration of your
unfolding closure will continue to run, but it won’t be called again after that because your for
loop is no longer asking the stream for values.
The unfolding closure approach is particularly useful when you want to perform and await
some async work that can be contained within your unfolding closure. It’s not particularly
useful when you’re bridging a delegate based API like CLLocationManagerDelegate
because there’s no way for us to yield a new value for our stream from outside of the unfolding
closure.
Luckily, there’s a second way to build an AsyncStream that provides us more flexibility and
control (but is also a little bit more complex to manage). Let’s look at an example:


let stream = AsyncStream { continuation in
    print("will start yielding")
    continuation.yield(1)
    continuation.yield(2)
    continuation.yield(3)
    continuation.finish()
    print("finished the stream")
}

for await value in stream {
    print("received \(value)")
}

A key difference between the unfolding closure and the continuation based approach is
that the continuation gives us full control over how and when we produce values for our
continuation. In this example, we immediately send the values 1, 2, and 3 over the stream
and then we finish the stream. We do this by telling the continuation to yield values and
once all values are yielded we call finish on the continuation to complete the stream.
This shows that we can produce and send values whenever we decide to, rather than waiting
for a closure to be called.
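This control is what makes the continuation-based approach a good fit for bridging callback-style APIs. As a sketch (the TickerBridge type and its onTick callback are invented for this example), we can store the continuation and yield from the callback whenever it fires:

```swift
// A made-up callback-based source of values.
final class TickerBridge {
    var onTick: ((Int) -> Void)?

    func makeStream() -> AsyncStream<Int> {
        AsyncStream { continuation in
            // Forward every callback value into the stream.
            self.onTick = { value in
                continuation.yield(value)
            }
        }
    }
}

let bridge = TickerBridge()
let stream = bridge.makeStream()

// Simulate the callback firing; the values are buffered by the stream.
bridge.onTick?(1)
bridge.onTick?(2)

var received = [Int]()
for await value in stream {
    received.append(value)
    if received.count == 2 { break } // this stream never calls finish()
}

print(received) // prints [1, 2]
```

This is the same shape the location provider later in this chapter uses: the delegate callback yields, and consumers simply iterate.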
By default, all values that we yield are buffered. This means that even though we immediately
yield all values one after the other and then we complete the stream, anybody that chooses to
iterate over our stream will receive all of the buffered values before the stream
finishes.
If we run this code, the output will look like this:

will start yielding
finished the stream
received 1
received 2
received 3

You can tell that the closure that we passed to the AsyncStream initializer was invoked
immediately, and that our calls to yield and finish all executed before we started our for
loop to iterate over the values yielded by our stream.
In some cases, this behavior is exactly what you want; you might be interested in all the
results that our stream ever yielded. For example, when you’re iterating over messages from
a websocket where each message you receive is a chat message that should be presented to
the user.
When you’re leveraging a continuation based AsyncStream to implement a location
provider, you’re likely not interested in all the locations that you’ve recorded for the user
before you started iterating over your stream. It’s more likely that you’re interested in the last
known location only.
When you’re only interested in the most recent n items that were sent by your AsyncStream,
you can give it a buffering policy through its initializer:

let stream = AsyncStream(bufferingPolicy: .bufferingNewest(1)) { continuation in
    print("will start yielding")
    continuation.yield(1)
    continuation.yield(2)
    continuation.yield(3)
    continuation.finish()
    print("finished the stream")
}

In the example above, I’ve provided a buffering policy of bufferingNewest(1). This will
make it so that the async stream buffers exactly one element and discards any values that
were yielded before the buffered value.
A buffering policy like this is useful when you want to make sure that you always receive the
last yielded value (if any) before receiving any new values. This is a very reasonable policy for
the location provider that we’ll build later.
You can also provide a buffering policy of bufferingNewest(0), which would discard any
values that weren’t received by a for loop immediately. This could be a useful buffering policy
for an async stream that yields values for events like when the user rotates their device or
taps on a button. You’re usually only interested in these kinds of events as soon as they occur,
but once they’ve occurred they lose all relevance; you wouldn’t want to start iterating over
a stream only to be told that the user rotated their device five minutes ago; you’ve probably
already handled that rotation event somehow.
It’s also possible to provide numbers other than zero or one for your buffering policy. You
might be interested in getting the last four or five values from your stream instead. You can
simply provide a number that fits your requirements and you’re good to go.
In addition to buffering the newest values received by your stream, you can also use a
bufferingOldest policy. This will keep the first n values that were not yet received by a
for loop, and discard any new values that are received until space opens up in the buffer.
The sample code for this chapter contains an AsyncStreams object that you can play around
with to see the impact of using different buffering policies on what ends up in your async for
loop.
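To make the difference between these policies concrete, here’s a minimal, self-contained sketch; the collect helper and its name are mine and not part of the book’s sample code. It yields the same three values before anyone iterates, then collects what a for loop actually receives under a given policy:

```swift
// Hypothetical helper: yield 1, 2, 3 before iteration starts, then collect
// whatever survives the given buffering policy.
func collect(policy: AsyncStream<Int>.Continuation.BufferingPolicy) async -> [Int] {
    let stream = AsyncStream(bufferingPolicy: policy) { continuation in
        continuation.yield(1)
        continuation.yield(2)
        continuation.yield(3)
        continuation.finish()
    }

    var received = [Int]()
    for await value in stream {
        received.append(value)
    }
    return received
}

// .unbounded keeps everything: [1, 2, 3]
// .bufferingNewest(1) keeps only the most recent value: [3]
// .bufferingOldest(1) keeps only the first value: [1]
```

Because all values are yielded before the for loop starts, the buffering policy alone determines what the loop sees.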
In addition to the ability to buffer values, an AsyncStream allows us to keep a reference to
our continuation outside of the closure that we pass to our AsyncStream initializer. This
allows us to yield values for our stream from outside of the initializer. For example, we can
yield values in response to certain delegate methods being called on an object that created
an AsyncStream and stored the stream’s continuation in a property.
To demonstrate how this works, let’s move on to our first example of an AsyncStream in
action and build an async stream based location provider.

Building an async stream based location provider
As you know by now, async streams are a great tool to build your own async sequences that
yield values over time. A classic example of a value that changes over time is a user’s location.
People that use your apps typically don’t stay in a single place forever. They walk around,
they go places, and if your app helps them do this, they might be using your app while they’re
on the move.
Traditionally speaking, you’d implement functionality that involves the user’s location by
creating a CLLocationManager object and conforming one of your own objects to
CLLocationManagerDelegate. Every time a new location becomes available, the location
manager informs its delegate, and from the delegate you can use the user’s location for
whatever purposes fit your use case.
We’ll follow this exact same pattern for our async stream based location provider. We’ll start
by defining an object that will create a location manager and implement the location manager
delegate protocol:

import CoreLocation

class LocationProvider: NSObject {
    fileprivate let locationManager = CLLocationManager()

    override init() {
        super.init()
        locationManager.delegate = self
    }

    func requestPermissionIfNeeded() {
        if locationManager.authorizationStatus == .notDetermined {
            locationManager.requestWhenInUseAuthorization()
        }
    }

    func startUpdatingLocation() {
        requestPermissionIfNeeded()

        locationManager.startUpdatingLocation()
    }
}

extension LocationProvider: CLLocationManagerDelegate {
    func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
        // use new location ...
    }
}


So far so good, this code shouldn’t contain any surprises; we’re only setting up the very basics
of what we need.
When we want to make the user’s current location available through an async stream, we’ll
need to send locations from our locationManager(_:didUpdateLocations:) delegate
method. We can leverage a continuation based AsyncStream that we set up in our
startUpdatingLocation() method by storing the continuation in our class and using
it in locationManager(_:didUpdateLocations:).
The following code shows how we can do this:

class LocationProvider: NSObject {
    fileprivate let locationManager = CLLocationManager()
    fileprivate var continuation: AsyncStream<CLLocation>.Continuation?

    // ...

    func startUpdatingLocation() -> AsyncStream<CLLocation> {
        requestPermissionIfNeeded()

        locationManager.startUpdatingLocation()

        return AsyncStream(bufferingPolicy: .bufferingNewest(1)) { continuation in
            self.continuation = continuation
        }
    }
}

Whenever our startUpdatingLocation() method is called, we create a new async
stream, return it from our function so the caller can iterate over it, and we store the
continuation that’s associated with our stream so we can use it later.
Next up, we’ll need to implement the locationManager(_:didUpdateLocations:)
method from CLLocationManagerDelegate so we can send new locations over our
stream:


extension LocationProvider: CLLocationManagerDelegate {
    func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
        for location in locations {
            continuation?.yield(location)
        }
    }
}

After adding a description for the NSLocationWhenInUseUsageDescription key to
your app’s Info.plist file, you will be able to run your app and grab the user’s location
in an async stream using the following code:

let provider = LocationProvider()

for await location in provider.startUpdatingLocation() {
    print(location)
}

As usual, you’ll need to run this from a new Task or SwiftUI’s task view modifier since we’re
doing async work by awaiting values in the for loop. The sample app for this chapter leverages
a Task that is started in response to a button tap.
There are a couple of things we don’t handle currently:

• We don’t stop updating locations when the task that wraps our for loop is cancelled; for
example when we start the iteration in a SwiftUI task view modifier and the view goes
away.
• We can only call startUpdatingLocation() once. Calling it twice will result in
the stream that we created first never ending and never receiving values.

The first item on the list can be implemented using a built-in property on continuations that
allows us to run a closure whenever the task that encloses the work we’re doing is cancelled.
The second item on the list requires us to make some changes to the way the location provider
object is implemented.


We’ll tackle the second point first because it changes the structure of our location provider a
bit. By doing that work first, we can implement a solution for the first issue in a nicer way. But
before we start fixing problems, let’s take a look at the issue at hand in a demo:

let provider = LocationProvider()

let seq1 = provider.startUpdatingLocation()
let seq2 = provider.startUpdatingLocation()

Task {
    for await location in seq1 {
        print("seq1", location)
    }
}

Task {
    for await location in seq2 {
        print("seq2", location)
    }
}

The code snippet above illustrates problem two from the list above; we can’t call
startUpdatingLocation() more than once.
The code above is included in the sample app from the code bundle that’s available alongside
this book. When you run this example on an iOS device, you’ll find that only seq2 receives
values; seq1 never receives any values.
When we examine the implementation of the startUpdatingLocation() method, that
makes a lot of sense:

func startUpdatingLocation() -> AsyncStream<CLLocation> {
    requestPermissionIfNeeded()

    locationManager.startUpdatingLocation()

    return AsyncStream(bufferingPolicy: .bufferingNewest(1)) { continuation in
        self.continuation = continuation
    }
}

For every call to startUpdatingLocation(), we overwrite the value of our continuation


so we only keep the most recently created continuation and discard the ones that were created
earlier.
We could try to resolve our issue by reusing the same stream for all callers, which would look
a bit like the following:

actor LocationProvider: NSObject {
    fileprivate let locationManager = CLLocationManager()
    fileprivate var continuation: AsyncStream<CLLocation>.Continuation?
    private var stream: AsyncStream<CLLocation>?

    // init and request permissions are unchanged...

    func startUpdatingLocation() -> AsyncStream<CLLocation> {
        if let stream {
            return stream
        }

        requestPermissionIfNeeded()

        locationManager.startUpdatingLocation()

        stream = AsyncStream(bufferingPolicy: .bufferingNewest(1)) { continuation in
            self.continuation = continuation
        }

        return stream!
    }
}


The location provider is now an actor to make sure that we can call our
startUpdatingLocation() method concurrently without any problems. I’ve added a
stream property to my actor so I can cache the stream I’ve created for reuse later.
The rest of the startUpdatingLocation() method should speak more or less for itself.
After creating a stream we assign it to the stream property, and we return it.
Because the LocationProvider object is now an actor, we also need to change the way
that we conform to CLLocationManagerDelegate a bit:

extension LocationProvider: CLLocationManagerDelegate {
    nonisolated func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
        Task {
            for location in locations {
                await continuation?.yield(location)
            }
        }
    }
}

The delegate method has to be a nonisolated function so that it can be bridged to
Objective-C (under the hood CLLocationManager uses Objective-C). And because
the LocationProvider is an actor, and we’ve explicitly marked the
locationManager(_:didUpdateLocations:) delegate method as nonisolated, we must
access continuation from an async context and await its access since we’re sending
messages to the actor’s mailbox.
We also need to update our demo code a little bit since it needs to interact with the location
provider slightly differently:

let seq1 = await provider.startUpdatingLocation()
let seq2 = await provider.startUpdatingLocation()

Task {
    for await location in seq1 {
        print("seq1", location)
    }
}

Task {
    for await location in seq2 {
        print("seq2", location)
    }
}

The change we’ve had to make is that we should now await the call to
startUpdatingLocation() because calling that function is now a message in the
LocationProvider actor’s mailbox.
When you run this code, you’ll find that we are now receiving locations in both of our for
loops. This is really nice, but when we closely inspect what’s happening it seems that there
might be an issue with our code...

seq1 <+52.xxxx1302,+4.xxxx7516> +/- 61.19m (speed 0.00 mps / course -1.00) @ 21/02/2023, 09:10:28 Central European Standard Time
seq2 <+52.xxxx1302,+4.xxxx7516> +/- 65.39m (speed 0.00 mps / course -1.00) @ 21/02/2023, 09:10:29 Central European Standard Time
seq1 <+52.xxxx1302,+4.xxxx7516> +/- 51.74m (speed 0.00 mps / course -1.00) @ 21/02/2023, 09:10:30 Central European Standard Time
seq2 <+52.xxxx1302,+4.xxxx7516> +/- 48.40m (speed 0.00 mps / course -1.00) @ 21/02/2023, 09:10:31 Central European Standard Time
seq1 <+52.xxxx1302,+4.xxxx7516> +/- 42.96m (speed 0.00 mps / course -1.00) @ 21/02/2023, 09:10:32 Central European Standard Time
seq2 <+52.xxxx5412,+4.xxxx6848> +/- 21.47m (speed 1.10 mps / course -1.00) @ 21/02/2023, 09:10:33 Central European Standard Time
seq1 <+52.xxxx3758,+4.xxxx7192> +/- 24.73m (speed 1.12 mps / course -1.00) @ 21/02/2023, 09:10:34 Central European Standard Time
seq2 <+52.xxxx2772,+4.xxxx7447> +/- 29.49m (speed 1.12 mps / course -1.00) @ 21/02/2023, 09:10:35 Central European Standard Time
seq1 <+52.xxxx1778,+4.xxxx7704> +/- 35.34m (speed 1.12 mps / course -1.00) @ 21/02/2023, 09:10:36 Central European Standard Time
seq2 <+52.xxxx0790,+4.xxxx7955> +/- 41.90m (speed 1.11 mps / course -1.00) @ 21/02/2023, 09:10:37 Central European Standard Time
seq1 <+52.xxxx1754,+4.xxxx7095> +/- 46.07m (speed 1.34 mps / course -1.00) @ 21/02/2023, 09:10:38 Central European Standard Time
seq2 <+52.xxxx1897,+4.xxxx2585> +/- 52.39m (speed 1.56 mps / course -1.00) @ 21/02/2023, 09:10:39 Central European Standard Time
seq1 <+52.xxxx7846,+4.xxxx1276> +/- 51.58m (speed 1.20 mps / course -1.00) @ 21/02/2023, 09:10:40 Central European Standard Time

Notice how the output from our AsyncStream alternates between seq1 and seq2. That’s
great; exactly what we want. But take a look at both the GPS coordinates and the time in each
pair of outputs. You can ignore the xxxx part in the coordinates; I’ve redacted part of the
output, but you should still be able to see that the location that’s passed to both sequences
isn’t the same within each pair. More importantly, notice how the time also isn’t the same for
each output. They’re about a second apart every time.
It looks like both of our for loops do receive values, but they don’t share the output from the
stream. Instead, they consume values from the stream by taking turns somehow, and once a
value is received by one loop, the other loop will never receive that value.
This is neither a bug on our end nor unexpected. Async sequences like AsyncStream do not
support sending values to multiple iterators. In a framework like Combine, you could leverage
an operator like .share() to allow multiple subscribers to receive output from a single
publisher.
Unfortunately, async sequences do not have a similar mechanism at this time. There is
some work being done in the Swift async-algorithms package to add support for an
equivalent of Combine’s share operator, but at the time of writing this book, this work is not
yet completed and there are a few open PRs that implement different versions of sharing
mechanisms.


We can leverage Combine to work around this limitation of async sequences for our location
provider as you’ll see in the next chapter. I will not cover other workarounds like trying to
maintain a collection of continuations, implementing your own share operator or similar
solutions because a correct implementation of a sharing feature is not trivial to achieve, and I
do not want you to rely on incomplete or incorrect workarounds for a problem that’s being
solved in one of Apple’s packages (like async-algorithms in this case).
Now that we know that supporting multiple iterators for a single AsyncStream doesn’t
work, we can revert our code back to what it was before; a simple implementation that will
overwrite the cached continuation for every call to startUpdatingLocation().
We’ll accept that we can’t have multiple iterators for now.
If at any point the task that owns our async for loop is cancelled, we’d like to stop monitoring for
location updates. To do this, we can assign a closure to our continuation’s onTermination
property:

func startUpdatingLocation() -> AsyncStream<CLLocation> {
    requestPermissionIfNeeded()

    locationManager.startUpdatingLocation()

    return AsyncStream(bufferingPolicy: .bufferingNewest(1)) { continuation in
        continuation.onTermination = { [weak self] _ in
            self?.locationManager.stopUpdatingLocation()
        }

        self.continuation = continuation
    }
}

Whenever the task that owns our for loop is cancelled or otherwise ends, the onTermination
closure is called. We can perform cleanup, like stopping our location updates, from within
this closure.


One last thing we should take into account is to end our async stream when our
LocationProvider is deinitialized. To do this, we can implement a deinit method on our
provider and call finish() on our continuation:

class LocationProvider: NSObject {
    // ...

    deinit {
        // ...
        continuation?.finish()
    }
}

This will end the stream for anybody that’s iterating over it. If we didn’t do this, our
LocationProvider could be deinitialized while the for loop that’s iterating over our async
stream would never end. This would essentially cause the task that’s iterating over the stream
to be stuck forever since nothing is telling it the stream ended.
With this in place, we’re able to start using our LocationProvider and know that it handles
all the important bits and pieces other than having multiple iterators for the same stream.
We’ll look at a Combine based solution that’s bridged into Swift Concurrency in the next
chapter, but for now I’d like to move on to our next example: connecting to a websocket and
listening for incoming messages using an async stream. For this example, we’ll leverage the
third approach to creating async streams using the makeStream(of:bufferingPolicy:)
method. Using this method relies on the exact same mechanisms that the second
approach uses, but it’s slightly more convenient to use in comparison. For that reason, I won’t
be explaining all the details unless they’re different from what you’ve just learned.

Using async streams to listen for incoming websocket messages
Websockets are a very nice technology that can be used to open a connection to a server
in order to send and receive messages between a client and server without having to make
requests to the server all the time.


Especially the “push” part of a websocket can be attractive for applications that rely on
accurate real time data that is sent to client apps by the server. Instead of having clients make
requests to the server every couple of seconds (or faster), you have an open connection and
the server can send values over the websocket directly.
On iOS, we have native support for websockets through URLSession already. Unfortunately,
the API for websockets at the time of writing this book does not yet support Swift Concurrency.
In order to use websockets with Swift Concurrency, we’ll need to perform some work of our
own.
The most interesting part of bridging a websocket into the world of Swift Concurrency is
that doing so will allow us to iterate over an AsyncStream to receive incoming websocket
messages.
Before we start implementing a Swift Concurrency driven version of a websocket listener, I’d
like to take a look at how we can listen for incoming websocket messages without a Swift
Concurrency layer around it:

let url = URL(string: "ws://127.0.0.1:9090")!
let socketConnection = URLSession.shared.webSocketTask(with: url)
socketConnection.resume()

func setReceiveHandler() {
    socketConnection.receive { result in
        defer { setReceiveHandler() }

        do {
            let message = try result.get()
            switch message {
            case let .string(string):
                print("incoming message", string)
            case let .data(data):
                print("incoming message", data)
            @unknown default:
                print("unknown message received")
            }
        } catch {
            // handle the error
            print(error)
        }
    }
}

setReceiveHandler()

In the code above, I create a URL that points to a websocket that’s running locally. In this
book’s sample code, you’ll find a folder named local-server/simple_socket that
contains a small node.js script that starts a websocket server when you run it. More detailed
instructions for running the code can be found in the README.md file.
Run the script locally by navigating to the simple_socket folder in your terminal and
typing the following command:

node index.mjs

The websocket server this creates will accept incoming connections and respond by sending
a message to the connected client on a regular interval. This is nothing fancy but sufficient for
our testing purposes.
To connect to our local websocket server, we can use URLSession and its webSocketTask
method. Just like with a regular URLSession data task, we need to call resume() on the
created websocket task to actually connect to our server.
Once connected, we need to actually start receiving messages from our websocket. We do
this by calling receive(_:) on the websocket task. Whenever the websocket receives
an incoming message from the server, our closure is called, and we need to register a new
receive closure to receive the next message.
If we don’t register a new closure, we’d receive a single message and ignore any further
incoming messages.
While Apple didn’t add an ability to iterate an async sequence to receive websocket messages,
they did provide an async version of the receive() method. If we refactor the code
from before to leverage the async receive() method, the code would look as follows:


func setReceiveHandler() async {
    do {
        let message = try await socketConnection.receive()

        switch message {
        case let .string(string):
            print(string)
        case let .data(data):
            print(data)
        @unknown default:
            print("unknown message received")
        }
    } catch {
        print(error)
    }

    await setReceiveHandler()
}

The code isn’t very different, but at least it uses a little bit of Swift Concurrency.
The code above could be improved in two ways:

1. We should only start awaiting a new message if the socket connection is still open
2. We should only start awaiting a new message if we didn’t receive an error from the
socket

The current implementation will recursively call setReceiveHandler() even if the socket
connection is closed; that’s not ideal.
We can leverage a while loop and two checks: one to see if the connection should still be
considered active, and one to see if the connection with the server has been closed:

func setReceiveHandler() async {
    var isActive = true

    while isActive && socketConnection.closeCode == .invalid {
        do {
            let message = try await socketConnection.receive()

            switch message {
            case let .string(string):
                print(string)
            case let .data(data):
                print(data)
            @unknown default:
                print("unknown message received")
            }
        } catch {
            print(error)
            isActive = false
        }
    }
}

Whenever we encounter an error, we set isActive to false to stop our while loop. If the
socket connection’s closeCode changes from .invalid (not closed) to anything else, we
also exit out of our while loop.
Converting the process of receiving incoming messages to be driven by an async sequence
means that in an ideal situation we’d be able to write something like the following to receive
incoming websocket messages:

// It would be great if we could receive messages like this
do {
    for try await message in socketConnection {
        switch message {
        case let .string(string):
            print(string)
        case let .data(data):
            print(data)
        @unknown default:
            print("unknown message received")
        }
    }
} catch {
    // handle errors
}

We can leverage AsyncStream to approach a solution that works quite nicely for a use case
like this.
The simplest thing we can attempt to write is a version of setReceiveHandler that’s
driven by a while loop wrapped in a computed property on URLSessionWebSocketTask
by extending it:

typealias WebSocketStream = AsyncThrowingStream<URLSessionWebSocketTask.Message, Error>

extension URLSessionWebSocketTask {
    var stream: WebSocketStream {
        let (stream, continuation) = WebSocketStream.makeStream(
            of: URLSessionWebSocketTask.Message.self
        )

        Task {
            var isAlive = true

            while isAlive && closeCode == .invalid {
                do {
                    let value = try await receive()
                    continuation.yield(value)
                } catch {
                    continuation.finish(throwing: error)
                    isAlive = false
                }
            }
        }

        return stream
    }
}

Notice how this code is quite similar to the code we wrote earlier for our location provider.
The key difference here is that the makeStream(of:) method creates both a continuation
and a stream for us which makes managing our async stream easier.
The code above would allow us to await incoming messages through the websocket task’s
stream property as follows:

let url = URL(string: "ws://127.0.0.1:9090")!
let socketConnection = URLSession.shared.webSocketTask(with: url)
socketConnection.resume()

do {
    for try await message in socketConnection.stream {
        // handle incoming messages
    }
} catch {
    // handle errors
}

This actually works quite well; it’s not perfect, but we can receive messages from our
websocket connection. You can verify this using the following code or by running the relevant
example from this chapter’s sample app.

let url = URL(string: "ws://127.0.0.1:9090")!
let socketConnection = URLSession.shared.webSocketTask(with: url)
socketConnection.resume()

Task {
    try await Task.sleep(for: .seconds(5))
    socketConnection.cancel(with: .goingAway, reason: nil)
}

Task {
    do {
        for try await message in socketConnection.stream {
            // handle incoming messages
        }
    } catch {
        // handle error
    }

    print("All messages received and handled")
}

After running the code above, you’ll notice that a couple of messages are printed to the
console, and then our final message is printed to indicate that all messages have been received
and handled.
If we abstract this code into a class, we can make it more reusable and easier to work with.
It will also help surface an issue with the code that we’ll need to solve in order to correctly
handle closing our websocket connection when an instance of our class is deinitialized.
For example, we could use something like the following class as a simulation of how you might
use a websocket connection in a real app:

class SocketConsumer {
    private var socketConnection: URLSessionWebSocketTask?

    func setupConnection() {
        let url = URL(string: "ws://127.0.0.1:9090")!
        let socketConnection = URLSession.shared.webSocketTask(with: url)
        socketConnection.resume()
        self.socketConnection = socketConnection

        Task {
            do {
                for try await message in socketConnection.stream {
                    print(message)
                }
            } catch {
                // handle errors
            }
        }
    }
}

First, this example is highly simplified, but it does do a good job of showing you how we would
set up a socket connection and start iterating its stream for messages. The problem here,
though, is that when we deallocate our socket consumer, the socket connection won’t close.
Here’s what deallocating the socket consumer could look like.

import SwiftUI

struct ExampleView: View {
    var body: some View {
        Button("Make Consumer") {
            Task {
                await makeConsumer()
            }
        }
    }

    func makeConsumer() async {
        var consumer: SocketConsumer? = SocketConsumer()
        consumer?.setupConnection()

        try! await Task.sleep(for: .seconds(2))

        consumer = nil
    }
}

What you can see in the code above is a simple view that has a button; when we tap the
button, we start iterating our websocket for about two seconds, and then we leave our
function, which means that nothing is holding on to the consumer anymore.
At that point, we would expect the socket connection to close and to no longer receive
messages. Instead, we keep receiving messages because the task that actually grabs the
socket connection stream and iterates the messages, which is defined in SocketConsumer,
does not actually end. So what we really need to do is find a way to either stop the task when
SocketConsumer goes away, or to send a connection-closing message to the websocket
when SocketConsumer is deallocated. So let’s go ahead and add that functionality to our
SocketConsumer right now.

class SocketConsumer {
    deinit {
        socketConnection?.cancel(with: .goingAway, reason: nil)
    }

    // ... existing code
}

When you add this one piece of code to the SocketConsumer, we make sure that whenever
the SocketConsumer instance is no longer used by anybody, we clean up after ourselves
and close our socket connection. If you add this code to the SocketConsumer definition
that you’ve seen before, you’ll find that we close our connection at the right time.
It’s this kind of cleanup that makes working with asynchronous sequences somewhat
complicated; the Task objects that we use to iterate our asynchronous sequences don’t
automatically get cleaned up because tasks only end when the code that they run ends.
In this case, that means our task would only end when it’s either cancelled or when the async
stream we’re iterating ends.
So unless we either end the stream or end the task by cancelling it, depending on whether we
have access to the source of the sequence or not, our task would run forever, resulting in a
memory leak.
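The other cleanup option mentioned above, cancelling the task itself, can be sketched with a stand-in stream instead of a real websocket. The StreamConsumer type and its names are mine and not part of the book’s sample code; the never-finishing stream plays the role of a socket connection that stays open:

```swift
// Hypothetical consumer that stores its iterating task so deinit can
// cancel it explicitly.
final class StreamConsumer {
    private var receiveTask: Task<Void, Never>?

    func startListening(to stream: AsyncStream<Int>, onEnd: @escaping @Sendable () -> Void) {
        receiveTask = Task {
            for await value in stream {
                _ = value // handle incoming values here
            }
            // We only get here when the stream finishes *or* the task is
            // cancelled; without the cancel in deinit, a stream that never
            // finishes would keep this task alive forever.
            onEnd()
        }
    }

    deinit {
        // Cancelling the task ends the for loop, even though the stream
        // itself never called finish().
        receiveTask?.cancel()
    }
}
```

Deallocating a StreamConsumer now cancels its task, so the for loop ends even though nothing ever finishes the stream.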

In Summary
In this chapter, you’ve learned a lot about Swift Concurrency’s async sequences. You’ve
seen that async sequences provide a very powerful and straightforward API to
asynchronously receive and process values that are generated by an async process. We
started off by looking at the URL.lines method, which provides an interesting and powerful
way to
begin exploring async sequences. You learned that async sequences are used a lot like regular
for loops with the main difference being that an async loop requires you to await values in
the async sequence.
After your introduction to async sequences, you saw how you can use an AsyncStream
object to build a simple wrapper around a location manager and a websocket connection so
you can asynchronously process values from these objects using an async for loop.
You also learned that the location provider cannot share its output to multiple for loops. In the
next chapter you’ll learn how to fix this by mixing Combine and async sequences seamlessly.


Chapter 8 - Async algorithms and Combine
In the previous chapter you took your first look at async sequences. We’ve built two objects
that leveraged async streams to see how we can bridge code that doesn’t involve async
sequences at all into the world of Swift Concurrency and async sequences.
In this chapter we’ll continue our journey and you’ll see how you can leverage Combine in a
way that allows us to bridge a publisher into the world of Swift Concurrency. You’ll also learn
how you can go from a Combine operator like map into an async function and then output
the result of your async function down the Combine pipeline.
There are certain similarities between Combine and AsyncSequence that are hard to ignore.
Especially the notion that both Combine and async sequences deal with values over time in
some way creates a lot of overlap between the two frameworks.
However, at the time of writing this book there are still some important differences between
Combine and AsyncSequence that make it so that we can’t (easily) replace Combine code
with AsyncSequence. In this chapter, we’ll go over some similarities, differences, and ways
that Combine and AsyncSequence can interoperate to allow you to make good use of both
of these tools.
In this chapter, I will assume that you have basic knowledge on Combine. I will assume that
you’re familiar with Combine operators like map or eraseToAnyPublisher(), and that
you know what a Publisher is, how you subscribe to one, and why it’s important that
you retain your cancellables. I won’t expect that you’re a Combine guru by any means and I
will provide brief explanations of what’s going on but this chapter is not intended to be an
introduction to Combine.
Note that Swift Concurrency and Combine don’t always work as well as you’d like. Especially
when you enable Swift 6 mode or strict concurrency checks, you’ll find that Combine has
rough edges when used with Swift Concurrency. Sadly, Apple does not seem to be putting
much effort towards making Combine compatible with concurrency. And at this time Async
Sequences and Async Algorithms don’t provide a full replacement for Combine. Most things
work, some things don’t. This chapter aims to show you some of the options that are available
to you. I personally still tend to use Combine for some state observations even with Swift
6.2. That said, I usually try a pure concurrency approach first. And with every Swift release I
rely less and less on Combine. I want you to keep this in mind while reading this chapter; it’s
about providing a full picture of the tools that are available to you.

Turning a Combine publisher into an AsyncSequence
In the previous chapter, you’ve built a location provider object that works well as long as only
one place in our codebase is interested in iterating over the AsyncSequence that provides
the CLLocation instances that represent the user’s location.
As a refresher, here’s what the location provider we built looks like:

class LocationProvider: NSObject {
    fileprivate let locationManager = CLLocationManager()
    fileprivate var continuation: AsyncStream<CLLocation>.Continuation?

    override init() {
        super.init()
        locationManager.delegate = self
    }

    deinit {
        continuation?.finish()
    }

    func requestPermissionIfNeeded() {
        if locationManager.authorizationStatus == .notDetermined {
            locationManager.requestWhenInUseAuthorization()
        }
    }

    func startUpdatingLocation() -> AsyncStream<CLLocation> {
        requestPermissionIfNeeded()
        locationManager.startUpdatingLocation()

        return AsyncStream(bufferingPolicy: .bufferingNewest(1)) { continuation in
            continuation.onTermination = { [weak self] _ in
                self?.locationManager.stopUpdatingLocation()
            }

            self.continuation = continuation
        }
    }
}

extension LocationProvider: CLLocationManagerDelegate {
    func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
        for location in locations {
            continuation?.yield(location)
        }
    }
}

We’re leveraging an AsyncStream to send the user’s location to whoever iterates over that
stream. We saw that an async stream can only have one object iterating over it at a given
time, so this solution isn’t great if we want a single instance of LocationProvider to be
the main source of truth for the user’s location.
Combine has a construct called a Subject which is used for similar reasons as an AsyncStream. It allows us to send values to subscribers of our Subject as needed. For example, a Subject can deliver a user's location to subscribers whenever a new location becomes available.
A key difference between an AsyncStream and a Subject is that a Subject can have
multiple subscribers. This means that we can leverage a single Combine subject as a source
of truth for our application and it will deliver values to anybody that’s interested.
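To make that difference concrete, here's a minimal standalone sketch (using Int values rather than CLLocation, so the names here are purely illustrative) in which a single CurrentValueSubject feeds two subscribers at once:

```swift
import Combine

let subject = CurrentValueSubject<Int, Never>(0)
var cancellables = Set<AnyCancellable>()

var first = [Int]()
var second = [Int]()

// Two independent subscriptions to the same subject.
subject.sink { first.append($0) }.store(in: &cancellables)
subject.sink { second.append($0) }.store(in: &cancellables)

subject.send(1)
subject.send(2)

// Both subscribers received the initial value and every update.
print(first)  // [0, 1, 2]
print(second) // [0, 1, 2]
```

Note that both subscribers also receive the subject's current value (0) at the moment they subscribe; that's the defining behavior of a CurrentValueSubject.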


Before I show you how to bridge a Combine publisher like Subject into Swift Concurrency
as an AsyncSequence I want to show you what our location provider would look like if it
was written using a Combine Subject.
First, we’d replace the continuation property with a property called subject which will
be Combine CurrentValueSubject:

class LocationProvider: NSObject {
    fileprivate let locationManager = CLLocationManager()
    fileprivate var subject = CurrentValueSubject<CLLocation?, Never>(nil)

    // ...
}

A CurrentValueSubject in Combine takes two generic arguments. One for the type of
objects that it will emit, and another one for the type of error that it can produce. In our case,
the CurrentValueSubject cannot produce errors so we use Never as our failure type.
We must also provide an initial value for our CurrentValueSubject so we provide nil
since we don’t immediately have a location available.
The init and requestPermissionIfNeeded methods don't change compared to what
we had before. The startUpdatingLocation() method will be updated to return our
CurrentValueSubject instead of an AsyncStream for the time being. We'll strip out any
nil values using compactMap, and we'll hide the resulting type from callers of
startUpdatingLocation() by erasing our subject to AnyPublisher:

func startUpdatingLocation() -> AnyPublisher<CLLocation, Never> {
    requestPermissionIfNeeded()

    locationManager.startUpdatingLocation()

    return subject
        .compactMap({ $0 })
        .eraseToAnyPublisher()
}


And to send our user’s location over the subject, we should to update our didUpdateLo-
cations delegate method as follows:

func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
    for location in locations {
        subject.send(location)
    }
}

The resulting code can now be used as follows. Remember that we’re still looking at the
Combine version of this code before we bridge it over to an async sequence. This means that
we’ll subscribe to our publisher to receive values instead of using an async for loop.

var cancellables = Set<AnyCancellable>()

let seq1 = provider.startUpdatingLocation()

seq1.sink(receiveValue: { location in
    print(location)
}).store(in: &cancellables)

In Combine we receive values from a publisher by subscribing to it. One way to do this is to
call sink on a publisher which will allow us to start receiving values from a publisher. The
sink method produces an AnyCancellable object that we must retain somewhere for as
long as we want to keep our subscription alive. Usually this will be done in a property owned
by the object that starts the subscription, like a view model or view controller. Whenever that
object is deallocated we want the cancellable to be deallocated as well because that will tear
down the Combine subscription so we’re no longer observing our publisher’s output.
This is very different from how Swift Concurrency and AsyncSequence work with regards
to lifecycle management but more on that later.
You’ve now seen a one on one translation of the async stream based location provider into the
world of Combine with one key difference. We have a single Combine subject that anybody
can subscribe to. This is something that we couldn’t achieve with ant built-in mechanisms on
AsyncStream.


We can now turn this Combine publisher into an async sequence with just one extra line of
code and a minor change in our startUpdatingLocation method’s return type:

func startUpdatingLocation() -> AsyncPublisher<AnyPublisher<CLLocation, Never>> {
    requestPermissionIfNeeded()

    locationManager.startUpdatingLocation()

    return subject
        .compactMap({ $0 })
        .eraseToAnyPublisher()
        .values
}

Instead of returning AnyPublisher from the startUpdatingLocation method, I
return an AsyncPublisher. An AsyncPublisher is an object that implements the
AsyncSequence protocol, and we can create an AsyncPublisher by accessing values
on a Combine publisher.
Notice how after my call to eraseToAnyPublisher() I access values. This is what gets
us an AsyncSequence that’s based on our Combine publisher.
If I call startUpdatingLocation multiple times now, I will be able to iterate over each
sequence that’s returned from this method and all for loops that I write will receive the same
values.
By accessing values on a Combine Publisher, you’re transforming that publisher into an
async sequence, bridging it into the world of Swift Concurrency. With this approach, we get
the best of both worlds.
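Here's a hypothetical, self-contained version of that idea using an Int subject instead of CLLocation (all names in this sketch are illustrative): two separate tasks iterate over sequences derived from the same publisher, and both receive the same values.

```swift
import Combine
import Foundation

let subject = CurrentValueSubject<Int?, Never>(nil)

// Each call produces a fresh AsyncSequence backed by the same subject.
func makeSequence() -> AsyncPublisher<AnyPublisher<Int, Never>> {
    subject
        .compactMap { $0 }
        .eraseToAnyPublisher()
        .values
}

// Two independent iterations over the same underlying subject.
let taskA = Task { () async -> [Int] in
    var received = [Int]()
    for await value in makeSequence() {
        received.append(value)
        if value == 3 { break }
    }
    return received
}

let taskB = Task { () async -> [Int] in
    var received = [Int]()
    for await value in makeSequence() {
        received.append(value)
        if value == 3 { break }
    }
    return received
}

// Give both loops a moment to start iterating before sending values.
try? await Task.sleep(nanoseconds: 300_000_000)
for value in 1...3 {
    subject.send(value)
    try? await Task.sleep(nanoseconds: 50_000_000)
}

print(await taskA.value)
print(await taskB.value)
```

Both loops end when they see the value 3, and both collected the same values, which is exactly what a single AsyncStream couldn't give us.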


Calling async functions from within Combine operators
Sometimes you’ll find yourself in a position where you’d like to mix and match Combine and
Swift Concurrency. For example, you might be building a search feature for your app where
you want to leverage Combine to observe an @Published property in your model. You’d
debounce the changes made to this property to prevent making a network call for every typed
character and then perform a search query for each value that's emitted after the debounce.
That search query will then lead to a new value being assigned to another @Published
property.
To achieve that goal you might start writing code as follows:

struct SearchResult: Decodable {
    // ...
}

class Networking {
    func getResults(forQuery query: String) async throws -> [SearchResult] {
        // make a network call...
    }
}

class SearchService {
    @Published var query: String = ""
    @Published var results = [SearchResult]()

    let network = Networking()

    func setup() {
        $query
            .debounce(for: 0.3, scheduler: DispatchQueue.main)
            .map({ query in
                // transform the query into a network call?
                // or maybe the result of a network call?
            })
            .assign(to: &$results)
    }
}

The tricky part here is the map. What should we be doing in there? Your initial thought might
be to write something like the following:

.map({ query in
    return try await network.getResults(forQuery: query)
})

Unfortunately that doesn’t work because Combine’s map is a synchronous non-throwing


function.
As a rule in Combine, when you want to use the output of a publisher to create a new publisher
that can perform asynchronous work you use a flatMap or a mix of a map that returns a
publisher followed by switchToLatest() to kick off the work and use the result of that
work.
If we had a Combine based networking stack we’d write the following code to solve our
problem:

$query
    .debounce(for: 0.3, scheduler: DispatchQueue.main)
    .map({ query in
        return self.network.getResultsPublisher(forQuery: query)
    })
    .switchToLatest()
    .assign(to: &$results)

The map returns a publisher object and we leverage switchToLatest so we take the output
from the most recently mapped publisher and use that output as the output we’re working
with. So in this case, that means that switchToLatest() creates a publisher that outputs
[SearchResult] objects.


Since we’d like to be able to call async functions from our map, we have to find a way to turn
an async function call into a publisher. To do this, we can leverage Combine’s Future and
Swift Concurrency’s Task:

$query
    .debounce(for: 0.3, scheduler: DispatchQueue.main)
    .map({ query in
        Future { promise in
            Task {
                do {
                    let results = try await self.network.getResults(forQuery: query)
                    promise(.success(results))
                } catch {
                    promise(.success([]))
                }
            }
        }
    })
    .switchToLatest()
    .assign(to: &$results)

A Future in Combine takes a closure that receives a Promise closure. We can perform our
async work inside of a Task since a Future doesn’t create an async context for us, and we
can then fulfill the Promise by calling it with the result of our work.
The Future will then emit the result that we've given it as its value, and after that it completes.
This is a perfect tool to build a bridge between our async function in the networking layer
and Combine's Publisher-focused world.
It’s also possible to use this pattern inside of a flatMap if you’re always interested in the
result of every change in the search query.
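As a sketch of that flatMap variant, here's a hypothetical search service with a stubbed-out networking call standing in for the real one (the type and method names here are illustrative, not from the book's project):

```swift
import Combine
import Foundation

// Hypothetical search service that reacts to *every* debounced query
// by wrapping the Future + Task bridge in a flatMap.
final class FlatMapSearchService {
    @Published var query: String = ""
    @Published var results = [String]()

    // Stand-in for a real async networking layer.
    func getResults(forQuery query: String) async throws -> [String] {
        ["result for \(query)"]
    }

    func setup() {
        $query
            .debounce(for: .seconds(0.1), scheduler: DispatchQueue.main)
            .flatMap { [weak self] query in
                Future<[String], Never> { promise in
                    Task {
                        do {
                            // Perform the async work and fulfill the promise.
                            let results = try await self?.getResults(forQuery: query) ?? []
                            promise(.success(results))
                        } catch {
                            promise(.success([]))
                        }
                    }
                }
            }
            .receive(on: DispatchQueue.main)
            .assign(to: &$results)
    }
}
```

Unlike the switchToLatest version, this variant doesn't cancel in-flight work when a newer query arrives, so every debounced query produces a result.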
A little bit earlier I briefly mentioned a major difference between the lifecycle of a Combine
subscription and that of an AsyncSequence iteration. Let’s explore this difference next.


Comparing Combine subscription and async iteration lifecycles
In Combine, the lifecycle of a subscription is defined quite clearly. A subscription is only
active for as long as the AnyCancellable that encapsulates it exists. As soon as the cancellable is
deallocated, the subscription is torn down. If you forget to store your cancellable, your
subscription will be torn down immediately after it's created and your program
probably won't work as expected.
We can demonstrate the lifecycle of a Combine publisher by writing a function that stores the
AnyCancellable created by our call to sink locally:

func nonStoredCancellable() {
    let cancellable = URLSession.shared.dataTaskPublisher(for: URL(string: "https://practicalswiftconcurrency.com")!)
        .sink(receiveCompletion: { _ in
            print("received completion")
        }, receiveValue: { _ in
            print("received response")
        })
}

If you call this function, you'll notice that no values are ever printed because the local
cancellable constant that stores our AnyCancellable is deallocated as soon as the
nonStoredCancellable() function returns.
If you move the storing of the AnyCancellable to be outside of the function and make it a
property of a class or struct, you’ll find that the network call suddenly works:

var cancellable: AnyCancellable?

func storedCancellable() {
    cancellable = URLSession.shared.dataTaskPublisher(for: URL(string: "https://practicalswiftconcurrency.com")!)
        .sink(receiveCompletion: { _ in
            print("received completion")
        }, receiveValue: { _ in
            print("received response")
        })
}

Notice that the only thing that’s different compared to what we had before is where we store
the AnyCancellable.
I tend to call this behavior of Combine "safe by default". What I mean by that is that a
Combine subscription will deallocate unless it is actively being kept alive through an
AnyCancellable. In practice, this means that as long as your code is free from retain cycles
you won't have any lingering Combine subscriptions that are active even though nobody is
interested in them anymore.
We can demonstrate this principle more in-depth by building an object that allows us to
publish values at arbitrary times. This will let us see that as soon as the object that
holds our cancellable is deallocated, we don't receive any more values.
The following code defines a simple sample object that will take a Combine Subject from
an external source, and it has a method to subscribe to the provided Subject:

class SubjectDrivenExample {
    let subject: CurrentValueSubject<Int, Never>
    var cancellable: AnyCancellable?

    init(subject: CurrentValueSubject<Int, Never>) {
        self.subject = subject
    }

    deinit {
        print("subject driven example deinit")
    }

    func subscribe() {
        cancellable = subject.sink(receiveValue: { value in
            print("received \(value)")
        })
    }
}

We can then use this code from an ExampleRunner object that owns our Subject and the instance of SubjectDrivenExample. The ExampleRunner is a thin wrapper that only interacts with our SubjectDrivenExample, and it allows us to subscribe to the subject, send values, and deallocate SubjectDrivenExample as needed. We'll use it in a view so we can call start, end, and sendValue in response to button presses:


class ExampleRunner {
    let subject = CurrentValueSubject<Int, Never>(0)
    var example: SubjectDrivenExample?

    func start() {
        example = SubjectDrivenExample(subject: subject)
        example?.subscribe()
    }

    func end() {
        example = nil
    }

    func sendValue() {
        subject.send(Int.random(in: 0..<Int.max))
    }
}

We can use the ExampleRunner in a view as follows:

struct SubjectSample: View {
    let runner = ExampleRunner()

    var body: some View {
        VStack {
            Button("Start") {
                runner.start()
            }

            Button("End") {
                runner.end()
            }

            Button("Send Value") {
                runner.sendValue()
            }
        }
    }
}

Once we call start on the example runner, we can call sendValue and see in the Xcode
console that a value was received. When you call end on the runner, you’ll see a message
in the console that says that the subject driven example was deinitialized and when you
call sendValue after that nothing is shown in the console. After all, there’s no more active
subscription to receive values. Call start again and you’ll be able to see received values
again.
As you can see, we don’t need to do anything to make sure we don’t have lingering subscrip-
tions; the AnyCancellable takes care of that for us.
Let’s compare this setup to one that leverages an async sequence instead. I’ll start by defining
an object we’ll use to iterate over an async stream that’s derived from our subject. It’s similar
to the SubjectDrivenExample from before:

class SequenceDrivenExample {
    let subject: CurrentValueSubject<Int, Never>

    init(subject: CurrentValueSubject<Int, Never>) {
        self.subject = subject
    }

    deinit {
        print("sequence driven example deinit")
    }

    func subscribe() {
        Task { [weak self] in
            guard let self else {
                return
            }
            for await value in self.subject.values {
                print("received \(value)")
            }
        }
    }
}

The flow of this code is the same as it was before. The key difference is that in the
subscribe() method we don't subscribe to our publisher; we start iterating over its values
sequence. This acts and behaves identically to any other sequence we can iterate over, but we
still have the convenience of sending values over a subject, which makes it easier to run our
test.
We can use this SequenceDrivenExample from the ExampleRunner that we made
earlier by changing the example property to be our new example instead of the subject
driven one:

class ExampleRunner {
    let subject = CurrentValueSubject<Int, Never>(0)
    var example: SequenceDrivenExample?

    func start() {
        example = SequenceDrivenExample(subject: subject)
        example?.subscribe()
    }

    func end() {
        example = nil
    }

    func sendValue() {
        subject.send(Int.random(in: 0..<Int.max))
    }
}

If you run the example now and you call start, sendValue, and end you'll notice that we
start the observation just fine, we can receive values just fine, but when we call end we don't
see the deinitialization message printed. And when we call sendValue after calling end we
still see values printed to the console.
It appears we’re stuck with a leak of sorts and that’s unfortunate.
The reason we have this leak is that even though we have a weak self capture on the Task,
we make the reference strong again in our guard. Since the Task closure runs immediately,
self will never be nil, and after that we perform our loop with a strong self instance
captured.
We can resolve this retain cycle by capturing our subject instead of self:

func subscribe() {
    Task { [subject] in
        for await value in subject.values {
            print("received \(value)")
        }
    }
}

Now try calling start, sendValue, followed by a call to end. You'll notice that now you
do see the deinitialization message printed in the console. Great! Now call sendValue
again.
You'll notice that we see a value printed in the console even though our example was
deinitialized. When you call start again, send more values, call end, and repeat this a few
times, you'll notice that you get a lot of duplicate values.
The reason we have this issue is that the Task that we create in the subscribe method does
not have its lifecycle bound to anything. We start our task from within a synchronous function,
the function ends, and the task keeps running until its closure completes. As you know, an async
for loop doesn't complete until the sequence it's iterating over completes. And the sequence
we're using as an example here is a sequence that never ends.
The fact that a task does not have its lifecycle bound to anything by default is what I would call
the opposite of the “safe by default” behavior we get from Combine subscriptions. Instead
of actively making sure that our subscription stays alive, we now have to make sure that our
Task ends at an appropriate time.


The best way I’ve found to do this is to keep a reference to the Task in the object that you wish
to tie its lifecycle to, similar to how Combine works. You can then cancel the Task reference
in your deinit to end your for loop and in turn end your task:

class SequenceDrivenExample {
    let subject: CurrentValueSubject<Int, Never>
    var task: Task<Void, Never>?

    init(subject: CurrentValueSubject<Int, Never>) {
        self.subject = subject
    }

    deinit {
        print("sequence driven example deinit")
        task?.cancel()
    }

    func subscribe() {
        task = Task { [subject] in
            for await value in subject.values {
                print("received \(value)")
            }
        }
    }
}

If you have several of these iterating tasks running inside of one object, assigning each to
its own property can be cumbersome. You could create an array of type [Task<Void,
Never>] to contain all of your tasks and cancel them all in your deinit which will work fine.
Alternatively, you can leverage AnyCancellable and a small extension on Task to make
this all a little bit more convenient:

extension Task {
    func store(in cancellables: inout Set<AnyCancellable>) {
        cancellables.insert(AnyCancellable {
            self.cancel()
        })
    }
}

This extension allows you to store your Task objects in a Set<AnyCancellable> in the
exact same way Combine does it:

func subscribe() {
    Task { [subject] in
        for await value in subject.values {
            print("received \(value)")
        }
    }.store(in: &cancellables)
}

I personally like this approach because it feels familiar and it’s a little bit of a nod to something
I really like about how Combine manages subscriptions.
One important thing to note on Task lifecycles is that normally a Task won't need a
mechanism like this if your task performs a small amount of work that will (eventually) end. You
should not riddle your codebase with Task objects and store(in:) calls just to make sure
that every single task is cancelled as soon as possible. Only do this in situations where your
task might otherwise never end, or if keeping the task alive until it's completed is actually
problematic.
If you’re kicking off tasks with SwiftUI’s .task view modifier, the task that SwiftUI creates
will automatically be cancelled when the view goes away. This means that any async for loops
that you start from within your task view modifier will be cancelled appropriately. Note
that you would need to use a separate call to the task view modifier for every async for
loop you have. If you create multiple unstructured tasks using Task {} within the task
view modifier these unstructured tasks are not cancelled when the view modifier’s task is
cancelled.
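Here's a self-contained sketch of that pattern. To keep it runnable on its own it iterates over a hypothetical never-ending AsyncStream of integers rather than the location provider; the view and stream here are illustrative, not from the book's project.

```swift
import SwiftUI

struct CounterView: View {
    @State private var count = 0

    // A never-ending stream, used purely for illustration.
    private var numbers: AsyncStream<Int> {
        AsyncStream { continuation in
            let producer = Task {
                var i = 0
                while !Task.isCancelled {
                    continuation.yield(i)
                    i += 1
                    try? await Task.sleep(nanoseconds: 1_000_000_000)
                }
                continuation.finish()
            }
            continuation.onTermination = { _ in producer.cancel() }
        }
    }

    var body: some View {
        Text("Count: \(count)")
            .task {
                // SwiftUI cancels this task when the view goes away,
                // which ends this otherwise never-ending loop.
                for await value in numbers {
                    count = value
                }
            }
    }
}
```

Because the loop lives directly in the task modifier's closure (not in a nested Task {}), its lifecycle is tied to the view.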
Before we wrap up this chapter, I want to take a brief look at a package that Apple has published
to expand the capabilities of AsyncSequence tremendously. This package is called
async-algorithms.


Expanding your options with async-algorithms
When you’re completely new to working with async sequences or streams of values, the state
of AsyncSequence as it is in Swift 5.10 might be everything you’d expect and hope for. On
the other hand, when you’re used to reactive frameworks like Combine or RxSwift you’ll find
that there are similarities between functional reactive programming and iterating over async
sequences. After all, both provide values over time in an asynchronous fashion.
However, AsyncSequence in its current shape is not designed to do everything that a
Combine publisher can do. For example, a Combine publisher can be debounced, have its
output shared to multiple subscribers, it can merge or zip with another publisher and more.
In order to gradually introduce features like those that Combine publishers have into Swift,
Apple has created a separate package that contains lots of functionality that might eventually
make its way into the Swift language, but that exists in a separate package for now. This is to
allow quicker iteration, experimentation, and adoption without requiring Swift updates.
I won’t use this section to explain everything that async algorithms has to offer; you can check
out the GitHub repository for a full overview of everything that exists inside of the package, as
well as an overview of the different operators in this package along with their functionality.
Two of the more interesting operators in the package that I'd like to mention are
removeDuplicates and debounce. These two operators are essential to building features
like an AsyncSequence driven search.
Consider the following example, in which I leverage an @Published property to create an async for
loop that's debounced to make sure that the user hasn't typed for 0.3 seconds before I receive
the published property's current value, and has its duplicate values removed so I don't search
for the same thing twice:

import AsyncAlgorithms

class SearchExample {
    @Published var query = ""
    var cancellables = Set<AnyCancellable>()

    func subscribe() {
        let sequence = $query.values
            .debounce(for: .seconds(0.3))
            .removeDuplicates()

        Task {
            for await value in sequence {
                // perform search with a try await
            }
        }.store(in: &cancellables)
    }
}

Notice how I can just chain these operators one after the other, just like how you would
otherwise be able to chain calls like map, filter, and sorted on a normal sequence. Or if
you’re more familiar with Combine, this looks a lot like how you’d build a Combine pipeline.
Again, the purpose of this section is not to go over everything that you can possibly find in the
async-algorithms package. Instead, I want to make sure you’re aware of this package and of
the fact that it provides several time-based and Combine-like operators to use on your async
sequences to build more elaborate sequences that, in some cases, can replace your Combine
pipelines.
Having said that, I don’t think you should go and replace all of your Combine pipelines with
async sequences as soon as possible. Combine is a fine framework, it’s stable, and it works well.
Swift’s async sequences are very nice but as you’ve seen in the previous sections and chapter
they lack some features (some of those are covered by async-algorithms) and managing
the lifecycle of your async sequences is a little bit more involved than managing a Combine
subscription.
Of course, in the end it’s entirely up to you to see how much you want to replace, and how
much of the missing features that prevent you from going all-in on async sequences are filled
in by async-algorithms.


In Summary
In this chapter we expanded on the previous chapter: you saw how async sequences
compare to Combine, and we looked at differences and similarities as well as some pitfalls that
currently exist with async sequences. These pitfalls are especially relevant if you're currently
used to how Combine does things.
To wrap up this chapter you took a brief look at the async-algorithms package which provides
a lot of Combine-like tools that can help you use async sequences instead of Combine for
certain tasks in your code.
In the next chapter, we’ll take everything you know about concurrency, structured concurrency,
and async sequences to learn more about running multiple asynchronous tasks in parallel as
part of a single parent task.


Chapter 9 - Performing and awaiting work in parallel
So far in this book you have been working with async functions that return a single value, or
with async sequences. Being able to work with functions and sequences is extremely useful
on its own, but concurrency really shines once we’re able to model tasks that have many child
tasks, or flows with steps that involve collecting and processing data from multiple resources
before proceeding to a follow-up step.
In Swift Concurrency we have two different tools that help us implement processes that
involve having child tasks:

• Async let
• Task groups

In this chapter, we will look at both of these tools. You will learn how they can be used
and when they should be used. Additionally, we will finally explore structured concurrency
alongside these topics.
By the end of this chapter you will have built a complex data importer that leverages task
groups and async let to fetch data from many API endpoints, group it together, and
enrich objects with more data.
Let’s get started by taking a good look at async let.

Tip:
If you’re following along with the coding samples in this chapter, make sure you start your
local webserver by navigating to the code bundle’s movies folder in your terminal and
starting the server by running python3 -m http.server 8080. For more lengthy
instructions please refer to the README.md file in this book’s code bundle.

Creating child tasks with async let


In a regular async function you might write the following code to perform two network calls
that don’t depend on each other:


func fetchDetailsFor(_ movie: Movie) async throws -> ([CrewMember], [CastMember]) {
    let crew = try await fetchCrewMembersFor(movie)
    let cast = try await fetchCastMembersFor(movie)

    return (crew, cast)
}

The code above will work perfectly fine; there's nothing wrong with it. We can make some
improvements though. Crew members and cast members for our movie object can theoretically
be fetched in parallel because these calls can be made independently. They don't depend on
each other, so there's no reason for us to be fetching crew members first and cast members
second. However, that's exactly how the code as it stands right now is written. We can't start
fetching cast members until the await for crew members completes. This means we make
the two network calls one by one.
We could leverage two separate Task objects to achieve parallel execution of these network
calls by creating our tasks and then awaiting each task's value to obtain the result for that
task:

func fetchDetailsFor(_ movie: Movie) async throws -> ([CrewMember], [CastMember]) {
    let crewMembersTask = Task {
        return try await fetchCrewMembersFor(movie)
    }

    let castMembersTask = Task {
        return try await fetchCastMembersFor(movie)
    }

    let crew = try await crewMembersTask.value
    let cast = try await castMembersTask.value

    return (crew, cast)
}


With this approach we create two unstructured tasks that will start running immediately.
This means that we'll make our network calls as soon as each task is created. After creating
both tasks we await the value property on each of the tasks to obtain the task's
result. First we wait for crewMembersTask.value, and then we wait for
castMembersTask.value.
This is quite efficient and allows our networking code to run in parallel but there’s an issue
with the code above.
If the task that called fetchDetailsFor(_:) is cancelled, then the two tasks we create
inside of that function will not be cancelled. Other than inheriting the current actor and any
task local values, the unstructured tasks we created are in no way related or tied to the task
they were created from.
This means that if the async functions we call inside of our tasks are cancellable we miss out
on cancelling that work even though we won’t be using the result from these tasks.
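You can see this for yourself with a small, self-contained sketch (the names here are illustrative, not from the book's project): we cancel the outer task shortly after it starts, yet the unstructured inner task it created keeps running and never observes cancellation.

```swift
import Foundation

// Holder so the outer task can hand the inner task back to us.
final class TaskHolder: @unchecked Sendable {
    var inner: Task<Bool, Never>?
}

// Sketch: cancelling an outer task does NOT cancel an unstructured
// Task created inside of it.
func innerTaskSurvivesOuterCancellation() async -> Bool {
    let holder = TaskHolder()

    let outer = Task {
        // This unstructured task is in no way tied to `outer`.
        holder.inner = Task { () async -> Bool in
            try? await Task.sleep(nanoseconds: 300_000_000)
            return Task.isCancelled
        }
        try? await Task.sleep(nanoseconds: 1_000_000_000)
    }

    // Give the outer task time to create the inner task, then cancel it.
    try? await Task.sleep(nanoseconds: 100_000_000)
    outer.cancel()

    // The inner task finishes on its own schedule; it reports whether
    // it ever saw cancellation.
    return await holder.inner!.value
}
```

If the inner work had been a structured child task, cancelling the parent would have marked it as cancelled too; here the function returns false.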
In addition to this, there’s no compile- or runtime guarantee that ensures our network calls
complete before we return from fetchDetailsFor(_:). We know that in this specific
case we await the outcome of both tasks so the tasks will complete before we return but if we
refactor our code at any point we could easily forget to await both tasks and end up with a
task that’s still running when fetchDetailsFor(_:) returns.
At this point you might be wondering what the big deal is. Any code can have bugs, and this
is just something to look out for. You're not entirely wrong; we should always pay attention to
the code we write and make sure that it's correct. However, if we can get a hand from the
runtime and the compiler, that's way better than trusting ourselves to not make mistakes.
Especially when the language provides mechanisms that were meant to solve our bugs.
The way we should go about writing the fetchDetailsFor(_:) method is to leverage
child tasks that we create through an async let declaration. This makes sure that
the child tasks we create complete before our function returns, even if we don’t explicitly await
the result of our network calls. Using an async let will also make sure that cancellation is
properly propagated from the parent task to the child tasks that we create.
First, I’d like to show you how we can rewrite the unstructured tasks example that you just
saw with async let and then I’ll talk about how it works exactly:


func fetchDetailsFor(_ movie: Movie) async throws -> ([CrewMember], [CastMember]) {
    async let crewMembersTask = fetchCrewMembersFor(movie)
    async let castMembersTask = fetchCastMembersFor(movie)

    let crew = try await crewMembersTask
    let cast = try await castMembersTask

    return (crew, cast)
}

When examining this code, you can see how the structure of this example is similar to the one
that leveraged two unstructured tasks. The key difference is in how we create our tasks.
When you define a property as async let, you can call an async function without awaiting
that function immediately. Instead, you create a child task that will run the function you’re
calling asynchronously. The function you’re calling with async let will start running
immediately when the async let is created. Instead of awaiting the result of the function call,
execution resumes to the next line. On the next line, we create another async let to kick
off a second bit of async work.
Once we’re interested in using the results of the async function we called, we must await the
child task that we created. We can then assign the result of our async let to a property that
will hold the result of our child task. In the code you just saw we did this by writing let crew
= try await crewMembersTask. Note that while we’re awaiting our
crewMembersTask task, both tasks are making progress. Once crewMembersTask is finished, its result
is assigned to crew and we start waiting for the next task to complete.
Because both tasks have been running in parallel there’s a chance that the
castMembersTask has already finished or will finish soon. That’s the nice thing about how async let
allows us to run multiple child tasks in parallel.
Up until now you have worked with unstructured and detached tasks. You already know that
an unstructured task is created with Task {} and that an unstructured task inherits things
like actors and task local values. You also know that you can create a detached task using
Task.detached and that a detached task inherits nothing from its creation context.


Neither of these tasks are child tasks of their creation context. Some of the most important
differences between child tasks and unstructured/detached tasks are the following:

• A child task is cancelled when its parent task is cancelled


• A parent task cannot complete until its child tasks have completed (either successfully
or with an error)
• A cancelled parent task does not stop the child task. Cancellation between parent and
child tasks is cooperative which means the child task must check for cancellation and
explicitly respect the cancellation by stopping its work.
• Priority, actors, and task local values are inherited from the parent task
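
Here’s a small hedged sketch of the third rule, the cooperative nature of cancellation, in action. The function name is made up for the example; the parent task is cancelled right away, but the async let child only stops early because it explicitly checks Task.isCancelled:

```swift
// Hedged sketch of cooperative cancellation: the parent is cancelled
// immediately, but the async let child only stops early because it
// checks Task.isCancelled itself.
func countUnlessCancelled() async -> Int {
    var count = 0
    for _ in 0..<1_000 {
        // Without this check, the child would run to completion even
        // though its parent was cancelled.
        if Task.isCancelled { break }
        count += 1
        await Task.yield()
    }
    return count
}

let parent = Task {
    async let result = countUnlessCancelled()
    return await result
}

parent.cancel()
let produced = await parent.value
print("iterations before stopping:", produced)
```

If you remove the Task.isCancelled check, the child will always count all the way to 1,000 regardless of cancellation.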

The most important rules to understand are the first two on that list. Cancelling a parent task
will mark its children as cancelled, and a parent task cannot complete unless its child tasks
are completed. This is fully unique to child tasks, and it’s what makes child tasks structured.
We’ll dig into this a bit more once we get to the section on structured concurrency.
The key reason to use an async let is almost never “I want a child task”. Having a child
task is more of a result of wanting to perform work in a specific way than a result of literally
wanting a child task to exist.
In the example you saw earlier an async let made sense because:

• We had two async function calls we wanted to await in parallel


• The two async functions had no dependencies on each other
• Our function logically couldn’t complete unless both async functions completed

By using async let we were able to perform work in parallel which sped up the execution of
our fetchDetailsFor(_:) method. Instead of having to wait for each part to be loaded
sequentially, the function is now essentially only as slow as the slowest async function we’re
calling.
An async let is very nice when you have a handful of tasks you want to perform in parallel.
However, there are situations where we might want to perform a lot of work in parallel. For
example, let’s say that for each of the movie’s cast members we’d like to make an extra
network call that would fetch some more metadata about the cast member. We don’t know
how many cast members there are up front, and we know that there could be lots of them.
We can’t create our async let tasks in a for loop unfortunately, and we don’t know ex-
actly how many we’ll need. Luckily, there’s a second way to create child tasks that supports


spawning an unknown number of child tasks: the Task Group.

Using Task Groups to perform work in parallel
While async let is a fantastic tool to perform a handful of tasks in parallel as part of a
single parent task or async function, we need a different mechanism to create (and await) any
number of tasks.
In the example from the previous section we were able to fetch details for a given movie
by retrieving the movie’s crew and cast in parallel. This is great because now the function
responsible for doing this is as fast (or slow) as the slowest async let we create.
Now imagine that we have an API endpoint that provides any number of movie objects. Maybe
there’s only a handful, maybe there are many. We don’t know this up front so we certainly
can’t write an async let to fetch details for each movie that’s returned by the endpoint.
Consider the following code that retrieves a list of Movie objects:

func fetchMovies() async throws -> [Movie] {
    let url = URL(string: "http://localhost:8080/movies-1.json")!
    let (data, _) = try await URLSession.shared.data(from: url)
    let jsonDecoder = JSONDecoder()
    return try jsonDecoder.decode([Movie].self, from: data)
}

And now let’s implement a function that fetches movies and for each fetched movie we also
fetch the associated crew and cast members.
Here’s what that code would look like if we process movie objects one by one:

func fetchEnrichedMovies() async throws -> [(Movie, [CrewMember], [CastMember])] {
    let movies = try await fetchMovies()

    var enrichedMovies = [(Movie, [CrewMember], [CastMember])]()

    for movie in movies {
        let enriched = try await fetchDetailsFor(movie)
        enrichedMovies.append(enriched)
    }

    return enrichedMovies
}

We iterate over our movies one by one and we call our fetchDetailsFor(_:) method
for each movie in our list. This code works and gets the job done, but I’d like to speed things
up and process as many movies at once as I possibly can.
The key to doing this is a Task Group.
With a Task Group it’s possible to spawn any number of child tasks that run as part of our
Task Group. The Task Group will run as many tasks in parallel as it possibly can, which is great
because that means things should speed up by a lot when processing our list of movies.
Before I go in-depth on Task Group, how it handles errors, cancellation, and more, I want to
show you how a basic Task Group is created, and how we can add tasks to a Task Group:

func fetchEnrichedMovies() async throws -> [(Movie, [CrewMember], [CastMember])] {
    let movies = try await fetchMovies()

    try await withThrowingTaskGroup(of: (Movie, [CrewMember], [CastMember]).self) { group in
        for movie in movies {
            group.addTask(operation: {
                return try await self.fetchDetailsFor(movie)
            })
        }
    }

    return []
}


The code above only shows a part of what we need in order to properly set up our
fetchEnrichedMovies() function.
The first line to pay attention to is the following:

try await withThrowingTaskGroup(of: (Movie, [CrewMember], [CastMember]).self)

To create a Task Group, we must call withThrowingTaskGroup(of:_:) or
withTaskGroup(of:_:) depending on whether or not our child tasks can throw errors. In
my case, my child tasks can throw errors. If this weren’t the case I would have called
withTaskGroup(of:_:) instead. This would also change the try await to just an await since the
Task Group would no longer throw any errors. The current version of the code is written in a
way where any errors thrown by child tasks are not directly thrown out of our Task Group
closure. This means that we could wait for our Task Group to complete with just an await
even though our group is a throwing group. The Swift compiler knows that we’re not throwing
any errors from our Task Group’s body.
We will be looking at throwing errors later though, so let’s keep the try await for now.
The first argument passed to a Task Group is the type of elements that we’d like our child
tasks to produce. In this case, I want to produce tuples of Movie, [CrewMember], and
[CastMember]. While Swift 6.2 allows us to omit the of argument because the compiler
can infer the return type of our child tasks, I do recommend spelling it out for more complex
return types like the one I use in this example. It reduces guesswork for readers of your code.
For simpler return types, you can safely omit the child task’s return type and let the compiler
infer it.
In a Task Group, all child tasks that we create must produce the same type as an output. If
each task just performs some work but doesn’t actually have an output that you’re interested
in, you can specify Void.self as the Task Group’s output to indicate that the child tasks
you’re adding won’t return any values. When you let the compiler infer your child tasks’ return
types, it will use the return type of the first task that you add to the group as the return type
for all of your child tasks.


Note that the type that you use for the Task Group’s output must be Sendable. You will be
creating and obtaining your output in a highly concurrent fashion so it must be safe for your
output to be passed across concurrency boundaries. As you learned in Chapter 6 - Preventing
data races with Swift Concurrency, Sendable is how Swift Concurrency makes sure that an
object can be safely passed around in a concurrent environment.
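
As a quick hedged illustration (the type below is made up for this example), a simple value type whose stored properties are all Sendable themselves satisfies this requirement automatically:

```swift
import Foundation

// Hypothetical Task Group output type. Because it's a struct whose stored
// properties are all Sendable, the compiler synthesizes the Sendable
// conformance and will accept it as a child task result type.
struct MovieDetails: Sendable {
    let movieID: UUID
    let crewNames: [String]
    let castNames: [String]
}

let details = MovieDetails(
    movieID: UUID(),
    crewNames: ["Director"],
    castNames: []
)
print(details.crewNames)
```

A class with mutable state would not get this for free; you’d need to protect its state (for example by making it an actor) before using it as a group output.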
The second argument that’s passed to a Task Group is a closure that’s defined as a trailing
closure in the example you just saw:

// The first part of this line is cut off so you can focus on the closure
...).self) { group in
    for movie in movies {
        group.addTask {
            return try await self.fetchDetailsFor(movie)
        }
    }
}

The closure that you pass to a Task Group will be called with one argument: a group. We add
work to our group by calling addTask on the group. In our case, we iterate over the array of
Movie objects that we obtained by calling fetchMovies(). I use a regular synchronous
for loop to perform this iteration. Nothing fancy is happening there.
For each movie in the array of movies, I call group.addTask with a closure that wraps the
work I want to do in my child task.
In this case, I want to obtain and return the result of calling fetchDetailsFor(_:) because
that’s how I can obtain a movie object along with its crew and cast members.
At this point, I’m adding work to my Task Group but I’m not doing anything with the result of
the work that I add to the group. Every child task we create obtains and returns a value but
the results of each child task are kind of lost in the ether right now.
We’ll get to fixing that later.
First, I want to talk a little bit about the implications of having a throwing Task Group and
having an error occur inside of a child task. After that we’ll go over how we can use the output


from our child tasks so we can collect the results from all tasks into a single array, just like we
did earlier in the example where I fetched movie details one at a time.

Task groups and error handling


As you now know, Task Groups exist in throwing and non-throwing variants. In a non-throwing
Task Group, we cannot throw errors from the closure we pass to group.addTask.
This means that the following code is not valid:

await withTaskGroup(of: Void.self) { group in
    group.addTask {
        // this is not allowed, we should use a do/catch here
        try await performSomeWork()
    }
}

If we were to fix this code, we could leverage a do/catch block to catch and handle errors
that might be thrown by our call to performSomeWork():

await withTaskGroup(of: Void.self) { group in
    group.addTask {
        do {
            try await performSomeWork()
        } catch {
            print("something went wrong!!")
        }
    }
}

We don’t really handle our error in a meaningful way here, but the do/catch block at least
makes sure that we don’t attempt to potentially throw errors from a child task that’s not
supposed to be throwing errors.
The flow of the code in a non-throwing Task Group is relatively straightforward. We add tasks
to the group and we await the call to withTaskGroup(of:_:). That await completes


once all of the tasks in our Task Group have completed, or once all tasks have stopped their
work in response to the Task Group’s task being cancelled.
If we ignore cancellation for a moment, this means that every task in the Task Group will have
fully completed its work once the await on our call to withTaskGroup(of:_:) is completed
and our code continues to run. Some of our tasks might not have completed successfully, and
we would have printed something to the console for those tasks. Either way, every task has
completed in one way or another.
Notice that nowhere in our Task Group code do we have to await the child tasks that are created.
When you add tasks to a Task Group using group.addTask, the Task Group will implicitly
await any child tasks that we added before completing the call to
withTaskGroup(of:_:). This guarantees that all work that’s added to the Task Group is
completed before our code resumes from the await on withTaskGroup(of:_:).
For throwing Task Groups this is slightly different. A throwing Task Group at its core operates
under the same rules as a non-throwing Task Group. This means that once the await for
withThrowingTaskGroup(of:_:) is completed, we know that no tasks in the throwing
Task Group are running anymore. They either completed successfully, completed with an
error, or stopped their work in response to cancellation.
To refresh your mind a little, here’s what creating a throwing Task Group looks like:

try await withThrowingTaskGroup(of: (Movie, [CrewMember], [CastMember]).self) { group in
    for movie in movies {
        group.addTask(operation: {
            return try await self.fetchDetailsFor(movie)
        })
    }
}

When we call withThrowingTaskGroup(of:_:) we try await that call instead of
only awaiting it. That’s because our Task Group is throwing and can produce an error. Again,
note that when we don’t let errors escape from the closure that’s passed to
withThrowingTaskGroup(of:_:) we don’t need the try because all errors from our child tasks will
be implicitly ignored.


In the closure that we pass to our throwing Task Group we add tasks to the Task Group and we
don’t have to handle or catch our errors. We can write try await in the addTask closure
and it’s all good.
So what happens when one of our tasks throws an error, then?
Well, the short answer is quite straightforward, and the long one is quite complex. So let’s
start with the simple answer.
If a child task in a throwing Task Group throws an error, the Task Group itself will implicitly
swallow that error. In other words, the try await withThrowingTaskGroup(...)
would never receive the error that’s thrown by our child task.
But what if we do run into a situation where we throw an error from the Task Group’s body?
There’s a hard rule that once our await completes, all child tasks must be complete. In other
words, all child tasks must be completed and no longer running by the time we receive our
error.
What happens is that as soon as an error is thrown from our Task Group, all child tasks get
cancelled immediately. This means that every child task must stop what it’s doing as soon
as it can to respect the cancellation action. Once all child tasks have stopped their work, the
thrown error from the child task is rethrown by the Task Group, allowing us to receive and act
on the error.
Regardless of whether we’re working with a throwing Task Group or a non-throwing Task
Group, once our call to with(Throwing)TaskGroup(of:_:) completes, we know that
no child tasks are active anymore.
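
Here’s a hedged sketch of these rules in action. The Bail error is made up for the example; the point is that the child’s long sleep is cut short because throwing from the group body cancels the child, and the group still waits for it to wind down before rethrowing:

```swift
import Foundation

struct Bail: Error {}

let start = Date()
var caughtBail = false

do {
    try await withThrowingTaskGroup(of: Void.self) { group in
        group.addTask {
            // Task.sleep respects cancellation, so this throws a
            // CancellationError as soon as the group body throws below.
            try await Task.sleep(for: .seconds(60))
        }

        // Throwing from the group body marks the pending child task as
        // cancelled; the group waits for it, then rethrows our error.
        throw Bail()
    }
} catch {
    caughtBail = error is Bail
}

let elapsed = Date().timeIntervalSince(start)
print("caught Bail after \(elapsed) seconds")
```

Even though the child asked to sleep for a full minute, this code finishes almost immediately because the cancelled sleep throws right away.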
In the example for a throwing Task Group, you saw the following code:

try await withThrowingTaskGroup(of: (Movie, [CrewMember], [CastMember]).self) { group in
    for movie in movies {
        group.addTask(operation: {
            return try await self.fetchDetailsFor(movie)
        })
    }
}


Notice how our Task Group produces results of type (Movie, [CrewMember],
[CastMember]) and the tasks we add to the group return the result of calling
fetchDetailsFor(movie). We don’t attempt to grab the results of our work, which means that
we never need to try await the outcomes of our child tasks. This, in turn, means that we
never throw any errors from our throwing Task Group’s body.
In the next section you’ll find out how to begin receiving the results from your child tasks.

Using the results from a Task Group


Running a lot of work in parallel with a Task Group is great, but it’s even better when your
child tasks can produce results that you can use, instead of just sending your results off into
the ether like we are doing now.
The closure that you pass to the with(Throwing)TaskGroup(of:_:) function is allowed
to return something. Whatever you return (or throw) from the closure becomes the
result for your call to with(Throwing)TaskGroup(of:_:).
This result does not have to be the same as the result of your child tasks.

let list = [1, 2, 4, 5]

let randomNumber = await withThrowingTaskGroup(of: Void.self) { group in
    for _ in list {
        group.addTask {
            try await Task.sleep(for: .seconds(2))
        }
    }

    let int = Int.random(in: 0..<Int.max)
    print("will return", int)
    return int
}

print("number is", randomNumber)


If everything goes well and none of our tasks fail, the Task Group will return a random number.
Note that even though we don’t explicitly await every task in the group, all tasks in the group
will have completed by the time the random number is assigned to the randomNumber
constant. That’s because the group implicitly awaits all active child tasks before allowing our
code to continue running.
If you run the code sample in this chapter’s code bundle, you’ll see that the first print statement
appears two seconds before the second one. That’s due to this implicit await for child tasks
performed by the Task Group.
Of course, examples like the one you just saw aren’t very useful because usually what you’re
actually interested in is running child tasks that produce a result, and then returning the
results from all child tasks from your Task Group.
For example, it makes more sense to write something like this, where we’d replace the ??? in
the code with something useful:

let moviesWithDetails = try await withThrowingTaskGroup(of: (Movie, [CrewMember], [CastMember]).self) { group in
    for movie in movies {
        group.addTask(operation: {
            return try await self.fetchDetailsFor(movie)
        })
    }

    return // ???
}

The group argument that’s passed to our Task Group closure serves two purposes:

1. We can use it to add tasks to the group
2. We can iterate over the group with an async for loop to obtain results

The second purpose of the group is what we’re interested in.


After adding all of the work we’d like to perform to our Task Group we can start iterating over
it to obtain the results that are produced by our child tasks:

let moviesWithDetails = try await withThrowingTaskGroup(of: (Movie, [CrewMember], [CastMember]).self) { group in
    for movie in movies {
        group.addTask(operation: {
            return try await self.fetchDetailsFor(movie)
        })
    }

    var results = [(Movie, [CrewMember], [CastMember])]()

    for try await result in group {
        results.append(result)
    }

    return results
}

The code above will receive the results from tasks in the group in the order in which they
complete; we cannot force the group’s async sequence to yield results in the order in which we
added the tasks. Results are always yielded in the exact order in which the child tasks complete.
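
A small sketch (the delays are arbitrary) makes this ordering behavior visible. Even though we add the 300-millisecond task first, values come out of the group in completion order:

```swift
let completionOrder = await withTaskGroup(of: Int.self) { group in
    for delay in [300, 100, 200] {
        group.addTask {
            // Sleep for the given number of milliseconds, then report it.
            try? await Task.sleep(for: .milliseconds(delay))
            return delay
        }
    }

    var seen = [Int]()
    for await value in group {
        seen.append(value)
    }
    return seen
}

// In practice this almost always prints [100, 200, 300]: completion
// order, not the order in which the tasks were added.
print(completionOrder)
```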
Since we’re dealing with a throwing Task Group, our child tasks can fail. When they do,
their error is rethrown by the Task Group through the for try await loop in this case. In
other words, because we iterate over the group ourselves instead of relying on the implicit
wait for all child tasks, we get a chance to receive and handle errors that are thrown by our
child tasks before they are thrown to the caller of withThrowingTaskGroup(of:_:).
In the code sample from earlier, this means that we could make sure that any tasks that threw
an error are silently ignored, for example. Alternatively, you could return an array of
Result<YourOutput, Error> instead of an array of YourOutput from your Task Group
closure.
The type of object that you return from your Task Group closure does not have to match the
type of objects that are produced by your child tasks.


Let’s say you’d want to go down the path of leveraging Result in the example we looked at
earlier to make sure that all child tasks can run to completion and produce a result regardless
of how many child tasks failed. Here’s what the code for that would look like:

let moviesWithDetails = try await withThrowingTaskGroup(of: (Movie, [CrewMember], [CastMember]).self) { group in
    for movie in movies {
        group.addTask(operation: {
            return try await self.fetchDetailsFor(movie)
        })
    }

    var results = [Result<(Movie, [CrewMember], [CastMember]), Error>]()

    while let result = await group.nextResult() {
        results.append(result)
    }

    return results
}

Because a regular for try await loop would stop iterating as soon as an error is thrown
from the sequence, we have to use a slightly different mechanism to obtain results from our
Task Group. For example, we can use the nextResult() method on our Task Group to ask it
for a Result object that will either hold a value or an error, depending on whether the child
task that completed to produce this result succeeded or not.
Note that my child tasks still produce the same type as before. The Task Group itself will
transform the outcome of each child task into a Result.
Instead of a for loop, I use a while loop here because that allows me to capture and use the
result of nextResult() until nextResult() returns nil, which indicates the end of the
sequence of results. In other words, when nextResult() returns nil we know that all
child tasks have completed.
Note that because we don’t throw any errors from the throwing Task Group, a single child
task failure does not impact other child tasks. As long as we do not throw an error from our


Task Group, the group will not cancel itself. This means that if a child task fails and we handle
the thrown error inside of our Task Group closure, all other child tasks continue running as
normal.
However, if we throw an error, any error, from our Task Group closure all pending child
tasks will be marked as cancelled, and the thrown error becomes the result of the call to
withThrowingTaskGroup(of:_:).
With the code that we have in place right now, we can actually build a full-blown movie details
scraper that scrapes movie details from the local server that’s included in this chapter’s code
bundle.
In the Xcode project for this chapter, I’ve put together Task Groups and async let to build
a highly concurrent scraper that will retrieve data as fast as possible. I won’t paste all the
code here since it follows the patterns from the code snippets you saw earlier in the chapter,
but I do encourage you to take a look at the Xcode project and run it at least once to see it in
action. It’s pretty cool.

Limiting the number of tasks that a Task Group performs in parallel
In the previous examples our code was optimized to run as much work as possible in parallel.
While this is great in some cases, there are cases where you’ll want to limit the amount of
work you’re doing in parallel.
This could be due to restrictions on a server you’re accessing (like rate limits) or your work
might be so CPU intensive that you don’t want to max out CPU usage. Or maybe your tasks
need a lot of RAM. There are various reasons to not want to run tons of work in parallel.
Unfortunately, we cannot configure a Task Group to only run a limited number of tasks at
once. We can, however, write a little bit of code to introduce such a limit ourselves.
When you’re adding work to a Task Group, you don’t have to add all of the work in one go;
you can add a couple of tasks to a Task Group, wait for work to complete, and then add more
tasks. As long as the Task Group hasn’t completed yet we are allowed to add more tasks to
the group.


One approach to limit the number of tasks in our Task Group is to track the number of tasks
that have been added to the task group up to a threshold. Once we hit that threshold we wait
for a task to complete before adding the next task. This allows us to make sure that we never
have more tasks running than we consider appropriate, and for every task that completes we
can add one new task to the group until all work has been added.
An implementation of this would look as follows:

await withTaskGroup(of: Void.self) { group in
    for idx in 0..<10 {
        if idx >= 3 {
            await group.next()
        }

        group.addTask {
            do {
                try await Task.sleep(for: .seconds(Int.random(in: 0..<3)))
            } catch {
                print(error)
            }
        }
    }
}

This is great when we’re adding tasks based on a range. We can check the current index that
we’re performing our iteration for, and based on that we can decide whether we should add
our task immediately or not. But how do we adapt this example to work with something like
the movie fetcher that we built earlier? In that movie fetcher we iterated over a list of Movie
objects using for movie in movies.
One approach would be to keep track of the number of added tasks with a simple counter
that’s incremented whenever we add a new task:

await withTaskGroup(of: Void.self) { group in
    var addedTasks = 0

    for movie in movies {
        if addedTasks >= 3 {
            await group.next()
        }

        addedTasks += 1
        group.addTask {
            // fetch movie information
        }
    }
}

The nice thing about this approach is that it doesn’t rely on indices at all, so it works even
when we’re iterating over an ArraySlice of our movies array, where indices wouldn’t start
at zero.
One thing to be careful about is the fact that we call group.next() in this for loop. If you’re
looking to collect all of the results of the tasks that you’ve added to your Task Group, you’ll
want to make sure that you add the return value of group.next() to your output array,
because once a value has been returned by the group’s iterator it won’t be made available
again.
You will also need to write an async for loop after adding all work to your Task Group, like you
did in the previous section, to collect the results for the tasks that didn’t complete yet.
If we put all of this together and update the Task Group code from the previous section to only
fetch details for 5 movies at a time we’d end up with the following code:

let moviesWithDetails = await withThrowingTaskGroup(of: (Movie, [CrewMember], [CastMember]).self) { group in
    var results = [Result<(Movie, [CrewMember], [CastMember]), Error>]()
    var addedTasks = 0

    for movie in movies {
        if addedTasks >= 5 {
            print("We're at the threshold. Waiting for a task to complete.")

            if let result = await group.nextResult() {
                results.append(result)
            }
        }

        print("Adding a new task.")

        addedTasks += 1
        group.addTask {
            return try await fetchDetailsFor(movie)
        }
    }

    while let result = await group.nextResult() {
        results.append(result)
    }

    return results
}

Notice how I’ve moved the results variable to the top of my Task Group closure, whereas I
defined it after the for loop in earlier examples. That’s so that I can start appending results
from within my for loop.
Inside of the for loop, when we add the 6th task and any task after that, I use nextResult()
instead of next() to obtain a Result object and avoid any errors from being thrown out of
my Task Group closure, since that would cancel the running tasks.
After my initial for loop completes I wait for the remaining work to complete, append the
results to my results array, and then I return the results from the Task Group closure.
Even though it would have been nice if we could just tell the Task Group to not work on
more than five tasks at a time, implementing our own limiter isn’t too bad and it works well
enough.
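
If this pattern shows up in more than one place in your codebase, you could extract it into a helper. The function below is my own hypothetical generalization of the approach from this section, not an API that Swift provides:

```swift
// Hypothetical helper: run `work` for every element in `items` with at
// most `limit` child tasks in flight at once. Failures are captured as
// Result values so a single error doesn't cancel the remaining tasks.
func forEachConcurrently<Element: Sendable, Output: Sendable>(
    _ items: [Element],
    limit: Int,
    work: @escaping @Sendable (Element) async throws -> Output
) async -> [Result<Output, Error>] {
    await withThrowingTaskGroup(of: Output.self) { group in
        var results = [Result<Output, Error>]()
        var addedTasks = 0

        for item in items {
            // Once we're at the limit, wait for one task to finish
            // before adding the next one.
            if addedTasks >= limit, let result = await group.nextResult() {
                results.append(result)
            }

            addedTasks += 1
            group.addTask { try await work(item) }
        }

        // Drain the tasks that are still running.
        while let result = await group.nextResult() {
            results.append(result)
        }

        return results
    }
}
```

With a helper like this in place, fetching details for five movies at a time could look like `await forEachConcurrently(movies, limit: 5) { try await fetchDetailsFor($0) }`.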


Using Task Groups for tasks with varying types
When you define a Task Group, you must specify the type of output that your child tasks will
produce. This means that you can’t have a Task Group that produces a mix of two different
types depending on what’s returned from the child task.
Imagine that you’re building a page in your app that presents a feed of some sort. You can
fetch information for what should be shown on the feed. The models that we receive from our
server might look a little as follows:

enum FeedItemType {
case photo, text, video
}

struct FeedItem {
let id: UUID
let type: FeedItemType
}

For each FeedItem object we might have to make a separate network call to fetch details for
that specific FeedItem. I know it’s not efficient to do that, and ideally you would not model
your backend like this but for now, let’s just roll with this idea for the sake of the example.
The models that we’ll eventually receive from the server look like this:

struct PhotoItem {
let caption: String
let imageUrl: URL
}

struct TextItem {
let text: String
}


struct VideoItem {
let caption: String
let videoUrl: URL
let duration: Double
}

So based on an array of FeedItem objects, we want to run a Task Group to fetch information
for each item and return the corresponding struct for each item.
Unfortunately, we can’t create a Task Group that has all three structs as its output.
You could make three Task Groups, one for each feed item type, but that’s not ideal either. A
more acceptable solution is to define an enum with a case and an associated value for each
type of item:

enum PopulatedFeedItem {
case photo(PhotoItem)
case text(TextItem)
case video(VideoItem)
}

Now you can create a Task Group that produces PopulatedFeedItem objects by fetching
different data depending on the type property of each FeedItem as follows:

let populatedItems = try await withThrowingTaskGroup(of: PopulatedFeedItem.self) { group in
    for item in feedItems {
        group.addTask {
            switch item.type {
            case .photo:
                let photoItem = try await fetchPhotoItem(withID: item.id)
                return PopulatedFeedItem.photo(photoItem)
            case .text:
                let textItem = try await fetchTextItem(withID: item.id)
                return PopulatedFeedItem.text(textItem)
            case .video:
                let videoItem = try await fetchVideoItem(withID: item.id)
                return PopulatedFeedItem.video(videoItem)
            }
        }
    }

    var results = [PopulatedFeedItem]()

    while let result = await group.nextResult() {
        do {
            let item = try result.get()
            results.append(item)
        } catch {
            print("we're ignoring errors for now...")
        }
    }

    return results
}

The code above switches on the type property of each FeedItem to determine which function
should be called to obtain data. Once the data is obtained, the relevant PopulatedFeedItem
case is returned with an associated value.
Finally, we use group.nextResult() to accumulate all of the results from our API calls
and return an array of PopulatedFeedItem objects from our Task Group.
This approach to giving a Task Group child tasks with varying results is the one that Apple
actually recommends in some of their documentation and WWDC videos. It's also the most
solid and flexible way I have found to solve this problem.
Generally speaking you shouldn’t need this pattern often. But when you do need it, it’s nice to
know how.
Before we wrap up this chapter, there’s an important Swift Concurrency concept that you’ve
seen in action in this chapter as well as in the previous chapter. We haven’t explicitly named
the concept, but now that you’re familiar with its principles I think it’s time we take a moment
to learn about Structured Concurrency.


Understanding structured concurrency


While I was writing this book I knew I'd have to work Structured Concurrency into it at
some point. It's a pretty big deal, an important concept, and possibly something you've
heard of before. I considered explaining the topic at the start of this book. But that felt
wrong.
Structured Concurrency makes the most sense once you've seen child tasks and worked with
Task a bit. Sure, it plays a role in every asynchronous function you write, but as you've
seen throughout this book, you don't need to know about structured concurrency to write an
async function.
I considered writing a chapter dedicated to structured concurrency and placing it right before
or right after the chapter you're reading right now. That would make sense because at this
point you've seen async functions, unstructured tasks, async lets, and Task Groups: all
building blocks that you need to understand structured concurrency.
However, that chapter would repeat a lot of what you've seen in this chapter. The rules that
you learned in the Task Groups section, like how child tasks must complete before the Task
Group itself can complete, would be repeated in the structured concurrency chapter, making it
largely repetitive without introducing anything you haven't seen before.
That might actually make it a really good chapter for you to solidify your understanding of
everything you’ve seen so far. And maybe making it a section in this chapter is a decision I’ll
come back to later in a book update.
But for now, I don’t want to make structured concurrency into a bigger topic than it needs to
be.
Structured concurrency is based on a concept called the fork-join model, which was coined in
the 1960s. The fork-join model describes a way of modeling asynchronous work that makes it
possible to ensure that any work kicked off by a root task completes before the root task
itself completes.
Consider the following image:


Figure 16: The fork join model visualized

It shows how we can call a function, kick off a bunch of other work in parallel, and our function
isn’t completed until the work we’ve kicked off is completed.
This is very similar to how a Task Group works.
Note that this is not at all how calling an async function from another async function
works, since that suspends the calling function while the called function does its job. The
calling function can't resume until the called function completes. Logically this is very similar
to what the fork-join model describes, but the key thing that's missing is the parallelism that's
shown in the image above.
However, when we change some of our async function calls to use async let so that these
functions run in parallel, we're suddenly back to something that looks like the fork-join model,
as depicted in the way fetchCrewForMovie(_:) and fetchCastForMovie(_:) are
called in the graphic you just looked at.
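To make that shape concrete, here's a minimal sketch of the fork-join pattern using async let. The two fetch functions are hypothetical stand-ins for the Chapter 9 network calls, stubbed here so the example is self-contained:

```swift
import Foundation

// Hypothetical stand-ins for the Chapter 9 network calls,
// stubbed so this sketch compiles on its own.
func fetchCrewForMovie(_ id: UUID) async -> [String] { ["Director"] }
func fetchCastForMovie(_ id: UUID) async -> [String] { ["Lead actor"] }

func fetchPeople(for movieID: UUID) async -> (crew: [String], cast: [String]) {
    // Fork: both child tasks start running in parallel.
    async let crew = fetchCrewForMovie(movieID)
    async let cast = fetchCastForMovie(movieID)

    // Join: this function can only return once both child tasks
    // have completed, exactly like the fork-join model describes.
    return await (crew, cast)
}
```

Because both async lets are awaited before the function returns, the parent can never outlive its children; that's the join half of the model.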
The fork-join model is very, very similar to what Structured Concurrency describes. Both are
systems that describe how work items relate to each other, and how any task in the system
can only complete once all of the child tasks it spawned have completed.
The above is, in just one sentence, what structured concurrency is all about. Describing the
relationship between parent and child tasks.
The key difference between the fork-join model and Structured Concurrency is that Structured
Concurrency can enforce that all child tasks complete before the parent does. In the
fork-join model, there are no guarantees or ways to enforce that the join part of the concept is
implemented and honored correctly.
Structured Concurrency, on the other hand, provides a model where the program is aware of
how child tasks are completed, and how they relate back to their parent task. This means
that the program can ensure correctness around child tasks completing, and around how
errors are propagated.
For example, when a child task in a Task Group throws an error, and we allow this error to be
thrown from our Task Group closure, the system knows to cancel all of the child tasks and wait
for them to honor their cancellation and complete their work before the error is actually
thrown from the Task Group so we can handle it. If the error were thrown before all child
tasks completed, that would be a breach of structured concurrency, since the parent (the
Task Group) would throw an error (and complete) while it still has running child tasks.
At this point in time, the only places in Swift Concurrency that implement Structured
Concurrency (or rather, the only places where you interact with Structured Concurrency
through child tasks right now) are Task Groups and async let. Everything else you do, like
spawning unstructured tasks with Task or detached tasks with Task.detached, does not
involve structured concurrency.
When you spawn an unstructured task with Task, you don’t create a child task, and the
function that you spawned that task from can complete just fine without the unstructured
task completing. That’s why it’s named an unstructured task; it doesn’t follow the rules of
Structured Concurrency.
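The difference is easy to see in a small sketch. Note that computeValue() is a made-up helper for this example:

```swift
// A made-up helper so the sketch is self-contained.
func computeValue() async -> Int { 42 }

func structuredWork() async -> Int {
    // Child task: structured concurrency guarantees it finishes
    // (or is cancelled and then finishes) before this function returns.
    async let value = computeValue()
    return await value
}

func unstructuredWork() {
    // Unstructured task: not a child of the caller. This function
    // returns immediately, and the task may still be running.
    Task {
        _ = await computeValue()
    }
}
```

The structured version cannot return until its child task is done; the unstructured version makes no such promise.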
When you await an async function from within another async function you're not creating a
child task, so if you want to get super strict, awaiting something is not structured concurrency.
However, I do feel like awaiting an async function involves principles from structured concurrency.
For example, you know that the function you awaited has fully completed by the time it
returns.


Following the principles of Structured Concurrency is not enough for something to be Struc-
tured Concurrency though. So at this point in time, you’re really only actively using Structured
Concurrency when you’re working with child tasks produced through async lets or Task
Groups even when its rules and principles can be seen throughout everything Swift Concur-
erency related.

In Summary
In this chapter, you’ve learned everything you need to know about child tasks and structured
concurrency.
You’ve seen how you can leverage async let as a tool to concurrency run multiple asyn-
chronous functions in parallel before awaiting their results. You saw that this can be extremely
useful when you want to fetch data from multiple resources at once.
After that, you were introduced to Task Groups. You saw how a Task Group can run many,
many tasks at once as child tasks of itself. You learned that tasks in a Task Group can produce
results, and that it's possible to iterate over these results using an async for loop. We need to
do this if we want our Task Group to produce an array of individual child task results.
After that, you learned how errors propagate from within a Task Group, how throwing an error
from a Task Group cancels all of its child tasks, and how you can obtain all results from child
tasks in a group using nextResult to keep errors from being thrown out of your Task
Group closure.
Lastly, you learned how async let and Task Group relate to Structured Concurrency. You
learned about the fork-join model from the sixties and how it’s a foundation for Structured
Concurrency that ensures all child tasks spawned by a given parent task must complete
before the parent task itself completes. You also learned that Structured Concurrency is only
applicable in Swift when you’re working with child tasks produced by async lets and Task
Groups.
At this point in your Swift Concurrency journey, you have learned about all of the important
concepts that you need to effectively use Swift Concurrency. You know how to write and call
async functions, you know what happens when you await a function, you've learned about
actors, and you've seen Task Groups, async sequences, and much more.


In the next two chapters we'll take a look at some key concepts that aren't about learning
Swift Concurrency as a topic; instead we'll focus on being more effective with Swift
Concurrency. First, you'll learn about testing, and how Swift Concurrency impacts your testing
code. After that we'll wrap up the book with a chapter on profiling and debugging to help you
find and eliminate concurrency-related bugs in your code.


Chapter 10 - Swift Concurrency and your unit tests
Unit testing is an important part of writing high quality code. It doesn’t guarantee that code
is high quality, nor does high quality always require unit tests, but unit testing is one of the
many tools we can use to write good code.
In this chapter, I would like to take a moment and explore how Swift Concurrency and XCTest
interact, which tools we can use, and I’d like to show you some interesting ways to test code
that you might otherwise have a hard time testing.
This chapter is not an introduction to unit testing, and I will not be discussing if and how you
should be building mock objects. This chapter will also not teach you about dependency
injection or abstracting objects with protocols.
There are entire books dedicated to the topic of unit testing, so for me to write a crash course
on the topic in a book about Swift Concurrency seems wrong. Instead, I’d like to focus on the
relationship between unit tests and Swift Concurrency.
In this chapter, we’ll cover the following topics:

• Writing async test cases
• Testing async sequences
• Testing code that uses Swift Concurrency as an implementation detail

Writing async test cases


Testing asynchronous code has always been notoriously complex and involved. In XCTest, you
test your async code by leveraging completion handlers and XCTestExpectation objects
to make sure that your test method does not complete before your asynchronous work is
done. In Swift Testing this is somewhat easier using the confirmation API, which is very similar
to test expectations, but it’s a little bit different.
Essentially both techniques block execution of your test method until the asynchronous work
that you’re waiting for fulfills the expectation or confirmation that was preventing the test
from finishing.


Let’s take a quick look at what testing a callback-based asynchronous API looks like using
Swift testing.

struct SongGeneratorTests {
    let generator = SongGenerator()

    @Test func callbackBased() async {
        await confirmation("Song generated") { confirm in
            generator.generate { result in
                confirm()
            }
        }
    }
}

The SongGenerator object is a class that exists in the sample app for this chapter in the
code bundle for this book. The unit test is intended to test the generate method on
this class, which, if I had implemented it fully, would probably perform some heavy
work to generate a chord sequence for songs. The exact nature and details of this work are
not relevant to what we're trying to learn in this chapter though.
In the example test above, our test will wait for the callback to be called and for the confirm
object to be invoked.
The code isn't terrible, and in a small example like this it's not too bad to read or
understand.
However, we can refactor our code and test to leverage Swift Concurrency, and the exact same
test would then look a little bit like this:

@Test func asyncBased() async throws {
    let song = try await generator.generate()
}

Of course, this code doesn’t have any assertions to make sure that the song was generated
correctly and that everything went as expected but just obtaining the result of calling gen-
erate() was much simpler than before.


We can mark our test methods as async as well as throws, and we can await any async work
we're doing right inside of our test.
This means that you're no longer dealing with test expectations, and you can write your test
code almost as if every test case you write is synchronous.
If any of the async work you're awaiting throws an error that you don't catch or inspect inside
of your test, your test will fail with the thrown error as the failure message. This can be useful
when you're testing methods that you expect to succeed. When you're testing a method
call that you expect to fail, you can omit the throws from your test case and catch your error
in a do { } catch { } for inspection.
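For the expected-failure scenario, Swift Testing also offers the #expect(throws:) macro as an alternative to a manual do/catch. Here's a hedged sketch; the generator type below is a hypothetical stand-in for this example, not the SongGenerator from the code bundle:

```swift
import Testing

// Hypothetical stand-ins for this sketch only.
enum GeneratorError: Error, Equatable {
    case emptyInput
}

struct LyricsGenerator {
    func generate(from prompt: String) async throws -> String {
        guard !prompt.isEmpty else { throw GeneratorError.emptyInput }
        return "Lyrics for \(prompt)"
    }
}

struct LyricsGeneratorTests {
    @Test func emptyPromptFails() async {
        let generator = LyricsGenerator()

        // The test passes only if the closure throws this exact error.
        await #expect(throws: GeneratorError.emptyInput) {
            try await generator.generate(from: "")
        }
    }
}
```

If you only care that some error of a given type is thrown, you can pass the type instead, as in #expect(throws: GeneratorError.self).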
I could write a couple more paragraphs on the fact that you can await async methods inside
of test cases because this section would be quite short otherwise, but honestly I would just
be doing that to add filler. The reality of the situation is that both Swift Testing and XCTest
have fantastic support for awaiting async methods, which can really help you write more
readable test cases.
In the next section I’d like to take a closer look at testing the output of async sequences.

Testing async sequences


When you’re testing code that leverages async sequences there are several goals and expecta-
tions that you might want to verify with your test.
For example, some sequences will have a clear beginning and ending. You could write a test
for an async sequence that emits progress for a specific task. That sequence will emit values
from 0 up to and including 1 where a value of 1 means that the process that the sequence
reports progress for has completed. When you write a test for this sequence there are a couple
of things you’ll want to make sure to test:

• The sequence ends at some point
• The sequence never emits a value smaller than 0 or greater than 1
• Every emitted value must be larger than or equal to the previously emitted value

We can test these three assumptions using a relatively simple test because we know that the
sequence will keep going for a while and then it will complete. The fact that we can mark our
test methods as async is a huge help here.


We can keep track of the last emitted value outside of our for loop to assert that every emitted
value is larger than or equal to the previously emitted value, and inside of the for loop we can
check that the value is within our bounds of 0 and 1:

struct ProgressSequenceTests {
    let generator = ProgressSequence()

    @Test func sequenceCompletesWithValidValues() async throws {
        var lastValue: Double? = nil

        for await value in generator.run() {
            if let lastValue {
                #expect(value >= lastValue)
            }

            #expect(value >= 0)
            #expect(value <= 1)

            lastValue = value
        }

        let value = try #require(lastValue)
        #expect(value >= 1)
    }
}

The run() method on my generator creates an AsyncStream that generates values for the
purpose of this example. If you're curious about its implementation, I encourage you to take a
look at the code samples for this chapter in the book's code bundle.
The test in this example reads a lot like it would if we were testing a synchronous method. We
iterate over all values emitted by my async stream and, if we've received a previous value, we
make sure that the new value is larger than or equal to the previously received value.
We also check that the value we received is between 0 and 1, and then we update our last
received value property so we can use it for the next value.


After the for loop I make sure that lastValue was set to a non-nil value and that this value
is larger than or equal to 1. I’m not comparing to 1 exactly because of floating point rounding
errors that can occur in programs.
If our stream emits values that don't fit our bounds, or if a value smaller than the previous
value is emitted, our test fails. Our test will never complete if our stream never completes, and
if we end the stream before all progress was reported (before lastValue is set to 1), the test
fails.
There’s an important detail in the previous sentence that we need to keep in mind when
testing async sequences...

Our test will never complete if our stream never completes

In other words, the technique above won’t work if we expect our stream to be running for a
much, much longer time.
Luckily, we can still write tests for streams that never end. We can keep track of the values we
expect to receive and then check whether we’ve received all values on every loop iteration.
For example, if we expect an async sequence to emit the values 1, 2, 3, 4, and 5 in that order
we can write a test that looks as follows:

struct NumbersSequenceTests {
    let emitter = NumbersSequence()

    @Test func numbersAreEmitted() async {
        var expectedNumbers = [1, 2, 3, 4, 5]

        for await value in emitter.run() {
            let expected = expectedNumbers.removeFirst()
            #expect(value == expected)

            if expectedNumbers.isEmpty || value != expected {
                break
            }
        }
    }
}


By creating an array with all expected output and removing the first element from the array
on every loop iteration, we can check that every emitted value from our sequence matches
its corresponding value in our array of expected values.
Once we've removed and compared all values, or if we find a value that doesn't match our
expected value, we can break out of our loop to end the iteration.
The async sequence might still be able to produce more values than expected, but we’re not
interested in those values. All we’re interested in is the handful of values that we wanted to
see. If the sequence can generate more values that’s fine, but we don’t want to assert anything
about them so we break out of the loop once we’ve seen everything we want to see.
If your sequence should emit the values that you’ve placed in your array of expected values
and complete immediately after emitting the last value, omit the break that I had in the
sample code. This would be very similar to the first async sequence related test you saw where
we were interested in making sure that our sequence completed.
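Instead of a manual break, you can also lean on the standard library's prefix(_:) method on AsyncSequence, which ends iteration after a fixed number of elements. A small self-contained sketch; the stream below is a stand-in for NumbersSequence:

```swift
// A stand-in stream for this sketch; a real NumbersSequence
// might keep emitting values long after the first five.
let numbers = AsyncStream<Int> { continuation in
    for n in 1...1_000 {
        continuation.yield(n)
    }
    continuation.finish()
}

var received = [Int]()

// prefix(5) ends the loop after five elements, even though the
// underlying sequence produces many more.
for await value in numbers.prefix(5) {
    received.append(value)
}

// received now contains [1, 2, 3, 4, 5]
```

In a test you could then compare received against your array of expected values with a single #expect.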

Testing code that uses Swift Concurrency as an implementation detail
Sometimes you’ll want to write tests for code that uses Swift Concurrency, but in a very
non-visible way. For example, you might have a regular synchronous function that starts an
unstructured Task to perform some work in the Swift Concurrency world.
The key thing to do in these cases is to ask yourself what it is that you want to test.
If you want to test that the task spawned by your function runs on the main thread, you’re really
getting yourself into a tricky situation. After all, we really shouldn’t be testing implementation
details at all (depending on your school of thought regarding unit tests).
More often than not it’s more interesting to test whether calling a function has the desired
outcome. Or in other words, you want to make sure that calling your function produces the
side effects that you expected.
Side effects in this case are all changes to the state of what you’re testing as a result of running
your test.


For example, you might be interested in writing a test that ensures that calling a specific
function on one of your classes kicks off a synchronization process in your app where you’ll
download and upload some data to make sure that your user’s app is in sync with data that
exists on the server.
The details of this synchronization process should not matter in your test. However, there are
a couple of things that you might want to test.

• We want to make sure that the sync process started
• We want to make sure that our state updates in response to the sync process

In just those two tests there are lots and lots of things that you may or may not want to test
depending on your app’s architecture and design philosophy.
I want to reiterate that this chapter is in no way intended to teach you unit testing or best
practices, or to tell you how to write testable code. Instead, this chapter (hopefully) follows
universal unit testing best practices while focusing on teaching you how to solve problems
that follow certain patterns, so you can apply my solutions to problems you might encounter
while unit testing your own code.
Or in other words, do as I say, not as I do. Now let's get back to the problem at hand.
When you want to test that some piece of your system gets to a specific state asynchronously,
without callbacks, without awaiting anything, and without iterating over an async sequence,
there are a couple of options available to us.
In the simplest case, your code updates a property that’s marked as @Published at the end
of your asynchronous process. You can observe that published property and assert that the
property’s new value is the value you expected. To write a test that does this, we can leverage
an async for loop as follows:

struct SynchronizerTests {
    let synchronizer = Synchronizer()

    @Test func valueChangedLoop() async {
        synchronizer.synchronize()

        let expectedCount = 2

        for await value in synchronizer.$newsItems.values {
            if value.count == expectedCount {
                break
            }
        }
    }
}

Looking at this example, you can see that I don't do much here. I essentially call a
synchronize() function on my synchronizer, and I expect that a property called newsItems will
change and eventually contain two items.
Note that I don't expect the value to go from zero to two, or for it to happen in one go. All
I'm expecting is that there will at some point be two news items available in my synchronizer.
In a real test you would also see some mocks being set up here, and a mock would be
responsible for providing the two items to the synchronizer. My test is a lot simpler than that,
and I'll leave the mocking and stubbing up to you.
If our newsItems property weren't @Published, we'd have to leverage some
different techniques to still observe and assert the number of loaded news items in our test.
You might be asking: if our test is struggling to observe this value, then how would our app
observe it? That's a great question! And I don't have an answer for that. Usually when you're
testing, you expect some part of your codebase to update. And usually that part of the codebase
has some way of telling other parts of the codebase about its updates.
One approach could be through a closure that's called after a synchronization is completed.
If that's how our synchronizer works, we could leverage Swift Testing and a continuation to
have our test wait for the completion handler to be called:

@Test func valueChangedClosure() async {
    await withCheckedContinuation { continuation in
        synchronizer.onComplete = {
            #expect(synchronizer.newsItems.count == 2)
            continuation.resume()
        }

        synchronizer.synchronize()
    }
}

Unfortunately, in the scenario I outlined earlier we don't have a closure that's called. We want
to somehow poll the newsItems property regularly to check if its value has changed and
then complete our test once we have the expected number of news item objects.
With Swift Testing I have not yet found a way to do this without having to write anything
custom. So instead of showing you how to write a custom solution, I think it makes sense to
take a look at how we do this in XCTest so that if you would like to write a custom solution,
you more or less know which API you would like to mimic.
XCTest has support for a predicate-based expectation that will evaluate a given predicate every
second until the predicate returns true. Once the predicate is true, our expectation is fulfilled
and the test can complete.
The best way to test our code in this case is to use a closure based version of NSPredicate.
We can evaluate our conditions in the closure and return true if all of our conditions are met.
Here’s what that looks like:

func test_predicateExpectation() {
    let synchronizer = Synchronizer()
    let predicate = NSPredicate(block: { _, _ in
        return synchronizer.newsItems.count == 2
    })
    let expect = expectation(for: predicate, evaluatedWith: nil)
    synchronizer.synchronize()
    waitForExpectations(timeout: 2)
}

We still verify that the synchronizer has acquired two news items, and we check this in a
predicate. This predicate is evaluated every second, so we'll need to make sure that our test
waits for more than one second for our expectation to be fulfilled. Setting the timeout too
low, to a second for example, can result in a failing test because the predicate isn't evaluated
in time.
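If you'd like something similar in Swift Testing, a small custom polling helper gets you most of the way there. This is my own sketch, not a Swift Testing API, and all of its names are made up:

```swift
// A hypothetical polling helper, not part of Swift Testing.
// It re-evaluates `condition` on an interval and returns true as
// soon as it holds, or false once the timeout elapses.
func poll(
    timeout: Duration = .seconds(2),
    interval: Duration = .milliseconds(100),
    until condition: () -> Bool
) async throws -> Bool {
    let clock = ContinuousClock()
    let deadline = clock.now + timeout

    while clock.now < deadline {
        if condition() { return true }
        try await Task.sleep(for: interval)
    }

    // One final check so a condition that became true right at
    // the deadline still counts.
    return condition()
}
```

In a test you could then write #expect(try await poll { synchronizer.newsItems.count == 2 }), which mirrors the predicate expectation from XCTest.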


In Summary
In this chapter, you’ve seen that Swift Testing has a lot of built in support for testing Swift
Concurrency code. You saw that you no longer have to rely on XCTestExpectation to wait
for your async code to produce results when you write async tests in Swift Testing. Instead,
you can now write async test functions and await any async work you need to wait for.
After that, we moved on to seeing how we can test our async sequences. You saw how we can
keep our test code relatively simple, which is nice because it means we don't have to invest
a lot of effort into workarounds or untangling our test code when we come back to it
weeks or months after writing it.
Lastly, we looked at patterns that are useful when you’re testing code that’s not marked as
async but still uses Swift Concurrency under the hood. You saw an example where we resorted
to using a continuation to wait for our async code to complete.
All in all, you should have a much better sense of how you can start writing unit tests for your
async code. There's a lot more to Swift Testing that I chose not to put into this
book; Swift Testing is a pretty new framework with a lot of different features, and writing a
comprehensive testing guide that explores it fully was never a goal of this book.
In the next and final chapter of this book we’ll discuss some of the available profiling and
debugging techniques that can be useful when you’re looking to gain more insights into what
your code is doing.


Chapter 11 - Debugging and profiling your asynchronous code
Being able to write good code and to test it are essential skills for any developer. However,
mistakes get made, and sometimes the code we write isn't as good, predictable, or stable as
we might hope. And even when our code appears to function as expected, you'll want to keep
an eye on whether it's somehow performing worse than you expected. The truth is,
you should be profiling your code even when you think everything is going well. Periodically
checking whether your app's performance is good allows you to discover regressions early, which
in turn means you can mitigate problems before you've even shipped them to your users.
In this chapter, I would like to highlight some of the ways that you can profile and debug your
Swift Concurrency related code in Instruments. You’ll learn how you can inspect the lifecycles
of your tasks, as well as keep an eye on the work that’s being done by your actors.

Investigating task activity with Instruments
You’ve learned that Structured Concurrency describes how one task can spawn various child
tasks. You’ve also learned that a task can only complete when all of its child tasks have
completed. Knowing this rule is really good, because it allows us to reason about the way our
code runs.
In Chapter 9, we made extensive use of task groups and async lets to build a whole hierarchy
of tasks. We leveraged a task group to fetch 100 pages of movies from a local server. Then we
used another task group to fetch information for each movie on a given page. Lastly we used
two async lets for every movie so we could fetch crew and cast information in parallel.
All of our movie fetching work was kicked off by invoking a single method called
fetchAllMovies. The cool thing about structured concurrency is that we know that when
fetchAllMovies returns, all (structured) work spawned by that method has also
completed.


But how can we be sure that we don’t accidentally spawn some unstructured work that’s still
running in the background? And how can we double check that the hierarchy that we expect
to be created is created correctly?
We can answer both of these questions and more with Instruments.
If you want to follow along with the screenshots and examples in this chapter, grab the finished
Chapter 9 project and make a copy of it since it contains everything we need to explore the
Task related features of the Swift Concurrency Instrument. The Chapter 11 folder in the code
bundle only contains the final product of this chapter.
When following along, don’t forget to start your local webserver by navigating to the movies
folder in your terminal and start an http server by running python3 -m http.server
8080, For more detailed instructions on running the local server, make sure to take a look at
the README.md file in the book’s code bundle.
To run your project with Instruments, press cmd+I or choose Product → Profile… from the
menu in your Mac's top bar.
After doing this, a window will appear that allows you to pick a predefined Instruments
template. In our case, we’re interested in the Swift Concurrency Instrument so we can record
and inspect concurrency related information.


Figure 17: The Instruments template picker

The Instruments window that appears after that contains several lanes of information.
For now, we'll focus on the top lane, which contains information about tasks and structured
concurrency.


Figure 18: The blank Concurrency Instruments template

When you hit the record button and execute the app’s movie fetch function you’ll see that a
lot of tasks are created. The graphs in the top lanes fill up rapidly and then the middle lane
goes back down gradually as shown in the following image.


Figure 19: Initial trace with Concurrency Instrument

The top lane shows us the tasks in our program that Instruments believes to be currently
active and making progress. Due to the very high volume of tasks we create in our example
we immediately stumble upon somewhat of a limitation of Instruments. As indicated by the
yellow warning symbols shown on the timeline, Instruments drops certain measurements
due to a very high volume of signpost data being generated.
The second lane which is labelled Active tasks shows the number of tasks that are registered
within our system. This includes both tasks that are running as well as tasks that are awaiting
other work to be completed. What’s interesting is that this lane seems to have dropped fewer
data points and we can clearly see the number of tasks go up and then come back down.
The third task related lane is labelled Total tasks and it shows us the total number of tasks
that have been created while we were collecting data for our app.
Let’s try to run our experiment again, but instead of kicking off a couple of thousand network
calls we’ll only fetch and process a single page worth of movie data. Update the button action
handler in Chapter9’s ContentView as follows:

Button(action: {
    Task {
        let fetcher = MovieFetcher()
        //let movies = try await fetcher.fetchAllMovies()
        let movies = try await fetcher.fetchEnrichedMovies(page: 1)
    }
}, label: {
    Text("Fetch movies")
})

Running Instruments with this code in place shows us a much nicer picture:

Figure 20: Slightly better Instruments trace

If you’re following along and your graph looks a lot less detailed, try zooming in with cmd + to
see more detail.


In the top lane you can see that only a handful of tasks are in the Running state at a given
time. This means that we only have a handful of tasks actively making progress during that
time. All the other tasks that we created are awaiting other work to complete.

Figure 21: Focus on Running tasks

If we look at the second lane to see the number of tasks that are in the Alive state, we can see
that there are many more tasks in that lane than in the Running lane. A task that is
awaiting other work to complete is considered alive, but not running.

Figure 22: Focus on Alive tasks

And of course, the total number of tasks only goes up, because that graph shows the total
number of tasks that have ever existed in our app.
When everything is set up correctly, you should see zero tasks in both the alive and running
lanes at times when you don’t expect to be doing any async work.
If you have one or more tasks that are awaiting values from a long-running async sequence,
such a task will be alive throughout its entire lifetime, but it will only be running when the
async sequence produces a value for it to process.
You can use the combination of these two lanes in Instruments to verify that your app isn’t
doing work when you don’t expect it to (and that it’s doing exactly the work you expect it to
do).
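As a concrete sketch of this alive-but-rarely-running pattern, consider a task that observes an async sequence of notifications (the notification used here is just an example):

```swift
import UIKit

// This task appears in the Alive lane for its entire lifetime,
// but it only counts as Running while it's handling a value.
let observerTask = Task {
    let activations = NotificationCenter.default.notifications(
        named: UIApplication.didBecomeActiveNotification
    )

    for await _ in activations {
        // A brief burst of Running time, then back to awaiting.
        print("app became active")
    }
}
```

Cancelling observerTask is what ultimately removes it from the Alive lane.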
Let’s update the code in our button handler one more time to introduce a bug:


Button(action: {
    Task {
        let fetcher = MovieFetcher()
        //let movies = try await fetcher.fetchAllMovies()
        let movies = try await fetcher.fetchEnrichedMovies(page: 1)

        for await value in NotificationCenter.default.publisher(for: UIApplication.willResignActiveNotification).values {
            print("will resign")
        }
    }
}, label: {
    Text("Fetch movies")
})

Every time we tap our button we create a new unstructured task, and that task starts iterating
over an infinite async sequence after fetching a bunch of movie data.
When we run Instruments and tap the button a couple of times, the graphs will look as follows:

Figure 23: Trace that shows a bug in our button handler

The Total tasks lane looks pretty much as expected. Every time we tap our button, more tasks
are created. The Running tasks lane looks pretty good too. We can see that our running tasks
spike every time we kick off work, and then they come back down to zero when the work is
done.


The alive tasks lane tells us that we have an issue though. We have four tasks that are alive
right now even though that might not be what we expected.
The reason these tasks are still alive is of course that I’ve introduced a bug on purpose. But
let’s say it wasn’t on purpose. Let’s say I want to figure out where I’m creating these four tasks
that are alive when I expected no tasks to be alive.
There are a couple of things we can do to start figuring out what’s happening. First, we’ll
want to narrow down the scope of what we’re looking at. Instruments’ bottom panel shows
all kinds of interesting information for ranges of time in the timeline. By default, the entire
timeline is selected.
To select a smaller timeframe, drag over the section you want to inspect so it becomes
highlighted. The bottom panel will adapt accordingly. In the divider between the bottom panel
and the timeline there are various ways to look at the data that Instruments collected.

Figure 24: Narrower time focus

If we choose to look at the task state summary, we can already see that there are a couple of
tasks there. To be precise, there are four tasks shown in the Continuation state. This means
that these tasks are waiting for their continuations to be called so they can be resumed. In
other words, these tasks are awaiting something. Note that these continuations aren’t the
kind of continuations you used in earlier chapters when you bridged existing code to Swift
Concurrency. These are continuations that are internal to Swift.
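As a refresher, the kind of continuation you write yourself when bridging callback-based code looks like the sketch below (loadMovieData(completion:) is a hypothetical callback-based API), whereas the continuations Instruments reports here are created by the runtime whenever a task suspends at an await:

```swift
import Foundation

// Hypothetical callback-based API we want to bridge.
func loadMovieData(completion: @escaping (Result<Data, Error>) -> Void) {
    // existing callback-based implementation...
}

// Bridged async version using a checked continuation.
func loadMovieData() async throws -> Data {
    try await withCheckedThrowingContinuation { continuation in
        loadMovieData { result in
            // Until resume is called, the awaiting task shows up
            // as suspended in Instruments.
            continuation.resume(with: result)
        }
    }
}
```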
When you hover over one of the tasks and you click the little arrow button that appears, you’ll
see the following information in the bottom panel:

Figure 25: Detailed information in bottom panel

This tells us a lot. Apparently there’s a closure inside of our ContentView that created a
task that’s currently active. We can start exploring from there and figure out that the only task
we create is the one we kick off in the button’s action handler.
Another interesting view to explore is the task forest. This view shows the same four tasks,
except we also see all the tasks that are associated with each of them. The task forest can be
useful when you want to gain insights into how different tasks in your codebase relate to each
other.


Figure 26: Task forest

The neat thing about the task lanes is that they provide lots of insight into the structure of our
code and the relationships that exist between our tasks. They do this at a level of detail
that we can’t really get with the Time Profiler, since our code isn’t actually doing anything
when we have tasks that are alive after they should have completed. These tasks exist, and
they are often an indication of memory leaks, but we can’t see them in the Time Profiler. For
that reason alone, you’ll want to look at the concurrency instrument every once in a while.
The second section in the Instruments template we just used is labelled Swift Actors. Let’s go
ahead and explore that section next.

Tracking actors with Instruments


When you’re working with highly concurrent code, there’s a good chance that you’ll leverage
actors to ensure safe access to one or more parts of your codebase. In fact, it’s quite likely that
you’ll use the @MainActor annotation in your codebase to make sure that certain UI related
state changes occur on the main thread at all times.
Because actors serialize the work they do, it can be highly undesirable to bombard a single
actor with lots and lots of calls that must be serialized, especially when that actor is the main
actor. If your main actor is busy processing lots of potentially long running tasks, the UI in your
app will be unresponsive because you’re blocking your main thread. When you’re blocking
the main thread, it’s often quite obvious to you and your users that something’s wrong. Your
app will be unresponsive after all.


On the other hand, when you do some expensive processing on an actor in the background,
it might be far less obvious that you’re not using your actor optimally. Your main sign that
something’s wrong will usually be that your work appears to take longer than it should, or that
something you wrote to run asynchronously in the background seems to process tasks one by
one instead of in parallel.
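To make the serialization point concrete, here’s a minimal sketch (the actor and its workload are made up for illustration) of concurrent callers queueing up behind a single actor:

```swift
import Foundation

// Every call to process(_:) is serialized by the actor, so the
// concurrent child tasks effectively run one after the other.
actor ImageProcessor {
    private(set) var processedCount = 0

    func process(_ data: Data) {
        // Imagine expensive, synchronous work here.
        processedCount += 1
    }
}

func processAll(_ items: [Data], using processor: ImageProcessor) async {
    await withTaskGroup(of: Void.self) { group in
        for item in items {
            group.addTask {
                await processor.process(item)
            }
        }
    }
}
```

Even though the task group spawns all of its children concurrently, the actor’s mailbox hands them to process(_:) one at a time.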
To help you detect and fix these kinds of issues, it’s highly recommended that you regularly
use Instruments on your app to keep an eye on how it performs work. You already
know how to read the Task related lanes in the Swift Concurrency Instrument, but when we
combine the data from those lanes with the actors lane we can really start to understand why
our app might not perform as well as we’d hope.
As an example, we can take a look at the Chapter 11 sample project in the book’s code bundle.
This project contains a version of the movie fetcher object that we used in Chapter 9, except
I’ve rewritten it to be quite inefficient. We can use this inefficient example to explore how the
actors lane in Instruments can be leveraged to find suboptimal tasks in our codebase.

Tip:
When following along with the steps in this section make sure to run a local http server
in the movies folder from the code bundle. In your terminal, navigate to the movies
folder and run python3 -m http.server 8080 to start your server. Check out the
README.md file in the book’s code bundle for more detailed instructions.

Before we look at any code, let’s go ahead and profile the sample app. Use cmd+I to launch
Instruments and choose the Swift Concurrency template. Once you’re recording in Instruments,
click the Fetch movies button in the app to start fetching movies. You should see the label in
the app change to say 20 movies were fetched relatively quickly.
However, when we inspect the Instruments trace we see an interesting piece of information in
our tasks lane.

Figure 27: Slow Tasks shown in the Tasks lane


In the top lane you can see that we have only one task running at a time while we have over
sixty tasks alive. That’s quite unexpected. The movie fetcher object was built to fetch as many
movies in parallel as possible so we would expect at least a couple of tasks to be running at
once.
The movie fetcher object itself was written as an actor because it holds the fetched movies as
mutable state. That means it only works on a single task at a time. We can actually see
that the movie fetcher being an actor is the entire reason for our slowdown when we dig a bit
deeper.
With the tasks lane selected, we can look at the bottom of Instruments and see a list of
Enqueued tasks. A task in the enqueued state is a task that isn’t running because it’s waiting
for an actor to process its messages. We have quite a few tasks in this state and we can explore
these tasks more in depth by pinning them to the timeline.

Figure 28: Enqueued tasks

Once you’ve pinned a task or actor to the timeline you can learn a lot more about what’s
happening. For example, after pinning one of our tasks to the timeline we can look at the
so-called narrative view to learn more about everything this task has been up to so far:


Figure 29: The narrative view for a task

This is pretty cool because we can see when our task was created, when it was in an enqueued
state (indicated by the red color on the timeline), when it was waiting for other work to be
completed, and more.
We can even see which actor has our task in the enqueued state. Right-clicking the actor
name allows us to pin that actor to the timeline so we can inspect the actor’s narrative view:


Figure 30: The narrative view for an actor

After zooming in on the timeline a bit we can read the bottom two lanes to see that the actor
is performing lots of small pieces of work. The bottommost lane shows the actor’s mailbox
queue. We can see that it floods with messages rather quickly, and then the actor slowly but
surely starts chugging away and clearing those messages.
If we look at the bottom panel in Instruments, we can see more information about what the
actor is working on at any given time. This allows us to see which functions or properties on
our actor are called often, and how long each call takes.


Figure 31: Detailed view of which functions are called

What’s interesting in the image above is that we see lots of calls to fetchDetailsFor(_:).
This method uses two async lets to read data from the network, so it should
really be running asynchronously without blocking our actor.
When we look at the two functions called via async let, we would expect those functions to be
asynchronous. They are responsible for reading data from the network after all:

func fetchDetailsFor(_ movie: Movie) async throws -> EnrichedMovie {
    async let crewMembersTask = fetchCrewMembersFor(movie)
    async let castMembersTask = fetchCastMembersFor(movie)

    let crew = try await crewMembersTask
    let cast = try await castMembersTask

    return (movie, crew, cast)
}

When we take a closer look at the fetchCrewMembersFor(_:) and fetchCastMembersFor(_:) methods, the mistakes become clearer.
I made the implementations for these methods quite inefficient on purpose because I
synchronously read the URLs with Data(contentsOf:) instead of using URLSession. Here’s
what the code for that looks like:

func fetchCrewMembersFor(_ movie: Movie) throws -> [CrewMember] {
    let url = URL(string: "http://127.0.0.1:8080/crew-\(movie.id).json")!
    let data = try Data(contentsOf: url)
    let decoder = JSONDecoder()
    return try decoder.decode([CrewMember].self, from: data)
}

func fetchCastMembersFor(_ movie: Movie) throws -> [CastMember] {
    let url = URL(string: "http://127.0.0.1:8080/cast-\(movie.id).json")!
    let data = try Data(contentsOf: url)
    let decoder = JSONDecoder()
    return try decoder.decode([CastMember].self, from: data)
}

Writing code this way is highly undesirable so I don’t recommend you do it. This code is purely
intended to illustrate how we can use Instruments to begin tracking down slow and blocking
code in our actors.
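For comparison, a non-blocking version of one of these methods would use URLSession’s async API instead of Data(contentsOf:). A sketch of what that could look like:

```swift
// URLSession suspends the calling task while the download is in
// flight, instead of blocking a thread like Data(contentsOf:) does.
func fetchCrewMembersFor(_ movie: Movie) async throws -> [CrewMember] {
    let url = URL(string: "http://127.0.0.1:8080/crew-\(movie.id).json")!
    let (data, _) = try await URLSession.shared.data(from: url)
    return try JSONDecoder().decode([CrewMember].self, from: data)
}
```

We’ll keep the blocking implementation in place for now, though, so we can keep using it to explore Instruments.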
One way to start resolving the problem we have with our code is to mark all of the data fetching
methods as async. So that would mean marking the two methods you just saw as well as
the fetchMovies(page:) method as async. We can do that as follows:

func fetchCrewMembersFor(_ movie: Movie) async throws -> [CrewMember] {
    let url = URL(string: "http://127.0.0.1:8080/crew-\(movie.id).json")!
    let data = try Data(contentsOf: url)
    let decoder = JSONDecoder()
    return try decoder.decode([CrewMember].self, from: data)
}

func fetchCastMembersFor(_ movie: Movie) async throws -> [CastMember] {
    let url = URL(string: "http://127.0.0.1:8080/cast-\(movie.id).json")!
    let data = try Data(contentsOf: url)
    let decoder = JSONDecoder()
    return try decoder.decode([CastMember].self, from: data)
}

func fetchMovies(page: Int = 1) async throws -> [Movie] {
    let url = URL(string: "http://127.0.0.1:8080/\(page).json")!
    let data = try Data(contentsOf: url)
    let decoder = JSONDecoder()
    return try decoder.decode([Movie].self, from: data)
}

Marking these functions as async allows the system to run them asynchronously
if possible. Let’s see what the Instruments trace looks like when we run the code now:

Figure 32: Resulting trace of marking functions as async

In this image you can see one of the many tasks that we’ve created along with our movie
fetcher actor. This image still looks a lot like the situation we had earlier so clearly marking
our functions as async is not the full solution to our performance issue.
While it’s true that the system can run an async method asynchronously on the global thread
pool, an async method that’s constrained to an actor will still need to be enqueued by the
actor.
What we’re looking for is to break free from the actor and run our code on the global thread
pool. To do this, we must mark the three data fetching methods as nonisolated:

nonisolated func fetchCrewMembersFor(_ movie: Movie) async throws -> [CrewMember] {
    let url = URL(string: "http://127.0.0.1:8080/crew-\(movie.id).json")!
    let data = try Data(contentsOf: url)
    let decoder = JSONDecoder()
    return try decoder.decode([CrewMember].self, from: data)
}

nonisolated func fetchCastMembersFor(_ movie: Movie) async throws -> [CastMember] {
    let url = URL(string: "http://127.0.0.1:8080/cast-\(movie.id).json")!
    let data = try Data(contentsOf: url)
    let decoder = JSONDecoder()
    return try decoder.decode([CastMember].self, from: data)
}

nonisolated func fetchMovies(page: Int = 1) async throws -> [Movie] {
    let url = URL(string: "http://127.0.0.1:8080/\(page).json")!
    let data = try Data(contentsOf: url)
    let decoder = JSONDecoder()
    return try decoder.decode([Movie].self, from: data)
}

Since none of these three methods interact with our actor’s mutable state it’s perfectly safe
for these methods to run on the global thread pool.
Running our app with Instruments again yields the following graphs:

Figure 33: Result of marking methods as nonisolated


This is already much better. We see that we’re enqueued for a little while but the actor is
chewing through way fewer tasks. I had to zoom in my Instruments timeline a bit more to
capture this graph than I had for the previous one.
However, we can push this to be even better. To understand how, we need to take a look at
the code in MovieFetcher. The fetchDetailsFor(_:) method is actor isolated even
though it never accesses any mutable state on the actor. This means that we can opt out of
actor isolation for this method so it can run asynchronously on the global thread pool without
any issues:

nonisolated func fetchDetailsFor(_ movie: Movie) async throws -> EnrichedMovie {
    async let crewMembersTask = fetchCrewMembersFor(movie)
    async let castMembersTask = fetchCastMembersFor(movie)

    let crew = try await crewMembersTask
    let cast = try await castMembersTask

    return (movie, crew, cast)
}

When running the app again the graph looks much, much better. In fact, there wasn’t much
of a graph to show, so I decided to show you the enqueued tasks list instead.

Figure 34: The trace after marking fetchDetailsFor(_:) as nonisolated

We no longer have a long list of tasks in the app that need to be enqueued on the actor.
We only have a single task enqueued, and that’s our button handler that calls
fetchEnrichedMovies(page:). Every other method in our actor is nonisolated, so calling
those methods doesn’t require any actor enqueueing.
Note that we only made all of these changes for the purpose of experimenting and learning.
In a real application I would probably suggest splitting all the async work, like fetching
data, out of the actor entirely. This actor is doing way too many non-actor tasks and it’s only a
matter of time before we run into mistakes surrounding our usage of nonisolated.
When an actor has more nonisolated members than isolated members, you should
probably reconsider whether what you’re doing makes sense.
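One possible shape for such a split (the names here are illustrative, not taken from the sample project) is to move the networking into a plain struct and keep the actor purely for guarding mutable state:

```swift
import Foundation

// The struct does the concurrent fetching; it has no mutable
// state, so it never needs actor isolation.
struct MovieAPI {
    func fetchMovies(page: Int) async throws -> [Movie] {
        let url = URL(string: "http://127.0.0.1:8080/\(page).json")!
        let (data, _) = try await URLSession.shared.data(from: url)
        return try JSONDecoder().decode([Movie].self, from: data)
    }
}

// The actor only protects the movies array.
actor MovieStore {
    private(set) var movies: [Movie] = []

    func add(_ newMovies: [Movie]) {
        movies.append(contentsOf: newMovies)
    }
}
```

With this shape there’s no nonisolated keyword to get wrong: everything on the actor is isolated, and everything that doesn’t need isolation lives elsewhere.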

In Summary
In this chapter, we took a deep dive into the Swift Concurrency Instruments template that Apple provides.
We started off by exploring the task related lanes to see how we can inspect the number of
tasks we created, which tasks are active, and which tasks are running. You learned about the
different task states and how you can leverage the task lanes to verify
whether your code runs as efficiently as you’d like. You also learned about the Task Forest, which
can help you inspect and understand Structured Concurrency relationships between tasks in
your app.
After that, we took a look at the actors lane in Instruments. You learned how the tasks lane
can highlight interesting actor related holdups in your code. I showed you how you can use
the narrative view for a task to learn more about the actor that your task is waiting on, and
then you saw that you can inspect that actor more deeply by pinning it to the timeline and
reading its narrative.
With this information we solved an example of a performance issue in this chapter’s sample
app, and we confirmed our solutions by measuring the improvements at every step.
In the next and final chapter of this book, you’ll learn more about migrating existing code
from Swift 5.10 over to Swift 6.


Chapter 12 - Migrating to Swift 6.2


Throughout this book you’ve learned everything there is to know about Swift concurrency.
Well, not everything, but you’ve learned a lot.
Swift Concurrency is a powerful paradigm, and the Swift 6 language mode puts that
paradigm on a pedestal so that you can benefit from all of the
compile-time data race safety that Swift 6 has to offer. In this chapter we’re going to take a look at
what the path from Swift 5.x to Swift 6.2 looks like, and how Xcode 26 handles both new and
existing projects.
By the end of this chapter you’re going to have a good understanding of how and when you
can migrate to Swift 6.2. We’ll dig into the following topics throughout this chapter:

• Enabling the Swift 6 language mode in existing and new projects
• Using the Swift 6 language mode in SPM packages
• Migrating from Swift 5.x to Swift 6.2

The changes that Apple made to Swift Concurrency in Xcode 26 make it so that you’ll need to
check various project settings before you can fully reason about a codebase that uses Swift
Concurrency. In this chapter I will try and cover as much of this as possible, but the main focus
will be on default settings rather than different variants and flavors of these settings. While
working on this chapter I had to make decisions about what to include, and what to omit.
Because not all readers will have read this book for Swift 5.x, 6.0, and 6.2 I will avoid comparing
every version of the language in this chapter. It would simply become too confusing to explain
every Swift version, especially for those that are just learning about concurrency for the first
time.
Our core focus in this chapter will be to learn about what it takes to enable the Swift 6 language
mode rather than updating code from Swift 5.x to Swift 6.0, and to Swift 6.2.
In other words, we’ll spend most of this chapter learning about enabling the Swift 6 language
mode for the Swift 6.2 compiler.


Enabling the Swift 6 language mode in Xcode
When you’re using Xcode 26, you’re using the Swift 6.2 compiler. The Swift 6.2 compiler allows
you to specify several configurations about how your code will work, and where it will run
when you’re using Swift Concurrency. We’ve talked about a lot of these settings already in
Chapter 3 - Awaiting your first async methods when we explored features like running code on
the main actor by default and inheriting actor isolation through approachable concurrency.
For this chapter, you’ll want to make sure you do not opt in to these new, convenient
features. If you do, none of the errors presented in this chapter will occur. In the real world,
that’s a good thing! You typically want to run your code on the main actor by default, and you
typically want to inherit the caller’s actor when calling nonisolated async functions. It makes
your code simpler and less likely to contain concurrency issues (because you’re using less
concurrency).
For the sake of this chapter though, I recommend that you try things the hard way. That will
help you become more comfortable and familiar with concurrency related issues, and their
possible fixes.
If you’ve gone through this chapter, try starting from scratch and opting in to main actor by
default and approachable concurrency. Then see how you can introduce concurrency in
modules like Networking where it makes sense. I’ll leave this as an exercise for you to go
through.
Back to the Swift language mode...
In addition to all the flags mentioned above, you can also specify how strict the compiler is
about checking your code for data races. By default, your project will use the Swift 5 language
mode (regardless of your other compiler settings). This means that you’re using Swift 6.2 and
have access to Swift 6.2 features, even though the compiler will enforce concurrency
rules as they were in Swift 5. You can slowly but surely bump up the compiler’s checks by adjusting
your project’s “strict concurrency” settings.
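If you manage build settings through xcconfig files, the relevant settings look roughly like this (a sketch; the SWIFT_STRICT_CONCURRENCY setting accepts minimal, targeted, or complete):

```
// Stay in the Swift 5 language mode while surfacing
// the strictest concurrency warnings.
SWIFT_VERSION = 5
SWIFT_STRICT_CONCURRENCY = complete
```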
Whether you open an existing project in Xcode, or you create a new project doesn’t matter.
Both will use the Swift 5 language mode by default.
This means that, by default, you won’t immediately see all kinds of concurrency errors when
you open an older project in Xcode 26. You won’t even see concurrency warnings unless
you’ve opted in using the instructions from Chapter 6 - Preventing data races with Swift
Concurrency.
If you do want to enable the Swift 6 language mode for your existing or new project, select
your project in Xcode’s navigator and go to its build settings.
On the build settings screen, look for the language version setting. You’ll see a
screen that looks a little bit like this.

Figure 35: The language mode setting in Xcode

Notice how Swift 5 is the selected language mode.


You can change this to Swift 6. After doing this your project will use the Swift 6 language mode
which means that you’re going to get all of Swift 6’s concurrency checking and data race safety
protections. Typically you don’t want to turn this on immediately. Usually you’ll move to strict
concurrency checking first using the steps described in Chapter 6 - Preventing data races with
Swift Concurrency.


Once you’ve enabled strict concurrency and resolved most or all of your warnings, that’s
when you want to turn on the Swift 6 language mode. Later in this chapter we’re going to take
a look at some strategies that you might employ to migrate your project, but the high
level overview is this: enable the warnings first, resolve them, and then move to the
Swift 6 language mode.
For new projects that you start from scratch I think it does make a lot of sense to try and enable
the Swift 6 language mode from the get-go. Most people will want to move there eventually
anyway so it makes sense for you to start using it as soon as possible. However, if you find
that you’re jumping through a lot of hoops or that you’re having trouble coding the way you’d
like to because Swift 6 is so strict about data safety, you might want to drop down to the Swift
5 language mode and revisit Swift 6 later. If you tried the Swift 6 mode in Xcode 16 and gave
up, you might want to try again with Xcode 26. Swift 6.2 introduces a lot of features that make
the compiler smarter about data race protection, which means it’ll complain less frequently
because it can detect that certain patterns of (technically) unsafe code are
actually safe.
Now that you’ve seen how to enable the Swift 6 language mode for Xcode projects let’s take a
look at how SPM packages can be migrated next.

Using the Swift 6 language mode in SPM packages
SPM packages don’t have a build settings configuration in the same way that Xcode projects
do. An SPM project will typically specify its Swift build tools using a comment that looks a
little bit like the one shown below:

// swift-tools-version: 5.10

Notice how the tools version specifies a specific Swift version. If you create a project or an
SPM package using an older version of Xcode or using command line tools before Xcode 26,
the comment will contain the tools version that you used to create your SPM package.
For example, that might be Swift 5.9 if you created your SPM package with Xcode 15.3.


When Xcode 16 or newer is your primary Xcode version, any new SPM packages that you create
will use the Swift 6 language mode, because the Swift 6 tools version is going to be written
into the comment at the top of your Package.swift file.
This means that, as opposed to Xcode projects where the default is still Swift 5, the default
for new SPM packages is actually Swift 6.
If you’re opening an existing SPM module in Xcode the Swift tools version will not be updated
which means that, if your tools version is 5.x, you’ll be using the Swift 6 compiler with the
Swift 5 language mode.
All in all, this makes SPM packages a little more confusing than Xcode projects. For an Xcode
project it’s clear: Xcode won’t use Swift 6 by default, and it won’t alter existing projects to use
the Swift 6 language mode.
For an SPM package, Xcode won’t alter the package to use the new language version either,
but new SPM packages will use the Swift 6 language mode. If you want to drop down to the
Swift 5 language mode you have several options. One approach is to edit the comment at
the top of the file and declare Swift 5.9 or Swift 5.10 as your tools version. This works, but it
limits the SPM features you can use: any SPM feature that became available with the Swift 6
compiler won’t be available if your selected tools version is lower than 6.0.
Instead, you can actually use the Swift 6 tools and set the Swift language mode on either a
package level or on a target level. Let’s take a look at how to do both. We’ll look at how to do
it for your entire SPM module first:

// swift-tools-version: 6.2
// The swift-tools-version declares the minimum version of Swift required to build this package.

import PackageDescription

let package = Package(
    name: "package_demo",
    targets: [
        // Targets are the basic building blocks of a package, defining a module or a test suite.
        // Targets can depend on other targets in this package and products from dependencies.
        .executableTarget(
            name: "package_demo")
    ],
    swiftLanguageModes: [.v5]
)

Every target in the Swift package will automatically use the Swift 5 language mode with the
setup shown above. If we want a specific target to use the Swift 5 language mode while other
targets use the Swift 6 language mode, we can opt that target into Swift 5 in our package
definition like this:

// swift-tools-version: 6.2
// The swift-tools-version declares the minimum version of Swift required to build this package.

import PackageDescription

let package = Package(
    name: "package_demo",
    targets: [
        // Targets are the basic building blocks of a package, defining a module or a test suite.
        // Targets can depend on other targets in this package and products from dependencies.
        .executableTarget(
            name: "package_demo",
            swiftSettings: [.swiftLanguageMode(.v5)]
        )
    ]
)

My recommendation for new packages is to have them use the Swift 6 language mode. The
reason is similar to why I recommend using Swift 6 in a new project: you’re
starting from scratch anyway, so you’re not held back by existing code you might have to
rewrite in order to adopt Swift 6.
It’s totally possible and valid to mix Swift 6 packages with Swift 5 packages so your new and
old code can coexist inside of your apps. This means that you could actually start writing new
features in Swift 6 if your codebase is modularized and you can have your existing features
continue to use the Swift 5 language mode.
Once time is on your side and you’ve resolved the strict concurrency warnings in your other
targets, you can switch those over to Swift 6 as well. In the next section we’ll take a closer look at
what that process looks like.

Migrating from Swift 5 to Swift 6


The process of migrating from Swift 5 to Swift 6 is not a straightforward one. It will typically
require a lot of code to change, which means that you’ll want to take the refactor slow and
avoid migrating everything all at once if you can.
Having a modularized codebase helps a lot with this.
With a modularized codebase you’ll be able to migrate your Swift packages over to Swift 6
while other packages stay on Swift 5. You can turn on strict concurrency checking at the
package level, resolve all the warnings, move that package to Swift 6, and then do the same
for the next package. This gives you a gradual migration strategy, which is a lot easier than
migrating everything all at once.
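Turning on strict concurrency checking for a single package can be done from its Package.swift. Here is a sketch of what that looks like for a target that is still in the Swift 5 language mode; the target name is illustrative, and in the Swift 5 tools this check is exposed as an experimental feature flag:

```swift
.target(
    name: "Networking",
    swiftSettings: [
        // Surfaces complete data-race checking as warnings while the
        // target stays in the Swift 5 language mode.
        .enableExperimentalFeature("StrictConcurrency")
    ]
),
```

Once the warnings for this target are resolved, the flag can be replaced with `.swiftLanguageMode(.v6)`.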
The hardest part will always be migrating code that depends on other modules. So if you have
code that depends on both Swift 5 and Swift 6 modules it’s quite likely that you’ll run into
some compatibility issues.
In this section I’d like to take a look at what it looks like to have packages that are already at
Swift 6 combined with projects that are at Swift 5.
The starting point for this section is essentially an app that has a couple of modules in it. We’ll
have four modules:

1. The app target itself

2. A networking package
3. A models package
4. A package with UI components

The app target is going to depend on all three of our packages. The networking package will
depend on the models package only, and the package with UI components will also depend
only on the models package. All of the packages will be at Swift 5 when we start this refactor.
As we refactor, I will highlight a couple of problems that you’re likely to run into in your own
projects. The bottom line with any migration from Swift 5 to 6 is that you need a good
understanding of how actors and Sendable work, as well as the essentials of how Swift
Concurrency and structured concurrency work.
If actors and Sendable are not something that you’re somewhat comfortable with, I highly
recommend reviewing the relevant chapters in this book before you start your migration.

Exploring the sample code


The sample code, as you know, consists of four modules: three Swift packages and an
application target. If you explore what every package and module looks like, you’ll see that
it’s really not that big of an application. The reason it’s not so big is that migrating a very
large sample app would be no more informative to you than migrating this one.
Sure, you might encounter more problems in a larger project, but it’s not really about the
individual problems; you know how to solve those, you just haven’t done it before. What
I’d rather focus on is the process of starting to migrate your packages and how you can tackle
this in a responsible way.
Step one in a migration to Swift 6, for me, is to look for Swift packages that I can easily
migrate.
For example, in the sample project the models package has no dependencies on other
packages. That means that we can migrate the models package without running into any
warnings that come from other packages.
If you were to start by migrating a package that does have dependencies, Swift would start
complaining about issues that are the result of those dependencies not yet being updated
for Swift 6.

Even though you should be looking at the models package in this case, I would like to turn on
Swift 6 mode for our application target just momentarily, so that you can see what the effect
of that is and how broad the concurrency warnings are going to be.
Navigate to the build settings of the app target and change the language mode to Swift 6:

Figure 36: The language mode setting in Xcode

After doing this, you’ll notice that the project no longer compiles. The first two errors that
show up for me are in the following files:

• CastViewModel.swift
• CrewViewModel.swift

Both of these files contain a function that creates a new Task. Inside of that task we interact
with the viewmodel itself as well as with the networking object. At the time of writing this,
the error message in both cases looks a little bit like this:

Task-isolated value of type ‘() async throws -> ()’ passed as a strongly transferred parameter; later accesses could race

This error message is a little bit strange and, in my opinion, you might consider it misleading.
It looks like there’s something wrong with the way we’re creating our task, but in reality the
problem is that we’re capturing non-Sendable types inside of our task.
We can fix this in part by making our view models Sendable by annotating them with the
@MainActor attribute. We can safely do this because we expect interactions with our view
models to happen on the main actor anyway. In modernized projects you will have opted in
to global actor isolation, and your code will run on the main actor by default, which would
make this a non-issue.
Since we decided to try things the hard way to understand some of the complexities of
migrating code, we can update the declarations of CastViewModel and CrewViewModel
to the following:

@Observable @MainActor
class CastViewModel {
    // ...
}

// and...
@Observable @MainActor
class CrewViewModel {
    // ...
}

With these changes in place, the project will still not compile but we’ll actually be able to see
what it’s like to migrate a module that has dependencies on other modules.
Xcode will show you an error in both the fetchCast and fetchCrew functions:

func fetchCast() {
    Task {
        // Non-sendable type '[CastMember]' returned by implicitly
        // asynchronous call to nonisolated function cannot cross actor
        // boundary
        let fetchedCast = try await networking.loadCast(for: movie)

        cast = fetchedCast
    }
}

I’m only showing you the error for the fetchCast function because the fetchCrew function is identical in structure.
Xcode also shows a warning to suggest a (temporary) fix for this problem. The warning looks
like this:
Add ‘@preconcurrency’ to treat ‘Sendable’-related errors from module ‘Models’ as warnings

This warning suggests that we suppress Sendable-related errors from modules that have not
yet been updated to the Swift 6 language mode.
In this case it’s saying that we can add @preconcurrency to our import of Models so that,
whenever we interact with Models and there is a problem related to sendability, we are not
prevented from compiling our application.
This is really useful when you’re migrating an application and you don’t own all of your
dependencies, because it allows you to perform the migration without having to wait for
third-party vendors. In this case, however, we do own all of the dependencies, so it makes a
lot of sense to abandon the migration of our app target for now and instead start migrating
our dependencies.
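In practice, the suggested fix amounts to a single change at the top of the file that imports the module (assuming the view models import Models directly):

```swift
// Sendable-related diagnostics that originate in the not-yet-migrated
// Models module are downgraded instead of blocking compilation.
@preconcurrency import Models
```

Once Models itself adopts the Swift 6 language mode, the compiler will tell you the attribute is no longer needed.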
As I mentioned earlier, models is the place to start because it has no dependencies, so let’s
take a look at what we need to do to make models work with our app.
Turn off the Swift 6 language mode for the app target and turn on the Swift 6 language mode
for the models package by updating the Models target in the Package.swift file like this:

.target(
    name: "Models",
    swiftSettings: [.swiftLanguageMode(.v6)]
),

When you try to compile the models package now, you will get a single error on the mock
property of Movie. The error looks a little bit like this:

Static property ‘mock’ is not concurrency-safe because non-‘Sendable’ type ‘Movie’ may
have shared mutable state

This error tells us that our Movie is not Sendable. However, it is a struct and it only has
immutable state, so in theory the compiler could know that our Movie is supposed to be
Sendable. Because Movie is a public struct, though, the compiler will not automatically
infer its sendability; public structs are never automatically Sendable outside of their own
module. We always have to declare the conformance by hand, so let’s go ahead and update
Movie to be Sendable:

public struct Movie: Decodable, Identifiable, Hashable, Sendable {
    // ...
}

All I had to do was add the Sendable conformance to the end of the declaration. The package
now compiles just fine, but we do know that the app target had an issue with our cast and
crew models being non-Sendable.
We should update those to be Sendable in the exact same way as we did for Movie. I won’t
show you the code here; I trust that you’re able to figure that one out on your own.
With these changes in place, our models package now builds in the Swift 6 language mode.
The next package that I think we should update is our networking package, since that only
has a dependency on models. Start by turning on the Swift 6 language mode for the
networking package by updating the Networking target in the Package.swift file like this:

.target(
    name: "Networking",
    swiftSettings: [.swiftLanguageMode(.v6)]
),

What’s quite interesting about the networking package is that even though we’ve just turned
on the Swift 6 language mode, there are no problems to resolve. There are issues that I know
of that we will need to tackle at some point, but for now let’s move on to the UIComponents
module to see if that has any problems when we turn on the Swift 6 language mode.
By now I trust that you know how to update the Swift settings for the UIComponents package.
If you’re not sure how to do it, take a look at how we did it for Networking and Models and
I’m sure you’ll be able to find the correct line to update.
Just like the networking module, the UI components module does not have any problems that
we need to resolve right now. That’s really nice! The next step for us is to move our app over
to strict concurrency checking so that we can continue building our project while we resolve
concurrency warnings.
Navigate to the project settings for the Chapter 12 project and look for the strict concurrency
checking settings. Make sure to set that to complete as shown in the image below.

Figure 37: The strict concurrency checking setting in Xcode
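If you manage build settings outside of the Xcode UI, the same switch is available as the SWIFT_STRICT_CONCURRENCY build setting, for example in an xcconfig file applied to the app target:

```
// Strict concurrency checking for the app target; valid values are
// minimal, targeted, and complete.
SWIFT_STRICT_CONCURRENCY = complete
```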

When you build the project with strict concurrency checking set to complete you will find that
there are a couple of compiler warnings that we need to resolve. The first one that I want to
take a look at is in the CastViewModel file:

Sending ‘self.networking’ risks causing data races; this is an error in the Swift 6 language mode
Sending main actor-isolated ‘self.networking’ to nonisolated instance method ‘loadCast(for:)’ risks causing data races between nonisolated and main actor-isolated uses

This warning is telling us that we’re sending our networking object from the main actor,
which is what our view model is isolated to, over to the nonisolated instance method
loadCast. The loadCast instance method is defined on Networking, and it’s an
asynchronous function that is not isolated to anything.
In other words, we’re taking our main actor-isolated networking object and performing
nonisolated work on it. There are two ways that we could fix this.
One is to make the entire Networking class main actor-isolated. This would put the
networking object on the main actor and it should fix our problems. However, I don’t
particularly like that solution.
I think we should make our networking object itself an actor. We can do this by updating the
Networking object as follows:

public actor Networking {
    // ...
}

With this change in place our compiler warning should be gone.

Note that if we had opted Networking in to global actor isolation, we wouldn’t have
seen this problem. Networking would have been isolated to the main actor which
means it would be sendable by default. A networking module is, in my opinion, a good
example of a module that should not opt in to main actor isolation by default due to the
fact that it might be decoding heavy payloads for example.
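For completeness, opting a target in to main actor isolation by default is itself a per-target Swift setting in the Swift 6.2 tools. A hypothetical UI-focused target might declare it like this, while a module like Networking would simply omit the setting:

```swift
.target(
    name: "UIComponents",
    swiftSettings: [
        // Unannotated declarations in this target default to @MainActor.
        .defaultIsolation(MainActor.self)
    ]
),
```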

By making Networking an actor, we also gain some data race protection that we didn’t have
before. The networking object has a cache that we were reading and updating from within
our asynchronous functions, which could have resulted in data races.
By making Networking an actor, we ensure that the cache is protected from being accessed
by multiple asynchronous functions at the same time.
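To make that protection concrete, here is a small sketch of the shape Networking now has. The cache property and method names are illustrative, not the exact implementation from the sample project:

```swift
public actor Networking {
    // Mutable state that used to be reachable from multiple
    // concurrent async calls.
    private var cache: [URL: Data] = [:]

    func cachedResponse(for url: URL) -> Data? {
        // Actor isolation serializes every access to `cache`,
        // so concurrent readers and writers can no longer race.
        cache[url]
    }

    func store(_ data: Data, for url: URL) {
        cache[url] = data
    }
}
```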

After making the Networking class into an actor, we only have a single warning left.
The following warning is shown in the MovieListViewModel file:

Passing closure as a ‘sending’ parameter risks causing data races between code in the
current task and concurrent execution of the closure

We’ve seen this before. It means that we’re capturing something that’s not Sendable inside of
our task while also being able to access it outside of the task. Because we’re seeing this in
a view model, we can annotate the view model with @MainActor, which should fix our
problem.

@Observable @MainActor
class MovieListViewModel {
    // ...
}

Having our view model main actor-isolated makes a lot of sense to me because we’re going to
be interacting with MovieListViewModel from the main actor most of the time anyway.
By making it @MainActor isolated, we know that we’re always going to be making state
changes on the correct thread, and we won’t run into any warnings about accidentally
manipulating UI data on a background thread.
Now that all warnings are gone we can set our app’s language mode back to Swift 6. You’ll
notice that we get no new errors which means we successfully migrated the sample app from
Swift 5 to Swift 6.
In a larger project this process is going to be a lot more involved. You’ll mainly want to look
out for logic changes that could be the result of migrating a piece of code from being
completion-handler based to being async.
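The kind of logic change to watch out for is subtle. A hypothetical before-and-after shows why: the await introduces a suspension point, so code that used to keep running after scheduling a completion handler may now execute in a different order:

```swift
// Before: the caller keeps running immediately; the handler fires later
// on whatever queue the implementation chooses.
func loadTitle(completion: @escaping (String) -> Void) {
    DispatchQueue.global().async {
        completion("Some title")
    }
}

// After: the caller suspends at the await and resumes with the value.
// Any state read before and after the await may have changed in between.
func loadTitle() async -> String {
    "Some title"
}
```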
Resolving certain warnings is also going to be a lot more complicated because you have way
more moving parts.
The process is always going to be the same.
Start by migrating your packages if you have them. Look for packages that have no
dependencies, or as few dependencies as possible. Use @preconcurrency whenever it’s
relevant to import modules into your app without being slowed down by any warnings that
these modules produce.
Once you’ve migrated your packages you can migrate your app, and hopefully, by going step
by step, you’ll be able to complete the refactor relatively quickly.
Migrating to Swift 6 should be done slowly, carefully, and only when you feel like you have a
good grasp of everything that Swift Concurrency has to offer. If you’re just not comfortable
yet with actors and sendability, or if you’ve attempted to migrate but were overwhelmed by
all the work you would have to do, that’s completely okay.
The Swift 5 language mode is not going to go anywhere anytime soon.
This means that you can use Swift 5 with strict concurrency checking for a really long time
before you have to move on to Swift 6. The fact that you can mix packages that are written in
the Swift 6 language mode into a project that still uses the Swift 5 language mode means that
you can actually start using Swift 6 without ever migrating your existing code. There might be
some compatibility issues here and there, and you’ll have to be on the lookout for those, but
they can usually be resolved.
One final tip before we wrap up this chapter: always read your warnings and errors really
carefully. They can be quite cryptic, and it might take you a couple of reads to extract the
information that you need in order to understand what is wrong.
I know that Apple will be working on improving these error messages in future Swift versions,
so hopefully by the time you’re reading this the situation has improved a little bit and a book
update is on its way. That doesn’t change the advice: reading errors very carefully will make
your migration a lot smoother.

In Summary
This final chapter of Practical Swift Concurrency has provided you with the final tools that you
need to start your journey into Swift 6.
You already knew a lot about how Swift Concurrency works, and now you also know how
Swift 5.10 and Swift 6 are related to each other and which path you can take to migrate your
code over. You’ve also learned that you don’t have to migrate any time soon.

If you prefer to stay with the Swift 5 language mode just a little bit longer, you absolutely can.
There’s no plan from Apple to force you to update from the Swift 5 language mode to Swift 6,
so if you don’t have the time to start your migration to Swift 6 just yet, it doesn’t have to be
your top priority.
I think that for most projects, opting in to Swift 6.2’s Approachable Concurrency and default
main actor isolation should be the first step. After that you’ll want to turn on strict
concurrency checks, and only then should you move to the Swift 6 language mode. Swift 6.2
makes this much, much easier than it was before.
With this final set of skills and examples our journey together has come to an end. But your
journey is just getting started! There’s still much to learn about Swift Concurrency, and there is
still lots of practice you can do before you consider yourself an expert at Swift Concurrency.
This book should have given you a good sense of the important parts of Swift Concurrency.
And you hopefully feel confident that you know enough about the topic to start using it today.
Or maybe you already started using it in some bits of your app while reading this book.
I would like to take this moment of your time to sincerely thank you for reading this book all
the way to the end. Writing this book took a lot of time and hard work and you just made it
worth every minute I spent on it. So thank you.
Cheers,
Donny
