Threads allow us to run code asynchronously and in parallel, but it’s often unclear how a task relates to a thread in Swift Concurrency. You might know how to run something on the main thread using @MainActor, but what happens if you create a new unstructured task using Task { }?

In this lesson, we’re going to dive deeper into how threads relate to tasks in Swift Concurrency. This will be an in-depth article covering many topics, some of which will be explained in more detail in dedicated lessons later in this module.

What is a thread?

A thread is a system-level resource that runs a sequence of instructions. The operating system manages threads, and creating or switching between them carries a relatively high overhead. Multithreading allows multiple operations to run concurrently, but managing these threads manually can be complex.

Due to this overhead and complexity, having the Swift Concurrency framework is a blessing: it saves us from having to think directly about threads and from making many potential mistakes.

Yet, if you’ve been developing in Swift for a while, you might be used to interacting more directly with threads. I personally did, and it created a “threading mindset”: when writing performant code, I often think about which thread it should execute on. Therefore, it’s good to understand how Swift Concurrency works with threads and how much of that responsibility it takes away.

Swift Concurrency and Threads

The concurrency model in Swift is built on top of threads, but you’ll never interact with them directly. Swift doesn’t make any guarantee about which thread a function will run on.

For example, an async function could give up the thread it’s running on, letting another async function run on that thread while the first one is suspended. When that first function resumes, there’s no guarantee about which thread it will continue on.

Suspension points and threads

When working with tasks, there will be so-called suspension points. For example, when you use await, you’re essentially pausing execution of that piece of code until the asynchronous function returns.

This is also called yielding the thread, as Swift suspends the execution of your code on the current thread and might run some other code on that thread instead. This clearly explains why there’s no direct relationship between a single task and a single thread, and it also shows how well Swift can optimize concurrency.
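
As a minimal sketch of what this looks like in code (the loadMessage and fetchRemoteMessage functions are hypothetical, with Task.sleep standing in for real work), a suspension point sits at every await:

/// A hypothetical async function; Task.sleep stands in for a network call.
func fetchRemoteMessage() async throws -> String {
    try await Task.sleep(for: .seconds(1))
    return "Hello"
}

func loadMessage() async throws -> String {
    // Before the await, this code runs on some thread from the cooperative pool.
    let message = try await fetchRemoteMessage() // Suspension point: the thread is yielded here.
    // After resuming, there's no guarantee we're back on the same thread.
    return "Received: \(message)"
}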

Tasks: A Higher-Level Abstraction

A task in Swift Concurrency is a unit of asynchronous work that runs within Swift’s cooperative thread pool. Tasks are not tied to a specific thread; they are scheduled onto and executed by whichever threads are available. Swift dynamically manages task execution and ensures efficient thread usage. In other words, it prevents creating more threads than necessary, optimizing CPU efficiency.

What is Swift’s cooperative thread pool?

The cooperative thread pool is the execution model that Swift Concurrency uses to manage the execution of asynchronous tasks efficiently. Instead of creating a separate thread for each task (which can lead to excessive context switching and resource consumption), Swift dynamically schedules tasks on a limited number of system threads.
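
You can make this visible with a rough experiment (a sketch assuming the Swift 5 language mode; in Swift 6 you’d use the Thread.currentThread workaround shown later in this lesson): launch far more tasks than your machine has CPU cores and watch the same thread numbers keep repeating.

import Foundation

// A rough experiment: launch far more tasks than there are CPU cores.
// The printed thread descriptions keep repeating, because the cooperative
// pool only contains roughly one thread per core.
func floodThePool() {
    for index in 1...100 {
        Task {
            print("Task \(index) runs on thread: \(Thread.current)")
        }
    }
}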

How Tasks are mapped to Threads

The cooperative thread pool described earlier is used to execute tasks efficiently. It avoids blocking threads and reduces unnecessary thread creation. This is a behind-the-scenes insight into why Swift Concurrency performs so much better than older approaches like GCD.

The system creates only as many threads as there are CPU cores. Tasks are scheduled onto available threads rather than each getting its own thread. When a suspension point occurs (as with await), the task is suspended and another task is allowed to run on the same thread.

Here’s a code example to demonstrate this behavior:

struct ThreadingDemonstrator {
    private func firstTask() async throws {
        print("Task 1 started on thread: \(Thread.current)")
        // Suspension point: the task sleeps, but the thread is freed up
        // to run other work in the meantime.
        try await Task.sleep(for: .seconds(2))
        print("Task 1 resumed on thread: \(Thread.current)")
    }

    private func secondTask() async {
        print("Task 2 started on thread: \(Thread.current)")
    }

    func demonstrate() {
        // Two unstructured tasks, scheduled onto the cooperative thread pool.
        Task {
            try await firstTask()
        }
        Task {
            await secondTask()
        }
    }
}

Note that you can only print Thread.current in the Swift 5 language mode. In the Swift 6 language mode and above, you’ll have to make use of this code extension (included in the sample code):

extension Thread {
    /// This is a workaround for compiler error:
    /// Class property 'current' is unavailable from asynchronous contexts; Thread.current cannot be used from async contexts.
    /// See: https://github.com/swiftlang/swift-corelibs-foundation/issues/5139
    public nonisolated static var currentThread: Thread {
        return Thread.current
    }
}

The above code example might print out something like:

Task 1 started on thread: <NSThread: 0x600001752200>{number = 3, name = (null)}
Task 2 started on thread: <NSThread: 0x6000017b03c0>{number = 8, name = (null)}
Task 1 resumed on thread: <NSThread: 0x60000176ecc0>{number = 7, name = (null)}

In this example output, Task 1 resumed on a different thread than the one it started on, and Task 2 ran on yet another thread in between. A task is clearly not glued to the thread it started on.

This is possible because Swift Concurrency does not block threads while awaiting: in this case, no thread was blocked while Task 1 was sleeping, so the pool was free to run other work and to resume Task 1 on whichever thread was available.

Can Thread explosion still happen in Swift Concurrency?

The traditional Grand Central Dispatch (GCD) model could lead to a so-called thread explosion: too many threads get created while many of them are blocked. This results in excessive memory usage (each thread needs its own stack), heavy context switching, and overall degraded performance.

Swift Concurrency avoids thread explosion by only creating as many threads as CPU cores. It uses continuations instead of blocking threads, which allows more efficient use of available threads. This also ensures threads always make forward progress.
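
The same principle shows up at the API level when you bridge older callback-based code with a checked continuation: instead of blocking a thread while waiting for the callback, the task simply suspends and resumes once the continuation is called. Here’s a minimal sketch (the callback-based loadData function is hypothetical):

import Foundation

/// A hypothetical callback-based API, standing in for older completion-handler code.
func loadData(completion: @escaping @Sendable (Data) -> Void) {
    DispatchQueue.global().async {
        completion(Data("Hello".utf8))
    }
}

/// Bridging the callback into async/await.
/// While waiting for the callback, no thread from the cooperative pool is blocked;
/// the task is suspended and resumed once the continuation is called.
func loadData() async -> Data {
    await withCheckedContinuation { continuation in
        loadData { data in
            continuation.resume(returning: data)
        }
    }
}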

Does Swift Concurrency’s Limited Thread Pool Reduce Performance Compared to GCD?

No, Swift Concurrency does not sacrifice performance compared to GCD, even though it limits the number of threads to match the number of CPU cores. Because it schedules work across those threads more efficiently, it often outperforms GCD in real-world scenarios.

Why fewer threads don’t mean less performance

It may sound counterintuitive, but fewer threads don’t mean less performance. Traditional GCD can create many more threads than there are CPU cores. While this may seem beneficial, it can lead to the earlier-mentioned thread explosion, excessive context switching, and CPU inefficiency.

Swift Concurrency, on the other hand, uses a fixed number of threads matching the CPU core count. It prevents creating too many threads and avoids wasting CPU cycles. Because it relies on continuations instead of blocking threads, it keeps the CPU cores busy without excessive context switching, which adds up when it comes to performance.

This reduced CPU overhead allows more work to be done per unit of time, often outperforming GCD.

Common misconceptions

To finalize this lesson, I’d like to cover a few common misconceptions to make you better understand Swift Concurrency.

Misconception 1: Each Task runs on a new thread

Wrong. As we’ve learned, tasks share a limited thread pool and are scheduled efficiently. This means a task could reuse a thread that was previously used by a now-suspended task.

Misconception 2: await blocks the thread

Wrong. await suspends the task without blocking the thread, allowing other tasks to run.
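
To make the difference concrete, here’s a small sketch contrasting a blocking sleep with a suspending one:

import Foundation

func compareSleeps() async throws {
    // Thread.sleep blocks the current thread for two seconds:
    // no other task can use it during that time. Avoid this in async code.
    Thread.sleep(forTimeInterval: 2)

    // Task.sleep suspends only this task for two seconds:
    // the thread is free to run other tasks in the meantime.
    try await Task.sleep(for: .seconds(2))
}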

Misconception 3: Task Execution Order is Guaranteed

Wrong. Tasks are executed based on system scheduling, not necessarily in the order in which they were created. If one task must finish before another, use await to enforce that order, as shown in the sketch below.
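
Here’s a minimal sketch (with hypothetical first() and second() functions) showing the difference between firing off unstructured tasks and awaiting work sequentially:

func first() async { print("first") }
func second() async { print("second") }

func demonstrateOrdering() async {
    // These two unstructured tasks may run and finish in either order:
    Task { await first() }
    Task { await second() }

    // To guarantee that first() finishes before second() starts,
    // await them sequentially within a single task:
    await first()
    await second()
}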

Summary

Swift Concurrency helps you manage threads efficiently. In fact, you could say you no longer have to think about threads at all and only have to decide whether something should run on the main thread or a background thread. Even though only as many threads as there are cores are used, this still results in better performance than GCD in most real-world cases.

In the next lesson, we’ll dive deeper into some of the topics from this lesson: getting rid of the threading mindset.