What is Swift Concurrency?


Greetings, traveler!

Swift Concurrency was introduced to address a long-standing problem in application development: writing concurrent code that remains correct as systems grow in complexity. Coordinating work across threads has traditionally relied on conventions and discipline, which makes mistakes easy to introduce and difficult to detect. The language provided tools to perform concurrent work, but it did not provide a consistent way to guarantee that this work was safe.

A similar shift has already happened in Swift before. When the language evolved from Objective-C, many assumptions that previously lived in documentation became part of the type system. Optionals made the absence of a value explicit, and the compiler began enforcing rules that were previously left to developer discipline. Swift Concurrency follows the same direction. It brings guarantees about how data is accessed into the language itself, allowing the compiler to reason about correctness instead of relying entirely on runtime behavior.

This article looks at Swift Concurrency from that perspective. It starts with the motivation behind the model, builds a practical way to reason about isolation and data flow, examines patterns that often lead to issues, and finally explores how actors work under the hood. The goal is to provide a consistent mental model that makes both the language features and compiler diagnostics easier to understand in practice.

Swift Concurrency: Origins

To understand where Swift Concurrency comes from, it is useful to look back at the ideas that shaped it. One of the earliest and most influential documents is the Concurrency Manifesto written by Chris Lattner, the original creator of Swift. The manifesto was published in 2017, at a time when Swift already had a growing ecosystem but lacked a cohesive model for handling concurrency at the language level.

In that document, Lattner outlined a long-term vision for how Swift could evolve to make concurrent programming safer and more expressive. Rather than proposing a single feature, the manifesto described a set of goals and building blocks: structured concurrency, async/await syntax, actor-based isolation, and stronger guarantees enforced by the compiler. Many of the problems highlighted there were already well known, including the difficulty of reasoning about shared mutable state, the lack of compiler support for thread safety, and the complexity of existing abstractions built on top of threads and queues.

The manifesto was not a specification and did not define exact APIs. Instead, it served as a direction for the language, framing concurrency as a first-class concern that required deep integration into the type system and runtime. Over the following years, Swift gradually introduced many of the ideas described in that document, culminating in the modern concurrency model.

Looking back, the Concurrency Manifesto helps explain why Swift Concurrency is designed the way it is. The focus on isolation, structured execution, and compile-time guarantees is not accidental. It reflects an intentional shift away from ad hoc concurrency patterns toward a model where correctness can be reasoned about and enforced by the language itself.

Before Swift Concurrency

Before Swift Concurrency became part of the language, building responsive applications already required dealing with concurrency in one form or another. Developers relied on Grand Central Dispatch, manually switching between queues, coordinating background work through callbacks, and, in more complex cases, layering abstractions such as Combine or RxSwift on top. These tools were powerful and flexible, and for many teams they worked well enough in practice. At the same time, they relied heavily on conventions that lived outside the type system. The code could compile and run even when those conventions were violated, which made certain classes of errors both easy to introduce and hard to detect.

Data races

One of the most persistent issues in that model was the data race. When multiple threads access the same piece of mutable memory without proper coordination, the result is undefined behavior. In practice, this rarely leads to immediate and obvious failures. More often, it causes subtle memory corruption that surfaces much later, in a completely unrelated part of the application. A crash might occur long after the original mistake, triggered by code that simply happened to read already corrupted state. This non-deterministic nature makes data races particularly difficult to debug, especially in large codebases where the original source of the problem is far removed from its visible effects.
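The pattern is easy to reproduce with pre-concurrency tools. Below is a minimal sketch using Grand Central Dispatch; the names are illustrative, and the racy counter lives in manually allocated memory so the demonstration bypasses Swift's dynamic exclusivity checking (with a plain var, debug builds may trap instead of silently losing updates):

```swift
import Foundation

func raceDemo() -> (unsafe: Int, safe: Int) {
    let racy = UnsafeMutablePointer<Int>.allocate(capacity: 1)
    racy.initialize(to: 0)
    defer { racy.deallocate() }

    // 10,000 unsynchronized read-modify-write cycles: increments can
    // interleave and be lost, so the final value is undefined.
    DispatchQueue.concurrentPerform(iterations: 10_000) { _ in
        racy.pointee += 1
    }

    // The same work guarded by a lock is deterministic.
    let lock = NSLock()
    var safe = 0
    DispatchQueue.concurrentPerform(iterations: 10_000) { _ in
        lock.lock()
        safe += 1
        lock.unlock()
    }

    return (racy.pointee, safe)
}
```

On a multi-core machine the unsafe total often comes out below 10,000, and nothing in the code signals the mistake. That silence is precisely the class of bug the isolation model is designed to rule out.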

A shift in perspective

Swift Concurrency addresses this problem by shifting the focus away from threads and toward data access. Instead of asking where a piece of code runs, the system is designed around the question of who is allowed to access a particular piece of data at any given time. This shift may seem subtle, but it changes how concurrency is expressed and reasoned about. Threads, queues, and locks still exist at the implementation level, but they are no longer the primary abstraction developers interact with.

Isolation as a core concept

The central concept that enables this shift is isolation. Isolation describes a boundary around data that prevents concurrent mutation from multiple execution contexts. It does not correspond directly to a thread, a queue, or a lock. Rather, it is an abstraction that can be implemented using any of those mechanisms. By introducing a common model, Swift provides a way to describe thread safety in a consistent and verifiable form. Instead of relying on implicit rules, isolation becomes part of the program structure.

A familiar example helps illustrate this transition. Updating user interface elements has always required execution on the main thread. Previously, this requirement existed as documentation and developer knowledge. It was easy to overlook, and violations could go unnoticed until they caused unpredictable behavior. With Swift Concurrency, this constraint is expressed explicitly through annotations such as @MainActor. Once applied, the compiler understands that the annotated code belongs to a specific isolation domain and enforces correct usage at compile time.

Static and dynamic isolation

This leads to an important distinction between static and dynamic isolation. Static isolation is encoded directly in the type system through attributes and function signatures. It is visible to the compiler and does not depend on runtime behavior. When a type or function is annotated with @MainActor, the compiler can reason about all interactions with it and enforce correct access patterns.

Dynamic isolation, on the other hand, comes into play when certain guarantees cannot be expressed in the type system. In such cases, the developer provides additional information about how the code behaves at runtime, effectively bridging the gap between what the compiler can verify and what actually happens during execution.
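Swift exposes this bridging through runtime APIs such as MainActor.assumeIsolated, which asserts a guarantee the type system cannot see. A sketch with illustrative names; the framework contract described in the comments is assumed:

```swift
@MainActor
enum FrameStats {
    static var renderedFrames = 0
}

// Called from a C-style callback the compiler cannot reason about,
// but documented by a (hypothetical) framework to always fire on the
// main thread. `assumeIsolated` turns that runtime contract into
// synchronous access to MainActor state — and traps if it is wrong.
func onFrameRendered() {
    MainActor.assumeIsolated {
        FrameStats.renderedFrames += 1
    }
}
```

If the contract is ever violated and the callback arrives on another thread, the program stops immediately instead of corrupting state. That trap is the dynamic counterpart of a compile-time isolation error.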

Taken together, these ideas point to a more precise way of describing Swift Concurrency. It builds on existing concurrency mechanisms but introduces a layer of compile-time guarantees that formalize how data is accessed and shared. In that sense, Swift Concurrency can be understood as a system where the compiler participates directly in enforcing correctness, rather than leaving it entirely to runtime behavior and developer discipline.

Mental model

Understanding Swift Concurrency becomes significantly easier once the focus shifts from syntax to behavior. The language introduces several new constructs, but most of them revolve around a small set of ideas. The difficulty usually comes from trying to map them directly to threads or queues, which leads to incorrect assumptions about how code executes.

Async/await is not about threads

An async function describes work that may need to suspend and resume later. The keyword itself says nothing about where that work runs. It only expresses that execution can pause at certain points and continue when the awaited operation completes.

This distinction matters in practice. Marking a function as async does not automatically move it off the main thread. If the function performs synchronous, CPU-heavy work, it will still block the thread it runs on:

@MainActor
func processImage() async {
    let pixels = loadLargeImage()
    let result = applyFilter(pixels) // CPU work, no suspension
    display(result)
}

Even though the function is async, the expensive processing runs synchronously and can affect UI responsiveness. Suspension only happens at explicit await points, not around arbitrary code.

Tasks define execution

Async functions describe what can happen. Tasks are what actually execute that work. Without a task, an async function remains just a declaration.

Tasks also define the structure of concurrency. When you create a task, you are starting a unit of work that can be awaited, cancelled, or grouped with other tasks. This leads to structured concurrency, where work is organized as a hierarchy instead of a collection of unrelated operations.

func loadDashboard() async {
    async let stats = fetchStats()
    async let notifications = fetchNotifications()
    async let messages = fetchMessages()

    let result = await (stats, notifications, messages)
    render(result)
}

In this example, all three operations begin at the same time and are tied to the lifetime of the parent task. If the parent is cancelled, all children are cancelled as well. This structure is what makes concurrent code easier to reason about.
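Cancellation in this model is cooperative: work running inside a task observes it at suspension points or explicit checks. A sketch with illustrative names:

```swift
// `Task.sleep` and `Task.checkCancellation` are cancellation points:
// once the task is cancelled, both throw CancellationError, unwinding
// the work instead of letting it run to completion.
func fetchStats() async throws -> [Int] {
    try await Task.sleep(nanoseconds: 1_000_000_000) // throws early if cancelled
    try Task.checkCancellation()
    return [1, 2, 3]
}

let parent = Task {
    try await fetchStats()
}
parent.cancel() // the sleep inside is interrupted almost immediately
```

Nothing forcibly stops the task; it exits because its own suspension points cooperate, which keeps cleanup predictable.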

Isolation domains

Once work is defined, the next question is where it is allowed to access data. Swift answers this through isolation domains.

There are three common cases:

  • @MainActor, which represents a shared domain tied to the main thread
  • custom actors, which define their own isolated state
  • nonisolated code, which does not belong to any specific domain

Each domain provides a guarantee that its data will not be accessed concurrently from multiple contexts.

@MainActor
final class ProfileViewModel {
    var name: String = ""

    func updateName(_ newValue: String) {
        name = newValue
    }
}

Here, all access to name is restricted to the MainActor domain. Any code that interacts with it must either already be in that domain or explicitly cross the boundary.

Custom actors define their own isolation:

actor Inventory {
    private var items: [String: Int] = [:]

    func add(_ name: String, count: Int) {
        items[name, default: 0] += count
    }

    func count(for name: String) -> Int {
        items[name, default: 0]
    }
}

The actor guarantees that its internal state is accessed sequentially, regardless of how many concurrent callers interact with it.
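That guarantee can be exercised directly: hammering the actor from many concurrent tasks still produces a consistent result. A usage sketch (the actor is repeated so the example stands alone):

```swift
actor Inventory {
    private var items: [String: Int] = [:]

    func add(_ name: String, count: Int) {
        items[name, default: 0] += count
    }

    func count(for name: String) -> Int {
        items[name, default: 0]
    }
}

// 1,000 concurrent increments; the actor serializes them, so none
// are lost — unlike an unsynchronized dictionary.
let inventory = Inventory()
await withTaskGroup(of: Void.self) { group in
    for _ in 0..<1_000 {
        group.addTask {
            await inventory.add("bolt", count: 1)
        }
    }
}
let total = await inventory.count(for: "bolt") // always 1_000
```

The task group waits for every child before returning, so by the time the count is read, all increments have been applied in some sequential order.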

Isolation propagates

The most important rule in Swift Concurrency is that isolation flows through the code by default. Once execution starts in a given domain, it remains there unless explicitly changed.

This applies to function calls:

@MainActor
func refreshUI() {
    updateHeader()
}

func updateHeader() {
    // Runs on MainActor because the caller does
}

It also applies to closures:

@MainActor
func configure() {
    let action = {
        updateState()
    }
    action()
}

The closure inherits the same isolation as the surrounding context.

Tasks behave in the same way:

@MainActor
func load() {
    Task {
        updateState() // still on MainActor
    }
}

This propagation model removes the need to constantly reason about thread switches. The default behavior is predictable, and deviations are explicit.

Sendable is about data, not execution

Isolation protects data within a domain, but real applications need to move data across boundaries. When that happens, Swift verifies that the data is safe to share.

The Sendable protocol expresses this requirement. A type that conforms to Sendable can cross isolation boundaries without introducing race conditions.

Value types are usually safe because they are copied:

struct Session: Sendable {
    let id: UUID
    let token: String
}

Reference types require more care. If a class contains mutable state, sharing it across domains can lead to concurrent mutation:

final class MutableCache {
    var storage: [String: Data] = [:]
}

Passing this object between actors would allow multiple contexts to modify the same instance, which violates isolation guarantees. The compiler enforces these constraints and requires explicit confirmation when safety cannot be inferred.
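When a reference type genuinely must cross boundaries, one escape hatch is to guard its state manually and vouch for the result with @unchecked Sendable. A sketch; note that the responsibility for correctness moves back to the developer, which is exactly the discipline Swift Concurrency otherwise removes:

```swift
import Foundation

// The lock serializes every access to `storage`, and the
// `@unchecked Sendable` conformance tells the compiler to trust that
// claim instead of verifying it. Use sparingly.
final class LockedCache: @unchecked Sendable {
    private let lock = NSLock()
    private var storage: [String: Data] = [:]

    func value(for key: String) -> Data? {
        lock.lock()
        defer { lock.unlock() }
        return storage[key]
    }

    func set(_ value: Data, for key: String) {
        lock.lock()
        defer { lock.unlock() }
        storage[key] = value
    }
}
```

Every future modification of this class has to preserve the locking discipline by hand, which is why an actor is usually the better default.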

Leaving the main actor

In many applications, most code runs safely on the main actor, especially when the work is I/O-bound and relies on suspension rather than computation. However, CPU-intensive tasks require explicit separation.

This can be expressed by moving work into a different execution context:

func generateThumbnail(from data: Data) async -> Image {
    await Task.detached {
        decodeAndResize(data)
    }.value
}

Here, the heavy processing is moved away from the caller’s isolation domain. The result is then awaited and brought back.

The key point is that leaving the main actor is always an explicit decision. The system does not automatically distribute work across threads.

A simplified model

All of these concepts can be reduced to a small set of rules that describe how Swift Concurrency behaves in practice.

Execution typically starts on the main actor. From there, isolation propagates through function calls, closures, and tasks without additional annotations. When work needs to run in a different context, the transition is expressed explicitly, either by introducing a new actor or by opting into concurrent execution. Whenever data crosses these boundaries, the compiler verifies that it is safe to do so.

This model provides a predictable way to reason about concurrency without constantly tracking threads or queues. The compiler enforces the rules, and the structure of the code reflects how data is accessed and shared.

Where things break

The mental model described earlier is simple, but it only works as long as the rules behind it remain intact. Most issues developers encounter with Swift Concurrency are not caused by the API itself. They appear when the model of isolation is violated in subtle ways. The compiler often points to the problem, but the underlying reason is usually a mismatch between how the system is designed and how the code is structured.

Mixed isolation

One of the more subtle issues is mixing multiple isolation domains inside a single type. This usually happens when individual properties are annotated differently from the type itself.

final class SessionStore {
    var token: String = ""

    @MainActor
    var lastAccessDate: Date = Date()
}

At first glance, this may look reasonable. One property is UI-related, the other is not. In practice, this creates a type that does not belong to any single domain. It becomes difficult to move instances of this type across contexts, and certain properties may become inaccessible depending on where the instance was created.

A more predictable approach is to assign isolation at the type level:

@MainActor
final class SessionStore {
    var token: String = ""
    var lastAccessDate: Date = Date()
}

This keeps ownership clear and avoids partial isolation.

Detached tasks

Creating detached tasks is a common way to move work away from the current context. The API is convenient and resembles older patterns based on global queues, which makes it appealing.

@MainActor
func refreshData() {
    Task.detached {
        await reloadFromDisk()
    }
}

The issue is that a detached task does not inherit any context. It does not carry actor isolation, priority, or task-local values. From the system’s perspective, it is a completely new execution root.

In many cases, the intent is simply to perform asynchronous work without blocking the caller. That can be achieved without losing context:

@MainActor
func refreshData() {
    Task {
        await reloadFromDisk()
    }
}

nonisolated func reloadFromDisk() async {
    // background work
}

This approach makes the transition explicit and avoids unexpected behavior caused by losing inherited state.

With recent versions of Swift, there is also a more explicit option for CPU-intensive work. The @concurrent attribute allows a function to opt out of the caller’s isolation domain and execute on the cooperative thread pool:

@concurrent
func reloadFromDisk() async {
    // CPU-bound processing
}

This approach makes the intent visible at the declaration level. Instead of creating a detached task and losing all context, the code defines where the work should run and keeps the surrounding structure intact. In practice, this tends to produce code that is easier to reason about and better aligned with the isolation model.

MainActor.run

Another pattern that appears frequently is manually hopping back to the main actor using MainActor.run.

func load() async {
    let data = await fetchData()
    await MainActor.run {
        self.items = data
    }
}

This works, but it moves responsibility from the type system to runtime behavior. The compiler cannot reason about the intent of the code in the same way.

In most cases, the same result can be achieved by annotating the function:

@MainActor
func load() async {
    items = await fetchData()
}

The second version expresses the constraint directly in the type system, which allows the compiler to enforce it consistently.

Unstructured concurrency

Creating tasks without managing their lifecycle is another common source of issues.

func handleTap() {
    Task {
        await sendAnalyticsEvent()
    }
}

This task runs independently, and there is no way to cancel it or observe its completion. In simple cases this may be acceptable, but in larger systems it leads to work that is difficult to control.

Structured alternatives, such as task groups or framework-provided mechanisms, keep work tied to a parent context and make cancellation predictable.
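As a sketch of the structured alternative, the same analytics work can run inside a task group, which ties it to the caller and makes cancellation propagate. The names here are illustrative, with a stub standing in for the real delivery call:

```swift
// Hypothetical delivery call standing in for a real network request.
func sendAnalyticsEvent(_ event: String) async {
    // deliver the event
}

// Events are sent concurrently, but the group waits for every child
// before returning — no work outlives this function, and cancelling
// the caller cancels all in-flight sends.
func flushAnalytics(_ events: [String]) async -> Int {
    await withTaskGroup(of: Int.self) { group in
        for event in events {
            group.addTask {
                await sendAnalyticsEvent(event)
                return 1
            }
        }
        var delivered = 0
        for await result in group {
            delivered += result
        }
        return delivered
    }
}
```

The caller can now await completion, observe the result, and rely on cancellation reaching every send, none of which is possible with a bare fire-and-forget Task.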

Actors under the hood

At the surface level, actors look like a simple language feature that protects mutable state. Under the hood, the implementation is considerably more involved. Understanding the core mechanics helps explain why certain patterns behave the way they do and why some abstractions carry a cost.

An actor is a reference type with additional behavior attached by the compiler and runtime. When you declare an actor, you are not just defining a container for state. You are also introducing a runtime object that participates in scheduling and synchronization.

This object is responsible for enforcing isolation. It receives units of work, manages their execution, and ensures that only one operation interacts with the actor’s state at a time.

The compiler

When code interacts with an actor, the compiler does not treat it as a regular method call. Instead, it introduces explicit boundaries around that interaction.

A simple call like:

await storage.save(item)

is transformed into a sequence of operations that moves execution into the actor’s context, runs the code, and then returns control back to the caller. This transition is implemented through instructions often referred to as a “hop” to the actor’s executor.

These boundaries are not optional. They are enforced at compile time and ensure that all access to actor state goes through the same controlled path.

Runtime responsibilities

Once execution reaches the actor, the runtime takes over. Each actor is associated with an executor, which acts as a scheduler for the work targeting that actor.

Conceptually, this can be viewed as a queue of jobs:

actor Logger {
    private var buffer: [String] = []

    func log(_ message: String) {
        buffer.append(message)
    }
}

Each call to log becomes a job that is submitted to the actor’s executor. The executor decides when that job runs and ensures that jobs are processed in a controlled order.

Internally, these jobs are stored and scheduled before being executed on a shared thread pool. The actor itself does not own a thread. It only defines how work related to it is coordinated.

Safety

The key guarantee provided by actors is that their state is accessed sequentially. This is achieved through a combination of scheduling and low-level synchronization.

At runtime, incoming work is added to a queue in a thread-safe way, often using atomic operations to prevent concurrent modification of internal state. Only one job is allowed to run at a time, while others wait their turn. This effectively eliminates concurrent access to the actor’s data without requiring explicit locks in user code.

From a developer’s perspective, this behavior resembles a serial queue. The difference is that it is integrated into the language and enforced consistently.
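The pre-concurrency analogue makes the comparison concrete. A serial-queue version of the Logger above provides the same sequential guarantee, but only by convention; nothing stops other code from touching the buffer directly:

```swift
import Dispatch

// Every access to `buffer` is funneled through one serial queue,
// which orders the operations the same way an actor's executor does.
final class QueueLogger {
    private let queue = DispatchQueue(label: "logger.serial")
    private var buffer: [String] = []

    func log(_ message: String) {
        queue.async { self.buffer.append(message) }
    }

    func snapshot() -> [String] {
        queue.sync { buffer }
    }
}
```

The discipline of "always go through the queue" lives entirely in the author's head; an actor encodes the same rule in the type system, where the compiler can enforce it.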

Actors vs Threads

A common misconception is to treat actors as dedicated threads. In reality, an actor is not tied to any particular thread.

Execution happens on a cooperative thread pool managed by the runtime. When work is scheduled on an actor, the runtime decides which thread will execute it. That thread may change between calls, and the actor itself has no direct control over it.

This distinction is important. Isolation is about controlling access to data, not about pinning execution to a specific thread.

Performance

Because actors rely on scheduling rather than fixed threads, performance characteristics differ from traditional concurrency models.

Switching between threads has a cost. It involves context switching and coordination at the operating system level. Swift Concurrency tries to minimize this cost by reusing threads and scheduling work within a cooperative pool.

In many cases, moving execution between actors does not require a full thread switch. The runtime can often continue execution on the same thread, reducing overhead. This is one of the reasons why code structured around actors can perform well even when it appears highly concurrent.

At the same time, each boundary crossing introduces some overhead. Scheduling, queuing, and synchronization are not free. This is why excessive use of actors can lead to unnecessary complexity and performance costs.

When to use actors

At a high level, actors provide a guarantee of sequential access to their internal state. The compiler enforces boundaries around that state, and the runtime ensures that all work targeting the actor is executed one piece at a time.

This leads to a useful way of thinking about actors. They are not a tool for parallel execution. They are a mechanism for maintaining order within a concurrent system.

Global actors

Not all isolation domains are tied to individual instances. In some cases, multiple types and functions need to share the same execution context. Global actors provide a way to define such shared domains and make them explicit in the type system.

The most commonly used example is @MainActor, which represents the main thread’s isolation domain. Any type or function annotated with it becomes part of that domain, and all access to its state is coordinated accordingly:

@MainActor
final class SettingsViewModel {
    var isEnabled: Bool = false

    func toggle() {
        isEnabled.toggle()
    }
}

Here, the entire type is bound to a single shared context. Unlike instance actors, which isolate their own state independently, a global actor defines a common boundary that multiple parts of the system can rely on.

Custom global actors can be introduced when a shared execution context is needed beyond the main actor:

@globalActor
struct AnalyticsActor {
    static let shared = AnalyticsService()
}

actor AnalyticsService {
    func track(_ event: String) {
        // send event
    }
}

This pattern allows different parts of the codebase to coordinate access to a shared resource without exposing the underlying implementation.
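Once such a global actor exists, any function or property elsewhere in the codebase can be annotated with it and joins the same serialized domain. A usage sketch, repeating the declarations so it stands alone; eventCount and trackScreenView are illustrative names:

```swift
@globalActor
struct AnalyticsActor {
    static let shared = AnalyticsService()
}

actor AnalyticsService {
    func track(_ event: String) {
        // send event
    }
}

// Hypothetical client code: the annotation places both the state and
// the function in the AnalyticsActor domain, so every mutation of
// `eventCount` is serialized with all other @AnalyticsActor code.
enum AnalyticsState {
    @AnalyticsActor static var eventCount = 0
}

@AnalyticsActor
func trackScreenView(_ name: String) {
    AnalyticsState.eventCount += 1
    // forward to the shared service as needed
}
```

Callers outside the domain reach this function with await, and the compiler inserts the hop to the shared executor automatically.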

In practice, global actors are often a better fit than custom actors when the goal is to define a common execution domain rather than isolate independent pieces of state. They make ownership explicit at the system level and integrate naturally with the compiler’s isolation checks.

Approachable concurrency

Recent versions of Swift introduced a set of changes often referred to as approachable concurrency. Rather than adding new primitives, this work focuses on adjusting default behavior to make the model easier to reason about in everyday code.

The core idea is straightforward. Instead of implicitly switching execution contexts, async code now tends to remain in the isolation domain of its caller. This aligns with the mental model described earlier, where isolation propagates through function calls, closures, and tasks unless explicitly changed.

In practical terms, this means that many functions no longer need to be annotated to preserve correct behavior. When code is executed from the main actor, it typically continues to run there. This reduces the number of implicit boundary crossings and, as a result, decreases the number of situations where the compiler requires additional guarantees such as Sendable.

@MainActor
func loadProfile() async {
    let data = await fetchProfile()
    apply(data)
}

func fetchProfile() async -> Profile {
    // runs in the same isolation domain as the caller
}

In this example, fetchProfile does not introduce a new execution context. It inherits the isolation of loadProfile, which makes the flow of execution easier to follow and avoids unnecessary transitions.

When work needs to run outside of the current isolation domain, the transition becomes explicit. This can be expressed through constructs such as custom actors or attributes like @concurrent, which signal that the function should execute on the cooperative thread pool rather than remain in the caller’s context.

This approach simplifies the mental model without removing control. Code stays where it was invoked by default, and any deviation from that behavior is visible at the declaration level. As a result, reasoning about execution becomes a matter of tracing isolation rather than tracking implicit thread switches.

At the same time, this simplification comes with trade-offs. When most code implicitly stays within the caller’s isolation domain, it becomes easier to overlook where work is actually executed. In UI-driven applications, this often means that more code runs on the main actor than intended. As long as the work is I/O-bound and relies on suspension, this is usually acceptable. However, CPU-intensive operations can degrade responsiveness if they are not explicitly moved out of that context.

Another consequence is that the absence of compiler errors does not always imply an optimal design. Fewer isolation boundary crossings reduce the need for Sendable, but they can also hide cases where data should be separated more clearly. In larger systems, this may lead to overly broad isolation domains that become harder to evolve over time.

These trade-offs do not diminish the value of approachable concurrency. They highlight the importance of understanding the model rather than relying entirely on defaults. The simplified behavior makes the system easier to adopt, but explicit boundaries remain essential when performance and scalability become a concern.

Final model

All of the concepts discussed throughout this article can be reduced to a single, consistent model. Swift Concurrency is built around controlling access to data rather than coordinating threads. Isolation defines who is allowed to read or mutate state, and that rule is enforced across the entire system. Once a piece of data belongs to a specific isolation domain, every interaction with it must respect that boundary.

Actors are one way to establish such boundaries, but they are not the default solution to every problem. They exist to protect mutable state that cannot be safely shared otherwise. In many cases, a single isolation domain, such as the main actor, is sufficient. Introducing additional actors only makes sense when there is a clear need to separate ownership and coordinate access across concurrent contexts.

The compiler plays a central role in this model. It does more than validate syntax or types. It actively enforces the rules of isolation, inserts the necessary boundaries, and ensures that code cannot accidentally violate them. Runtime behavior still matters, but the primary guarantees come from compile-time analysis. This is what allows Swift Concurrency to prevent entire classes of bugs before the application is even executed.

Taken together, these ideas form a way of reasoning about concurrent code that does not depend on threads, queues, or manual synchronization. Instead of tracking execution across different contexts, the focus shifts to ownership and access.

The complexity of Swift Concurrency comes from this shift. The APIs themselves are relatively small and consistent. The challenge lies in adopting a different way of thinking, where correctness is expressed through isolation and verified by the compiler rather than managed manually at runtime.