Xcode 26 Compilation Cache

Greetings, traveler!

Most iOS engineers don’t need another reminder that builds are expensive — we feel it every day.

You change a few lines. You wait. You switch branches. You wait again. CI rebuilds the same targets for the tenth time today. Someone suggests cleaning DerivedData. The build gets slower, then faster, then weird again.

For years, we treated this as a fact of life: large Swift projects compile slowly, and the best you can do is keep the project modular and hope the compiler behaves.

Xcode 26 introduces a feature that changes the situation in a more fundamental way: Compilation Cache. The goal isn’t to make the compiler 5% faster. The goal is to stop repeating work that has already been done.

Let’s break down what this cache actually means, when it pays off, and where it doesn’t.

The repeating-work problem

Consider a typical week in a team:

  • you’re working on a feature branch
  • two colleagues are doing the same in parallel
  • a CI runner builds every PR from scratch
  • the same dependencies and internal modules are compiled over and over again

A large portion of this work is redundant. The code is rebuilt not because it changed, but because the build system has no reliable way to reuse the previous result.

This is the exact problem compilation caching is designed to solve.

Why DerivedData never solved it

Xcode has always stored artifacts — object files, intermediates, indexing outputs. But DerivedData was never built to support reusable caching across builds in a robust way.

The easiest way to see it: DerivedData is treated as disposable.

If you’ve been on iOS long enough, you’ve probably heard all of these:

  • “clean build folder fixes it”
  • “try removing DerivedData”
  • “maybe Xcode got confused”

That works because DerivedData is a local build workspace. It’s convenient and practical, but it’s not a structured cache with correctness guarantees.

What Compilation Cache changes

With Xcode 26, compilation results can be cached in a more intentional and reusable way.

The key shift is this: Xcode can now decide whether a compilation action is reusable based on what went into it.

If the relevant inputs didn’t change (sources, compiler settings, toolchain, etc.), Xcode can skip repeating the work and pull the result from cache.
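The idea can be illustrated with a deliberately simplified sketch. To be clear, this is not Xcode's actual implementation, just the general shape of input-keyed caching: derive a key from every input that influences the output, and reuse the stored result on an exact match.

```shell
#!/bin/sh
# Toy illustration of input-keyed caching -- NOT Xcode's real mechanism.
set -e
dir=$(mktemp -d)
mkdir -p "$dir/cache"
printf 'print("hello")\n' > "$dir/main.swift"
printf '%s\n' '-O' > "$dir/flags.txt"

compile_cached() {
  # cksum stands in for a real content hash (e.g. SHA-256). The key covers
  # everything that affects the output: sources, flags, toolchain, etc.
  key=$(cat "$dir/main.swift" "$dir/flags.txt" | cksum | cut -d' ' -f1)
  if [ -f "$dir/cache/$key.o" ]; then
    echo "cache hit: reusing $key.o"
  else
    echo "cache miss: compiling from scratch"
    printf 'object-code\n' > "$dir/cache/$key.o"  # stand-in for compiler output
  fi
}

compile_cached   # first build with these inputs: miss
compile_cached   # identical inputs: hit
```

Change any input, the source file or the flags, and the key changes, so the cache correctly misses and the work is redone.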

This often improves two common workflows:

  • rebuilding after switching branches
  • repeated clean builds once the cache is warmed up

In other words: the cache targets “I already compiled this exact thing yesterday (or five minutes ago)” situations.

Enabling it

Compilation Cache can be enabled via build settings (including xcodebuild), which makes it approachable for both local development and CI.

In many projects you can enable it by setting:

COMPILATION_CACHE_ENABLE_CACHING = YES

Once enabled, the cache populates over time as you compile: the first build pays the full cost, and subsequent builds start pulling results from it.
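From the command line, the same setting can be passed to xcodebuild as a build-setting override. A sketch, where the workspace and scheme names are placeholders for your own:

```shell
# Enable compilation caching for a single build.
# MyApp.xcworkspace and the MyApp scheme are placeholder names.
xcodebuild build \
  -workspace MyApp.xcworkspace \
  -scheme MyApp \
  COMPILATION_CACHE_ENABLE_CACHING=YES
```

For local development, putting the setting in the project (or an xcconfig file) avoids having to pass it on every invocation.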

Where you’ll notice the biggest difference

1) Branch switching

Some branches touch parts of the project that trigger a rebuild of modules you didn’t edit directly.

When cache hits work well, you avoid recompiling a large portion of that unchanged code.

2) Clean builds that aren’t truly “cold”

A clean build used to mean: “you’re paying the full cost.”

With compilation caching enabled, even rebuilds after cleaning build products can reuse previously compiled artifacts — as long as the compilation cache is still present on disk.

3) High-churn CI

Many teams rebuild the same dependency graph dozens of times daily.

If your CI setup persists caches between runs, you reduce a large chunk of repeated work.
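What "persisting the cache" looks like depends entirely on your CI provider, but the shape is usually restore → build → save. A hedged sketch, in which CACHE_DIR is a placeholder; check where your toolchain actually writes its compilation cache, and prefer your provider's native cache steps where they exist:

```shell
# Sketch of a CI job that carries the compilation cache across runs.
# CACHE_DIR is a placeholder path, not an Xcode default.
CACHE_DIR="$HOME/build-cache"

# Restore a cache archive saved by a previous run, if one exists.
[ -f cache.tar.gz ] && tar -xzf cache.tar.gz -C "$HOME"

xcodebuild build \
  -workspace MyApp.xcworkspace \
  -scheme MyApp \
  COMPILATION_CACHE_ENABLE_CACHING=YES

# Archive the cache directory for the next run.
mkdir -p "$CACHE_DIR"
tar -czf cache.tar.gz -C "$HOME" "$(basename "$CACHE_DIR")"
```

The trade-off to watch is archive size versus download time: a huge cache that takes minutes to restore can eat the savings on small jobs.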

Why some projects won’t see dramatic wins (yet)

This is where expectations matter.

Even if compilation becomes fast, builds can still be slow because compilation is only one piece of the pipeline.

Common non-compiler bottlenecks:

  • asset catalog processing
  • large copy phases (thousands of files)
  • heavy script phases (SwiftLint, codegen)
  • linking and embedding

If your build time is dominated by these steps, compilation caching won’t feel like magic.

It’s doing its job — you just have other bottlenecks.

A note on Swift packages and modular graphs

Many modern iOS projects are modular, and a lot of that modularity is implemented with Swift Packages.

In theory, modularity helps caching. In practice, cacheability depends heavily on how the build system models the dependency graph and compilation actions.

That means you can observe a slightly counterintuitive outcome:

  • a project built primarily from Xcode targets sees clear cache wins
  • a project with a large Swift Package graph sees smaller improvement

This isn’t a reason to avoid SwiftPM. It’s simply the current state of a feature that is still evolving.

How to tell if the cache is working for you

Measure it.

A simple evaluation approach:

  1. Build once to populate cache
  2. Build again with no meaningful code changes
  3. Compare timings
  4. Inspect which phases dominate now

If the second build's total time doesn't drop, open the Build Report and look at the longest steps in the timeline: scripts, asset processing, and resource copying often dominate once compilation is cached.
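The steps above can be made concrete on the command line with xcodebuild's -showBuildTimingSummary flag, which prints per-phase timings after the build. The project and scheme names are placeholders:

```shell
# 1. Warm the cache (placeholder project/scheme names).
xcodebuild build \
  -project MyApp.xcodeproj \
  -scheme MyApp \
  -showBuildTimingSummary \
  COMPILATION_CACHE_ENABLE_CACHING=YES

# 2. Clean build products; the compilation cache stays on disk.
xcodebuild clean -project MyApp.xcodeproj -scheme MyApp

# 3. Build again with no code changes and compare the two timing
#    summaries: compilation steps should shrink, while scripts, asset
#    processing, and copy phases stay roughly the same.
xcodebuild build \
  -project MyApp.xcodeproj \
  -scheme MyApp \
  -showBuildTimingSummary \
  COMPILATION_CACHE_ENABLE_CACHING=YES
```

Whatever stays large in the second summary is, by definition, not a compilation problem, and that's exactly the list of bottlenecks worth attacking next.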

Conclusion

Compilation Cache is one of the most practical performance improvements in recent Xcode history.

It won’t fix every slow build, and it won’t replace good build hygiene. But it attacks a specific and very expensive category of waste: repeating compilation work unnecessarily.

Enable it. Measure it. And once compilation stops being the bottleneck, use that clarity to address the real build pipeline issues your project has been carrying for years.