How I migrated 300 screens to SwiftUI and what I learned


Greetings, traveler!

At some point, we decided to move forward with a full redesign of the application. The scope was significant, and the plan included expanding the team to handle the expected workload.

The timelines were tight, which forced us to look closely at the development process and identify potential bottlenecks early. UIKit quickly came up in those discussions. From our experience, new engineers often needed time to get comfortable with layout and view composition, especially in a large and modular codebase.

SwiftUI looked like a more approachable alternative. It offered a clearer mental model for building interfaces and reduced the amount of boilerplate required to get a screen on the device. The redesign also meant building a large set of reusable UI components from scratch, and here again SwiftUI seemed like a better fit for scaling development across a growing team.

Migration strategy

We did not approach this as a full migration to SwiftUI. The change was deliberate and scoped. Layout and view composition moved to SwiftUI, while navigation remained on UIKit. UIKit had already proven to be stable in our codebase, and we had a mature set of navigation tools built around it. Those tools covered complex flows, edge cases, and deep linking scenarios that would have been expensive to reimplement or adapt to SwiftUI at that stage.

Keeping navigation in UIKit also gave us a predictable foundation for parts of the app where control matters the most. Transitions, state restoration, and coordination between screens were already well understood and tested. SwiftUI, on the other hand, was introduced where it provided the most value: building and composing UI. This split allowed us to reduce friction for new developers working on layout, while avoiding unnecessary risks in areas that required stability and fine-grained control.

In practice, this resulted in a hybrid architecture where SwiftUI views were embedded into an existing UIKit navigation stack. It was not a compromise, but a conscious choice to use each framework where it fits best.
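The embedding itself is straightforward. As a minimal sketch (the coordinator and view names here are illustrative, not our actual code), a SwiftUI screen stays agnostic of navigation while a UIKit-side object wraps it in a UIHostingController and pushes it onto the existing stack:

```swift
import SwiftUI
import UIKit

// A SwiftUI screen that knows nothing about navigation.
struct ProfileView: View {
    var body: some View {
        Text("Profile")
    }
}

// Illustrative coordinator: UIKit owns navigation, SwiftUI owns layout.
// UIHostingController is the bridge between the two worlds.
final class ProfileCoordinator {
    func start(on navigationController: UINavigationController) {
        let host = UIHostingController(rootView: ProfileView())
        navigationController.pushViewController(host, animated: true)
    }
}
```

Because the hosting controller is a plain UIViewController, all of our existing navigation tooling, deep linking, and state restoration kept working without modification.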

Incremental rollout and feature selection

We were careful about what to migrate first. Early on, we focused on simple, low-risk features. The goal was not speed, but feedback. The team needed time to get comfortable with SwiftUI, and we needed real signals about how it behaves in our codebase. Starting with non-critical screens allowed us to surface issues without putting core product flows at risk.

We worked with curated lists of screens and tracked progress explicitly. On CI, we maintained visibility into how many screens were still on UIKit and how many had already been moved to SwiftUI. This gave us a simple, objective way to measure progress and helped keep the migration aligned with delivery timelines.

The rollout was incremental by design. We did not attempt large rewrites. Instead, we moved feature by feature, validating each step along the way. In parallel, we were building a set of reusable UI components in SwiftUI. These components were not developed in isolation. As soon as they were ready, we integrated them into new or migrated features to test their behavior in real product scenarios. This approach helped us refine the components quickly and ensured they were shaped by actual use cases rather than assumptions.

Reality check: SwiftUI adoption inside the team

At the start of the migration, most engineers were familiar with SwiftUI at a high level. Many had tried it in side projects or small experiments. That did not translate into production-ready experience. Once SwiftUI became part of the main codebase, the gap showed up quickly.

A common pattern I kept seeing during code reviews was treating SwiftUI as a different syntax for UIKit. Views were written with an imperative mindset, with logic scattered across the body and state updates happening in places where they were hard to reason about. This often led to subtle bugs and unpredictable rendering behavior. Misuse of property wrappers was one of the most frequent issues that surfaced in reviews.

Side effects inside body were another recurring problem. Code that triggered network calls or mutations during view construction led to repeated execution and hard-to-track issues, because body can be re-evaluated many times per second. Lifecycle misunderstandings added to this. Developers expected onAppear to behave like viewDidLoad, but onAppear fires every time the view comes back on screen, which resulted in duplicated work.
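The pattern we converged on in reviews looks roughly like this sketch (the model and method names are hypothetical): side effects move out of body into .task, and a guard protects against the modifier firing again when the screen reappears:

```swift
import SwiftUI

struct OrdersView: View {
    @StateObject private var model = OrdersModel()

    var body: some View {
        // Anti-pattern: body may be re-evaluated many times, so
        // kicking off work here would repeat the request:
        // let _ = model.loadIfNeeded()

        List(model.orders, id: \.self) { Text($0) }
            // .task starts when the view appears and is cancelled when
            // it disappears; it does not re-run on every render pass.
            .task { await model.loadIfNeeded() }
    }
}

@MainActor
final class OrdersModel: ObservableObject {
    @Published var orders: [String] = []
    private var didLoad = false

    func loadIfNeeded() async {
        // Guard against .task/onAppear firing again when the screen
        // reappears (unlike viewDidLoad, which runs exactly once).
        guard !didLoad else { return }
        didLoad = true
        orders = ["#1024", "#1025"] // placeholder for a real network call
    }
}
```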

Nested ObservableObject structures caused additional confusion. Changes deep inside the hierarchy did not always propagate as expected, because @Published on a reference type only reacts when the reference itself is replaced, not when the nested object mutates. Debugging those cases required a clear understanding of how SwiftUI observes state. More broadly, there was a lack of intuition around the declarative nature of SwiftUI. Instead of describing how the UI should reflect state, code often tried to control when and how updates happen.
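A condensed version of the nesting trap, with one possible workaround (forwarding the nested object's change notifications; in practice flattening the model or observing the child directly in the view is often cleaner):

```swift
import SwiftUI
import Combine

final class Settings: ObservableObject {
    @Published var isDarkMode = false
}

final class AppModel: ObservableObject {
    // @Published on a reference type only fires when the *reference*
    // changes; mutating properties inside Settings does not notify
    // views that observe AppModel.
    @Published var settings = Settings()

    private var cancellable: AnyCancellable?

    init() {
        // One fix: manually forward the nested object's changes.
        cancellable = settings.objectWillChange
            .sink { [weak self] _ in self?.objectWillChange.send() }
    }
}
```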

All of this was expected to some extent. The shift in mental model takes time, and without prior production experience, these issues regularly surfaced during reviews and required clarification.

Scaling the team: code review as a learning tool

We adjusted our development process early to support the transition. Pull requests became smaller and more focused. This made reviews easier to follow and reduced the cost of mistakes. On the first iterations, experienced engineers stayed closer to the changes, especially on features that introduced new patterns or components.

Code review quickly turned into a primary learning channel. Instead of limiting feedback to correctness, we used reviews to explain decisions, point out patterns, and highlight how SwiftUI behaves in practice. Many comments addressed the same themes: where state should live, how data flows through a view, what should trigger updates, and how to avoid side effects.

We also shared interesting cases with the wider team. When a tricky issue came up or a good solution emerged, we discussed it in the team chat with code examples and short explanations. This created a shared context and reduced the chance of repeating the same mistakes across different features. Over time, the number of recurring issues dropped, and reviews shifted from teaching basics to refining details.

Workshops and knowledge transfer

To support the transition, I introduced a series of workshops and internal sessions focused on practical SwiftUI usage. These were structured as live coding sessions rather than prepared presentations. The goal was to work with real code in real time, using examples taken directly from our codebase or closely mirroring production scenarios.

The sessions covered topics that repeatedly surfaced during reviews: state management, rendering behavior, performance considerations, and common architectural patterns. We walked through typical mistakes, discussed why they happen, and rewrote problematic implementations together. This helped connect abstract concepts to concrete issues the team had already encountered.

Interaction was an important part of the format. Developers could suggest approaches, ask questions, and challenge decisions as the code evolved on screen. This made the sessions closer to collaborative problem-solving than traditional training. Over time, the team developed a more consistent understanding of how SwiftUI works in practice, and the gap between individual approaches began to narrow.

Architecture evolution

As the migration progressed, we revisited the structure of feature modules to better align with SwiftUI’s data flow. We stayed with MVVM, but made state and interactions more explicit. View models exposed state as enums, which described all possible UI states in a single place. User interactions were also modeled as enums, making the flow of events predictable and easy to trace.

Business logic was moved into use cases. This kept view models focused on mapping state and handling events, while the underlying logic remained isolated and testable. The boundaries between layers became clearer, and side effects were easier to reason about.
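In outline, a feature module followed this shape. The names below are a hypothetical example rather than our production code, but the structure matches what the text describes: enum-based state, enum-based events, and business logic behind a use case protocol:

```swift
import Foundation
import Combine

// All possible UI states, described in one place.
enum ProfileState {
    case loading
    case loaded(name: String)
    case failed(message: String)
}

// User interactions modeled as explicit events.
enum ProfileEvent {
    case viewAppeared
    case retryTapped
}

// Business logic lives behind a use case, isolated and testable.
protocol LoadProfileUseCase {
    func execute() async throws -> String
}

@MainActor
final class ProfileViewModel: ObservableObject {
    @Published private(set) var state: ProfileState = .loading
    private let loadProfile: LoadProfileUseCase

    init(loadProfile: LoadProfileUseCase) {
        self.loadProfile = loadProfile
    }

    // The view model only maps events to state transitions.
    func handle(_ event: ProfileEvent) {
        switch event {
        case .viewAppeared, .retryTapped:
            Task { await load() }
        }
    }

    private func load() async {
        state = .loading
        do {
            state = .loaded(name: try await loadProfile.execute())
        } catch {
            state = .failed(message: error.localizedDescription)
        }
    }
}
```

A view then becomes a pure function of ProfileState, switching over the cases, and every user action funnels through handle(_:), which keeps the event flow traceable.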

This structure reduced ambiguity across the codebase. When working on a feature, it was clear where to look for state, where events were handled, and how data moved through the system. As a result, both reading and debugging the code required less context and fewer assumptions.

What changed over time

The first iterations moved slowly. Even simple screens required more time than expected, and the team often paused to clarify how SwiftUI behaves in specific scenarios. Reviews were detailed, and many changes involved reworking the same pieces of code to align with the intended patterns. This phase was necessary to build a shared understanding and reduce the number of repeated mistakes.

After a few iterations, the pace started to improve. Familiar patterns emerged, and developers began to rely on them instead of experimenting from scratch. The introduction of reusable components played a key role here. As we built a growing library of SwiftUI views, new screens required less effort. Layouts became more consistent, and the amount of custom code per feature decreased.

Standardization followed naturally. We aligned on naming, structure, and data flow conventions. New features were built on top of existing components rather than introducing variations of the same idea. Over time, this reduced cognitive load across the codebase. Developers could focus on the specifics of a feature without rethinking the underlying UI approach.

Developer experience: what actually improved

Over time, the developer experience changed in a noticeable way. Layout work became faster once the team settled on a consistent approach and built a set of reusable components. Creating a new screen often meant composing existing pieces rather than building everything from scratch. With a prepared component library and agreed patterns, the amount of effort required to deliver UI decreased significantly.
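To illustrate what "composing existing pieces" meant in practice, here is a toy sketch (the component names are invented for the example): a design-system container plus content, rather than bespoke layout per screen:

```swift
import SwiftUI

// A hypothetical reusable design-system component.
struct AppCard<Content: View>: View {
    let title: String
    private let content: Content

    init(title: String, @ViewBuilder content: () -> Content) {
        self.title = title
        self.content = content()
    }

    var body: some View {
        VStack(alignment: .leading, spacing: 8) {
            Text(title).font(.headline)
            content
        }
        .padding()
        .background(RoundedRectangle(cornerRadius: 12)
            .fill(Color.gray.opacity(0.15)))
    }
}

// A new screen is mostly composition of existing components.
struct BalanceScreen: View {
    var body: some View {
        AppCard(title: "Balance") {
            Text("$1,240.00").font(.largeTitle)
        }
    }
}
```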

The declarative model made it easier to reason about what a screen does. State and events were defined explicitly, which reduced the need to trace side effects across multiple layers. When something did not behave as expected, debugging usually started from the current state and followed a predictable path. The combination of enum-based state, explicit events, and isolated business logic in use cases made features easier to understand, even for developers who were not involved in their initial implementation.

Onboarding also improved. New engineers could start contributing earlier, since building UI required less knowledge of framework-specific details. The learning curve shifted from mastering layout mechanics to understanding data flow and conventions within the project. With the addition of AI-assisted workflows and prepared prompts, routine UI work became even more streamlined. Generating a screen, adapting it to project conventions, and integrating it into an existing feature could be done quickly, with most of the effort focused on correctness rather than boilerplate.

AI as a force multiplier

AI tools became part of the workflow once the project reached a certain level of consistency. By that point, we had a stable set of components, clear architectural conventions, and examples of how screens should be structured. This made it possible to prepare prompts that reflected our standards and reuse them across features.

We experimented with agent-based workflows where the model could generate UI code based on a description of a screen and a set of constraints. The generated code still required review and adjustment, but it reduced the amount of repetitive work.

Performance considerations

Performance required a more careful approach. SwiftUI introduces additional overhead related to state diffing and layout recalculation. In many cases this is not noticeable, especially on simple screens. Screens with deep view hierarchies, dynamic content, and frequent state updates, however, tend to expose these limitations.

Scroll-heavy interfaces were the most sensitive area. Lists with heterogeneous cells, asynchronous image loading, and interactive elements could produce inconsistent frame rates under load. The behavior was not always predictable, and small changes in state handling or view composition could affect rendering performance.

Based on these observations, we defined clear boundaries. Screens that required strict performance guarantees, particularly those with intensive scrolling or complex interactions, remained on UIKit. For the rest of the application, SwiftUI provided sufficient performance while offering faster development and better maintainability. This separation allowed us to balance user experience with development efficiency without forcing a single approach across all features.

UIKit fallback strategy

We did not treat SwiftUI as an all-or-nothing decision. Some components still required a level of control that was easier to achieve with UIKit, especially when the expected behavior depended on details that SwiftUI does not expose directly or does not handle with the same level of precision.

Text input was a good example. In several cases, we needed behavior that relied on fine-grained control over cursor position, formatting, focus management, and other interaction details. For those scenarios, we kept UITextField and exposed it to SwiftUI through UIViewRepresentable. This allowed us to preserve the required behavior without forcing a custom SwiftUI solution where UIKit already provided a reliable one.
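A stripped-down version of that bridge looks like this. This is a sketch rather than our production component: the formatting logic is omitted, and the delegate hook is simply where it would live:

```swift
import SwiftUI
import UIKit

// Wrapping UITextField keeps fine-grained control over editing
// behavior while remaining composable inside SwiftUI layouts.
struct FormattedTextField: UIViewRepresentable {
    @Binding var text: String

    func makeUIView(context: Context) -> UITextField {
        let field = UITextField()
        field.delegate = context.coordinator
        return field
    }

    func updateUIView(_ uiView: UITextField, context: Context) {
        // Avoid resetting the cursor position when the value
        // has not actually changed.
        if uiView.text != text {
            uiView.text = text
        }
    }

    func makeCoordinator() -> Coordinator { Coordinator(text: $text) }

    final class Coordinator: NSObject, UITextFieldDelegate {
        private let text: Binding<String>
        init(text: Binding<String>) { self.text = text }

        func textField(_ textField: UITextField,
                       shouldChangeCharactersIn range: NSRange,
                       replacementString string: String) -> Bool {
            // Intercept edits here to apply formatting rules or
            // restrict input before the change lands.
            let current = textField.text ?? ""
            guard let r = Range(range, in: current) else { return false }
            text.wrappedValue = current.replacingCharacters(in: r, with: string)
            return true
        }
    }
}
```

The delegate callbacks give access to cursor and selection details that SwiftUI's TextField does not expose, which is exactly the control the fallback exists for.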

The same principle applied more broadly. Whenever a component demanded tight control over interaction, lifecycle, or rendering details, UIKit remained the safer option. SwiftUI stayed the default for layout and composition, while UIKit covered the cases where precision mattered more than consistency of the abstraction.

Final outcome

The migration changed how we build and maintain UI without requiring a full rewrite of the application. Development speed improved once the team aligned on patterns and built a shared set of components. New screens could be assembled with less effort, and changes were easier to implement within an existing structure.

Onboarding became more predictable. New engineers spent less time understanding layout mechanics and more time learning how state and data flow are organized in the project. With clear conventions in place, it was easier to contribute without relying on deep knowledge of legacy code.

The final setup settled into a stable hybrid model. SwiftUI is used for layout and composition, while UIKit remains in place for navigation and for components that require precise control. This approach allowed us to move forward without disrupting parts of the system that were already working well.