Modularity as an Architectural Choice


Greetings, traveler!

Modularity is an architectural approach where a codebase is split into well-defined, independent units with explicit responsibilities and boundaries. Each module exposes a clear public interface and hides its internal details, allowing parts of the system to evolve without tightly coupling everything together. That said, modularity is not a universal requirement.

For small projects or early-stage prototypes, introducing multiple modules can add unnecessary overhead and slow down development without providing real benefits. In such cases, simplicity often wins.

However, once the business direction of an application is clear and it is evident that the codebase will grow significantly, starting with a modular structure becomes a pragmatic decision. At that point, modularity helps manage complexity early, establishes ownership boundaries, and prevents the system from turning into a tightly coupled monolith as development accelerates.

Designing the Structure Before Writing Code

Before writing any production code, it is worth spending time thinking through the structure of the application and, in many cases, sketching it out as a simple diagram. This initial step helps clarify boundaries, responsibilities, and relationships between parts of the system, long before they are encoded into build targets or packages.

Once this skeletal structure is clear, implementation becomes far more deliberate and less reactive. A practical starting point is the foundational layer of the application. This layer typically includes shared data models, the networking stack, and core entities that will be used across most features and services. Establishing this foundation early creates a stable base for future modules and reduces the likelihood of reworking core concepts once feature development is already in full motion.

Three Core Module Categories

In practice, a modular codebase benefits from a small set of clearly defined module categories. The first is the Foundation layer. These modules contain the core building blocks of the application: ubiquitous data models, shared primitives, and technical infrastructure that should remain broadly reusable. A Foundation module should not depend on any higher-level modules, yet it will be imported almost everywhere else.

The second category is Service modules. This is where shared helpers and capabilities live: components that features can reuse without duplicating logic. Service modules depend on Foundation, and they can be imported by feature modules. The dependency direction matters: Foundation must never import Services, since that would invert the hierarchy and pull application-specific concerns into the base layer.

The third category is Feature modules. A feature module represents a user-facing scenario with a clear beginning and outcome: completing a payment, changing settings, browsing a feed, opening item details, or finishing an onboarding flow. Feature modules can depend on Foundation and Services, yet they should not depend on other feature modules. When features import each other, the dependency graph becomes tangled, boundaries lose meaning, and the project drifts back toward a monolith—just one split across targets.
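The dependency rules across the three categories can be made explicit in the build configuration. Below is a hypothetical Package.swift sketch (the target names AppFoundation, NetworkingService, and PaymentsFeature are illustrative) showing the allowed dependency direction: Foundation depends on nothing above it, Services depend only on Foundation, and a Feature depends on both but never on another feature.

```swift
// swift-tools-version:5.9
// Hypothetical manifest sketching the three module categories.
import PackageDescription

let package = Package(
    name: "ModularApp",
    targets: [
        // Foundation layer: shared models and primitives, no upward dependencies.
        .target(name: "AppFoundation"),

        // Service layer: reusable capabilities, may depend on Foundation only.
        .target(
            name: "NetworkingService",
            dependencies: ["AppFoundation"]
        ),

        // Feature layer: depends on Foundation and Services,
        // never on other feature targets.
        .target(
            name: "PaymentsFeature",
            dependencies: ["AppFoundation", "NetworkingService"]
        ),
    ]
)
```

Because the manifest only declares the permitted edges, an accidental feature-to-feature import fails at build time rather than slipping in silently.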

When Modularity Becomes a Trend-Driven Refactor

Many teams adopt modularization because it looks like the modern thing to do. The finish line becomes “make it compile,” and the design work stops there. The result is a dense web of interconnected modules, an inflated public surface area, recurring circular dependency issues, and a growing collection of god-modules and grab-bag services that try to handle everything. Teams still get some immediate benefits, such as fewer rebase conflicts and better parallel development, yet most of the architectural upside disappears.

Over time, this kind of modularity adds friction instead of removing it. Developers avoid touching modules owned by other teams because responsibilities are unclear, and the system accumulates singletons and god-objects that make the codebase feel like an overflowing junk drawer. Technical debt keeps growing, and it rarely gets paid down in an environment where deadlines tighten and attention shifts toward tooling and delivery speed.

In that context, shortcuts become the default: a feature imports another feature just to access a single service, or even a single view, and the codebase quietly slides back into monolithic coupling, only now it is harder to see. As a side effect, Xcode previews often stop working in such an environment, which makes this a clear case of shooting yourself in the foot.

Defining Module Contracts

To prevent a module’s responsibilities from slowly expanding, its contract needs to be defined upfront. A useful mental exercise is to imagine publishing the module on GitHub and having to explain, in isolation, how it should be used. In that scenario, every public type and function becomes part of a promise to external users. Keeping this surface area small pays off quickly, especially once the role of the module is clear and its external touchpoints can be reduced to the essentials.
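In Swift, this contract discipline maps directly onto access control: only the types that belong to the module's promise are marked public, and everything else stays internal by default. A minimal sketch, with illustrative names:

```swift
// Part of the module's contract: visible to external consumers.
public struct CheckoutRequest {
    public let itemIDs: [String]
    public init(itemIDs: [String]) { self.itemIDs = itemIDs }
}

// Also part of the contract: the outcome the module promises to deliver.
public enum CheckoutOutcome {
    case completed(receiptID: String)
    case cancelled
}

// Internal by default: free to change or delete without breaking consumers.
struct PriceCalculator {
    func total(for itemIDs: [String]) -> Int {
        itemIDs.count * 100 // placeholder pricing logic
    }
}
```

Everything left internal can be reworked freely; only changes to the public types require coordinating with the module's consumers.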

Foundation modules typically contain little to no business logic and do not represent user scenarios, which explains why they often expose a broader set of public APIs. Any change in this layer is costly and can affect large portions of the codebase.

Service modules may contain shared business logic used across several features, yet they should not model user flows. Their public APIs should remain focused. Once they start to grow unchecked, it is a strong signal to audit the module for signs of turning into a god-module. Designing services with an open source mindset helps here, since they will be consumed in many contexts and must remain predictable and convenient.

Feature modules, by contrast, usually need only a small public interface: an entry point into the feature and a way to observe or retrieve its outcome, often expressed through feature-specific data models. Direct feature-to-feature imports should be avoided, whether enforced through tooling such as Tuist or through consistent team discipline.
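A feature's public surface can often shrink to exactly two things: an entry point and a result model. The sketch below (OnboardingFlow and OnboardingResult are hypothetical names; real UI code is elided) shows that shape:

```swift
// The feature's outcome, expressed as a feature-specific data model.
public enum OnboardingResult: Equatable {
    case finished(userName: String)
    case skipped
}

// The single entry point; internal screens and flow stay hidden.
public final class OnboardingFlow {
    private let onFinish: (OnboardingResult) -> Void

    public init(onFinish: @escaping (OnboardingResult) -> Void) {
        self.onFinish = onFinish
    }

    public func start() {
        // In a real app this would present the first screen and drive
        // internal navigation; here it only reports the outcome.
        onFinish(.finished(userName: "guest"))
    }
}
```

The caller starts the flow and observes the outcome; it never reaches into the feature's screens or internal state.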

Navigation and Data Flow Between Modules

Navigation within a feature module is usually straightforward. A feature owns its internal flow and manages transitions between its own screens without exposing those details to the rest of the application. The complexity appears when navigation needs to cross module boundaries.

Moving from feature A to feature B is best handled outside of both modules, through an external coordinator. This coordinator can take the form of an app-level coordinator or a more specialized construct such as a tab bar coordinator. The key property is that it has visibility into all feature modules and is responsible for orchestrating transitions between them. When feature A completes a particular scenario and requires a handoff to feature B, it signals this intent to the shared coordinator. The coordinator then performs the navigation and passes any required data forward. This approach keeps feature modules isolated, avoids direct dependencies between them, and centralizes cross-feature navigation in a single, explicit place.
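A minimal sketch of that handoff, assuming hypothetical PaymentFeature and ReceiptFeature modules: feature A signals completion upward through a delegate, and the coordinator, which sees both features, performs the transition and forwards the data.

```swift
struct PaymentResult { let receiptID: String }

protocol PaymentFeatureDelegate: AnyObject {
    func paymentDidFinish(_ result: PaymentResult)
}

// Feature A: finishes its scenario and signals intent upward;
// it knows nothing about what happens next.
final class PaymentFeature {
    weak var delegate: PaymentFeatureDelegate?
    func completePayment() {
        delegate?.paymentDidFinish(PaymentResult(receiptID: "R-42"))
    }
}

// Feature B: receives only the data it needs to start.
final class ReceiptFeature {
    private(set) var shownReceiptID: String?
    func show(receiptID: String) { shownReceiptID = receiptID }
}

// The coordinator has visibility into both features and owns the transition.
final class AppCoordinator: PaymentFeatureDelegate {
    let payment = PaymentFeature()
    let receipt = ReceiptFeature()

    init() { payment.delegate = self }

    func paymentDidFinish(_ result: PaymentResult) {
        receipt.show(receiptID: result.receiptID)
    }
}
```

Neither feature imports the other; the only component aware of both is the coordinator.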

Assembling Modules and Managing Cross-Feature Interaction

At this point, it may seem that a coordinator is also responsible for assembling feature modules, yet this responsibility is better kept inside the feature itself. A feature should be able to construct its own internal graph by exposing a dedicated assembly API, whether through static factory methods, builders, or an explicit assembly type. The important part is that the feature declares what data and services it needs and performs its own composition, instead of relying on an external coordinator to wire its internals.
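As a sketch, a feature's assembly API might look like this (SettingsAssembly, SettingsStore, and SettingsViewModel are illustrative names): the feature declares its dependencies as parameters and wires its own internals.

```swift
// A dependency the feature declares but does not implement.
protocol SettingsStore {
    func value(for key: String) -> String?
}

// Internal to the feature: the coordinator never constructs this directly.
final class SettingsViewModel {
    private let store: SettingsStore
    init(store: SettingsStore) { self.store = store }
    var userTheme: String { store.value(for: "theme") ?? "light" }
}

// The feature's dedicated assembly API: callers supply only the declared
// dependencies, and the feature composes its own internal graph.
enum SettingsAssembly {
    static func makeSettings(store: SettingsStore) -> SettingsViewModel {
        SettingsViewModel(store: store)
    }
}
```

The coordinator calls SettingsAssembly.makeSettings and receives a ready-made entry point, without knowing which view models or internal services exist behind it.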

This naturally raises the question of data exchange between features. One common approach is to treat a feature’s output as a result model that gets handed to a component aware of both features, such as an app coordinator, which can then transform that result into a form suitable for the next feature.

When multiple features share the same data models, it is often a sign that those models belong in a separate module that contains no user scenarios and acts as a lightweight feature kit.

If interaction goes beyond simple data transfer and involves richer coordination, a dedicated mediator module can be introduced. This module sits higher in the hierarchy, knows about both features, and manages their communication, allowing the application to depend on the mediator rather than on the features directly. Such an approach adds architectural weight, so it should be reserved for cases where interaction is stateful, frequent, or complex enough to justify an explicit owner.
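A minimal mediator sketch, assuming hypothetical ChatFeature and ProfileFeature modules: the mediator sits above both, subscribes to one feature's events, and drives the other, so the application depends only on the mediator.

```swift
// Feature 1: emits an event (a mentioned user's ID) without knowing who listens.
final class ChatFeature {
    var onMentionTapped: ((String) -> Void)?
}

// Feature 2: can open a profile, unaware of where requests come from.
final class ProfileFeature {
    private(set) var openedUserID: String?
    func openProfile(userID: String) { openedUserID = userID }
}

// The mediator knows about both features and owns their communication.
final class ChatProfileMediator {
    let chat = ChatFeature()
    let profile = ProfileFeature()

    init() {
        let profile = self.profile
        chat.onMentionTapped = { userID in
            profile.openProfile(userID: userID)
        }
    }
}
```

The two features remain mutually unaware; all coordination logic lives in one explicit, higher-level owner.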

Another frequently proposed solution is the use of bridge modules built around protocols to avoid feature-to-feature imports. While this aligns well with clean architecture ideals on paper, the trade-offs are real: more modules, harder debugging, longer build times, and increased cognitive load. In many cases, these bridges create an illusion of decoupling while masking even tighter coupling underneath. For that reason, protocol-based bridges should be introduced selectively and with a clear goal, such as isolating a very large dependency, supporting testing through alternate implementations, or easing a gradual migration from a monolith to a modular structure.
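For the cases where a bridge is justified, the shape is straightforward. In this sketch (AnalyticsTracking, FeedFeature, and InMemoryAnalytics are illustrative names), the protocol lives in a small bridge module, the feature depends only on the abstraction, and the app injects a concrete implementation at composition time:

```swift
// Bridge module: a small protocol imported by both sides.
protocol AnalyticsTracking {
    func track(event: String)
}

// Feature module: sees only the protocol, not the analytics library.
final class FeedFeature {
    private let analytics: AnalyticsTracking
    init(analytics: AnalyticsTracking) { self.analytics = analytics }
    func openFeed() { analytics.track(event: "feed_opened") }
}

// App target (or tests): supplies a concrete implementation.
final class InMemoryAnalytics: AnalyticsTracking {
    private(set) var events: [String] = []
    func track(event: String) { events.append(event) }
}
```

This pattern earns its keep when the hidden dependency is large or needs swapping in tests; applied everywhere, it produces exactly the module sprawl described above.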

Within a feature module, protocols can still be a good fit, as long as they serve a concrete purpose. Chasing maximum flexibility and full interchangeability rarely pays off: the case where something genuinely needs to be swapped out arises infrequently, while the case where a developer tries to trace a function's behavior and lands in a protocol declaration instead of a concrete implementation happens on a regular basis.

Conclusion: Modularity as a Deliberate Practice

Modularity is a powerful tool for untangling complex systems and creating conditions for sustainable application development. When designed well, it reduces rebase friction, lowers the risk of unintended side effects, shortens feedback cycles, and makes day-to-day development more predictable. It can also open the door to sharing well-isolated modules across multiple applications, turning internal code into long-term assets rather than project-specific baggage.

At the same time, modularity is not a mechanical refactor that can be applied blindly. It is a complex instrument that demands careful thought and restraint. Tools such as Tuist and the growing set of AI-assisted development tools can support this effort by removing mechanical overhead, yet they do not replace architectural judgment. Clear ownership, team discipline, a coherent architectural vision, and a solid understanding of code hygiene remain the foundation that keeps a modular system from drifting into chaos and allows a project to mature with confidence.