Explicit Dependency Injection


Greetings, traveler!

When an application is small, dependency injection rarely feels like a problem. Dependencies are simply passed through initializers, and the system remains easy to understand. There is little need for special patterns or tooling, and the code naturally reflects how the application is composed.

As the codebase grows, this simplicity often disappears. New features, shared services, multiple environments, and modular boundaries put pressure on how dependencies are created and passed further. What once looked like “just passing values” can quickly turn into a complex setup involving containers, frameworks, and global state. At that point, dependency injection stops being a minor implementation detail and becomes a structural concern.

It is important to acknowledge that dependency injection frameworks exist for good reasons. They address real problems and can be effective in the right context. We will return to them in later articles. In practice, however, they are often unnecessary and sometimes even harmful.

In this article, we intentionally set frameworks aside and start with a simpler approach: a clear and explicit composition root, built through straightforward dependency wiring. The goal is not to define a perfect or “pure” form of dependency injection, but to explore how far a boring, explicit approach can scale before additional abstractions become necessary.

The central question we will explore is simple: can dependency injection scale without code magic, containers, or global state?

Dependency Injection at Its Core

At its core, dependency injection is neither an architectural style nor a framework. It is a simple idea: one part of the system creates an object, and another part receives it instead of creating it itself. Nothing more.

Stripped of terminology, dependency injection is about separating construction from usage. A type should focus on what it does, not on how its collaborators are instantiated. When dependencies are provided from the outside, this separation becomes explicit and easier to reason about.
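
As a minimal sketch of that separation (the types here are hypothetical), compare a version that constructs its collaborator internally with one that receives it from the outside:

struct GreetingFormatter {
    func format(name: String) -> String { "Hello, \(name)!" }
}

// Construction mixed with usage: the type decides how its collaborator is built.
final class SelfAssemblingGreeter {
    private let formatter = GreetingFormatter()

    func greet(_ name: String) -> String {
        formatter.format(name: name)
    }
}

// Construction separated from usage: the collaborator is provided from outside.
final class Greeter {
    private let formatter: GreetingFormatter

    init(formatter: GreetingFormatter) {
        self.formatter = formatter
    }

    func greet(_ name: String) -> String {
        formatter.format(name: name)
    }
}

Both versions behave the same; the difference is that the decision about how GreetingFormatter is created now belongs to whoever constructs Greeter.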

In practice, dependency injection tends to appear for a small number of concrete reasons. The most common one is testing: production code should not be tightly coupled to real networks, databases, or system resources. Another reason is environment configuration, such as switching between staging and production backends or different I/O implementations. Finally, dependency injection helps manage shared instances, ensuring that stateful components are not recreated arbitrarily across the system.
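
A short sketch of how those motivations show up in code, assuming a hypothetical UserService behind a small HTTPClient protocol: production wires in the real client, tests wire in a stub, and neither variant requires changing the service itself.

import Foundation

protocol HTTPClient {
    func data(from url: URL) async throws -> Data
}

// Production implementation talks to the real network.
struct URLSessionHTTPClient: HTTPClient {
    func data(from url: URL) async throws -> Data {
        try await URLSession.shared.data(from: url).0
    }
}

// Test implementation returns canned data and never touches the network.
struct StubHTTPClient: HTTPClient {
    let stubbedData: Data

    func data(from url: URL) async throws -> Data { stubbedData }
}

// The service only knows about the abstraction it was given.
final class UserService {
    private let client: HTTPClient

    init(client: HTTPClient) {
        self.client = client
    }

    func loadProfile(from url: URL) async throws -> Data {
        try await client.data(from: url)
    }
}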

None of these motivations are inherently complex. The complexity usually emerges later, when dependency injection is treated as a problem to be abstracted away rather than a design choice to be applied deliberately. The issues we encounter are rarely caused by dependency injection itself, but by the mechanisms we introduce around it.

Almost Always, You Can Avoid a Singleton — and You’ll Thank Yourself Later

Singletons are undeniably convenient. You don’t need to pass them around, wire them through initializers, or think about ownership. You can simply reach for them when needed. As a shortcut, they often feel harmless — especially early in a project.

The problem is not that singletons are inherently bad, but that their costs tend to surface much later, when changing direction becomes expensive.

One often overlooked issue is concurrency. Singletons are not automatically thread-safe. Swift 6 has become stricter about concurrency and may force additional synchronization or isolation mechanisms, but not all codebases run with strict concurrency enabled. As a result, thread-safety concerns are frequently postponed or inconsistently addressed.
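
As a rough sketch of the kind of issue strict concurrency surfaces (SessionStore is a hypothetical type): a shared instance of a non-Sendable class with mutable state is rejected under the Swift 6 language mode, and one common fix is to introduce isolation by turning the type into an actor.

// Rejected under strict concurrency: nothing guarantees thread safety here.
//
// final class SessionStore {
//     static let shared = SessionStore()
//     var token: String?
// }

// One common fix: an actor serializes access, but every call site now
// needs await, which introduces suspension points of its own.
actor SessionStore {
    static let shared = SessionStore()

    private var token: String?

    func update(token: String?) {
        self.token = token
    }

    func currentToken() -> String? {
        token
    }
}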

Another consequence of a singleton is its lifetime. A singleton lives for the entire duration of the application, which means its state must be actively managed. Changes in application state — session updates, user switches, backgrounding, data refreshes — all require explicit handling. If a singleton is turned into an actor to address thread-safety, new challenges emerge: await points can delay execution and unexpectedly interleave with other application events, creating subtle and hard-to-reason-about behavior.

There is also a common assumption that a dependency will only ever be needed as a single instance. This may be true today, but in the long term the situation may change. Parallel flows, background tasks, or testing scenarios can invalidate that assumption. Refactoring a deeply embedded singleton into a regular dependency later is often costly and invasive.

Singletons also tend to create failure modes that are difficult to test and reproduce. Because they represent global state, bugs often depend on timing, execution order, or prior interactions. These issues are notoriously hard to isolate in tests. Over time, a singleton can also accumulate responsibilities and turn into a “god object,” simply because it is easy to access from anywhere.

Finally, singletons have a habit of becoming architectural debt. If you ever decide to change your dependency injection approach, migrate to a different architecture, or introduce clearer boundaries between modules, singletons are often the first thing that gets postponed — and the last thing that gets removed.

There are cases where singletons are a reasonable choice. Resources that are inherently singular — such as disk caches, analytics, or logging — may benefit from a single shared instance. To decide whether you need a singleton, look at the nature of the mechanism itself. In most cases the answer will be that no singleton is needed. But occasionally, something that can truly exist in only one instance really is better designed that way.

Even then, it is usually a good idea to pass that instance from the root instead of accessing it directly at the call site. Doing so hides implementation details from parts of the system that should not care about them, improves testability, preserves modular boundaries, and keeps refactoring options open if requirements change later.
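
A small sketch of that idea with a hypothetical analytics client: a single instance still exists, but it is owned by the composition root and passed down explicitly rather than reached for through a global.

final class AnalyticsClient {
    func track(_ event: String) {
        print("tracked: \(event)")
    }
}

final class CheckoutFeature {
    private let analytics: AnalyticsClient

    init(analytics: AnalyticsClient) {
        self.analytics = analytics
    }

    func completePurchase() {
        analytics.track("purchase_completed")
    }
}

// Composition root: owns the single instance and injects it where needed.
struct AppComposition {
    private let analytics = AnalyticsClient()

    func makeCheckoutFeature() -> CheckoutFeature {
        CheckoutFeature(analytics: analytics)
    }
}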

What Is a Composition Root?

A composition root is the place in an application where object graphs are assembled. It is the point where concrete implementations are chosen, dependencies are created, and fully configured objects are wired together before being handed off to the rest of the system.

The key idea is separation of responsibility. Business logic should focus on behavior, not on how its collaborators are constructed. A composition root takes on that responsibility explicitly. Outside of this boundary, code should receive dependencies rather than create them, and it should remain unaware of how those dependencies are configured internally.

A minimal example:

// Composition root
func makeFeature() -> Feature {
    let network = Network()
    let api = API(network: network)
    let repository = Repository(api: api)
    return Feature(repository: repository)
}

In practice, a composition root is not a specific pattern, class, or framework. It is an architectural decision. In a small application, it might live in an entry point such as Main or an application delegate. In a larger system, it often appears as a feature entry point, a coordinator, or a static factory exposed by a module. What matters is not where it lives, but that its role is clearly defined.

A composition root is also the place where trade-offs are allowed. Environment selection, conditional compilation, and knowledge of transitive dependencies are acceptable here because they are intentionally localized. By concentrating these decisions in a small number of well-known locations, the rest of the codebase remains simpler, more testable, and easier to evolve.
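
Continuing the earlier sketch, and assuming a hypothetical Environment enum and a baseURL parameter on Network, environment selection might look like this; conditional compilation would live in the same place:

import Foundation

enum Environment {
    case staging
    case production
}

// Composition root: environment selection is intentionally localized here.
func makeFeature(for environment: Environment) -> Feature {
    let baseURL: URL
    switch environment {
    case .staging:
        baseURL = URL(string: "https://staging.example.com")!
    case .production:
        baseURL = URL(string: "https://api.example.com")!
    }

    let network = Network(baseURL: baseURL)
    let api = API(network: network)
    let repository = Repository(api: api)
    return Feature(repository: repository)
}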

Seen this way, a composition root is not about enforcing a specific style of dependency injection. It is about making dependency creation explicit, visible, and constrained — so that the system can grow without relying on hidden global state or framework-level magic.

What Problem Does a Composition Root Actually Solve?

Without a clearly defined composition root, dependency creation tends to leak into business code. Objects begin to take responsibility not only for what they do, but also for how their collaborators are constructed. Over time, this leads to types becoming aware of details they should not care about: lower-level services, infrastructure choices, or dependencies that exist several layers below them.

This kind of knowledge creates tight coupling. When a dependency changes — gains a new requirement, needs a different configuration, or is replaced entirely — that change ripples upward through unrelated parts of the system. Classes that only depend on high-level abstractions suddenly need to be updated because they indirectly participated in constructing something deeper in the stack.

A composition root prevents this by enforcing a single direction of knowledge. Dependencies are created from the outside and passed inward, layer by layer. Each type only knows about the collaborators it directly interacts with, not how those collaborators were assembled or what they depend on internally. Construction details stop propagating through the system.
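
To make that direction of knowledge concrete, here is a before-and-after sketch reusing the types from the earlier example (LeakyFeature is hypothetical):

// Before: the feature participates in constructing its own dependency graph,
// so a change to Network ripples all the way up to here.
final class LeakyFeature {
    private let repository: Repository

    init(network: Network) {
        self.repository = Repository(api: API(network: network))
    }
}

// After: the feature only knows about the collaborator it actually uses.
final class Feature {
    private let repository: Repository

    init(repository: Repository) {
        self.repository = repository
    }
}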

This has a compounding effect on maintainability. Changes remain localized, refactoring becomes mechanical rather than invasive, and the dependency graph stays understandable even as it grows. Instead of spreading construction logic across dozens of initializers, the system gains a small number of explicit places where dependency wiring is allowed — and nowhere else.

In other words, a composition root does not eliminate complexity. It contains it. By making dependency assembly an explicit responsibility with clear boundaries, the rest of the codebase is free to remain focused, decoupled, and resilient to change.

Why DI Frameworks Are Not Always Helpful — and Sometimes Actively Harmful

Dependency injection frameworks exist for a reason. In the right context, they can reduce boilerplate, standardize wiring across teams, and unlock patterns that are hard to implement consistently by hand. The problem is that many teams adopt them for the wrong reason: not because the project truly needs the trade-offs, but because the framework is popular, familiar, or “what everyone uses.”

Introducing a DI framework should be a deliberate decision, because it comes with real costs.

First, it adds a third-party dependency to the most load-bearing part of your architecture. When something goes wrong, you cannot fix it quickly in the same way you can fix your own code. You do not fully control the implementation unless you fork it—and once you fork, you effectively trade upstream support for ownership. On top of that, DI frameworks tend to be non-trivial: their internals are rarely approachable for an average engineer under time pressure, which makes debugging and patching even harder.

Second, maintenance can end at any time. A library may stop being updated, fall behind language or platform changes, or become incompatible with new toolchains. When that happens, you are not dealing with an isolated utility—you are dealing with a dependency that is wired through a large portion of the codebase. The result is a new form of technical debt: a migration project that is both expensive and risky precisely because the framework is so deeply embedded.

Third, a framework increases onboarding and operational complexity. New engineers must learn not only your architecture, but also the framework’s mental model, conventions, and failure modes. In practice, many people end up copying existing patterns without understanding them. This is where issues become subtle: a missing registration, a mis-scoped dependency, or an implicit resolution rule can stay invisible during development and surface later as a crash or a production-only bug.

Fourth, debugging becomes harder because control flow is no longer explicit. When dependencies are resolved indirectly, understanding “what was actually injected” often requires stepping through container configuration, registrations, scopes, and runtime resolution. Even when the framework is working correctly, this indirection increases the time it takes to diagnose problems.

Finally, some DI approaches can quietly reintroduce global state under a different name. A container that is accessible everywhere and stores long-lived instances may be functionally indistinguishable from a singleton—except now it is harder to see, harder to reason about, and easier to grow into an unbounded “god object” that everything depends on.

None of this means DI frameworks are bad. It means they are powerful tools with non-obvious consequences. If you choose one, do it because you understand the cost model and you know which problems you are buying your way out of—not because adopting a framework feels like the default next step.

On-Demand Dependency Creation: When Not Everything Is Available Up Front

In real applications, not all dependencies are available at the moment a feature is composed. Some objects are only needed under specific conditions, while others depend on information that appears later — user choices, navigation state, permissions, or runtime context. In these cases, trying to construct the entire dependency graph up front either becomes wasteful or simply impossible.

This is where on-demand dependency creation becomes useful. Instead of passing an object directly, we pass a way to create it later. Conceptually, this can be as simple as a function that returns a dependency when called. The important shift is that dependency creation is deferred until the moment it is actually needed.

A common mistake is to treat this as a special case that requires new abstractions or complex infrastructure. In practice, the opposite is true. On-demand creation often simplifies the system by making the timing of dependency construction explicit.

A minimal example illustrates the idea:

final class Feature {
    // The feature stores a way to create the service, not the service itself.
    private let makeService: () -> Service

    init(makeService: @escaping () -> Service) {
        self.makeService = makeService
    }

    func performAction() {
        // The service is created only at the moment it is needed.
        let service = makeService()
        service.execute()
    }
}

Here, Feature does not hold a Service instance. Instead, it receives a closure that knows how to create one. This avoids constructing the service unless it is actually used, and it removes the need for the feature to know how the service is configured.

As systems grow, this pattern is often formalized using factories or providers. The idea remains the same, but the intent becomes clearer. Dependencies that are always available are supplied through initializers, while dependencies that only become available later are passed as arguments to factory methods.
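
A sketch of that split with hypothetical names (reusing the API type from the earlier examples): the API client is known when the factory is composed, while the selected account ID only appears at runtime and is passed to the factory method.

// Supplied through the initializer: dependencies available at composition time.
// Passed as an argument: information that only appears later.
struct AccountDetailsFactory {
    private let api: API

    init(api: API) {
        self.api = api
    }

    func makeAccountDetails(for accountID: String) -> AccountDetailsFeature {
        AccountDetailsFeature(accountID: accountID, api: api)
    }
}

final class AccountDetailsFeature {
    private let accountID: String
    private let api: API

    init(accountID: String, api: API) {
        self.accountID = accountID
        self.api = api
    }
}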

Whether this is expressed through factories, provider objects, or plain closures is mostly a matter of style and scale. What matters is the principle: dependency injection does not require all dependencies to exist at the same time. By deferring creation intentionally, we avoid leaking construction details upward, keep types decoupled, and make real-world flows easier to model.

On-demand creation does not make dependency injection more complex. It localizes when dependencies are created — and that clarity is often exactly what a growing system needs.

How Feature Modules Should Be Composed

When an application is split into modules, the structure of the dependency tree becomes just as important as the dependencies themselves. A useful mental model is this: a feature knows how to assemble itself. It exposes a single entry point that allows the application to create and configure it at the appropriate place—often near the root of the app, such as in an AppCoordinator or a root flow controller.

What features should not do is assemble each other. A feature is a boundary, not a toolbox. Its responsibility is to define how it is composed internally and what it exposes externally, not to reach sideways into other features or recreate their internals. For that reason, a feature should provide a single, well-defined composition entry point—the place where its internal dependency graph is built.

There is an interesting parallel here with a concept described by Carlos Castaneda in his books: the assemblage point. In that context, it describes the point where perception is organized. In software, a feature’s composition entry point plays a similar role—it is where the feature’s internal structure comes together and becomes usable as a whole.

Most other APIs inside a feature module should rarely be public. Internal services, infrastructure details, and wiring logic should remain private. Public APIs typically fall into two categories: models that are required to assemble the feature, and the results or outputs produced by the feature itself. Everything else benefits from staying hidden, which preserves modularity and reduces accidental coupling.

How this entry point is implemented is a team-level decision. Some teams prefer factories, others use static functions, builders, or dedicated setup types. The specific mechanism matters less than the constraint it enforces: there is exactly one place where a feature is assembled, and that place is intentional and easy to find.
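
As one possible shape (a sketch, not a prescription), a hypothetical ProfileFeature module could expose a single static entry point together with its input and output models, while the internal wiring stays hidden:

import UIKit

// Public surface of the module: input, output, and one entry point.
public struct ProfileFeatureInput {
    public let userID: String

    public init(userID: String) {
        self.userID = userID
    }
}

public enum ProfileFeatureOutput {
    case didLogOut
    case didUpdateProfile
}

public enum ProfileFeature {
    // The single composition entry point: the only place where
    // the feature's internal graph is assembled.
    public static func make(
        input: ProfileFeatureInput,
        onOutput: @escaping (ProfileFeatureOutput) -> Void
    ) -> UIViewController {
        let repository = ProfileRepository(userID: input.userID)
        let viewModel = ProfileViewModel(repository: repository, onOutput: onOutput)
        return ProfileViewController(viewModel: viewModel)
    }
}

// Everything below stays internal to the module.
struct ProfileRepository {
    let userID: String
}

final class ProfileViewModel {
    private let repository: ProfileRepository
    private let onOutput: (ProfileFeatureOutput) -> Void

    init(repository: ProfileRepository, onOutput: @escaping (ProfileFeatureOutput) -> Void) {
        self.repository = repository
        self.onOutput = onOutput
    }
}

final class ProfileViewController: UIViewController {
    private let viewModel: ProfileViewModel

    init(viewModel: ProfileViewModel) {
        self.viewModel = viewModel
        super.init(nibName: nil, bundle: nil)
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }
}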

It is also important to be precise about scope. Not every module is a feature. Foundational modules or shared utilities follow different rules and serve different purposes. The guidelines here apply specifically to feature modules—units that represent cohesive user-facing behavior. Other module types introduce different trade-offs, which we will discuss later.

Conclusion

Early in our careers, it is easy to be impressed by complex code. Sophisticated abstractions, clever indirection, and intricate systems can feel like signs of maturity. Over time, that perspective tends to change. What truly stands out in a long-lived codebase is not how clever it is, but how easy it is to read, understand, and debug.

The moments that inspire confidence later on are different. Not “wow, this is complex”, but “wow, this is simple—and thoughtful.” That kind of simplicity is rarely accidental. It is usually the result of applying well-known principles consistently, even when doing so feels repetitive or boring.

Most of these rules are not new. They exist because they help teams ship software without turning everyday development into a constant exercise in firefighting. The key is to define clear boundaries early, make deliberate trade-offs, and then have the discipline to follow those constraints over time.

Dependency injection is one facet of those boundaries. Used thoughtfully, it becomes a powerful tool: it clarifies responsibilities, reduces hidden coupling, and keeps complexity contained. Not by introducing more machinery, but by making structure explicit. When applied with intention, DI stops being an abstract concept and starts doing what it should—removing friction, improving quality, and making the system easier to evolve.

Good architecture is rarely about inventing something new. More often, it is about choosing the right constraints—and respecting them long enough to see the benefits.