Greetings, traveler!
Most software systems begin with a limited scope: the number of components is small, and the interactions between them are straightforward. As the system evolves, its structure becomes more explicit. Responsibilities are separated, layers emerge, and shared services appear. At this point, attention moves from individual components to the way they are assembled.
Dependency Injection emerges as a response to this coordination problem. The goal is to manage relationships between components in a consistent and explicit way. It addresses the question of how a collection of well-defined parts becomes a working application.
What We Mean by Dependency Injection
Dependency Injection is best understood as a design principle. At its core, it describes a simple rule: an object should receive the collaborators it depends on from the outside, instead of creating them internally.
This principle stands in contrast to several common alternatives. Global singletons hide dependencies behind static access points and make relationships implicit. Service locators centralize creation, but allow any part of the code to request arbitrary services at runtime, which obscures actual requirements. Instantiating dependencies directly inside a class couples it to concrete implementations and makes change harder over time. All of these approaches solve short-term convenience problems while introducing long-term structural cost.
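To make the last point concrete, here is a small sketch of the coupled alternative. The concrete types are hypothetical stand-ins:
final class TightlyCoupledArticleListViewModel {
    // The concrete implementations are chosen here, inside the class.
    // Swapping them for tests, previews, or a different backend
    // means editing this type.
    private let repository = RemoteArticlesRepository()
    private let analytics = DefaultAnalyticsService()
}
Every decision about which implementation to use is buried inside the consumer. Dependency Injection moves that decision outward, to whoever creates the object.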
Dependency Injection can be applied in different forms. Constructor injection makes dependencies explicit and enforces completeness at creation time.
final class ArticleListViewModel {
    private let repository: ArticlesRepository
    private let analytics: AnalyticsService

    init(
        repository: ArticlesRepository,
        analytics: AnalyticsService
    ) {
        self.repository = repository
        self.analytics = analytics
    }
}
Property injection defers configuration until after initialization and is often used in UI-oriented code.
final class ProfileViewModel {
    var analytics: AnalyticsService!
    var profileService: ProfileService!
}

let vm = ProfileViewModel()
vm.analytics = analyticsService
vm.profileService = profileService
Method injection supplies collaborators only for the duration of a specific operation.
final class ReportGenerator {
    func generate(
        report: Report,
        exporter: ReportExporter,
        logger: Logger
    ) {
        logger.log("Generating report")
        exporter.export(report)
    }
}
These techniques differ in mechanics, but they share the same underlying idea: dependencies are supplied, not owned.
A container is not part of this definition. It is one possible mechanism for implementing Dependency Injection at scale. Confusing the two leads to misunderstandings. DI describes how responsibilities are assigned. Containers describe how that decision is executed in a concrete system.
A Note on Platform Support
Apple does not provide a general-purpose Dependency Injection framework. While recent platform releases introduced a narrow form of dependency wiring for App Intents and related system integrations, this mechanism is highly specialized and does not address dependency management inside application architectures.
In contrast, the Android ecosystem has long treated DI as a first-class architectural concern. Frameworks such as Dagger and Hilt are supported by extensive official guidance, and Google invests heavily in documenting architectural patterns that assume explicit dependency composition as a baseline.
SwiftUI’s @Environment is sometimes mentioned in this context, but it serves a different purpose. It propagates values through the view hierarchy and is tightly coupled to UI structure. It does not manage object creation, lifetimes, or system-wide assembly, and therefore does not replace container-based DI. It complements architectural decisions rather than defining them.
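For clarity, this is the kind of job @Environment is designed for: passing a value down the view tree. The Theme type and environment key below are illustrative.
import SwiftUI

struct Theme {
    var accent: Color = .blue
}

private struct ThemeKey: EnvironmentKey {
    static let defaultValue = Theme()
}

extension EnvironmentValues {
    var theme: Theme {
        get { self[ThemeKey.self] }
        set { self[ThemeKey.self] = newValue }
    }
}

struct ArticleListView: View {
    // The view reads a value supplied by an ancestor.
    // No object graph is constructed or owned here.
    @Environment(\.theme) private var theme

    var body: some View {
        Text("Articles")
            .foregroundStyle(theme.accent)
    }
}

// Higher in the hierarchy:
// ArticleListView().environment(\.theme, Theme(accent: .orange))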
When Constructor Injection Stops Being Enough
Constructor injection is often the first and most effective application of Dependency Injection. It makes dependencies explicit, enforces correct initialization, and keeps object contracts clear. Up to a point, this approach scales well and remains easy to reason about.
As systems grow, a different kind of pressure appears. Dependencies start to propagate upward. Objects become assemblies of other objects, which themselves require further collaborators. Initializers expand, not because the design is careless, but because the object legitimately relies on many parts of the system. At the same time, the responsibility for creating these objects becomes fragmented. Each layer knows how to build the next one, but no single place reflects the full structure of the application.
The issue is not the number of dependencies themselves. It is the absence of a clear boundary where the complete picture is visible. A composition root addresses this by defining a dedicated location where the object graph is assembled. In practice, container-based approaches emerge as an alternative way to express this same idea. They provide a centralized mechanism for describing how components are created and connected, while allowing the rest of the system to remain focused on behavior rather than construction.
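At its simplest, a composition root is just one type that owns construction. A hand-written sketch, using illustrative types around the view model from the earlier example:
final class CompositionRoot {
    // Long-lived services, created once and shared by reference.
    private let apiClient = APIClient()
    private let analytics: AnalyticsService = DefaultAnalyticsService()

    // Feature-level objects are built on demand, but always from this one place.
    func makeArticleListViewModel() -> ArticleListViewModel {
        ArticleListViewModel(
            repository: RemoteArticlesRepository(apiClient: apiClient),
            analytics: analytics
        )
    }
}
A container expresses the same knowledge declaratively, and adds lifetime management on top of it.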
What a Container Actually Is
The term “container” often carries unnecessary weight. In practice, it refers to a very limited and concrete concept. A container is a place where a system describes what can be created, how it is created, and how long the resulting objects should live. Nothing more is implied by the term itself.
From this perspective, a container solves three related tasks. First, it defines creation: the rules that describe how an instance of a given abstraction comes into existence. Second, it defines wiring: how objects receive the collaborators they depend on. Third, it defines lifetime management: whether an instance is reused, cached, shared within a certain scope, or recreated on each request. These concerns exist in every non-trivial system, whether they are handled explicitly or not.
It is important to separate the idea of a container from two common misconceptions. A container does not have to be a global singleton, even though many implementations expose a shared instance for convenience. The lifetime of the container itself is an architectural decision. A container is also not the same as a service locator. In a locator-based design, code actively asks for dependencies at arbitrary points, obscuring intent. In a container-based design, the container defines construction rules, while the rest of the system consumes already-defined dependencies according to those rules.
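Stripped of framework conveniences, the concept is small enough to sketch by hand. The toy container below is illustration only, with hypothetical service types at the call site: not thread-safe, no compile-time checking, but it shows creation, wiring, and lifetime in one place.
final class ToyContainer {
    enum Lifetime { case transient, singleton }

    private var factories: [String: (ToyContainer) -> Any] = [:]
    private var cache: [String: Any] = [:]

    // Creation + lifetime: how an instance is made, and whether it is cached.
    func register<T>(_ type: T.Type,
                     lifetime: Lifetime = .transient,
                     _ make: @escaping (ToyContainer) -> T) {
        let key = String(describing: type)
        factories[key] = { container in
            if lifetime == .singleton, let cached = container.cache[key] {
                return cached
            }
            let instance = make(container)
            if lifetime == .singleton {
                container.cache[key] = instance
            }
            return instance
        }
    }

    // Wiring: factories receive the container and can resolve their collaborators.
    func resolve<T>(_ type: T.Type) -> T? {
        factories[String(describing: type)]?(self) as? T
    }
}

// At the composition boundary:
let container = ToyContainer()
container.register(AnalyticsService.self, lifetime: .singleton) { _ in DefaultAnalyticsService() }
container.register(ArticlesRepository.self) { _ in RemoteArticlesRepository() }
container.register(ArticleListViewModel.self) { c in
    ArticleListViewModel(
        repository: c.resolve(ArticlesRepository.self)!,
        analytics: c.resolve(AnalyticsService.self)!
    )
}
Real containers add scopes, thread safety, and a safer resolution surface, but they answer exactly these three questions.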
Lifetime as a First-Class Architectural Concern
In many discussions about Dependency Injection, creation tends to dominate the conversation. In practice, lifetime is often the more critical dimension. How long an object lives, and under which conditions it is reused or discarded, has a direct impact on correctness, memory usage, and system behavior over time.
Most applications operate with several overlapping lifetime categories. Some objects are application-wide and are expected to exist for the duration of the process. Others belong to a specific feature or user flow and should disappear when that flow ends. Some are tied to a single operation or request and are meaningful only within that narrow context. There are also ephemeral objects that are created, used briefly, and immediately discarded. These categories coexist in the same codebase and frequently interact with one another.
Reducing this complexity to a single rule such as “one object equals one lifetime” does not reflect how real systems behave. The same abstraction may require different lifetimes depending on where and how it is used. Without a clear strategy, lifetime decisions become scattered across the codebase and are enforced indirectly through conventions and assumptions.
A container provides a centralized way to express and enforce these decisions. By making lifetime management an explicit part of object creation, it allows the system to reason about scope and reuse in a consistent manner. This shifts lifetime from an implicit side effect of construction into a deliberate architectural choice.
Scopes and the Shape of Object Lifetimes
A container becomes more than a simple registry once scopes are introduced. Scopes describe the context in which an object instance is reused and define the boundaries of its lifetime. Instead of treating all objects uniformly, the system can express different rules for different kinds of collaborators.
Some scopes are container-wide and correspond to objects that are shared for as long as the container itself exists. Others are limited to a single resolution pass, where instances are reused only while a particular object graph is being constructed. There are scopes that align with user flows, allowing a set of related objects to exist together and be released when the flow ends. Session-oriented scopes group dependencies around concepts such as authentication, where a login event establishes a boundary and a logout event clears it.
UI-driven applications are especially sensitive to these distinctions. Screens appear and disappear, navigation stacks change, and multiple flows can coexist. An object that outlives its intended scope can easily hold onto stale state or resources. Conversely, an object that is recreated too often may lose continuity that the user expects. Managing these lifetimes manually quickly becomes error-prone.
By encoding scopes directly into the container, these concerns are handled in a consistent way. A screen can rely on dependencies that live exactly as long as the screen does. A multi-step flow can share state across several views without leaking beyond its boundary. Login sessions can establish and tear down their own dependency sets. Previews and tests can define short-lived containers with tightly controlled lifetimes. The container serves as the place where these rules are expressed and enforced.
Scope Examples in Practice
To make scopes concrete, consider a few small examples using Factory, a container-based Dependency Injection library for Swift. In Factory, a scope defines how long a resolved instance is reused. Unless configured otherwise, each resolution produces a new instance, and you opt into reuse by attaching a scope to the factory definition.
For application-wide services, a singleton scope keeps one instance for the life of the container:
extension Container {
    var analytics: Factory<AnalyticsService> {
        self { AnalyticsServiceImpl() }.singleton
    }
}
For dependencies that should survive until you explicitly clear them, cached stores the instance until the cache is reset:
extension Container {
    var sessionStore: Factory<SessionStore> {
        self { SessionStore() }.cached
    }
}

// On logout
Container.shared.manager.reset(scope: .cached)
When you want reuse without forcing the container to retain the instance indefinitely, shared keeps a weakly-held instance and returns it while someone else still has a strong reference:
extension Container {
    var paymentDraft: Factory<PaymentDraft> {
        self { PaymentDraft() }.shared
    }
}
Graph is useful during object graph construction. It reuses instances only within a single resolution pass, then discards them once that pass completes:
extension Container {
    var apiClient: Factory<APIClient> {
        self { APIClient() }.graph
    }

    var repo: Factory<Repo> {
        self { Repo(api: self.apiClient()) }
    }
}
These small differences in caching and ownership translate into meaningful architectural control: you can model app-wide services, flow-bound state, session lifetimes, and one-shot construction behavior without spreading lifecycle decisions across unrelated parts of the codebase.
Compile-Time Safety and Dependency Registration
Not all container-based DI approaches provide the same level of safety. A major distinction lies in whether dependency access and configuration errors are detected at compile time or deferred until runtime. This difference has practical consequences for both correctness and maintainability.
Frameworks such as Swinject rely on runtime registration and lookup. Dependencies are registered by type and resolved dynamically, often through calls like resolve(Service.self). This allows a high degree of flexibility, but it also means that the compiler cannot verify whether a dependency has been registered. In large systems, these failures may appear far from the point of configuration and only under specific runtime conditions, which increases the cost of diagnosis.
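As a minimal sketch of that style (the repository protocol and its implementation are placeholders):
import Swinject

let container = Container()

container.register(ArticlesRepository.self) { _ in
    RemoteArticlesRepository()
}

// This compiles even if ArticlesRepository was never registered,
// or was registered under a different type; the mistake surfaces
// only here, at runtime, as a nil result.
let repository = container.resolve(ArticlesRepository.self)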
Factory takes a different approach by expressing dependencies as typed properties on a container. Access to a dependency goes through a concrete property, not a dynamic lookup. If a property does not exist, the code does not compile. If the return type does not match the expected abstraction, the compiler reports an error. This shifts a class of mistakes from runtime to build time.
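Reusing the analytics factory defined earlier, access looks roughly like this (SettingsViewModel is illustrative):
// A misspelled or missing property is a build error, not a runtime nil.
let analytics = Container.shared.analytics()

final class SettingsViewModel {
    // The property wrapper resolves through the same typed key path.
    @Injected(\.analytics) var analytics
}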
This does not eliminate all runtime risks. Configuration errors can still occur when a dependency is declared but intentionally left without a default implementation, as is common in multimodule setups.
A Brief Note on Multimodular Factory Setup
In Factory, dependencies are accessed through typed container properties, which the compiler can validate. In a single-module application this provides a strong, explicit surface for the dependency graph.
In a multimodular setup, the situation is more nuanced. A feature module may depend on an abstraction whose concrete implementation lives in the application target or in another module. The feature still needs a typed access point, but it cannot provide a default implementation without breaking module boundaries. As a result, compile-time safety applies to the access point itself, while the availability of an implementation becomes a configuration concern.
One option is to define a mandatory slot that fails immediately if it is not wired:
extension FeatureContainer {
    var paymentsAPI: Factory<any PaymentsAPI> {
        self { fatalError("PaymentsAPI is not configured") }
    }
}
This approach keeps feature code clean and makes misconfiguration obvious during development. The trade-off is that the failure occurs at runtime.
Another option is to model the dependency as optional:
extension FeatureContainer {
    var paymentsAPI: Factory<(any PaymentsAPI)?> {
        self { nil }
    }
}
}This avoids crashes when wiring is missing, but shifts the burden into feature code. Optional handling spreads quickly and can obscure configuration errors unless the dependency is genuinely optional by design.
A common compromise is to keep the slot mandatory and introduce a single assembly point responsible for wiring the feature. The assembly receives required dependencies explicitly and registers them before the feature is used:
final class PaymentsContainer: SharedContainer {
    @TaskLocal static var shared = PaymentsContainer()
    let manager = ContainerManager()
}

extension PaymentsContainer {
    // The feature's mandatory slot, declared with the same pattern shown above.
    var paymentsAPI: Factory<any PaymentsAPI> {
        self { fatalError("PaymentsAPI is not configured") }
    }
}

public struct PaymentsAssembly {
    public init(api: any PaymentsAPI) {
        PaymentsContainer.shared.paymentsAPI.register { api }
    }
}

final class PaymentsViewModel {
    @Injected(\PaymentsContainer.paymentsAPI) var paymentsAPI
}
For cases where a dependency should be mandatory during development but must not crash a released application, Factory offers promised(). When used as a factory definition, it triggers a failure in debug builds if the dependency is not registered, while returning nil in release builds.
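Sketched with the same feature slot as above (the exact shape may vary by Factory version):
extension FeatureContainer {
    var paymentsAPI: Factory<(any PaymentsAPI)?> {
        // Traps in debug builds when nothing has been registered,
        // resolves to nil in release builds.
        promised()
    }
}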
Another common pattern involves adaptors. When integrating third-party libraries or platform services, it is often preferable to wrap them behind a small protocol owned by the application.
public protocol Analytics {
    func event(name: String)
}

extension Container {
    public var analytics: Factory<any Analytics> {
        self { AnalyticsAdaptor() }
    }
}

private final class AnalyticsAdaptor: Analytics {
    func event(name: String) {}
}
Finally, some architectures require stricter separation. A core module that defines protocols and models may need to remain completely independent of any DI framework. In such cases, an additional wiring module can be introduced. This module depends on the core contracts and defines the container slots, while the application performs the final cross-wiring. The result is an extra level of indirection, but it preserves module independence and keeps dependency definitions out of the core domain.
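Sketched at a high level, with module and type names invented for illustration:
// CoreDomain module: contracts and models only, no DI framework import.
public protocol OrdersRepository {
    func loadOrders() async throws -> [Order]
}

// CoreWiring module: depends on CoreDomain and Factory, declares the slots.
import CoreDomain
import Factory

extension Container {
    public var ordersRepository: Factory<any OrdersRepository> {
        self { fatalError("OrdersRepository is not configured") }
    }
}

// App target: performs the final cross-wiring at startup.
import CoreWiring
import OrdersBackend

func wireDependencies() {
    Container.shared.ordersRepository.register { RemoteOrdersRepository() }
}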
Container-Based DI and Testability
Testability improves when construction and configuration are separated from behavior. Container-based DI makes this separation explicit. By concentrating object creation and wiring in one place, it becomes possible to alter the environment of a feature or a test without touching its internal logic.
Containers simplify testing in several ways. Unit tests can override specific dependencies with mocks while leaving the rest of the graph unchanged. Previews can assemble lightweight containers with deterministic data and short-lived scopes. Sandbox modes can swap infrastructure services, such as networking or persistence, without affecting feature code. In all cases, the system under test remains the same.
An important distinction exists between overriding a dependency and replacing it entirely. Overrides allow a test to temporarily substitute an implementation within an existing container setup. Full replacement constructs a new container tailored to a specific scenario. Both approaches are valid. Overrides are convenient for focused tests, while replacement is useful when strong isolation is required.
Scope management becomes a practical testing tool as well. Resetting a scope clears cached or shared instances and returns the container to a known state. This prevents hidden state from leaking between tests and avoids reliance on implicit ordering. Instead of manually tracking object lifetimes, tests interact with the container as a controlled boundary.
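With Factory, that boundary is easy to see in a test. MockAnalytics here is a hypothetical test double for the analytics service registered earlier:
import XCTest
import Factory

final class AnalyticsOverrideTests: XCTestCase {
    override func setUp() {
        super.setUp()
        // Return the container to a known state, then override
        // only the dependency this test cares about.
        Container.shared.manager.reset()
        Container.shared.analytics.register { MockAnalytics() }
    }

    func testOverrideIsVisibleThroughTheContainer() {
        // Everything resolved through the container now sees the mock,
        // while the rest of the graph stays untouched.
        XCTAssertTrue(Container.shared.analytics() is MockAnalytics)
    }
}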
Viewed this way, the container defines the edge of testable behavior. Code inside the boundary operates against abstractions. Code outside the boundary decides which implementations are in play. This division keeps tests precise, predictable, and aligned with how the system is assembled in production.
Common Mistakes and Anti-Patterns
Container-based DI reduces complexity only when its boundaries are respected. Many issues attributed to DI stem from how the container is used, rather than from the concept itself.
A frequent mistake is treating the container as a global service locator. When any part of the codebase can reach into a shared container and request arbitrary services, dependencies become implicit again. This recreates the same opacity as global singletons, with an additional layer of indirection.
Another common issue is allowing the container to leak into domain or business logic. Domain code should express behavior in terms of abstractions, not concern itself with how those abstractions are obtained. Once the container crosses this boundary, architectural clarity erodes and testability suffers.
Configuration and usage should also remain clearly separated. When registration logic is interleaved with dependency consumption, the object graph becomes difficult to reason about and lifecycle rules turn implicit. The absence of a clear composition root amplifies this problem by scattering assembly decisions across the system.
Finally, using a single container without scopes flattens all lifetimes into one. Short-lived objects begin to behave like long-lived ones, state leaks across unrelated parts of the application, and cleanup becomes unreliable. Scopes exist to model real boundaries. Ignoring them turns the container into a blunt instrument rather than a precise architectural tool.
When Container-Based DI Is Actually Warranted
Container-based DI is not a prerequisite for every project. In small codebases with a limited number of components, the overhead of introducing a container can outweigh its benefits. Simple construction logic, explicit initializers, and straightforward ownership rules are often sufficient and easier to maintain at that stage.
The balance shifts as systems grow. When the number of services increases, lifetimes begin to diverge, and features are developed in parallel, coordination becomes harder to manage informally. At that point, a container can reduce complexity by making assembly explicit and centralizing decisions that would otherwise be scattered across the codebase.
There are also cases where DI introduces more friction than value. If a project has stable requirements, minimal layering, and little need for test isolation or configuration flexibility, a container may add indirection without solving a real problem. In such environments, DI can feel like ceremony rather than structure.
Container-based DI becomes justified when the cost of managing dependencies manually exceeds the cost of maintaining the container. Signals include repeated wiring logic, unclear ownership of shared services, growing initializer chains, and difficulty testing features in isolation. In these situations, the container serves as a tool for restoring clarity.
Conclusion
Dependency Injection rarely appears fully formed at the start of a project. It emerges gradually as a system grows and its internal structure becomes more explicit. What begins as a few extracted services eventually turns into questions about ownership, lifetime, and assembly. In this sense, DI reflects the natural evolution of a codebase rather than a single architectural decision.
A container is only one possible response to these questions. It is a tool that helps express construction rules, manage lifetimes, and define boundaries between parts of the system. Treating the container as the goal leads to misplaced focus. The underlying architecture, the clarity of dependencies, and the separation of responsibilities matter far more than the specific mechanism used to implement them.
Different projects will arrive at different solutions. Some will rely on simple initializer-based composition. Others will benefit from a more structured container-based approach. The common thread is intentionality. Decisions about how dependencies are created and shared should be deliberate and visible.
A well-designed DI setup makes a system easier to understand: it makes the structure explicit, clarifies how parts fit together, and shows where responsibilities lie.
