Whether you’re maintaining a list of delegates or debugging a stubborn retain cycle, tools for managing references and monitoring memory can save hours of frustration. In this article, I’ll walk through three lightweight but powerful building blocks you can introduce into your Swift toolkit:
- Tools for working with collections of weak objects
- Their thread-safe counterparts
- A memory leak monitor you can use in coordinator-based architectures
Let’s start with the basics.
1. Managing a Collection of Weak References
To store weak references in collections, you can use the native NSHashTable<AnyObject>.weakObjects() provided by Foundation. It is a robust and time-tested way to manage sets of weakly referenced objects, especially when order and duplicates are not a concern.
import Foundation

private final class ReferenceRepository {
    private let references = NSHashTable<AnyObject>.weakObjects()

    func count(with reference: AnyObject) -> Int {
        references.add(reference)
        return references.count
    }
}
However, in some cases you might prefer more flexibility or need array-like behavior (preserving order, allowing duplicates, etc.). For such situations, you can implement a custom WeakObject wrapper that holds a weak reference using a simple closure-based approach. This gives you more control over how weak references are stored and accessed, which is particularly useful when building a WeakArray.
WeakArray is a property wrapper around an array of WeakObject instances. Each WeakObject holds a weak reference internally using a closure-based approach to preserve type information and support any class-constrained generic type. When accessing the array, WeakArray automatically filters out deallocated objects, ensuring that your list stays clean and memory-safe without manual cleanup. This makes it an ideal building block for delegate multicast patterns, event hubs, or any loosely coupled observer system.
@propertyWrapper
struct WeakArray<Element> {
    private var storage = [WeakObject<Element>]()

    // Reads drop entries whose objects have been deallocated;
    // writes re-wrap every element in a fresh WeakObject.
    var wrappedValue: [Element] {
        get { storage.compactMap { $0.value } }
        set { storage = newValue.map { WeakObject($0) } }
    }

    // Supports applying the wrapper with an initial value,
    // e.g. `@WeakArray var subscribers: [MyDelegate] = []`.
    init(wrappedValue: [Element] = []) {
        storage = wrappedValue.map { WeakObject($0) }
    }
}

final class WeakObject<T> {
    var value: T? { handler() }
    private let handler: () -> T?

    init(_ value: T) {
        // Box the value as AnyObject and capture it weakly inside a closure,
        // so the generic type information survives without retaining the object.
        let object = value as AnyObject
        handler = { [weak object] in object as? T }
    }
}
You can use this property wrapper to safely store and notify multiple delegates:
protocol MyDelegate: AnyObject {
    func didUpdate()
}

final class EventBroadcaster {
    @WeakArray private var subscribers: [MyDelegate] = []

    func subscribe(_ subscriber: MyDelegate) {
        subscribers.append(subscriber)
    }

    func notifyAll() {
        subscribers.forEach { $0.didUpdate() }
    }
}
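To see the automatic cleanup in action, here’s a minimal sketch (Listener is just an illustrative conformer): once the last strong reference to a subscriber goes away, the next read silently drops it.

final class Listener: MyDelegate {
    func didUpdate() { print("updated") }
}

let broadcaster = EventBroadcaster()
var listener: Listener? = Listener()
broadcaster.subscribe(listener!)
broadcaster.notifyAll() // prints "updated"

listener = nil          // the last strong reference goes away
broadcaster.notifyAll() // prints nothing: the deallocated subscriber is filtered out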
Thread-safety
When working with collections of weak references, especially when those collections need to be accessed from multiple threads, it’s essential to ensure thread safety. AtomicWeakArray solves this by using a concurrent DispatchQueue with barrier synchronization to safely coordinate reads and writes. The internal array holds weak references to avoid retaining the elements and creating reference cycles.
Crucially, the entire collection is wrapped in a class. This design ensures reference semantics, meaning that entities using AtomicWeakArray won’t get copies of the collection but will instead share access to the same instance. Since all interactions go through this single class instance, we avoid the pitfalls of value-type copying and guarantee that only one place modifies the collection.
final class AtomicWeakArray<Element> {
    private let queue = DispatchQueue(
        label: "livsycode.atomic-weak-array.queue",
        qos: .default,
        attributes: .concurrent
    )
    private var storage: [WeakObject<Element>] = []

    var all: [Element] {
        queue.sync {
            storage.compactMap { $0.value }
        }
    }

    func append(_ newElement: Element) {
        queue.async(flags: .barrier) {
            self.storage.append(WeakObject(newElement))
        }
    }

    func removeAll() {
        queue.async(flags: .barrier) {
            self.storage.removeAll()
        }
    }

    func forEach(
        _ body: (Element) throws -> Void
    ) rethrows {
        try queue.sync {
            try storage.compactMap { $0.value }.forEach(body)
        }
    }

    var count: Int {
        queue.sync {
            storage.compactMap { $0.value }.count
        }
    }
}
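As a quick usage sketch (ThreadSafeBroadcaster is an illustrative name, not part of the toolkit above), the same multicast-delegate idea as EventBroadcaster becomes safe to call from any thread:

final class ThreadSafeBroadcaster {
    private let subscribers = AtomicWeakArray<MyDelegate>()

    func subscribe(_ subscriber: MyDelegate) {
        subscribers.append(subscriber)
    }

    func notifyAll() {
        // Only subscribers that are still alive receive the call.
        subscribers.forEach { $0.didUpdate() }
    }
}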
Note
By the way, you can achieve thread-safety with Apple’s new Synchronization framework, too. You can read about it here.
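As a minimal sketch of that approach (assuming iOS 18 / macOS 15 and a Swift 6 toolchain, where Mutex ships in the Synchronization framework), a Mutex can guard shared state with the same exclusive access that the GCD barrier gives us above:

import Synchronization

// AtomicCounter is just an illustrative example: Mutex.withLock provides
// exclusive access to the protected value, replacing the concurrent
// queue + barrier setup shown earlier.
final class AtomicCounter: Sendable {
    private let value = Mutex(0)

    @discardableResult
    func increment() -> Int {
        value.withLock { count in
            count += 1
            return count
        }
    }

    var current: Int {
        value.withLock { $0 }
    }
}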
2. Ensuring Thread Safety with AtomicDictionary
In multi-threaded Swift code, race conditions can easily occur when multiple threads access and mutate shared collections. AtomicDictionary addresses this by wrapping a dictionary in a thread-safe interface using Grand Central Dispatch. Internally, it uses a concurrent DispatchQueue for reads and a .barrier flag for writes, ensuring exclusive access during mutations. This design provides high performance for frequent reads while maintaining safety for concurrent writes.
It’s important that the dictionary is wrapped in a reference type like a class. Swift’s native collections, such as Dictionary, are value types, and assigning or modifying them across threads can lead to unexpected copies and data races. By encapsulating the dictionary inside a class, we maintain a single reference to the underlying storage and ensure synchronized access through the dedicated dispatch queue. This avoids race conditions and ensures memory coherence across threads.
In addition to standard subscripting, AtomicDictionary also includes a convenience accessor, subscript(key:default:), which initializes and inserts a value only if it’s absent, a common pattern in caching or deduplication scenarios.
class AtomicDictionary<Key, Value> where Key: Hashable {
    private let queue = DispatchQueue(
        label: "livsycode.atomic-dictionary.queue",
        qos: .default,
        attributes: .concurrent
    )
    private var storage: [Key: Value] = [:]

    subscript(key: Key) -> Value? {
        get {
            queue.sync {
                storage[key]
            }
        }
        set {
            queue.async(flags: .barrier) {
                self.storage[key] = newValue
            }
        }
    }

    subscript(key: Key, default defaultValue: @autoclosure () -> Value) -> Value {
        get {
            // Check and insert inside a single barrier block, so two threads
            // can't both create a value for the same key.
            queue.sync(flags: .barrier) { () -> Value in
                if let existing = storage[key] {
                    return existing
                }
                let newValue = defaultValue()
                storage[key] = newValue
                return newValue
            }
        }
        set {
            self[key] = newValue
        }
    }
}
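Here’s a brief usage sketch (the cache and key names are purely illustrative) of the caching pattern the convenience subscript enables:

// A tiny per-key cache of formatters.
let formatters = AtomicDictionary<String, DateFormatter>()

func formatter(for identifier: String) -> DateFormatter {
    // The @autoclosure is evaluated only when the key is missing,
    // so the DateFormatter is created at most once per identifier.
    formatters[identifier, default: DateFormatter()]
}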
Note
By the way, you can read more about atomic collections here.
3. Detecting Memory Leaks with MemoryLeakMonitor
Now, let’s combine the previous two tools into something more: a memory leak monitor.
Example: Coordinator Retention
In the Coordinator pattern, it’s common to have a base coordinator class with multiple subclasses managing flows. Often, a coordinator should exist in only one instance at a time. But when deallocation doesn’t happen as expected — say, due to a retained closure or a strong reference cycle — tracking down the issue is hard.
MemoryLeakMonitor is a lightweight utility designed to help detect memory leaks in Swift applications by tracking how many instances of a particular class remain in memory over time. It works by requiring monitored classes to conform to the MemoryLeakMonitorable protocol, which defines two properties: description (defaulting to the class name) and max (the expected maximum number of instances allowed to be alive simultaneously).
protocol MemoryLeakMonitorable: AnyObject, CustomStringConvertible {
    var max: Int { get }
}

extension MemoryLeakMonitorable {
    // Default description: the runtime class name.
    var description: String {
        String(describing: type(of: self))
    }
}
Now the monitor itself:
final class MemoryLeakMonitor {
    private static let shared: MemoryLeakMonitor = .init()
    private let repository: AtomicDictionary<String, ReferenceRepository> = .init()

    private init() {}

    static func validate(_ instance: MemoryLeakMonitorable) {
        let count = shared.repository[instance.description, default: .init()].count(with: instance)
        assert(
            count <= instance.max,
            "Memory leak detected! \(instance.description) instances count: \(count)"
        )
    }
}
private final class ReferenceRepository {
    @WeakArray private var references: [AnyObject] = []

    func count(with reference: AnyObject) -> Int {
        references.append(reference)
        return references.count
    }
}
Internally, MemoryLeakMonitor stores weak references to tracked instances using a ReferenceRepository, which relies on a custom @WeakArray property wrapper. This approach ensures that references do not retain the objects, allowing them to be deallocated as usual. The repositories are stored in an AtomicDictionary keyed by the class description, making access thread-safe and suitable for use in concurrent environments.
A common use case for this tool is monitoring view models, coordinators, or any component that is expected to be short-lived or unique. By calling MemoryLeakMonitor.validate(self) in the initializer of such a class, you can assert that the number of living instances doesn’t exceed expectations, helping you catch retain cycles or forgotten deallocations early in development. This makes it a practical and non-intrusive solution for memory management diagnostics.
class Coordinator: MemoryLeakMonitorable {
    var max: Int { 1 }

    init() {
        #if DEBUG
        MemoryLeakMonitor.validate(self)
        #endif
    }
}

final class ArchiveCoordinator: Coordinator {
    override var max: Int { 2 }
}
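To see the check in action, here’s a minimal sketch of what happens in a DEBUG build (the variable names are illustrative):

let appCoordinator = Coordinator()   // one live instance: within the limit of 1

// While appCoordinator is still alive, a second instance raises the live count
// to 2, so validate(_:) trips the assertion:
// "Memory leak detected! Coordinator instances count: 2"
let duplicateCoordinator = Coordinator()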
Thread Safety Note
While AtomicDictionary ensures that access to individual ReferenceRepository instances is thread-safe, it does not guarantee thread safety within those repositories. Each ReferenceRepository contains a weak array, and since @WeakArray is not inherently thread-safe, concurrent access to it may lead to data races and crashes.
To address this, we can use an AtomicWeakArray inside ReferenceRepository. This way, both access to the repository itself and operations on its internal storage are synchronized across threads. Alternatively, if you’re certain that MemoryLeakMonitor.validate(_:) will only be called from a single thread (e.g. the main thread), you might skip additional synchronization.
Here’s a quick look at a thread-safe version of ReferenceRepository:
private final class ReferenceRepository {
    private let references: AtomicWeakArray<AnyObject> = .init()

    func count(with reference: AnyObject) -> Int {
        references.append(reference)
        return references.count
    }
}
Wrapping Up
Each of these tools is useful on its own:
- @WeakArray and AtomicWeakArray help manage weak references in a safe, concise way.
- AtomicDictionary and AtomicWeakArray provide concurrency-safe access to mutable state.
- MemoryLeakMonitor builds on these tools to detect unexpected object retention.
If you’re working in an architecture where deallocation is critical (such as coordinators, view models, or services), this pattern can help you catch issues early — with minimal boilerplate.