Greetings, traveler!
Unit testing often comes up in iOS interviews, yet many developers have limited hands-on experience with it. Some teams barely use tests, some cover only a narrow slice of the codebase, and some rely on a few integration checks while calling them unit tests. Interviewers know that. They usually care less about whether you can recite XCTest APIs from memory and more about whether you understand what should be tested, how to keep tests useful, and how testing relates to architecture.
This article covers the practical knowledge that helps you give a strong interview answer. It also includes code examples you can discuss if the conversation gets more detailed.
What unit tests mean in iOS
In the iOS world, unit tests usually focus on isolated logic. That often includes business rules, data transformations, validation, view models, and small services with controlled dependencies. The point is to verify behavior in a fast and predictable way.
The default framework for this in Apple platforms is XCTest, which ships with Xcode and remains the standard option in most projects.
A very simple example looks like this:
import XCTest
@testable import YourApp

final class PriceFormatterTests: XCTestCase {
    func testFormatting() {
        let formatter = PriceFormatter()
        let result = formatter.format(100)
        XCTAssertEqual(result, "$100")
    }
}

There is nothing sophisticated about this example, and that is fine. In an interview, the important part is often the reasoning around the test rather than the test itself.
How tests are usually structured
A clean test often follows the Arrange, Act, Assert pattern. This structure is widely used because it makes the intent obvious.
func testLoginSuccess() {
    // Arrange
    let service = AuthService()

    // Act
    let result = service.login(username: "user", password: "1234")

    // Assert
    XCTAssertTrue(result)
}

What is worth testing
A good interview answer usually includes a clear sense of boundaries. In practice, unit tests are most useful for code that contains decision-making or transformation logic.
Common candidates include:
- view models
- business logic
- validation rules
- mappers
- use cases or interactors
- small services with mocked dependencies
Here is a simple view model example:
final class LoginViewModel {
    var username: String = ""
    var password: String = ""

    var isValid: Bool {
        !username.isEmpty && password.count > 3
    }
}

And the corresponding test:
func testValidation() {
    let viewModel = LoginViewModel()
    viewModel.username = "user"
    viewModel.password = "1234"
    XCTAssertTrue(viewModel.isValid)
}

This is the kind of example that works well in interviews because it reflects real application code more closely than testing a trivial math function.
Dependencies and why mocking matters
Once a type depends on networking, storage, analytics, or any other external collaborator, the test stops being truly isolated unless you control that dependency. This is where protocols, mocks, and stubs become useful.
Consider this example:
protocol APIService {
    func fetchUser() -> String
}

final class ProfileService {
    private let api: APIService

    init(api: APIService) {
        self.api = api
    }

    func load() -> String {
        api.fetchUser()
    }
}

A mock implementation for testing could look like this:
final class MockAPIService: APIService {
    func fetchUser() -> String {
        "mock_user"
    }
}

And the test:
func testProfileLoading() {
    let api = MockAPIService()
    let service = ProfileService(api: api)
    let result = service.load()
    XCTAssertEqual(result, "mock_user")
}

The main idea here matters more than the mock itself. A test should control its inputs and avoid depending on real networking, real databases, or timing-sensitive infrastructure.
Mock vs Stub
Interviewers sometimes ask about the difference between mocks and stubs, especially if the conversation moves beyond basics.
A stub provides predefined data so the test can run in a predictable way. A mock is often used when you want to verify interactions, such as whether a method was called.
A stub example:
final class StubAPIService: APIService {
    func fetchUser() -> String {
        "stub_user"
    }
}

A mock that records calls:
protocol AnalyticsTracking {
    func track(event: String)
}

final class MockAnalyticsTracker: AnalyticsTracking {
    private(set) var trackedEvents: [String] = []

    func track(event: String) {
        trackedEvents.append(event)
    }
}
}And a test that verifies interaction:
final class CheckoutViewModel {
    private let analytics: AnalyticsTracking

    init(analytics: AnalyticsTracking) {
        self.analytics = analytics
    }

    func completePurchase() {
        analytics.track(event: "purchase_completed")
    }
}

func testTrackingPurchaseCompletion() {
    let analytics = MockAnalyticsTracker()
    let viewModel = CheckoutViewModel(analytics: analytics)
    viewModel.completePurchase()
    XCTAssertEqual(analytics.trackedEvents, ["purchase_completed"])
}

This distinction is worth knowing because it shows that you understand test doubles beyond the generic word “mock.”
Async testing
Modern iOS code increasingly uses async and await, so it helps to be comfortable with asynchronous tests.
A straightforward example:
final class DataService {
    func load() async -> String {
        "data"
    }
}

func testAsyncLoad() async {
    let service = DataService()
    let result = await service.load()
    XCTAssertEqual(result, "data")
}

That covers the happy path, but interviews may also explore failures.
enum NetworkError: Error {
    case noConnection
}

final class FailingDataService {
    func load() async throws -> String {
        throw NetworkError.noConnection
    }
}

func testAsyncLoadFailure() async {
    let service = FailingDataService()
    do {
        _ = try await service.load()
        XCTFail("Expected load() to throw")
    } catch {
        XCTAssertEqual(error as? NetworkError, .noConnection)
    }
}

This kind of example usually lands well because it shows that you are thinking about outcomes rather than only the success case.
Testing older async code with expectations
Not every codebase has moved fully to async and await. Many production apps still use completion handlers, so it is useful to know XCTestExpectation.
final class LegacyService {
    func load(completion: @escaping (String) -> Void) {
        DispatchQueue.global().asyncAfter(deadline: .now() + 0.1) {
            completion("data")
        }
    }
}

func testLegacyAsyncLoad() {
    let expectation = self.expectation(description: "Completion is called")
    let service = LegacyService()

    service.load { result in
        XCTAssertEqual(result, "data")
        expectation.fulfill()
    }

    wait(for: [expectation], timeout: 1.0)
}

Even if you do not use this style often, mentioning it shows that you can work with older codebases as well.
Test lifecycle
Sometimes tests share setup code, and XCTest provides lifecycle hooks for that.
final class UserServiceTests: XCTestCase {
    var service: UserService!

    override func setUp() {
        super.setUp()
        service = UserService()
    }

    override func tearDown() {
        service = nil
        super.tearDown()
    }
}

There are also throwing variants such as setUpWithError and tearDownWithError. You do not need to dwell on them unless the conversation goes deeper, though knowing they exist is useful.
What makes code testable
This is where an interview answer starts to feel more senior. Testability has less to do with XCTest itself and more to do with design decisions.
Code tends to be easier to test when:
- dependencies are injected rather than created internally
- business logic is separated from UI
- global mutable state is avoided
- side effects are pushed to the edges of the system
- protocols are used where abstraction brings real value
- pure functions are kept pure
Here is an example of code that is harder to test:
final class Service {
    func load() -> String {
        Network.shared.request()
    }
}

This design hides the dependency inside the type, which makes the test harder to control. Injecting the dependency gives you much better leverage in tests and usually leads to cleaner architecture overall.
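For contrast, here is a hedged sketch of the same service with the dependency injected. NetworkRequesting and StubNetwork are hypothetical names standing in for whatever Network.shared actually exposes:

```swift
// Hypothetical seam for the singleton's single responsibility.
protocol NetworkRequesting {
    func request() -> String
}

final class Service {
    private let network: NetworkRequesting

    // The dependency arrives from outside instead of being created internally.
    init(network: NetworkRequesting) {
        self.network = network
    }

    func load() -> String {
        network.request()
    }
}

// In a test, replacing the dependency becomes trivial:
struct StubNetwork: NetworkRequesting {
    func request() -> String { "stubbed_response" }
}
```

In production, a thin conformance wrapping the real client is passed in; in tests, the stub makes the output fully deterministic.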
Common mistakes worth mentioning
A few mistakes come up again and again, and calling them out can strengthen your answer.
One common mistake is testing implementation details instead of behavior. A test should usually care about what the type does, not how it does it internally.
Another is relying on real networking or real databases in unit tests. That makes tests slow and brittle.
A third is writing tests that are more complicated than the production code they are supposed to protect. Once a test becomes hard to read, it stops being a safety net and starts becoming maintenance overhead.
It also helps to avoid packing too many things into a single test. Smaller tests are easier to understand when they fail.
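The first mistake can be made concrete with a small hypothetical contrast. RankedList below is an illustration, not code from a real project: how it keeps its elements ordered is an implementation detail, while the value it reports is the behavior worth asserting.

```swift
struct RankedList {
    private var scores: [Int] = []

    mutating func add(_ score: Int) {
        scores.append(score)
        scores.sort(by: >)  // detail: could become a heap without breaking behavior tests
    }

    // Behavior: the highest score added so far, or nil when empty.
    var top: Int? { scores.first }
}
```

A behavior-focused test asserts on `top` after a few calls to `add`; a brittle one would try to verify that `sort` ran, and would break the moment the internals change.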
What about UI
This part is worth phrasing carefully in interviews. Unit tests usually target logic rather than UI rendering. UI behavior is more often covered through UI tests, snapshot tests, or indirect testing through view models and state changes.
That distinction matters because saying “UI should never be tested” sounds too absolute. A better position is that different layers need different testing strategies.
Swift Testing
If you want to show awareness of newer tooling, you can mention Swift Testing. Apple introduced it as a modern testing framework for Swift, with syntax such as @Test and #expect. It is a relevant topic, though XCTest remains the default answer for most current iOS interviews because it is still the most common production choice.
You do not need to build your answer around Swift Testing, though a brief mention can make it clear that you follow the ecosystem.
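If the topic does come up, a minimal sketch of the syntax is usually enough. This assumes a Swift 6 toolchain, and PriceFormatter is defined inline here so the snippet stands on its own:

```swift
import Testing

// Defined inline for the example; mirrors the XCTest snippet earlier.
struct PriceFormatter {
    func format(_ amount: Int) -> String { "$\(amount)" }
}

struct PriceFormatterTests {
    // @Test marks a test function; #expect replaces the XCTAssert family.
    @Test func formatsWholeDollars() {
        let formatter = PriceFormatter()
        #expect(formatter.format(100) == "$100")
    }
}
```

The struct-based suite and the single `#expect` macro are the details most worth remembering, since they are the most visible departure from XCTest.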
Designing for testability from the start
Tests become much easier to write when they are considered during design rather than added later. A common mistake is to isolate every layer with its own abstraction and replace most dependencies with mocks. This keeps individual tests simple, but it also removes any confidence that real components work together.
An alternative approach is to shape your architecture so that most of the system can run as-is in tests, while only the lowest-level side effects are replaced. For example, instead of mocking an entire networking layer, you can inject a small transport dependency that performs the actual request. Your higher-level services then remain unchanged and are exercised in tests almost exactly as they are in production.
The same idea applies to expensive operations such as image processing or file I/O. Rather than replacing the whole component, you can inject a lightweight function or strategy that avoids heavy work during tests. This shifts the focus from testing isolated pieces to validating realistic flows, while still keeping tests fast and deterministic. The trade-off is a bit more setup, but it usually pays off in better confidence and fewer surprises closer to release.
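The transport idea can be sketched like this. HTTPTransport, UserService, and StubTransport are hypothetical names rather than a real API; in production the transport would wrap URLSession, while tests hand back canned bytes:

```swift
import Foundation

// The one seam that is replaced in tests.
protocol HTTPTransport {
    func data(for url: URL) async throws -> Data
}

// The real service runs unchanged in tests; only the bytes are swapped.
final class UserService {
    private let transport: HTTPTransport

    init(transport: HTTPTransport) {
        self.transport = transport
    }

    func loadUserName() async throws -> String {
        let url = URL(string: "https://example.com/user")!
        let data = try await transport.data(for: url)
        return String(decoding: data, as: UTF8.self)
    }
}

// Test double: deterministic, no networking involved.
struct StubTransport: HTTPTransport {
    func data(for url: URL) async throws -> Data {
        Data("stub_user".utf8)
    }
}
```

Everything above the transport, including parsing and error handling in UserService, is exercised exactly as in production.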
Choosing what to test first
When time is limited, the question is rarely how to test, but where testing will have the most impact. A useful way to think about this is to look at the system as a set of layers and start with the highest one that your team owns.
For example, instead of writing separate tests for a cache, a networking client, and a repository, you can focus on a feature-level component that orchestrates all of them. A test for a FeedService that loads data, applies business rules, and prepares it for presentation will naturally exercise the caching logic and networking stack underneath, as long as those dependencies are not replaced with mocks.
This gives you confidence in a real user-facing flow without having to write multiple low-level test suites upfront. At early stages, this approach also reduces wasted effort, since foundational components tend to change more while the overall feature shape stabilizes.
Over time, lower-level modules should still gain their own tests, especially if they become reusable or are shared across teams. The key is to align test coverage with responsibility: each layer verifies its own behavior, while higher-level tests ensure that everything works together in practice.
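A compressed sketch of what such a feature-level target might look like. FeedService, FeedCache, and FeedTransport are hypothetical names for the layers described above, and the load counter exists only to make the cache behavior observable in the example:

```swift
protocol FeedTransport {
    func fetchItems() -> [String]
}

// Real in-memory cache, exercised as-is rather than mocked.
final class FeedCache {
    private var stored: [String]?
    func read() -> [String]? { stored }
    func write(_ items: [String]) { stored = items }
}

final class FeedService {
    private let transport: FeedTransport
    private let cache = FeedCache()
    private(set) var networkLoads = 0  // exposed only to make the example observable

    init(transport: FeedTransport) {
        self.transport = transport
    }

    // Business rule: drop empty titles, then cache the prepared feed.
    func loadFeed() -> [String] {
        if let cached = cache.read() {
            return cached
        }
        networkLoads += 1
        let items = transport.fetchItems().filter { !$0.isEmpty }
        cache.write(items)
        return items
    }
}

struct StubFeedTransport: FeedTransport {
    func fetchItems() -> [String] { ["First", "", "Second"] }
}
```

A single test that calls `loadFeed()` twice verifies the filtering rule and confirms that the second call is served from the cache, covering two layers at once.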
What interviewers are often really checking
Questions about unit testing are rarely about syntax alone. In many cases, the interviewer is trying to understand whether you can reason about maintainability, design boundaries, and confidence in changes.
That is why strong answers usually connect testing with architecture. If your codebase has meaningful separation of concerns, tests become easier to write and more useful to keep. If everything lives inside view controllers or depends on singletons, testing quickly turns into friction.
This is often the real point of the discussion.
Conclusion
You do not need years of testing experience to speak well about unit tests in an interview. You need a clear understanding of what should be tested, how isolation works, where mocks and stubs fit, and how architecture affects testability. Once you can explain those ideas with a few grounded examples, your answer starts to sound credible.
