Using the Foundation Models Framework for On-Device AI in SwiftUI


Greetings, traveler!

At WWDC 2025, Apple introduced the Foundation Models framework, a set of tools enabling developers to integrate Apple’s on-device AI models into their applications. According to Apple, these models operate entirely on the device, require no cloud connectivity, incur no inference costs, and prioritize user privacy. This article demonstrates how to implement the Foundation Models framework in a SwiftUI application to process user queries and display AI-generated responses in real time.

Overview of the Foundation Models Framework

The Foundation Models framework provides access to Apple’s on-device large language model, optimized for tasks such as text generation, summarization, and classification. A key feature is the ability to stream responses, allowing incremental output to be displayed as the model generates it. The LanguageModelSession class facilitates interaction with the model, supporting both batch and streaming responses. The streamResponse(to:) method, used in this example, delivers text chunks progressively, enabling dynamic updates to the user interface.
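
For comparison, requesting a complete (non-streamed) response uses the session's respond(to:) method, which suspends until the model finishes generating. The following minimal sketch illustrates the general shape of such a call; the instructions string and the summarize function are illustrative, not part of the framework:

import FoundationModels

// A minimal sketch of a one-shot (non-streaming) request.
// Assumes an Apple Intelligence-capable device with the on-device model available.
func summarize(_ text: String) async throws -> String {
    // Instructions steer the model's behavior for the lifetime of the session.
    let session = LanguageModelSession(instructions: "Summarize the user's text in one sentence.")
    
    // respond(to:) returns the full response once generation completes.
    let response = try await session.respond(to: text)
    return response.content
}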

Implementing a Query-Response Interface

The following example creates a SwiftUI application where users can input a question and receive a streamed response from the on-device AI model. The interface includes a text field for input, a button to submit the query, and a scrollable area to display the response. The glassEffect modifier is applied to the text field for visual consistency with iOS 26’s design language. You can read more about this effect here.

import SwiftUI
import FoundationModels

struct ContentView: View {
    @State private var input: String = ""          // The user's prompt text.
    @State private var output: String = ""         // The streamed model response shown on screen.
    @State private var inputDisabled: Bool = false // Blocks the send button while a response is streaming.
    
    var body: some View {
        NavigationStack {
            ScrollView {
                Text(output)
            }
        }
        // safeAreaBar, new in iOS 26, pins the input controls above the bottom safe area.
        .safeAreaBar(edge: .bottom) {
            inputAccessoryView
        }
    }
    
    private var inputAccessoryView: some View {
        HStack {
            TextField("Ask me anything", text: $input)
                .padding()
                .glassEffect()
            
            Button {
                sendPrompt()
            } label: {
                Image(systemName: "paperplane")
                    .frame(width: 25, height: 25)
                    .rotationEffect(.degrees(40))
            }
            .buttonStyle(.borderedProminent)
            .controlSize(.mini)
            .disabled(inputDisabled)
            .padding(8)
        }
    }
    
    private func sendPrompt() {
        guard input.isEmpty == false else { return }
        
        Task {
            // Disable the send button for the duration of the request.
            inputDisabled = true
            defer { inputDisabled = false }
            
            do {
                let session = LanguageModelSession()
                
                // streamResponse(to:) yields the response incrementally as the model generates it.
                let streamResponse = session.streamResponse(to: input)
                
                for try await chunk in streamResponse {
                    // Each streamed snapshot contains the response generated so far,
                    // so the output is replaced rather than appended.
                    self.output = chunk
                }
            } catch {
                print(error.localizedDescription)
            }
        }
    }
}

In this code, the ContentView struct defines a user interface with a ScrollView for displaying the AI’s response and a bottom bar containing a TextField and a submit button. The sendPrompt function creates a LanguageModelSession and uses streamResponse(to:) to process the user’s input. As the model generates text, each chunk updates the output state variable, refreshing the UI in real time. The inputDisabled state prevents multiple submissions during processing.

Considerations for Implementation

When integrating the Foundation Models framework, consider the following:

  • Streaming Behavior: The streamResponse(to:) method delivers text incrementally, which suits real-time applications but requires careful state management to ensure smooth UI updates.
  • Privacy Compliance: As the model operates on-device, no user data is sent to external servers.
  • Platform Compatibility: The framework requires an Apple Intelligence-capable device and is available on iOS 26, iPadOS 26, macOS 26, and visionOS 26. It is also worth confirming model availability at runtime, as sketched below.
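
Even on a supported OS, the model can be unavailable, for example when Apple Intelligence is turned off or the model assets have not finished downloading. The sketch below gates the prompt UI on a runtime check; it assumes the SystemLanguageModel.default.availability API from the framework, and the AvailabilityGateView name is illustrative:

import SwiftUI
import FoundationModels

struct AvailabilityGateView: View {
    // SystemLanguageModel.default represents the on-device model backing each session.
    private let model = SystemLanguageModel.default
    
    var body: some View {
        switch model.availability {
        case .available:
            // Safe to show the prompt interface from the earlier example.
            ContentView()
        case .unavailable(let reason):
            // The associated reason explains why, e.g. Apple Intelligence is not enabled.
            Text("The on-device model is unavailable: \(String(describing: reason))")
        }
    }
}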

Conclusion

The Foundation Models framework enables developers to incorporate on-device AI capabilities into SwiftUI applications, supporting features like real-time text generation with minimal setup. By leveraging streamResponse(to:), applications can deliver dynamic, privacy-focused experiences.