How to implement ChatGPT in Swift


Greetings, traveler!

Large Language Models (LLMs) and the chatbots built on them are becoming increasingly popular and are finding applications in everyday life. One such tool that has gained significant attention is ChatGPT, developed by OpenAI.

OpenAI provides a convenient API that lets developers integrate chatbot functionality into iOS applications. There are ready-made libraries that let you start using the OpenAI API immediately, but for educational purposes we will build something basic ourselves.

First, we must create an account on the OpenAI website and obtain an API key.

API Models

Now, let’s review the documentation. As the Chat Completions documentation shows, we can create a request of this type.

curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful assistant."
      },
      {
        "role": "user",
        "content": "Hello!"
      }
    ]
  }'

And in response, we will get a JSON payload like this.

{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "gpt-4o-mini",
  "system_fingerprint": "fp_44709d6fcb",
  "choices": [{
    "index": 0,
    "message": {
      "role": "assistant",
      "content": "\n\nHello there, how may I assist you today?",
    },
    "logprobs": null,
    "finish_reason": "stop"
  }],
  "usage": {
    "prompt_tokens": 9,
    "completion_tokens": 12,
    "total_tokens": 21
  }
}

Now, let’s formalize this in Swift.

// The request body we send to the Chat Completions endpoint.
struct Request: Encodable {
    let model: Model
    let messages: [Message]
}

// The response returned by the API.
struct Response: Decodable {
    let id: String?
    let object: String
    let created: Int
    let model: String
    let choices: [Choice]
    let usage: Usage
}

struct Usage: Decodable {
    let promptTokens: Int
    let completionTokens: Int
    let totalTokens: Int
    
    enum CodingKeys: String, CodingKey {
        case promptTokens = "prompt_tokens"
        case completionTokens = "completion_tokens"
        case totalTokens = "total_tokens"
    }
}

struct Choice: Decodable {
    let index: Int
    let message: Message
    let finishReason: String
    
    enum CodingKeys: String, CodingKey {
        case index, message
        case finishReason = "finish_reason"
    }
}

struct Message: Codable {
    let role: Role
    let content: String
}

enum Role: String, Codable {
    case system
    case user
    case assistant
    case function
}

enum Model: String, Codable {
    case gpt35turbo = "gpt-3.5-turbo"
}
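
To make sure these models match the API’s JSON, we can decode the sample response from the documentation. This is just an optional sanity check, a small sketch you could run in a playground:

import Foundation

// Optional sanity check: decode the sample response from the documentation
// using the models defined above.
let sampleJSON = """
{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "gpt-3.5-turbo",
  "choices": [{
    "index": 0,
    "message": { "role": "assistant", "content": "Hello there, how may I assist you today?" },
    "finish_reason": "stop"
  }],
  "usage": { "prompt_tokens": 9, "completion_tokens": 12, "total_tokens": 21 }
}
"""

// Force-try is fine for a quick check like this.
let decoded = try! JSONDecoder().decode(Response.self, from: Data(sampleJSON.utf8))
print(decoded.choices.first?.message.content ?? "")
// Prints: Hello there, how may I assist you today?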

API Request

It’s time to create a method for sending messages and receiving responses from the bot.

First, we create a manager responsible for sending and receiving messages. It keeps our API key as a constant and stores an array of messages; each message’s role tells us who sent it, the user or the bot.

class MessageManager {
    
    let apiKey = "YOUR_API_KEY"    
    var messages: [Message] = []
    
}

This manager will also have a method to send a message and receive a response. In this method, we create a URLRequest with the POST HTTP method, put our API key in the Authorization header, and set the JSON content type. Then we append the message we are sending to the messages array and encode the whole conversation as the request body. We also add a utility dispatch queue that we will use later to send the request in the background.

class MessageManager {
    
    let apiKey = "YOUR_API_KEY"
    let queue = DispatchQueue(label: "request-queue", qos: .utility)
    
    var messages: [Message] = []
    
    func submit(_ prompt: String) {
        let url = URL(string: "https://api.openai.com/v1/chat/completions")!
        
        var request = URLRequest(url: url)
        request.httpMethod = "POST"
        request.setValue("Bearer \(apiKey)", forHTTPHeaderField: "Authorization")
        request.setValue("application/json", forHTTPHeaderField: "Content-Type")
        
        messages.append(Message(role: .user, content: prompt))
        
        do {
            let payload = Request(model: .gpt35turbo, messages: messages)
            let jsonData = try JSONEncoder().encode(payload)
            request.httpBody = jsonData
        } catch {
            print(error.localizedDescription)
            return
        }
    }
    
}

The next step is to actually send the request: we create a URLSession, submit the request on our background queue, decode the server’s response, and append the bot’s reply to the messages array.

class MessageManager {
    
    let apiKey = "YOUR_API_KEY"
    let queue = DispatchQueue(label: "request-queue", qos: .utility)
    
    var messages: [Message] = []
    
    func submit(_ prompt: String) {
        let url = URL(string: "https://api.openai.com/v1/chat/completions")!
        
        var request = URLRequest(url: url)
        request.httpMethod = "POST"
        request.setValue("Bearer \(apiKey)", forHTTPHeaderField: "Authorization")
        request.setValue("application/json", forHTTPHeaderField: "Content-Type")
        
        // Store the user's message in the conversation history.
        messages.append(Message(role: .user, content: prompt))
        
        do {
            // The body contains the whole conversation so the bot keeps context.
            let payload = Request(model: .gpt35turbo, messages: messages)
            let jsonData = try JSONEncoder().encode(payload)
            request.httpBody = jsonData
        } catch {
            print(error.localizedDescription)
            return
        }
        
        let configuration = URLSessionConfiguration.default
        configuration.urlCredentialStorage = nil
        
        let session = URLSession(configuration: configuration)
        
        queue.async { [self] in
            session.dataTask(with: request) { [self] data, _, error in
                guard let data, let response = try? JSONDecoder().decode(Response.self, from: data) else {
                    print(error?.localizedDescription ?? "Failed to decode the response")
                    return
                }
                
                // The bot answers with the assistant role; store its reply
                // so it is included in the next request.
                messages.append(Message(role: .assistant, content: response.choices.first?.message.content ?? ""))
            }
            .resume() // Without resume() the request is never sent.
        }
    }
    
}
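
Finally, let’s try it out. Here is a minimal usage sketch; since submit(_:) does not report back to the caller in this basic version, we simply wait a few seconds and print the conversation history (a real app would notify the UI instead, for example via a closure, a delegate, or a Combine publisher):

import Foundation

// A minimal usage sketch. The fixed delay is only for illustration, because
// submit(_:) has no completion callback in this basic version.
let manager = MessageManager()
manager.submit("Hello!")

DispatchQueue.main.asyncAfter(deadline: .now() + 3) {
    for message in manager.messages {
        print("\(message.role.rawValue): \(message.content)")
    }
}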

And that’s it! We talked to the chatbot.

Conclusion

We have reviewed the basics of working with the OpenAI API. It isn’t complicated, and it gives us plenty of scope for building something interesting.