Tome lets you use any local or remote model as the LLM backend. If you’re just getting started, we recommend **Ollama** - it’s a simple way to run models on your own computer, especially if you have a GPU or an Apple Silicon (M-series) Mac. If you prefer the cloud, you can bring an API key for a hosted provider like Google Gemini or OpenAI.

## Adding a Model

  1. Open the Settings page (gear icon in the bottom left).
  2. Choose one of the built-in providers:
    • Ollama → paste the server URL (usually `http://localhost:11434`).
    • OpenAI → paste your API key.
    • Gemini → paste your API key.
  3. To connect other providers, click Add Engine and fill in:
    • Name (any label you like)
    • URL (endpoint for your server)
    • API Key (if required)
Tome supports tools like LM Studio, OpenRouter, and many other backends out of the box.
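Before pasting the Ollama URL into Settings, it can help to confirm the server is actually reachable. The sketch below is a minimal check, assuming a default local Ollama install; it queries Ollama's `/api/tags` endpoint, which lists the models you have pulled, and returns an empty list if the server isn't running.

```python
import json
import urllib.request
from urllib.error import URLError

def ollama_tags_url(base_url: str) -> str:
    """Build the model-listing endpoint from an Ollama server URL."""
    return base_url.rstrip("/") + "/api/tags"

def list_ollama_models(base_url: str = "http://localhost:11434") -> list:
    """Return names of locally available models, or [] if the server is unreachable."""
    try:
        with urllib.request.urlopen(ollama_tags_url(base_url), timeout=3) as resp:
            data = json.load(resp)
        return [m["name"] for m in data.get("models", [])]
    except (URLError, OSError):
        return []

if __name__ == "__main__":
    models = list_ollama_models()
    if models:
        print("Ollama is running; models:", models)
    else:
        print("Could not reach Ollama - is the server started?")
```

If this prints an empty result, start Ollama (or pull a model with `ollama pull`) before adding it in Tome's Settings.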