Tome lets you use any local or remote model as the LLM backend.
If you’re just getting started, we recommend **Ollama** - it’s a simple way to run models on your own computer, especially if you have a GPU or an Apple Silicon (M-series) Mac. If you prefer the cloud, you can bring an API key for a hosted provider like Google Gemini or OpenAI.
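If you go the Ollama route, you can confirm the server is reachable before pointing Tome at it. A minimal sketch using only the Python standard library and Ollama's `/api/tags` endpoint (the default port `11434` is Ollama's; the function name is just illustrative):

```python
# Sketch: probe a local Ollama server and list the models it has pulled.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default server address

def list_local_models(base_url: str = OLLAMA_URL) -> list[str]:
    """Return names of models the Ollama server has pulled, or [] if unreachable."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=2) as resp:
            data = json.load(resp)
        return [m["name"] for m in data.get("models", [])]
    except OSError:
        return []  # server not running -- nothing to list

print(list_local_models())
```

An empty list means the server isn't running (start it with `ollama serve`) or hasn't pulled any models yet.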
## Adding a Model
- Open the Settings page (gear icon in the bottom left).
- Choose one of the built-in providers:
  - **Ollama** → paste the server URL (usually `http://localhost:11434`).
  - **OpenAI** → paste your API key.
  - **Gemini** → paste your API key.
- To connect other providers, click **Add Engine** and fill in:
  - **Name** (any label you like)
  - **URL** (endpoint for your server)
  - **API Key** (if required)
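The Add Engine form boils down to a small amount of connection data. As an illustration only (the field names here are hypothetical, not Tome's internal schema), an entry for a local LM Studio server might look like:

```python
# Hypothetical shape of an "Add Engine" entry -- illustrative, not Tome's schema.
engine = {
    "name": "LM Studio",                # any label you like
    "url": "http://localhost:1234/v1",  # LM Studio's default local endpoint
    "api_key": None,                    # local servers usually need no key
}

print(engine["name"], engine["url"])
```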
Tome works with backends like LM Studio, OpenRouter, and many others out of the box.