Translation & LLM
Sublarr supports optional subtitle translation. Translation is off by default and is treated as an experimental feature — turn it on only if you need it. Backends fall into two camps:
- Cloud backends (DeepL, Claude, Gemini, OpenAI, DeepSeek, Mistral, Google, Azure, MyMemory) — currently the most reliable choice for production-grade output. Subtitle text leaves your server.
- Ollama (local LLM) — fully offline, no API keys, no data egress. Quality is improving but still rough at the edges; use general-purpose models like qwen2.5:14b or llama3.1:8b.
Configuring Ollama
Install Ollama on the host that runs Sublarr (or on a reachable LAN host), pull a general-purpose model, then point Sublarr at it:
```shell
ollama pull qwen2.5:14b-instruct
# or, lighter:
ollama pull llama3.1:8b-instruct
```

Set the following in Settings → Translation → Backends → Ollama:
| Field | Example |
|---|---|
| Endpoint URL | http://ollama:11434 |
| Model | qwen2.5:14b-instruct |
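Under the hood, an Ollama backend typically talks to Ollama's `/api/chat` endpoint with `stream` disabled. A minimal sketch of how such a request body could be assembled — the function name and prompt wording are illustrative, not Sublarr's actual code:

```python
import json


def build_ollama_chat_request(model: str, lines: list[str], target_lang: str) -> dict:
    """Build a request body for Ollama's /api/chat endpoint.

    The prompt wording here is an illustration, not Sublarr's real prompt.
    """
    # Number the subtitle lines so the model can keep them aligned.
    numbered = "\n".join(f"{i + 1}. {line}" for i, line in enumerate(lines))
    return {
        "model": model,
        "stream": False,  # ask for one complete JSON response, not chunks
        "messages": [
            {
                "role": "system",
                "content": (
                    f"Translate the numbered subtitle lines into {target_lang}. "
                    "Keep the numbering and line count exactly as given."
                ),
            },
            {"role": "user", "content": numbered},
        ],
    }


body = build_ollama_chat_request(
    "qwen2.5:14b-instruct", ["Hello.", "How are you?"], "German"
)
print(json.dumps(body, indent=2))
```

POSTing this body to `http://ollama:11434/api/chat` (the Endpoint URL from the table above plus `/api/chat`) returns the translation in the response's `message.content`.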
Translation Backends
Sublarr ships 12 translation backends (Lingarr parity). Configure them in Settings → Translation Backends. Every backend records per-call cost, latency, and token usage; visit Settings → Translation → Cost & Memory to compare.
| Backend | Type | Self-Hosted | API Key | Best For |
|---|---|---|---|---|
| Ollama | Local LLM | Yes | No | Full control, custom prompts, GPU-accelerated |
| OpenAI-compatible | LLM | Both | Yes | GPT-4 / local OpenAI-compatible endpoints |
| OpenAI ChatGPT | LLM API | No | Yes | GPT-4o / GPT-4-turbo via official endpoint |
| Anthropic Claude | LLM API | No | Yes | High-quality long-context translation |
| Google Gemini | LLM API | No | Yes | Fast Gemini 2.x with native multilingual support |
| DeepSeek | LLM API | No | Yes | Cost-effective Chinese-LLM provider |
| Mistral | LLM API | No | Yes | EU-hosted LLM, GDPR-friendly |
| DeepL | NMT API | No | Yes | Highest-quality NMT for EU languages |
| Google Translate | NMT API | No | Yes | Broad language support, fast |
| LibreTranslate | NMT API | Yes | Optional | Self-hosted, privacy-focused fallback |
| Azure Translator | NMT API | No | Yes | Enterprise-grade NMT with regional endpoints |
| MyMemory | NMT API | No | Optional | Free tier, useful as zero-cost fallback |
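The per-call cost, latency, and token figures mentioned above lend themselves to a simple per-backend rollup. A sketch of what such an aggregation could look like — the record fields and function are hypothetical, not Sublarr's schema:

```python
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class CallRecord:
    """One translation call. Field names are illustrative, not Sublarr's schema."""
    backend: str
    cost_usd: float
    latency_ms: float
    tokens: int


def summarize(records: list[CallRecord]) -> dict[str, dict[str, float]]:
    """Aggregate per-call records into per-backend totals and averages."""
    grouped: dict[str, list[CallRecord]] = defaultdict(list)
    for r in records:
        grouped[r.backend].append(r)
    return {
        name: {
            "calls": len(rs),
            "total_cost_usd": round(sum(r.cost_usd for r in rs), 6),
            "avg_latency_ms": sum(r.latency_ms for r in rs) / len(rs),
            "total_tokens": sum(r.tokens for r in rs),
        }
        for name, rs in grouped.items()
    }


stats = summarize([
    CallRecord("deepl", 0.002, 350.0, 120),
    CallRecord("deepl", 0.001, 250.0, 80),
    CallRecord("ollama", 0.0, 900.0, 200),  # local: free but slower
])
print(stats)
```

This kind of rollup is what makes the cost-versus-latency trade-off between a free local backend and a paid cloud one easy to compare.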
Configuring Ollama (Default):
- Install Ollama on your server
- Pull a model: `ollama pull qwen2.5:14b-instruct`
- In Sublarr: Settings > Translation Backends > Ollama
- Enter your Ollama URL and model name
- Click Test to verify
Fallback Chains: Configure backup backends in case your primary fails. Example:
- Primary: Ollama (local, fast, free)
- Fallback 1: DeepL (cloud, high quality)
- Fallback 2: LibreTranslate (self-hosted backup)
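The chain above amounts to an ordered retry loop: try each backend in turn and return the first success. A simplified sketch, not Sublarr's actual implementation:

```python
def translate_with_fallback(text: str, backends: list) -> str:
    """Try each (name, callable) backend in order; return the first success.

    A backend signals failure by raising an exception. The backend names
    and callables here are illustrative placeholders.
    """
    errors = []
    for name, fn in backends:
        try:
            return fn(text)
        except Exception as exc:
            # Remember the failure and move on to the next backend in the chain.
            errors.append(f"{name}: {exc}")
    raise RuntimeError("all backends failed: " + "; ".join(errors))


# Hypothetical backends standing in for Ollama and DeepL:
def flaky_primary(text: str) -> str:
    raise ConnectionError("Ollama unreachable")


def cloud_fallback(text: str) -> str:
    return f"[translated] {text}"


result = translate_with_fallback(
    "Hello.", [("ollama", flaky_primary), ("deepl", cloud_fallback)]
)
print(result)  # [translated] Hello.
```

Ordering the chain cheapest-first (local, then cloud) means the paid backends only see traffic when the free one is down.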
Chat API & Series Context (V9+)
See Settings → Translation → Ollama Chat API for the full reference, including how to configure system prompts and series context injection.