
Translation & LLM

Sublarr supports optional subtitle translation. It is off by default and treated as an experimental feature; enable it only if you need it. Backends fall into two broad camps:

  • Cloud backends (DeepL, Claude, Gemini, OpenAI, DeepSeek, Mistral, Google, Azure, MyMemory) — currently the most reliable choice for production-grade output. Subtitle text leaves your server.
  • Ollama (local LLM) — fully offline, no API keys, no data egress. Quality is improving but still rough around the edges; use general-purpose models like qwen2.5:14b or llama3.1:8b.

Install Ollama on the host that runs Sublarr (or on a reachable LAN host), pull a general-purpose model, then point Sublarr at it:

ollama pull qwen2.5:14b-instruct
# or, lighter:
ollama pull llama3.1:8b-instruct
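
If the pull succeeded, the model should show up in the local model list:

ollama list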

Set the following in Settings → Translation Backends → Ollama:

Field          Example
------------   --------------------
Endpoint URL   http://ollama:11434
Model          qwen2.5:14b-instruct
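
Before saving, you can confirm the endpoint answers from the machine running Sublarr (the hostname ollama matches the example above; substitute your own):

curl -sf http://ollama:11434/api/tags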

Sublarr ships 12 translation backends (Lingarr parity). Configure them in Settings → Translation Backends. Every backend records per-call cost, latency and token usage; visit Settings → Translation → Cost & Memory to compare.

Backend            Type       Self-Hosted  API Key   Best For
-----------------  ---------  -----------  --------  ------------------------------------------------
Ollama             Local LLM  Yes          No        Full control, custom prompts, GPU-accelerated
OpenAI-compatible  LLM        Both         Yes       GPT-4 / local OpenAI-compatible endpoints
OpenAI ChatGPT     LLM API    No           Yes       GPT-4o / GPT-4-turbo via official endpoint
Anthropic Claude   LLM API    No           Yes       High-quality long-context translation
Google Gemini      LLM API    No           Yes       Fast Gemini 2.x with native multilingual support
DeepSeek           LLM API    No           Yes       Cost-effective Chinese LLM provider
Mistral            LLM API    No           Yes       EU-hosted LLM, GDPR-friendly
DeepL              NMT API    No           Yes       Highest-quality NMT for EU languages
Google Translate   NMT API    No           Yes       Broad language support, fast
LibreTranslate     NMT API    Yes          Optional  Self-hosted, privacy-focused fallback
Azure Translator   NMT API    No           Yes       Enterprise-grade NMT with regional endpoints
MyMemory           NMT API    No           Optional  Free tier, useful as zero-cost fallback
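
LibreTranslate is the other self-hostable entry in the table. One way to stand it up is the official Docker image (a minimal sketch; pin an image tag and adjust the port mapping as needed):

docker run -d -p 5000:5000 libretranslate/libretranslate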

Configuring Ollama (Default):

  1. Install Ollama on your server
  2. Pull a model: ollama pull qwen2.5:14b-instruct
  3. In Sublarr: Settings → Translation Backends → Ollama
  4. Enter your Ollama URL and model name
  5. Click Test to verify (a manual spot-check follows below)
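
Outside the UI, a one-shot prompt is a quick way to confirm the model actually generates (assuming the model pulled in step 2):

ollama run qwen2.5:14b-instruct "Translate to German: Good morning."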

Fallback Chains: Configure backup backends that take over when your primary fails (a reachability sketch follows the list). Example:

  1. Primary: Ollama (local, fast, free)
  2. Fallback 1: DeepL (cloud, high quality)
  3. Fallback 2: LibreTranslate (self-hosted backup)
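
The ordering itself is configured in the UI, but the self-hosted links in the chain can be health-checked by hand. A minimal shell sketch, assuming the hostnames ollama and libretranslate:

# Stop at the first backend whose API answers within 5 seconds.
for url in http://ollama:11434/api/tags http://libretranslate:5000/languages; do
  curl -sf --max-time 5 "$url" >/dev/null && { echo "reachable: $url"; break; }
done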

See Settings → Translation → Ollama Chat API for the full reference including how to configure system prompts and series context injection.
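
For orientation, a raw request against Ollama's chat endpoint (the API that reference page covers) looks like the following; the system prompt here is illustrative, not Sublarr's built-in prompt:

curl http://ollama:11434/api/chat -d '{
  "model": "qwen2.5:14b-instruct",
  "stream": false,
  "messages": [
    {"role": "system", "content": "Translate each subtitle line from English into German. Reply with the translation only."},
    {"role": "user", "content": "Where did everyone go?"}
  ]
}'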