Settings — Scoring

When Sublarr searches a provider it gets back a list of candidate subtitles. Scoring is the rule set that turns that list into a ranking. Whichever candidate ends up on top — provided it clears the profile cutoff — is the one that gets downloaded. Tweak scoring when you find Sublarr consistently picking the wrong release group, the wrong release type, or the wrong source.

Every candidate starts at a base score and gets adjusted by a chain of modifiers:

base_score (5.0)
+ release_match_modifier (0 to +3.0)
+ source_match_modifier (0 to +1.5)
+ resolution_match (−1.0 to +1.0)
+ group_modifier (−2.0 to +2.0)
+ provider_modifier (−1.0 to +1.0)
− penalty_rules_total (0 to 10)
= final_score (clamped 0–10)

The Library’s scoring detail panel shows the breakdown for every candidate so you can see exactly what nudged a subtitle up or down.
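The modifier chain can be sketched in a few lines of Python (function and parameter names are illustrative, not Sublarr's actual internals):

```python
def final_score(release=0.0, source=0.0, resolution=0.0,
                group=0.0, provider=0.0, penalties=0.0):
    """Combine the default modifiers; penalties is a non-negative total."""
    BASE = 5.0
    raw = BASE + release + source + resolution + group + provider - penalties
    return max(0.0, min(10.0, raw))  # clamp to the 0-10 range

# A release + source + resolution match with one -2 penalty:
# 5.0 + 3.0 + 1.5 + 1.0 - 2.0 = 8.5
print(final_score(release=3.0, source=1.5, resolution=1.0, penalties=2.0))
```

Note that penalties are applied as a single subtracted total, which is why a −10 penalty (such as the empty-file rule) drives any candidate to the floor regardless of its other modifiers.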

| Modifier | Default | Effect |
| --- | --- | --- |
| Release match | +3.0 | Filename release tags (e.g. 1080p.x264-RARBG) match the source release. Strongest signal. |
| Source match | +1.5 | Source type matches (Blu-ray, WEB-DL, HDTV). Slightly weaker than release match. |
| Resolution match | +1.0 | Resolution matches (1080p, 2160p). |
| Resolution mismatch | −1.0 | Subtitle from a 720p release applied to a 1080p video. |
| Hash match | +5.0 | Some providers ship hash-keyed subtitles guaranteed to fit the file. Very rare but a strong signal. |

Adjust these in Settings → Subtitles → Scoring → Default modifiers.

Penalty rules subtract score for known-bad signals. The default ruleset covers common pitfalls; add your own when you spot a pattern.

| Rule | Default penalty | When it fires |
| --- | --- | --- |
| Empty / placeholder file | −10 | File size below 1 KB, likely empty. |
| Auto-translated | −4 | Filename contains “auto”, “machine”, or “AI translated”. |
| Wrong language tag | −6 | Detected language doesn’t match the requested language. |
| HI when prefer-not | −2 | HI subtitle when the profile says prefer-not. |
| Forced when prefer-not | −2 | Same logic for forced subtitles. |
| Suspicious group | −3 | Filename matches the user-maintained “bad groups” list. |

Custom rules add to the table:

| Field | Effect |
| --- | --- |
| Pattern | Substring or regex (toggle) tested against the filename. |
| Penalty | Score subtraction (−10 to 0). |
| Notes | Free text, shown in tooltips. |
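In code, a custom rule might look like the following sketch (the class and field names are assumptions for illustration, not Sublarr's data model):

```python
import re
from dataclasses import dataclass

@dataclass
class CustomRule:
    """Illustrative shape for a custom penalty rule."""
    pattern: str
    penalty: float          # -10 to 0
    is_regex: bool = False  # the substring/regex toggle
    notes: str = ""

    def fires(self, filename: str) -> bool:
        if self.is_regex:
            return re.search(self.pattern, filename, re.I) is not None
        return self.pattern.lower() in filename.lower()

rule = CustomRule(pattern=r"CAM|TELESYNC", penalty=-5.0, is_regex=True,
                  notes="Penalise cam-sourced releases")
print(rule.fires("Movie.2024.TELESYNC.x264.srt"))  # True
```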

Different providers have different quality reputations. The provider modifier nudges scores up or down based on which provider returned the candidate:

| Provider | Default modifier | Reasoning |
| --- | --- | --- |
| OpenSubtitles | +0.5 | Largest dataset, generally good. |
| Jimaku | +0.7 | Anime-specialised, high quality. |
| SubDL | +0.3 | Curated subset of OpenSubtitles. |
| Subscene | 0.0 | Mixed quality. |
| MyMemory | −0.5 | Auto-translation source. |

Edit the modifier per provider on the same page. The change applies to all future searches.

When Auto-prioritise providers is on (Settings → Providers → Auto-prioritise), Sublarr re-orders providers based on their rolling success rate. The provider modifier is then applied on top of the auto-derived order — combining static quality preference with empirical performance.
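To make the interaction concrete, here is a sketch of how the two mechanisms could compose: search order comes from the rolling success rate, while the static modifier still shifts each candidate's score. The dictionary shape and success-rate figures are invented for illustration:

```python
# Hypothetical per-provider state; rates and structure are assumptions.
providers = {
    "OpenSubtitles": {"success_rate": 0.92, "modifier": +0.5},
    "Jimaku":        {"success_rate": 0.88, "modifier": +0.7},
    "MyMemory":      {"success_rate": 0.95, "modifier": -0.5},
}

# Auto-prioritise: search order driven purely by empirical success rate...
search_order = sorted(providers, key=lambda p: providers[p]["success_rate"],
                      reverse=True)

# ...while the static quality modifier still adjusts each candidate's score.
def provider_adjusted(score: float, provider: str) -> float:
    return score + providers[provider]["modifier"]

print(search_order)  # MyMemory is searched first, but its candidates score lower
```

The point of the split: a provider can be fast and reliable (searched early) while still producing subtitles you trust less (scored lower).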

Sublarr’s reranking pass runs after the initial candidate lists come back from providers. It re-evaluates the top N candidates against extra signals (download count, age, peer review) when those signals are available:

| Signal | Effect |
| --- | --- |
| Download count | Each download adds a small positive modifier, capped at +1.0. |
| Recency | Recent uploads are favoured; uploads older than six months with a low download count get a slight negative nudge. |
| Reviewer score | Some providers expose user ratings, used as a ±0.5 modifier. |

Configure rerank thresholds in Settings → Providers → Provider Reranking.
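The rerank signals from the table above can be sketched as a single bonus function. The weights and thresholds below (0.01 per download, the 180-day/50-download cutoff, the −0.25 nudge) are assumptions for illustration, not Sublarr's actual values:

```python
def rerank_bonus(downloads: int, age_days: int, rating=None) -> float:
    """Combine rerank signals into one score adjustment (weights assumed)."""
    bonus = min(downloads * 0.01, 1.0)        # download count, capped at +1.0
    if age_days > 180 and downloads < 50:     # old upload, rarely downloaded
        bonus -= 0.25                         # slight negative nudge
    if rating is not None:                    # reviewer score, used as ±0.5
        bonus += max(-0.5, min(0.5, rating))
    return bonus

print(rerank_bonus(downloads=200, age_days=10))  # capped download bonus: 1.0
```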

Some languages have intrinsically lower-quality subtitles available. Override the global cutoff per language under Settings → Subtitles → Languages → Threshold per language:

| Language | Reasonable cutoff |
| --- | --- |
| de | 4.5 |
| en | 5.0 |
| ja | 3.5 (less mature dataset) |
| tr | 3.0 |

A per-language threshold takes precedence over the profile cutoff for that language only.
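The precedence rule is a simple lookup-with-fallback. A minimal sketch, using the example cutoffs from the table above:

```python
# Per-language overrides from the example table above.
PER_LANGUAGE_CUTOFF = {"de": 4.5, "en": 5.0, "ja": 3.5, "tr": 3.0}

def effective_cutoff(language: str, profile_cutoff: float) -> float:
    """Per-language override wins for that language; otherwise the profile cutoff."""
    return PER_LANGUAGE_CUTOFF.get(language, profile_cutoff)

print(effective_cutoff("ja", 5.0))  # 3.5 (override applies)
print(effective_cutoff("fr", 5.0))  # 5.0 (falls back to the profile cutoff)
```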

Open Library → Series detail → episode → Subtitle history. For each subtitle download, Sublarr stores the final score and the breakdown:

| Column | Meaning |
| --- | --- |
| Final score | The number after all modifiers and penalties. |
| Modifiers applied | List of each non-zero modifier. |
| Penalties applied | The same for penalties. |
| Cutoff used | Whether the profile cutoff or a per-language override decided eligibility. |

Scoring transparency means when something seems off you can audit the decision instead of guessing.