RICCILAB

i18n in ESO Addons: A Strategic Choice, Not Just a Translation

_DEV_PROJECT

Why localizing my addon for Korean turned into an architecture decision about a language the platform doesn’t officially support.


When I decided to add Korean support to my ESO addon, I assumed it would be a chore. Find every d("...") call, swap the literal for a lookup, write a translation file, done. The “i18n” tag in my todo list was literally one line.

Three hours later I had a 130-key string table, a 3-tier language detection function, a refactor of how diagnostic messages are constructed, and a much better understanding of why “just translate it” is the wrong mental model.

This is the story of localizing AddOn Conflict Inspector (ACI) — and the design decisions that aren’t visible in the final commit but shaped everything in it.

The Setup

ACI is a diagnostic tool: type /aci health and you get a traffic-light report on your installed addons. Out-of-date counts, orphaned libraries, SavedVariables conflicts, event registration hotspots, the works. About 14 commands, each printing structured output to chat.

Every line goes through d(...), ESO’s chat-print function. By the time I started localizing, there were roughly 200 such call sites, all with English literals or string.format(...) calls inlined.

The goal: ship Korean translations without breaking anything for English users, and make it easy to add more languages later.

The Standard Pattern (and Why I Couldn’t Use It)

ESO addons get one freebie for localization: the manifest can say

## Lua: $(language).lua

and ESO substitutes $(language) with the client’s language code at load time. So you ship en.lua, de.lua, fr.lua, and ESO loads the right one automatically. Clean. Documented. Works for every officially supported language.

The supported set is: en, de, fr, es, ru, ja, zh. Notice what’s missing.

Korean isn’t on that list. ESO has never officially supported Korean. There is no kr value that language.2 will ever return on a stock client. $(language).lua will never resolve to kr.lua because the substitution can never produce the string kr.

This is not a bug. ZeniMax simply doesn’t ship a Korean localization, and the addon API reflects that. From the platform’s perspective, Korean users don’t exist.

But they do exist. They’re just running English clients.

The TamrielKR Ecosystem

Korean ESO players have been served for years by a community patch called TamrielKR. It’s a font replacement plus a string table override that translates the game UI into Korean. It does this by hooking into ESO’s text rendering and string lookup paths.

TamrielKR also does something subtle: it hooks GetCVar("language.2") to return "en" regardless of what’s actually set.

Why? Because some addons read language.2 and try to load kr.lua files that don’t exist, which crashes the addon. By forcing the CVar to report "en", TamrielKR keeps non-Korean-aware addons from blowing up. It’s a defensive fence around an ecosystem assumption (addons assume the language is one of the official ones) that Korean players violate by existing.
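The override is easy to sketch in plain Lua. This is a hypothetical reconstruction of the technique, not TamrielKR's actual source; esoGetCVar is a stub standing in for ESO's native CVar store, and I pretend the client reports "de" so the override's effect is visible:

```lua
-- Hypothetical reconstruction, NOT TamrielKR's real code.
-- Stub for ESO's native GetCVar (illustration only).
local function esoGetCVar(name)
    if name == "language.2" then return "de" end  -- whatever the client reports
    return ""
end
GetCVar = esoGetCVar

-- The defensive override: capture the original, answer "en" for
-- "language.2" unconditionally, delegate everything else.
local origGetCVar = GetCVar
function GetCVar(name)
    if name == "language.2" then return "en" end
    return origGetCVar(name)
end

assert(GetCVar("language.2") == "en")      -- every addon now sees "en"
assert(origGetCVar("language.2") == "de")  -- the real value is untouched
```

Any addon that asks GetCVar("language.2") now sees "en" and happily loads its en.lua, instead of trying a kr.lua that doesn't exist.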

This means my obvious detection approach — “just check the CVar” — is exactly what TamrielKR is designed to defeat.

-- This will return "en" on a TamrielKR user. Always.
if GetCVar("language.2") == "kr" then
    -- Dead code. Never reached.
end

The actual signal that a Korean user is present isn’t in the CVar. It’s the presence of TamrielKR itself. If TamrielKR is in the global table, you’re talking to a Korean player. That’s the real check.

A 3-Tier Detection Function

Once I understood that, the detection function wrote itself. But I wanted to be defensive against TamrielKR’s API changing in the future, so I went with three fallback tiers:

My first draft looked like this:

-- BUGGY VERSION — do not use
local function IsKoreanClient()
    if TamrielKR and TamrielKR.GetLanguage then
        local ok, lang = pcall(TamrielKR.GetLanguage, TamrielKR)
        if ok and lang == "kr" then return true end
    end
    if _G["TamrielKR"] then
        return true
    end
    return GetCVar("language.2") == "kr"
end

Read it carefully. There’s a bug.

If TamrielKR exists, GetLanguage exists, and the user is currently in English mode (the API exists but answers "en"), Tier 1 enters its inner block, the pcall succeeds, lang ~= "kr" so the inner if doesn’t fire, and execution falls through to Tier 2. Tier 2 sees TamrielKR in _G and returns true.

So Tier 1 explicitly said “not Korean” and Tier 2 silently overrode it. The fallback was supposed to fire only when Tier 1 couldn’t be evaluated, not when Tier 1 answered no. Conflating “can’t answer” with “answered no” is the classic mistake.

The fix is to make Tier 1 authoritative whenever it has a usable answer:

local function IsKoreanClient()
    -- Tier 1: TamrielKR public API. If the API answers at all, trust it —
    -- whether the answer is "kr" or not. Only fall through if the API is
    -- unreachable (missing or throwing).
    if TamrielKR and TamrielKR.GetLanguage then
        local ok, lang = pcall(TamrielKR.GetLanguage, TamrielKR)
        if ok then return lang == "kr" end
        -- pcall failed: API exists but throws → fall through
    end
 
    -- Tier 2: TamrielKR present but API unusable (renamed/removed/throwing).
    -- Mere presence is still a strong signal of a Korean user.
    if _G["TamrielKR"] then
        return true
    end
 
    -- Tier 3: ESO CVar (currently dead path; future-proof)
    return GetCVar("language.2") == "kr"
end

Note the structural change: if ok then return lang == "kr" end. We return from inside Tier 1 whether the answer is yes or no. Falling through to Tier 2 only happens when pcall itself failed — i.e., the API exists but threw.

Each tier is a different bet:

  • Tier 1 assumes TamrielKR’s current API (GetLanguage()) is reachable. If it is, the answer — yes or no — is authoritative. The pcall matters: TamrielKR is a community addon, and if its API throws, I want to fall through to Tier 2, not crash my own addon.
  • Tier 2 is a fallback for when Tier 1 can’t get a clean answer — either because GetLanguage no longer exists (renamed, removed) or because the call threw. In both cases, the mere presence of TamrielKR in the global table is still a strong-enough signal: nobody installs TamrielKR by accident. If it’s loaded, the user is almost certainly Korean.
  • Tier 3 assumes ESO might add Korean to its official language list someday. Currently a dead branch, but free to keep.

Truth table for the fixed version:

Situation                     | Tier 1 result | Reaches Tier 2? | Final
API present, succeeds, "kr"   | return true   | no              | true
API present, succeeds, "en"   | return false  | no              | false
API present, throws           | fall through  | yes             | true (presence)
API absent (renamed/removed)  | fall through  | yes             | true (presence)
TamrielKR not loaded at all   | fall through  | falls to Tier 3 | CVar result

The second row is the one that was broken in the first draft. Tier 1 had the right answer ("en" → not Korean) but Tier 2 silently overrode it with true.

Lesson: fallback chains must distinguish “no answer” from “negative answer.” Conflating them lets a later tier silently overrule an earlier, more authoritative one. This bug shipped in my first version and was caught by code review, not testing — because the failing case (Korean user with TamrielKR running in English mode) is a configuration most testers don’t think to try.
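The truth table is cheap to verify outside the game. Here is a minimal pure-Lua harness — TamrielKR and GetCVar are stubs, since the real ones only exist inside an ESO client — walking all five rows:

```lua
-- Pure-Lua harness for the fixed IsKoreanClient. TamrielKR and GetCVar
-- are stubs; each case below is one row of the truth table.
function GetCVar(name)
    if name == "language.2" then return "en" end  -- stock English client
    return ""
end

local function IsKoreanClient()
    if TamrielKR and TamrielKR.GetLanguage then
        local ok, lang = pcall(TamrielKR.GetLanguage, TamrielKR)
        if ok then return lang == "kr" end
    end
    if _G["TamrielKR"] then return true end
    return GetCVar("language.2") == "kr"
end

TamrielKR = { GetLanguage = function() return "kr" end }
assert(IsKoreanClient() == true)    -- API answers "kr"
TamrielKR = { GetLanguage = function() return "en" end }
assert(IsKoreanClient() == false)   -- API answers "en": the row the draft broke
TamrielKR = { GetLanguage = function() error("api changed") end }
assert(IsKoreanClient() == true)    -- API throws → presence wins
TamrielKR = {}
assert(IsKoreanClient() == true)    -- API gone → presence wins
TamrielKR = nil
assert(IsKoreanClient() == false)   -- not loaded → CVar decides
```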

The Loading Strategy

With detection figured out, I had to decide where to put it. The obvious slot is the manifest:

## Lua: ACI_Strings_$(language).lua

But this still doesn’t help. $(language) will never produce kr, so my Korean file will never load through the standard mechanism.

The fix is to load it unconditionally and let it self-check:

## Title: AddOn Conflict Inspector
## Author: Ricci Curvature
## APIVersion: 101049 101050

ACI_Core.lua
ACI_Strings_en.lua
ACI_Strings_kr.lua
ACI_Hooks.lua
...

Both string files are loaded for everyone. ACI_Strings_en.lua populates the default table. ACI_Strings_kr.lua calls IsKoreanClient() at the top — if the answer is no, it returns immediately and overrides nothing. If yes, it writes Korean values into the same table, overwriting the English defaults key by key.

-- ACI_Strings_kr.lua
if not IsKoreanClient() then return end
 
local S = ACI.S
S.HELP_TITLE = "[ACI] 명령어 목록"
-- ... 130 more keys

The cost on non-Korean clients is one function call and one early return. Negligible.

The String Table

The actual lookup is the smallest part:

ACI = ACI or {}
ACI.S = ACI.S or {}
 
function ACI.L(key)
    return ACI.S[key] or key
end

Two design decisions packed into four lines.

The ACI = ACI or {} guard. An unguarded ACI = {} would clobber an existing table. If for any reason the strings file loads before the core file (because someone reorders the manifest, or because of a future change), an unguarded ACI.S = {} in the core file would wipe out everything the strings file just populated. Idempotent guards make load order forgiving. The cost is zero. The defense is total.
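A two-file load-order simulation makes the point concrete (the key here is illustrative):

```lua
-- Simulate two files loading in the "wrong" order.
-- "Strings file" runs first and populates a key:
ACI = ACI or {}
ACI.S = ACI.S or {}
ACI.S.HELP_TITLE = "[ACI] Commands"

-- "Core file" runs second. With the guards, nothing is lost:
ACI = ACI or {}
ACI.S = ACI.S or {}
assert(ACI.S.HELP_TITLE == "[ACI] Commands")

-- An unguarded ACI = {} here would have silently wiped the key above.
```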

The or key fallback. If a translation is missing, ACI.L("FOO_BAR") returns the literal string "FOO_BAR". This is huge for development. A typo in a key name doesn’t silently print empty strings — it prints the key itself, which is immediately visible in chat. You see [ACI] HELP_TITEL and instantly know what to fix.

It also means partial translations work. If the Korean file only translates 80% of the keys, the other 20% fall through to the English defaults transparently. No crash, no empty UI.
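Both fallbacks are easy to demonstrate in isolation. A sketch with made-up keys: English defaults first, a deliberately partial Korean overlay, then a typo'd lookup:

```lua
-- English defaults, a partial Korean overlay, and a typo'd key.
-- All keys here are illustrative.
ACI = ACI or {}
ACI.S = ACI.S or {}
function ACI.L(key) return ACI.S[key] or key end

ACI.S.HELP_TITLE  = "[ACI] Commands"       -- "English file" defaults
ACI.S.NO_METADATA = "[ACI] No metadata."

ACI.S.HELP_TITLE = "[ACI] 명령어 목록"      -- "Korean file": one key only

assert(ACI.L("HELP_TITLE")  == "[ACI] 명령어 목록")   -- translated
assert(ACI.L("NO_METADATA") == "[ACI] No metadata.")  -- default survives
assert(ACI.L("HELP_TITEL")  == "HELP_TITEL")          -- typo is loud, not blank
```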

Naming the Keys

I split keys into two flavors by convention:

  • PLAIN_KEY — a literal string used as-is via ACI.L("PLAIN_KEY")
  • FMT_KEY — a string.format template used via string.format(ACI.L("FMT_KEY"), ...)
S.SEPARATOR        = "--------------------------------------------"
S.NO_METADATA      = "[ACI] No metadata."
S.FMT_REPORT_OOD   = "  |cFFFF00%d out-of-date|r"
S.FMT_HOT_EVENT    = "  %d addons, %d regs  %s|r"

The FMT_ prefix is a contract: if you see it, you know there’s a string.format somewhere. If you don’t, you know it’s a direct print. This makes both translation and code review faster — translators know which strings have placeholders, and reviewers can spot a bug like d(ACI.L("FMT_REPORT_OOD")) (missing format) at a glance.
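In use, the two flavors look like this (print stands in for ESO's d so the sketch runs anywhere; keys are from the table above):

```lua
-- d is ESO's chat-print; print stands in so the sketch runs outside ESO.
ACI = { S = {
    NO_METADATA    = "[ACI] No metadata.",
    FMT_REPORT_OOD = "  |cFFFF00%d out-of-date|r",
} }
function ACI.L(key) return ACI.S[key] or key end
local d = print

d(ACI.L("NO_METADATA"))                       -- plain key: used as-is
d(string.format(ACI.L("FMT_REPORT_OOD"), 7))  -- FMT_ key: always formatted
```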

I deliberately did not introduce per-section namespacing (CMD_HOT_TITLE, LABEL_HEAVY, etc.) at this stage. With ~130 keys, section comments are enough organization, and grep is fine. Premature namespacing would just be busywork that has to be redone if the categories turn out wrong. When the table grows past 300 keys or accepts contributions from multiple translators, that’s the right time to formalize.

The Refactor That Wasn’t Obvious

Here’s the part I didn’t see coming.

ACI’s health command produces a structured diagnosis with severity levels. Internally, it builds an array of issue objects:

table.insert(issues, { level = "yellow", msg = #orphans .. " unused libraries" })
table.insert(issues, { level = "red", msg = #svConflicts .. " SV conflict(s)" })

Then PrintHealth filters them, because the OOD count is rendered separately at the top of the report:

for _, i in ipairs(h.issues) do
    if not i.msg:find("out%-of%-date") then
        table.insert(otherIssues, i)
    end
end

See the problem?

That filter is a regex against the English string "out-of-date". The moment msg becomes "32/55 구버전 (58%)", the regex stops matching. Nothing is filtered. The OOD count appears twice in the health report — once at the top in the dedicated section, once in the issues list. Subtle bug, only reproducible on Korean clients, easy to ship without noticing.

The fix is to stop encoding semantic information in display strings. I added a kind tag to every issue:

table.insert(issues, {
    level = "red",
    kind  = "ood",
    msg   = string.format(ACI.L("FMT_HEALTH_ISSUE_OOD"),
                          ood.topLevelOOD, ood.topLevelEnabled, pct)
})

And the filter becomes language-agnostic:

for _, i in ipairs(h.issues) do
    if i.kind ~= "ood" then
        table.insert(otherIssues, i)
    end
end

This is the kind of bug i18n surfaces that you’d never see in monolingual code. Any time your code reads its own output as data — regex matching, substring searching, anything that treats a display string as semantic — translation breaks it. The only safe pattern is to keep semantic tags and display strings on different sides of the wall.

Lesson: if you’re going to localize, audit every place your code parses its own messages first.

The Test That Caught It

After deploying the first pass, I asked a Korean user to run a few commands and screenshot the result. The Korean output looked great — except for one line in /aci health:

[ACI] ● 5 unused libraries

In the middle of a fully-Korean report. That string was being constructed in ACI_Analysis.lua, not ACI_Commands.lua, and I’d missed it on the first pass because my mental model said “localize the print layer.” But the message was being built in the analysis layer and printed in the command layer.

The fix wasn’t just to add a translation. It was to recognize that “raw English string with a number interpolated” was a code smell — the analysis layer shouldn’t be in the business of producing display strings at all. The cleanest solution was to call ACI.L() in the analysis layer, which then needed format keys for issue messages:

S.FMT_HEALTH_ISSUE_SV_CONFLICTS  = "%d SV conflict(s)"
S.FMT_HEALTH_ISSUE_OOD           = "%d/%d out-of-date (%d%%)"
S.FMT_HEALTH_ISSUE_MISSING       = "%d missing dep(s)"
S.FMT_HEALTH_ISSUE_ORPHANS       = "%d unused libraries"
S.FMT_HEALTH_ISSUE_BIG_SV        = "%s uses %.0f%% of SV disk (%.2f MB / %.2f MB)"

This is fine in practice but slightly muddies the layering — the analysis module now imports ACI.L. The principled alternative would be for the analysis layer to return semantic objects ({kind = "orphans", count = 5}) and let the command layer format them. For 5 message types I judged that overkill, but at 50 I’d refactor.
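For concreteness, that alternative might look roughly like this — not the shipped code, and the key and field names are illustrative:

```lua
-- Sketch of the "semantic objects" layering I describe but didn't ship.
ACI = { S = { FMT_HEALTH_ISSUE_ORPHANS = "%d unused libraries" } }
function ACI.L(key) return ACI.S[key] or key end

-- Analysis layer returns data only, no display strings.
local function Analyze(orphanCount)
    return { { level = "yellow", kind = "orphans", count = orphanCount } }
end

-- Command layer owns the kind → format-key mapping, and all formatting.
local FMT_FOR_KIND = { orphans = "FMT_HEALTH_ISSUE_ORPHANS" }
local function Render(issue)
    return string.format(ACI.L(FMT_FOR_KIND[issue.kind]), issue.count)
end

assert(Render(Analyze(5)[1]) == "5 unused libraries")
```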

Knowing where to draw that line is the actual i18n skill. The string table is mechanical.

The Cross-Hot Tag and Translator Voice

One more sub-story worth telling. ACI has a feature that flags addons appearing in many “hot” event paths — the kind of addons that load up your main thread with handlers. The English label is:

[cross-hot:6] heavy

I translated it the obvious way:

[교차핫:6] heavy

The user pushed back: this is translation-software voice. “교차핫” is a literal calque of “cross-hot” that doesn’t mean anything in Korean. And heavy is just left in English. A native speaker would never write this.

We iterated on what the label is actually trying to communicate — “this addon shows up in N of the hot events listed above” — and landed on:

[핫이벤트 6개] 과부하

Literally “[6 hot events] overload.” The number is contextualized (it’s a count of events, not an opaque metric), and the warning is in plain Korean.

The lesson here is one I keep relearning: a translation key is a contract about meaning, not a contract about words. The English string "cross-hot:6" is jargon I invented; Korean speakers shouldn’t have to decode my jargon. The right translation is whatever conveys the same meaning in idiomatic Korean, which sometimes means restructuring the entire sentence.

This is also why I’m wary of machine translation for anything UI-facing. Machines preserve words. Humans preserve meaning.

What I’d Tell Past Me

A checklist, from things this project actually taught me:

  1. Check what languages your platform officially supports before designing the loader. The “obvious” pattern ($(language).lua) only works for the official set. Anything outside that needs an unconditional load + self-check.
  2. The real signal isn’t always the obvious one. For Korean ESO, the CVar is a lie; the TamrielKR addon’s presence is the truth. Find the actual signal.
  3. Make your guards idempotent. ACI = ACI or {} is one extra word that immunizes you against any future load-order change. Free defense.
  4. Use key fallback (return ACI.S[key] or key). Typos become visible instead of silent. Partial translations work transparently.
  5. Distinguish format keys from plain keys by naming convention (FMT_ prefix). Translators and reviewers both benefit.
  6. Audit every place your code reads its own output. Regex matches on display strings die the moment you translate. Use semantic tags instead.
  7. Don’t build display strings outside the display layer — or if you do, route them through ACI.L() so they participate in localization.
  8. Translation is meaning, not words. Get a native speaker to review. Calques and untranslated jargon are the giveaways that you used a machine.
  9. Defer namespacing until the table is big enough to need it. Premature categorization is a refactor in disguise.
  10. Test on a real client of the target language. The 5 unused libraries bug only showed up because we screenshotted the Korean output. Code review wouldn’t have caught it.
  11. In fallback chains, distinguish “no answer” from “negative answer.” A later tier should never overrule an earlier one that already answered. My first IsKoreanClient shipped this bug — Tier 1 said “no” and Tier 2 said “yes” anyway. Code review caught it; tests wouldn’t have.

The Numbers

For posterity:

  • 130 string keys in the English table
  • 130 matching keys in the Korean table
  • 14 command functions converted to use ACI.L()
  • 5 health-issue messages refactored to use a kind tag (because the regex filter that read English-only died on translation)
  • 3 detection tiers in IsKoreanClient
  • 2 lines of guard code (ACI = ACI or {}, ACI.S = ACI.S or {})
  • 1 bug (5 unused libraries in the middle of Korean output) caught by user testing
  • 0 changes to the manifest beyond adding the two string-file lines

Localization isn’t a translation task. It’s an architecture task with a translation task hiding inside it. The sooner you recognize that, the less rework you ship.

Postscript

While writing the tier-by-tier walkthrough above, I noticed that the IsKoreanClient function I’d already shipped didn’t actually behave the way I was describing it. Tier 1’s if ok and lang == "kr" then return true end would silently fall through into Tier 2’s presence check whenever the API answered “not Korean,” meaning Tier 2 would override a perfectly valid negative answer from a more authoritative source.

I caught it not by running the code, but by trying to write a sentence that explained what Tier 2 was for. The sentence kept coming out wrong: “Tier 2 fires when Tier 1 can’t get a clean answer” was the intended behavior, but the code as written made Tier 2 fire whenever Tier 1 answered anything other than “yes.” The gap between the sentence I wanted to write and the code I’d actually shipped was the bug.

The fix is the version shown above; the buggy version is the “do not use” block. Both are kept in this article on purpose, because the meta-lesson is the more interesting one:

i18n surfaces bugs you’d never see in monolingual code. Writing about your code surfaces bugs you’d never see by reading it.

Prose is a debugging technique. If you can’t describe the logic in plain English without your sentences contradicting each other, the logic probably isn’t clear — and “not clear” is usually one short hop from “not correct.” This entire article almost shipped with a buggy code sample explained by a sentence that the code didn’t actually obey. The sentence was right; the code wasn’t; and I only noticed because writing forces a level of precision that reading never does.

I’d been planning to put this article up as a polished retrospective. It now serves a second purpose: a worked example of why writing about a system is part of designing it.

EOF — 2026-04-10