LLM Pipeline

Coming soon — this page will explain how LokAI orchestrates LLM translation jobs, including the job lifecycle, retry and backoff strategy, bounded concurrency, and how glossaries and style guides are injected into every prompt.
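Until the full write-up lands, here is a minimal sketch of what glossary and style-guide injection could look like. All names (`GlossaryEntry`, `PromptContext`, `buildPrompt`) are illustrative assumptions, not LokAI's actual API.

```typescript
// Hypothetical types — LokAI's real schema may differ.
interface GlossaryEntry {
  source: string
  target: string
}

interface PromptContext {
  glossary: GlossaryEntry[]
  styleGuide: string
  sourceLang: string
  targetLang: string
}

// Assemble a translation prompt that carries the glossary and style
// guide alongside the source text, so every job sees the same rules.
function buildPrompt(text: string, ctx: PromptContext): string {
  const terms = ctx.glossary
    .map((e) => `- "${e.source}" -> "${e.target}"`)
    .join("\n")
  return [
    `Translate the text from ${ctx.sourceLang} to ${ctx.targetLang}.`,
    `Style guide:\n${ctx.styleGuide}`,
    `Use these glossary terms verbatim:\n${terms}`,
    `Text:\n${text}`,
  ].join("\n\n")
}
```

The key design point is that injection happens at prompt-construction time for every job, so glossary or style-guide updates take effect on the next job without redeploying the pipeline.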

  • Job creation and the llm_jobs queue
  • Provider abstraction (OpenAI, Anthropic)
  • Prompt construction — glossary and style guide injection
  • Confidence scoring and uncertainty handling
  • Retry, backoff, and cancellation (Effect-TS)
  • Cascade re-translation when source text changes
  • Usage tracking and cost accounting
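The retry and backoff item above is planned around Effect-TS; as a stand-in until that page exists, the policy can be sketched in plain TypeScript. `withRetry` and its parameters are hypothetical names for illustration, not the pipeline's real helper.

```typescript
// Illustrative exponential-backoff retry loop. The actual pipeline is
// described as using Effect-TS (e.g. Effect.retry with a Schedule), but
// the underlying policy it expresses looks like this.
async function withRetry<T>(
  task: () => Promise<T>,
  maxAttempts = 4,
  baseDelayMs = 100
): Promise<T> {
  let lastError: unknown
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await task()
    } catch (err) {
      lastError = err
      // Exponential backoff: 100 ms, 200 ms, 400 ms, ...
      const delay = baseDelayMs * 2 ** attempt
      await new Promise((resolve) => setTimeout(resolve, delay))
    }
  }
  // All attempts exhausted — surface the last provider error.
  throw lastError
}
```

An Effect-TS version would additionally get typed errors, interruption-based cancellation, and bounded concurrency (e.g. a semaphore over the `llm_jobs` queue) for free, which is why it is the planned foundation.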