Job Description
LLM Evaluation Specialist – Cultural and Linguistic Alignment
Job Summary:
We are looking for linguistically and culturally aware professionals to support the evaluation and enhancement of multilingual prompt-response datasets for large language models (LLMs). This role involves rubric design, evaluation of translations and model outputs, prompt creation, and red teaming focused on identifying and surfacing cultural nuances and biases in LLM behavior.
Key Responsibilities:
Rubric Definition & Prompt Evaluation
- Update rubric definitions with region/language-specific examples to ensure cultural and linguistic relevance.
- Identify the need for additional rubrics tailored to specific languages or regional contexts.
- Review prompts translated from English into the target language and revise where translations appear unnatural or inaccurate.
- Write thoughtful prompts that test the cultural awareness of LLMs.
- Rate prompt-response pairs using a standardized, rubric-based evaluation template and provide detailed justifications to support the ratings.
- Document problematic outputs and annotate them with clear explanations of rubric violations or cultural insensitivities.
Required Qualifications:
- Native-level English proficiency and deep familiarity with the cultural norms of the corresponding region.
- Experience in LLM evaluation, content moderation, or linguistic QA preferred.
- Strong attention to detail with the ability to identify subtle issues in language use, tone, and cultural references.
- Comfortable working in spreadsheets and evaluation templates.
- Master’s degree in a relevant field.
Preferred Qualifications:
- Prior experience with prompt engineering or LLM testing.
- Familiarity with tools such as Gemini, ChatGPT, or similar LLM platforms.
- Ability to clearly articulate reasoning behind rubric ratings or prompt edits.