About the position
Anthropic's mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

As a Prompt Engineer on the Claude Code team, you'll own Claude's behaviors specifically within Claude Code, ensuring users get a consistent, safe, and high-quality experience as we ship new models and evolve the product. This is a highly specialized role sitting at the intersection of model behavior and product quality.

You'll be the expert on how Claude behaves inside Claude Code, owning and maintaining the system prompts that ship with each new model snapshot. When a new model drops, you're the person making sure Claude Code feels right within days, not weeks. You'll work closely with Model Quality and Research to understand emergent behaviors and behavioral regressions, and with product and safeguards teams to respond quickly when something goes wrong.

This role requires someone who can move fast on behavioral tuning while maintaining rigor, and who cares deeply about the end-to-end developer experience Claude Code delivers. You'll need strong prompting skills, excellent judgment about model behaviors, and the collaborative skills to work across product, safeguards, and research teams.

Salary: $320,000 - $405,000 (SWE-G 5-6)
Responsibilities
- Own Claude Code's system prompts for each new model snapshot, ensuring behaviors feel consistent and well-tuned
- Review production prompt changes and serve as a resource for particularly challenging prompting problems involving alignment and reputational risks
- Lead incident response for behavioral and policy concerns, coordinating with product and safeguards teams
- Scale prompting and evaluation best practices across Claude Code and product teams
- Deliver product evaluations focused on model behaviors
- Define and streamline processes for rolling out prompt changes, including launch criteria and review practices
- Create model-specific prompt guides that document quirks and optimal prompting strategies for each release
- Collaborate with product teams to translate feature requirements into effective prompts

Requirements
You may be a good fit if you:

- Are a power user of agentic coding tools and have strong intuition about model capabilities and limitations
- Thrive in high-intensity environments with fast iteration cycles
- Take full ownership of problems and drive them to completion independently
- Are skilled at creating and maintaining behavioral evaluations
- Have strong technical understanding, including comprehension of agent scaffold architectures and model training processes
- Are an experienced coder comfortable working in Python and TypeScript
- Have independently driven changes through production systems with strong execution and responsiveness
- Have experience translating user feedback, product needs, and behavioral observations into coherent prompts and behavioral specifications
- Excel at working across organizational boundaries, collaborating effectively with teams that have differing goals and perspectives
- Care deeply about AI safety and making Claude a healthy alternative in the AI landscape

We require at least a Bachelor's degree in a related field or equivalent experience.

Benefits
- Competitive compensation and benefits
- Optional equity donation matching
- Generous vacation and parental leave
- Flexible working hours
- A lovely office space in which to collaborate with colleagues
AI Safety / LLM · 500-1,000 employees · San Francisco, CA · Founded 2021 · Series E
Anthropic PBC is an American artificial intelligence (AI) company headquartered in San Francisco. It has developed a family of large language models (LLMs) named Claude. Anthropic operates as a public benefit corporation that researches and develops AI to "study their safety properties at the technological frontier" and uses this research to deploy safe models for the public.