Staff Research Engineer, Model Efficiency
Cohere · New York
About the Role
Who are we?
Our mission is to scale intelligence to serve humanity. We're training and deploying frontier models for developers and enterprises who are building AI systems to power magical experiences like content generation, semantic search, RAG, and agents. We believe that our work is instrumental to the widespread adoption of AI.
We obsess over what we build. Each one of us is responsible for contributing to increasing the capabilities of our models and the value they drive for our customers. We like to work hard and move fast to do what's best for our customers.
Cohere is a team of researchers, engineers, designers, and more, who are passionate about their craft. Each person is one of the best in the world at what they do. We believe that a diverse range of perspectives is a requirement for building great products.
Join us on our mission and shape the future!
Why this role?
Large Language Models (LLMs) continue to push the boundaries of what AI systems can do — but inference is still the bottleneck. The Model Efficiency team is responsible for pushing the limits of LLM inference efficiency across our foundation models. We explore and ship breakthroughs across the model execution stack, including:
model architecture and MoE routing optimization
decoding and inference-time algorithm improvements
software/hardware co-design for GPU acceleration
performance optimization without compromising model quality
Please Note: We have offices in Toronto, Montreal, San Francisco, New York, Paris, Seoul, and London. We embrace a remote-friendly environment, and as part of this approach, we strategically distribute teams based on interests, expertise, and time zones to promote collaboration and flexibility. The Model Efficiency team is concentrated in the EST and PST time zones; these are our preferred locations for this role.
As a Staff Research Engineer, you will develop, prototype, and deploy techniques that materially improve how fast and efficiently our models run in production.
You may be a good fit for the Model Efficiency team if you:
Have a PhD in Machine Learning or a related field
Understand LLM architecture and how to optimize LLM inference under resource constraints
Have significant experience with one or more techniques that enhance model efficiency
Have strong software engineering skills
Have an appetite for working in a fast-paced, high-ambiguity start-up environment
Have publications at top-tier conferences and venues (ICLR, ACL, NeurIPS)
Have a passion for mentoring others
If some of the above doesn't line up perfectly with your experience, we still encourage you to apply!
We value and celebrate diversity and strive to create an inclusive work environment for all. We welcome applicants from all backgrounds and are committed to providing equal opportunities. Should you require any accommodations during the recruitment process, please submit an Accommodations Request Form, and we will work together to meet your needs.
Full-Time Employees at Cohere enjoy these Perks:
🤝 An open and inclusive culture and work environment
🧑‍💻 Work closely with a team on the cutting edge of AI research
🍽 Weekly lunch stipend, in-office lunches & snacks
🦷 Full health and dental benefits, including a separate budget to take care of your mental health
🐣 100% Parental Leave top-up for up to 6 months
🎨 Personal enrichment benefits towards arts and culture, fitness and well-being, quality time, and workspace improvement
🏙 Remote-flexible, offices in Toronto, New York, San Francisco, London and Paris, as well as a co-working stipend
✈️ 6 weeks of vacation (30 working days!)
Cohere Inc. is a Canada-based international technology company focused on artificial intelligence. Cohere specializes in large language models and AI products for regulated industries, particularly the finance, healthcare, manufacturing, and energy fields, as well as the public sector. Cohere was founded in 2019 by Aidan Gomez, Ivan Zhang, and Nick Frosst and is headquartered in Toronto and San Francisco, with offices in Montreal, London, New York City, Paris, and Seoul.