
Software Engineer II - AI

McGraw Hill · United States


About the Role

Overview

Impact the Moment

At McGraw Hill, our AI Platform team is building intelligent learning experiences used by millions of students and educators worldwide. We're not bolting AI onto legacy products; we're rethinking how people learn by putting generative AI, retrieval-augmented generation, and agentic workflows at the center of the experience. This is applied AI with real stakes: the models are powerful, the problems are genuinely hard, and the impact of helping a student finally grasp a concept they've been struggling with is something you'll actually feel.

Your Impact on the Team

We're looking for a Software Engineer II - AI to join our AI Platform team. You'll build the services and APIs that use AI to power student and teacher experiences: RAG pipelines, LLM orchestration, retrieval and routing layers, and the production infrastructure that turns model capabilities into shipped features.

A note on what this role is and isn't: this is a software engineering role, not a data science or ML role. You won't be training models, fine-tuning, building eval harnesses for model performance, or running ML experiments; we have an elite team of data scientists and applied researchers who own that work. Your job is to take what they produce, along with the model APIs we use, and turn it into reliable, observable, scalable production systems. If your background is primarily in MLOps, model training, or data science and you're hoping to do more of that here, this isn't the right fit, but we'd encourage you to check our other openings.

This is a role designed for a software engineer who's ready to go deeper. You've got a couple of years of professional experience, you've shipped real things, and you're excited about working on AI-powered products at a level beyond tutorials and side projects.
Senior engineers will be around to pair with, learn from, and review your work, but increasingly you'll be the one breaking down problems, proposing approaches, and owning features through to production.

This is a remote position open to applicants authorized to work for any employer within the United States.

What You'll Learn

We think the middle of your career is when the right environment matters most. Here's what working on this team looks like in practice:

You'll learn how production AI systems actually behave: not just the happy path, but the long tail of weird inputs, hallucinations, retrieval misses, and latency cliffs that you only encounter when real users hit your code. You'll see how senior engineers reason about LLM tradeoffs, and they'll bring you into those conversations. You'll write code that runs at the scale of millions of students, and you'll be responsible for operating it, with backup from teammates when things get tough. You'll get fast, specific feedback on your work, and you'll be expected to give the same in return. By the end of your first year, you'll have shipped features you can point to and say "I built that."

What You'll Do

Build the services that deliver AI to users. Pick up work on our RAG pipelines, LLM orchestration layers, and the APIs that surface AI capabilities to users. You'll integrate with model providers (Azure OpenAI and others), wire up retrieval and routing logic, and build the production glue that turns research into shipped product. You'll start with well-scoped features and grow into owning them end-to-end as you ramp.

Work across our backend stack. Most of your time will be in Python (FastAPI, async/await, Pydantic), with opportunities to contribute to Go services as you grow. You'll touch PostgreSQL, async task workers, and the integrations that connect our services together.

Ship with care. Write code that's tested, readable, and considerate of the people who'll maintain it after you. Participate in code reviews, both giving and receiving, and learn how your team thinks about quality, observability, and reliability.

Get good at AI engineering, on the software side. You don't need to arrive as an expert. You do need to be the kind of person who reads the docs, runs the experiments, asks the awkward questions, and forms a real point of view about how to build with LLMs reliably: prompt design, retrieval quality, latency and cost tradeoffs, graceful failure modes. We'll invest in your growth here. (To be clear: the model science itself lives with our data science team. You'll partner with them, but you won't be doing their job.)

Collaborate broadly. You'll work with data scientists evaluating model outputs, product managers shaping features, designers thinking about UX, and other engineers across the org. Communicating clearly, in PRs, design discussions, and Slack, is part of the job.

Grow into more. Over time, you'll start influencing design decisions on your team, mentoring engineers newer than you, and taking on larger pieces of work. We'll meet you where you are and help you get to where you want to go.

What You Bring

We're looking for someone who meets the c
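For candidates wondering what "the software side of RAG" means concretely: at its simplest, the pipeline is retrieve relevant passages, assemble them into a grounded prompt, and hand that prompt to a model. Here is an illustrative, stubbed sketch of the first two steps in Python. The corpus, document names, and keyword-overlap scoring are hypothetical stand-ins; a production system would use a vector store for retrieval and a provider SDK (such as Azure OpenAI) for the generation step, which is omitted here.

```python
from dataclasses import dataclass


@dataclass
class Doc:
    doc_id: str
    text: str


def retrieve(query: str, corpus: list[Doc], k: int = 2) -> list[Doc]:
    """Toy retrieval: rank documents by word overlap with the query.
    (A real pipeline would query a vector store instead.)"""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda d: len(q_words & set(d.text.lower().split())),
        reverse=True,
    )
    return scored[:k]


def build_prompt(query: str, docs: list[Doc]) -> str:
    """Assemble retrieved passages into a grounded prompt for the model."""
    context = "\n".join(f"[{d.doc_id}] {d.text}" for d in docs)
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}"
    )


# Hypothetical two-document corpus for illustration.
corpus = [
    Doc("algebra-1", "Factoring quadratics by grouping."),
    Doc("bio-3", "photosynthesis converts light into chemical energy."),
]
top = retrieve("how does photosynthesis work", corpus, k=1)
prompt = build_prompt("how does photosynthesis work", top)
```

The interesting engineering in this role lives around this skeleton: retrieval quality, latency and cost budgets, and graceful behavior when retrieval misses or the model hallucinates.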

