
Data Scientist, Assessment & Learning Analytics

Amira Learning · United States

Full-time · Lead · Python


About the Role

Job Title: Data Scientist, Assessment & Learning Analytics
Location: Remote
Employment Type: Full-Time

About Us:
Amira Learning accelerates literacy outcomes by delivering the latest reading science and neuroscience through AI. As the leader in third-generation edtech, Amira listens to students read out loud, assesses mastery, helps teachers supplement instruction, and delivers 1:1 tutoring. Validated by independent university and SEA efficacy research, Amira is the only AI literacy platform proven to achieve gains surpassing 1:1 human tutoring, consistently delivering effect sizes over 0.4. Rooted in over thirty years of research, Amira is the first, foremost, and only proven Intelligent Assistant for teachers and AI Reading Tutor for students. The platform serves as a school district's Intelligent Growth Engine, driving instructional coherence by unifying assessment, instruction, and tutoring around the chosen curriculum. Unlike any other edtech tool, Amira continuously identifies each student's skill gaps and collaborates with teachers to build lesson plans aligned with district curricula, pulling directly from the district's high-quality instructional materials. Teachers can finally differentiate instruction with evidence and ease, and students get the 1:1 practice they specifically need, whether they are excelling or working below grade level. Trusted by more than 2,000 districts and working in partnership with twelve state education agencies, Amira is helping 3.5 million students worldwide become motivated and masterful readers.

Job Summary:
We are seeking an exceptional Data Scientist to work at the intersection of applied statistics, data science, and educational analytics. In this role, you will own the quantitative rigor behind our AI-powered literacy platform: designing and validating the statistical models, pipelines, and experiments that determine how millions of students are assessed and supported in learning to read. You will collaborate closely with AI engineers, product managers, and external partners to ensure our measurement systems are statistically sound, interpretable, and continuously improving. This is a high-impact, technically deep role for someone who thinks rigorously about measurement and analytics and loves building things that matter.

Essential Functions:

Statistical Modeling & Validation
• Design and validate models underlying adaptive assessment systems, automated scoring pipelines, and real-time diagnostic feedback
• Develop and maintain automation pipelines for evaluating the impact of system changes on downstream score distributions and student classifications
• Apply and extend state-of-the-art statistical models and approaches to estimate student performance, growth, and score trajectories
• Develop simulations to evaluate and validate assessment design decisions
• Conduct rigorous validity and reliability analyses on data from early reading/literacy assessments

Data Science & ML Collaboration
• Partner with ML engineers to design experiments validating AI scoring models, including automated speech recognition, NLP-based scoring, and adaptive algorithm performance
• Build AI-powered data pipelines and analytical tooling to monitor score quality, flag anomalies, and support continuous improvement of assessment models
• Use AI-assisted development tools, including Cursor, Claude Code, and similar platforms, as core parts of your daily workflow; comfort and enthusiasm for these tools is essential
• Develop and validate norm-referenced and criterion-referenced score reporting frameworks grounded in statistical best practices
• Conduct linking, equating, and comparability studies to ensure consistent score interpretation across years, cohorts, and assessment variants

Research, Communication & External Engagement
• Translate complex statistical methodology and results into clear, compelling narratives for non-technical audiences, including school district leaders and state procurement teams
• Contribute to technical reports, white papers, and RFP responses demonstrating statistical rigor and validity evidence
• Support ongoing research addressing the unique challenges of AI-powered formative literacy assessment at scale

Continuous Learning Systems
• Oversee data collection frameworks and longitudinal analytical designs that support ongoing model improvement
• Monitor assessment system performance across diverse student populations and produce solutions that address fairness and equity issues
• Collaborate cross-functionally to deliver analytical insights that directly inform product decisions and instructional recommendations

Qualifications (Education and Experience):
• Master's or Ph.D. in Statistics, Data Science, Quantitative Social Science, Applied Mathematics, Educational Measurement, or a closely related field
• 5+ years of hands-on experience applying statistical and data science methods in applied, production settings (educational assessment e
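To make the "evaluating the impact of system changes on downstream score distributions" responsibility concrete, here is a minimal, purely illustrative sketch of that kind of check: a two-sample Kolmogorov–Smirnov statistic comparing pre- and post-change score samples. This is not Amira's actual pipeline; the function names, the 0.1 threshold, and the simulated data are all hypothetical.

```python
import bisect
import random

def ecdf(xs, x):
    """Fraction of the sorted sample xs that is <= x."""
    return bisect.bisect_right(xs, x) / len(xs)

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the largest gap
    between the empirical CDFs of samples a and b."""
    a, b = sorted(a), sorted(b)
    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in a + b)

def flag_score_drift(before, after, threshold=0.1):
    """Flag a release when post-change scores drift noticeably from
    the pre-change distribution (threshold is hypothetical)."""
    return ks_statistic(before, after) > threshold

# Simulated scale scores around a hypothetical model update.
random.seed(0)
before  = [random.gauss(500, 50) for _ in range(2000)]
same    = [random.gauss(500, 50) for _ in range(2000)]
shifted = [random.gauss(520, 50) for _ in range(2000)]

print(flag_score_drift(before, same))     # no real change -> False
print(flag_score_drift(before, shifted))  # 0.4 SD shift -> True
```

In practice a production monitor would use a proper hypothesis test (e.g. `scipy.stats.ks_2samp`) and track classification-rate changes alongside distributional drift, but the shape of the check is the same.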
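Similarly, the "linking, equating, and comparability studies" responsibility can be illustrated with the simplest classical technique, mean-sigma linear equating, which maps scores from one form onto a reference form's scale by matching mean and standard deviation. Again a hedged sketch, not the method the role would actually use; names and data are invented.

```python
import random
import statistics

def linear_equate(x_scores, ref_scores):
    """Mean-sigma linear equating: rescale form-X scores so their
    mean and SD match those of the reference form."""
    mx, sx = statistics.mean(x_scores), statistics.stdev(x_scores)
    mr, sr = statistics.mean(ref_scores), statistics.stdev(ref_scores)
    return [mr + (x - mx) * sr / sx for x in x_scores]

# Hypothetical data: a new form reported on a different raw scale.
random.seed(1)
ref  = [random.gauss(500, 50) for _ in range(1000)]
form = [random.gauss(60, 8) for _ in range(1000)]

equated = linear_equate(form, ref)
# After equating, the form's mean and SD match the reference scale.
print(round(statistics.mean(equated) - statistics.mean(ref), 6))
```

Real equating studies would use anchor items or common examinees rather than assuming equivalent groups, but this shows the basic scale transformation involved.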

Amira Learning has 1 open position on Remote Vibe Coding Jobs.

