About the Role
AI and Data Platform Engineer

Fractal is a strategic AI partner to Fortune 500 companies with a vision to power every human decision in the enterprise. Fractal is building a world where individual choices, freedom, and diversity are the greatest assets; an ecosystem where human imagination is at the heart of every decision, where no possibility is written off, only challenged to get better. We believe that a true Fractalite is one who empowers imagination with intelligence. Fractal has been featured as a Great Place to Work by The Economic Times in partnership with the Great Place to Work® Institute and recognized as a 'Cool Vendor' and a 'Vendor to Watch' by Gartner. Please visit Fractal | Intelligence for Imagination for more information about Fractal.

We are looking for a strong AI Engineer who thrives at the intersection of AI-assisted development, data platform engineering, and high-velocity delivery. This role focuses on delivering high-quality, scalable, and governed data assets through spec-driven development, AI-enabled engineering workflows, and standardized data patterns. You will work closely with platform, DevOps, SRE, and analytics consumers to enable self-service analytics while reducing operational toil through automation and reuse.

This is a hands-on engineering role for someone who enjoys modern data engineering, high engineering rigor, and rapid, AI-accelerated delivery. You may need to work hours that overlap with PST.
Responsibilities:

Spec-Driven & AI-Accelerated Data Engineering
• Deliver high‑quality data ingestion, transformation, and modeling solutions using Fabric and associated Azure services through spec‑driven development.
• Apply AI‑enabled development workflows using tools such as GitHub Copilot and LLM‑assisted coding to maximize development velocity without compromising quality.
• Translate business and analytics requirements into clear technical specifications, schemas, transformations, and acceptance criteria.
• Participate actively in automated reviews, AI-assisted refactoring, and iterative improvements.

Data Pipelines & Modeling
• Build modular, reusable, and testable pipeline components aligned with standardized data patterns.
• Design and implement ingestion and transformation pipelines using Fabric‑native constructs (Lakehouse, Warehouse, Notebooks, Dataflows Gen2).
• Optimize transformations for scale, performance, and cost efficiency.
• Ensure pipelines are dependency‑aware, well‑orchestrated, and production‑ready.
Self‑Service Analytics Enablement
• Produce consumer‑ready, tested, and governed datasets to enable self‑service analytics for business and analytics users.
• Apply strong data modeling practices (relational, dimensional, and analytical models) to support reporting and insights.
• Ensure data quality through validation checks, reconciliations, and automated testing.
Platform Collaboration & Production Readiness
• Collaborate closely with DevOps, SRE, and QA engineers to ensure production readiness and operational reliability.
• Follow consistent CI/CD, testing, and release practices across environments.
• Support investigation and resolution of data issues by improving pipeline robustness and observability.

Engineering Excellence & Automation
• Participate in code and design reviews, upholding engineering standards for quality, performance, and maintainability.
• Build reusable data assets and automated workflows to reduce manual data support and operational effort.
• Continuously improve reliability, reusability, and maintainability of data solutions.

Qualifications & Experience

Core Requirements
• Bachelor's or master's degree in Computer Science, Engineering, or a related field
• 4+ years of professional software, data, or AI engineering experience
• Strong hands-on experience with:
• SQL (advanced querying and optimization)
• Data modeling (relational, dimensional, and analytical models)
• Distributed data processing concepts
AI Engineering & Development Velocity
• Demonstrated experience using AI-assisted development / vibe coding to accelerate delivery
• Comfort working from detailed specifications rather than exploratory coding alone
• Strong engineering discipline: clean code, modular design, automated testing
Fabric & Azure (Required / Strongly Preferred)

Strong hands-on exposure to:
• Lakehouse, Warehouse, Notebooks, Dataflows Gen2
• OneLake concepts

Prior experience with:
• Databricks (Spark, notebooks, pipelines)
• Azure services such as ADLS, Azure SQL, Synapse
• Ability to translate Databricks patterns into Fabric-native implementations

Personal Attributes
• Forward-thinking, self-starter mindset
• Strong problem solving and systems thinking skills
• Comfortable operating in ambiguity with clear specifications as the stabilizing force
• Passion for building end-to-end, production-grade AI and data platforms
Pay

The wage range for this role takes into account the wide range of factors that are considered in making compensation decisions.