AI Engineer

Oliver James

Responsibilities

In this role, you will

Production AI & Prototyping: Improve and expand existing AI/analytics platforms while designing new solutions through research, experimentation, and rapid prototyping to address complex public-sector healthcare challenges.

Model Development & Explainability: Build interpretable, defensible machine learning models and statistical methods that integrate into user workflows, including feature attribution, benchmarking, and structured output summaries.

Client Deliverables & Communication: Develop clear, client-ready reports, exhibits, and presentations that communicate methodology, findings, limitations, and actionable recommendations to non-technical audiences.

Governed Generative AI: Design and implement governed GenAI solutions, including prompt engineering, RAG frameworks, and agent-based workflow orchestration, with strong evaluation standards and traceability for regulated environments.

Operational ML Performance: Take ownership of model performance by defining success metrics, monitoring data quality and drift, tuning thresholds, and building dashboards for triage and KPI tracking.

Business Development Support: Contribute to proposals and RFPs by drafting technical approaches, outlining methodologies, developing solution diagrams, and participating in capability demonstrations.

Cross-Functional Collaboration: Work closely with actuarial, clinical, pharmacy, and policy SMEs to translate business needs into measurable solutions and ensure outputs align with real-world operational decisions.

Engineering Excellence: Partner with data engineering to build scalable, production-ready pipelines with strong quality controls, reproducibility, logging, and documentation standards.

Qualifications

Executive-Level Communication: Ability to produce polished client-facing reports and presentations and clearly articulate tradeoffs, risks, and limitations to non-technical stakeholders.

Applied ML & Statistical Expertise: Strong hands-on experience applying machine learning to large-scale datasets, leveraging interpretable techniques, comparative analytics, and defensible anomaly detection suitable for regulated review.

End-to-End ML Delivery: Proven track record of leading projects from ambiguous problem definition through deployment and adoption in operational environments.

Healthcare Data Experience: Direct experience working with healthcare datasets (claims and encounters preferred), including feature engineering, validation, and explainability within audit and oversight contexts.

Applied Generative AI: Practical experience with prompt engineering, API integrations, and governed retrieval systems (RAG) with structured evaluation and guardrails.

Production Engineering Discipline: Advanced Python skills in collaborative codebases (OOP, modular design) with strong engineering practices such as testing, code reviews, structured logging, and Git-based workflows.

Data at Scale: Strong SQL and distributed data processing experience (Spark, PySpark, Databricks preferred), with a focus on data quality, validation, and performance optimization.

Job Type
Full Time
Location
Indianapolis, IN