Honeywell Aerospace Technologies

Principal AI Engineer

12-14 Years

Job Description

As the Principal AI/ML Verification and Validation Engineer, you will lead the strategy, architecture, and implementation of verification frameworks for AI-enabled avionics systems. You will define methods for assuring AI/ML components in safety-critical environments, support certification activities aligned with DO-178C and DO-330 expectations, and drive automation across the V&V lifecycle.

This role bridges traditional safety-critical engineering and modern AI/ML practices by improving traceability, test automation, robustness evaluation, and evidence generation. You will partner with software, systems, safety, and certification teams to build scalable assurance workflows that improve verification efficiency, strengthen compliance readiness, and support timely program delivery.

Responsibilities

  • Define AI/ML V&V strategy: Lead verification and validation approaches for AI-enabled airborne software, including planning, traceability, test coverage, robustness evaluation, and evidence generation aligned with certification objectives.
  • Architect automation frameworks: Design and deploy scalable V&V workflows and CI/CD pipelines that support model validation, software integration, simulation, hardware-in-the-loop testing, and regression execution.
  • Advance AI-augmented assurance: Develop or guide tools that improve requirement-to-test traceability, support test generation, identify anomalous results, and prioritize verification effort using data-driven methods where appropriate.
  • Lead tool qualification activities: Support qualification planning and evidence development for verification tools under DO-330 and help establish confidence in automated workflows used in regulated environments.
  • Drive robustness and explainability methods: Establish approaches for adversarial testing, corner-case analysis, formal methods, uncertainty evaluation, and model interpretability where traditional structural coverage methods may be insufficient for AI/ML behavior.
  • Collaborate across engineering functions: Partner with software, systems, safety, test, and certification teams to align technical approaches, certification artifacts, and program milestones.
  • Influence standards and mentor teams: Provide technical leadership, define best practices, and mentor engineers working on AI/ML assurance in safety-critical programs.

Qualifications

  • Experience: 12+ years in software verification and validation, safety-critical software engineering, or a closely related field.
  • Certification knowledge: Deep understanding of DO-178B/C software lifecycle objectives and practical experience supporting certification or compliance activities in regulated environments.
  • AI/ML assurance background: Experience verifying, testing, or assuring AI/ML systems, including areas such as computer vision, deep learning, or reinforcement learning.
  • Programming skills: Strong proficiency in C/C++ and Python, with hands-on experience building test automation, analysis workflows, or supporting tooling.
  • Verification methods: Strong foundation in statistical verification, uncertainty quantification, robustness testing, and formal or semi-formal analysis methods.
  • Toolchain familiarity: Experience with established V&V tools such as VectorCAST, LDRA, or Parasoft, along with modern automation and scripting practices.
  • DevOps/MLOps exposure: Experience with CI/CD, containerized test environments, and data or model versioning practices used to support traceability and repeatability.
  • Collaboration and leadership: Demonstrated ability to work across engineering disciplines and provide technical leadership on complex, high-assurance programs.

Preferred Qualifications

  • AI-enabled test engineering: Experience building or applying AI/ML techniques to improve requirement-to-test traceability, test generation, anomaly detection, or verification workflow efficiency.
  • Regulatory familiarity: Exposure to FAA or EASA guidance related to machine learning, software assurance, or tool qualification in safety-critical environments.
  • MLOps for regulated systems: Experience with tools such as DVC, MLflow, Docker, or Kubernetes to support reproducible, traceable, and controlled validation workflows.
  • Simulation and synthetic data: Hands-on experience with digital twin, simulation, or model-based environments such as MATLAB/Simulink, Unreal Engine, AirSim, or similar platforms.
  • Formal methods and static analysis: Familiarity with approaches or tools such as Polyspace, Astrée, abstract interpretation, or SMT solvers for verifying software properties.
  • Legacy integration experience: Experience connecting modern Python-based automation or AI testing frameworks with established V&V toolchains used for C, C++, or Ada systems.

More Info

Job ID: 147484061