Search by job, company or skills

HCLTech

Platform Engineer

new job description bg glownew job description bg glownew job description bg svg
  • Posted a day ago
  • Be among the first 10 applicants
Early Applicant

Job Description

The AI Red Teaming Consultant is responsible for conducting adversarial testing, vulnerability analysis, and risk evaluation of AI and generative AI systems to ensure secure, ethical, and resilient operations.

The role supports the implementation of Responsible AI governance practices by identifying systemic weaknesses, bias, and model risks, and by recommending mitigation strategies.

The incumbent collaborates with cross-functional teams in AI engineering, cybersecurity, and compliance to strengthen the integrity, fairness, and trustworthiness of enterprise AI systems

Roles and Responsibilities:

1. Red Teaming & Adversarial Testing

Design and execute red teaming exercises to identify vulnerabilities and robustness gaps in AI/ML and generative AI systems.

Simulate adversarial attack scenarios (prompt injection, model inversion, data poisoning) and assess model resilience under stress conditions.

Partner with AI developers and security teams to recommend defensive strategies and mitigation measures.

2. AI Risk Assessment & Model Evaluation

Conduct comprehensive risk assessments of AI systems focusing on bias, fairness, explainability, safety, and data privacy.

Evaluate generative AI and traditional models across multiple domains (NLP, vision, code intelligence) for robustness and ethical risk.

Prepare detailed reports, dashboards, and presentations summarizing vulnerabilities, findings, and mitigation recommendations.

3. Responsible AI & Governance Alignment

Support implementation of Responsible AI frameworks such as ISO/IEC 42001, NIST AI RMF, and EU AI Act compliance requirements.

Ensure AI testing practices align with organizational governance, security, and ethical AI standards.

Collaborate with governance teams to embed testing and validation checkpoints into the AI lifecycle.

4. Collaboration, Training & Advisory

Work cross-functionally with engineering, risk, and compliance teams to operationalize AI red teaming best practices.

Conduct workshops and training sessions on AI risk management, adversarial testing, and Responsible AI practices.

Support external client engagements by providing technical assessments and advisory services on AI robustness and resilience.

5. Observability & Monitoring Support

Contribute to developing monitoring frameworks and audit dashboards for tracking model drift, bias, and anomaly detection.

Assist in setting up feedback loops and continuous validation processes for deployed AI systems.

Education & Certification

Bachelor's degree or equivalent in AI/ML, Computer Science, Data Science, or a related field.

Master's degree in AI, Data Science, or Cybersecurity is preferred.

Preferred Certifications:

Certification in ISO/IEC 42001 / Gen AI / Responsible AI Certification is preferred.

Key Skills and Experience

  • Strong understanding of the AI lifecycle, including model development, deployment, risk management, and assurance.
  • Familiarity with AI ML frameworks NLP/ Computer Vision/ Generative AI is essential.
  • Understanding of red teaming skillset for identifying adversarial risks and validating AI governance safeguards, which will be an added advantage.
  • Proven experience in AI Red Teaming, adversarial testing, or security assessment of AI systems.
  • Knowledge of Responsible AI frameworks, governance principles, and ethical AI practices.
  • Familiarity with adversarial attack techniques (model inversion, prompt injection, data poisoning) and defense strategies.
  • Exposure to AI governance frameworks (ISO/IEC 42001, NIST AI RMF, EU AI Act) and privacy regulations (GDPR, CCPA).
  • Experience with explainability tools (e.g., SHAP, LIME) and bias detection methodologies.
  • Excellent analytical, communication, and stakeholder collaboration skills for engaging both technical and non-technical teams.

More Info

Job Type:
Industry:
Employment Type:

About Company

Job ID: 135787107