
Information Technology Governance Manager


Job Description

Title: AI Security Governance & Policy Specialist

Location: Kuwait

Experience Required: 3–7 years of relevant experience (governance/risk focus; AI governance exposure required)

Education & Qualifications: Bachelor's degree in Cybersecurity, Computer Science, Information Systems, Risk/Compliance, or related discipline.

Job Brief

We are hiring an AI Security Governance & Policy Specialist to build and operate security governance for LLM/ML/AI systems across the full lifecycle. This role focuses on delivering tangible governance outputs, including drafting and maintaining enforceable policies and standards, translating regulatory and security frameworks into practical control requirements, performing AI security risk assessments, reviewing solution designs and technical evidence, and ensuring required testing and monitoring are implemented with proper documentation.

The specialist will partner with VMPT and engineering teams to ensure AI solutions undergo appropriate security testing, adversarial assessments, secure coding validation, vulnerability management, and evidence-based exception handling where required.

Key Responsibilities

AI Policies, Standards & Control Requirements

  • Draft, maintain, and publish AI security policies, standards, and minimum baselines for LLM/ML/AI solutions.
  • Produce clear, testable requirements covering:
      ◦ Model onboarding and approval (internal and third-party models, versioning, provenance, licensing constraints)
      ◦ Secure AI SDLC requirements (threat modeling, secure design, testing, change control, release gates)
      ◦ AI data governance (classification, retention, minimization, leakage prevention, approved RAG sources)
      ◦ Logging and telemetry requirements (audit trails, redaction guidance, monitoring and alerting expectations)

AI Risk Assessments & Control Validation

  • Conduct hands-on AI security risk assessments for new and modified use cases.
  • Apply NIST AI RMF principles to structure risks in a practical and implementable manner.
  • Maintain an AI security risk register with remediation tracking and closure evidence.
  • Validate control implementation by reviewing architecture diagrams, IAM configurations, secrets management, logging setups, test results, and monitoring dashboards.
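As a hypothetical illustration of the "risk register with remediation tracking and closure evidence" responsibility above, the structure might look like the following sketch. All field names, statuses, and the evidence reference are assumptions for illustration, not a prescribed schema:

```python
# Hypothetical sketch of an AI security risk register entry with
# evidence-based closure. Field names and values are illustrative only.
from dataclasses import dataclass, field


@dataclass
class RiskEntry:
    risk_id: str
    description: str
    severity: str                 # e.g. "high", "medium", "low"
    status: str = "open"          # open -> remediating -> closed
    evidence: list = field(default_factory=list)  # closure evidence refs

    def close(self, evidence_ref: str) -> None:
        # Closure requires attached evidence, mirroring the
        # evidence-based approach described in the responsibilities.
        self.evidence.append(evidence_ref)
        self.status = "closed"


register = [
    RiskEntry("AI-001", "Prompt injection via untrusted RAG source", "high"),
]
register[0].close("assessment-report.pdf")  # hypothetical evidence reference
```

The point of the sketch is that a risk cannot move to "closed" without an evidence artifact attached, which is the property an auditor would check.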

Testing, Assurance & Evidence Requirements

  • Define and enforce minimum assurance requirements, including:
      ◦ Adversarial testing / AI red teaming
      ◦ Secure code review expectations (prompt handling, retrieval logic, output controls, tool invocation)
      ◦ Pre- and post-production monitoring (abuse detection, data leakage indicators, anomaly detection)
  • Align guidance with industry references such as the OWASP Top 10 for LLM Applications and translate them into actionable mitigation checklists.
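To make "translate industry references into actionable mitigation checklists" concrete, here is a minimal sketch assuming two OWASP LLM Top 10 categories; the checklist items themselves are illustrative assumptions, not the official guidance text:

```python
# Illustrative mapping from OWASP LLM Top 10 categories to checklist
# items. Item wording is an assumption for demonstration purposes.
CHECKLIST = {
    "LLM01: Prompt Injection": [
        "Segregate untrusted input from system prompts",
        "Require approval gates for high-privilege tool invocations",
    ],
    "LLM02: Insecure Output Handling": [
        "Treat model output as untrusted before downstream use",
        "Encode/sanitize output rendered in browsers or shells",
    ],
}


def open_items(completed: set) -> list:
    """Return checklist items not yet evidenced as complete."""
    return [
        item
        for items in CHECKLIST.values()
        for item in items
        if item not in completed
    ]
```

A reviewer could then track assurance evidence per item, closing the loop between the framework reference and what engineering teams actually implement.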

Cloud, Vendor & Data Handling Governance

  • Maintain secure-by-default patterns for Azure-based GenAI deployments, including:
      ◦ Identity and access controls (least privilege, service identities, key rotation)
      ◦ Network isolation and endpoint protection
      ◦ Logging/telemetry baselines and retention requirements
      ◦ Environment separation and deployment guardrails
  • Define enterprise GenAI data handling requirements (permitted data, masking/redaction requirements, approval workflows, logging obligations).
  • Support onboarding of third-party AI vendors by defining security questionnaires, minimum clauses, assurance evidence requirements, and acceptable risk thresholds.
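The secure-by-default baselines above could be enforced with a simple policy check. The sketch below is a hypothetical illustration only: the keys and baseline values are assumptions, not an Azure API or an official policy schema.

```python
# Hedged sketch: validating a deployment record against secure-by-default
# baselines (service identity, network isolation, logging, environment
# separation). All keys and values are hypothetical.
BASELINE_FLAGS = ("managed_identity", "private_endpoint", "diagnostic_logging")
ALLOWED_ENVIRONMENTS = {"dev", "test", "prod"}  # separated environments


def violations(deployment: dict) -> list:
    """Return a list of baseline violations for one deployment record."""
    issues = []
    for flag in BASELINE_FLAGS:
        if not deployment.get(flag, False):
            issues.append(f"{flag} not enabled")
    if deployment.get("environment") not in ALLOWED_ENVIRONMENTS:
        issues.append("unknown environment")
    return issues


print(violations({"managed_identity": True, "environment": "prod"}))
# → ['private_endpoint not enabled', 'diagnostic_logging not enabled']
```

In practice such checks would be wired into deployment guardrails so non-compliant configurations are flagged before release rather than discovered in audit.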

Core Experience

  • 5–10 years of experience in cybersecurity governance, risk management, compliance, security assurance, or architecture with strong delivery focus.
  • Proven ability to translate frameworks into implementable controls, standards, and procedures validated through evidence.
  • Practical understanding of LLM/AI patterns (RAG, agents, embeddings, fine-tuning, model APIs, orchestration) and related security implications.

Framework Familiarity

  • Working knowledge of NIST AI RMF (govern/map/measure/manage concepts).
  • Familiarity with ISO/IEC 42001 and its integration into enterprise ISMS environments.
  • Working knowledge of ISO/IEC 27001-style control environments and audit requirements.

Professional Skills

  • Strong policy and standards drafting capabilities (clear, measurable requirements).
  • Effective stakeholder engagement with engineering, data science, VMPT, and risk teams.
  • Ability to produce audit-ready documentation and structured evidence packs.

Highly Recommended Certifications

  • ISC2 CISSP
  • ISACA CISM
  • IAPP AIGP (Artificial Intelligence Governance Professional)
  • ISO/IEC 27001 Lead Implementer (or Lead Auditor equivalent)

Preferred Certifications

  • ISC2 CCSP
  • Microsoft Certified: Cybersecurity Architect Expert (SC-100)
  • Microsoft Certified: Azure AI Engineer Associate (AI-102)

Job ID: 143933419