For a well-known Islamic bank in Kuwait, we are hiring an AI Security Governance & Policy Specialist to build and operate security governance for LLM/ML/AI systems across the full lifecycle (use-case intake, design, build, test, deploy, monitor, retire).
This position focuses on delivering tangible governance outputs: writing and maintaining enforceable policies and standards, translating frameworks into practical control requirements, performing AI security risk assessments, reviewing solution designs and technical evidence, and ensuring required testing and monitoring are implemented and evidenced.
The role aligns AI governance and controls to recognized frameworks such as the NIST AI RMF and ISO/IEC 42001, while ensuring consistency with existing enterprise security baselines (e.g., ISO/IEC 27001-style controls). The specialist partners with VMPT to ensure AI solutions undergo appropriate testing (including adversarial testing/red teaming) and that secure coding practices, vulnerability management, and evidence-based exception handling are applied where required.
Key Responsibilities
AI Policies, Standards, and Control Requirements
- Draft, maintain, and publish AI security policies, standards, and minimum baselines for LLM/ML/AI solutions.
- Produce clear, testable requirements and guidance covering:
  - Model onboarding and approval (internal and third-party models, versioning, provenance, licensing/usage constraints)
  - Secure AI SDLC requirements (threat modeling, secure design, testing, change control, release gates)
  - AI data governance requirements (data classification, retention, data minimization, leakage prevention, approved sources for RAG)
  - Logging/telemetry requirements for AI apps and model endpoints (audit trails, redaction guidance, monitoring and alerting expectations)
- Maintain reusable governance artifacts and templates: intake forms, control checklists, evidence packs, exception/waiver forms, and sign-off criteria.
AI Risk Assessments and Control Validation
- Perform hands-on AI security risk assessments for new and changed use cases, documenting:
  - Threat scenarios specific to AI and LLM integrations
  - Required controls and compensating controls
  - Residual risk and recommended risk treatment
- Apply NIST AI RMF concepts to structure AI risks in a practical way that engineering teams can implement and test.
- Maintain an AI security risk register with clear remediation actions, owners, dates, and closure evidence.
- Validate control implementation by reviewing technical evidence (architecture diagrams, IAM role assignments, secrets handling approach, logging configuration, test results, monitoring dashboards), and track remediation through closure.
Testing, Assurance, and Evidence Requirements
- Define and enforce minimum assurance requirements for AI solutions, including:
  - Adversarial testing / AI red teaming minimum expectations (in coordination with VMPT)
  - Secure code review expectations for AI application layers (prompt handling, retrieval logic, output handling, tool invocation)
  - Pre-production and post-release monitoring requirements (abuse detection signals, data leakage indicators, model/API usage anomalies)
- Review and validate the evidence that testing and monitoring were performed and met required outcomes (reports, findings, retest results, operational telemetry).
- Ensure guidance remains aligned with industry references such as the OWASP Top 10 for LLM Applications, and translate that guidance into actionable requirements and developer-friendly mitigation checklists.
Cloud, Vendor, and Data Handling Governance
- Maintain secure-by-default patterns for Azure-based GenAI deployments, including:
  - Identity and access controls (least privilege, service identities, key rotation expectations)
  - Network isolation and endpoint exposure controls
  - Secrets management and encryption requirements
  - Logging/telemetry baselines and retention requirements
  - Environment separation and deployment guardrails
- Define governance requirements for enterprise GenAI data handling (what data is permitted, what must be masked/redacted, what requires additional approvals, what must be logged).
- Support onboarding of third-party AI services and vendors by defining security questionnaires, minimum security clauses, required assurance evidence, and acceptable risk thresholds.
Education and Qualifications
- Bachelor's degree in Cybersecurity, Computer Science, Information Systems, Risk/Compliance, or related discipline.
- 5-10 years of experience in cybersecurity governance, risk management, compliance, security assurance, or security architecture, with a strong delivery orientation.
- Demonstrated ability to translate frameworks into practical, implementable controls, standards, and operating procedures that can be validated with evidence.
- Practical understanding of LLM/AI solution patterns (RAG, agents, embeddings, fine-tuning, model APIs, orchestration) and their security implications.
- Working knowledge of NIST AI RMF (govern/map/measure/manage concepts applied to AI risk).
- Familiarity with ISO/IEC 42001 concepts and mapping into enterprise ISMS environments.
- Working knowledge of ISO/IEC 27001-style control environments and audit expectations.
Highly Recommended Certifications
- ISC2 CISSP
- ISACA CISM
- IAPP AIGP (Artificial Intelligence Governance Professional)
- ISO/IEC 27001 Lead Implementer (or Lead Auditor equivalent)
Preferred
- ISC2 CCSP
- Microsoft Certified: Cybersecurity Architect Expert (SC-100)
- Microsoft Certified: Azure AI Engineer Associate (AI-102)