
**HIRING**
Job Title - ROLE 04 — Data Platform Engineer
Contract Length - 12-Month Contract to Perm
Location - Abu Dhabi
Start Date - ASAP - Subject to Security Clearance
About the Team
My client has a product and platform execution team, not a traditional enterprise data function. They build and operate AI-enabled systems that create impact across the organization. The primary output is working software in production.
They are looking for someone who executes the data warehouse build, the data quality layer, and additional source integrations in parallel with the Lead.
You are the second data platform engineer, working alongside the Lead to build the data infrastructure at speed. While the Lead focuses on enterprise extraction and platform architecture, you own the data warehouse build, the data quality monitoring layer, and the expansion of ingestion to additional source systems.
You are a strong independent executor. You receive architectural direction from the Lead and the Principal Data Architect, and you build to it — reliably, observably, and fast.
What This Role Owns
- Data warehouse build: Azure Data Lake Storage + Synapse/Fabric — dimensional models, ingestion patterns, governed access views
- Data quality monitoring: automated checks on completeness, freshness, row counts, and classification coverage — quality failures pause pipelines, not skip records
- Additional source ingestion: pipelines for data sources beyond the initial three (research center data, PMO data, procurement systems)
- Data mesh enablement: governed data products that entities own and the platform makes accessible
- Agentic platform data readiness: clean, structured data outputs formatted to the contracts defined by the Principal AI Engineer
- Schema deployment: migration scripts, versioning, blue-green deployment support
- Pipeline maintenance: monitoring, troubleshooting, and optimizing existing pipelines
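The quality layer described above fails the whole batch rather than silently dropping bad records. A minimal sketch of that gate pattern, assuming hypothetical check names and fields (`row_count`, `completeness`, `id`, `loaded_at`) rather than the client's actual checks:

```python
from dataclasses import dataclass

@dataclass
class QualityResult:
    check: str
    passed: bool
    detail: str

class QualityGateError(Exception):
    """Raised to pause the pipeline when any quality check fails."""

def run_quality_gate(rows, min_rows=1, required_fields=("id", "loaded_at")):
    """Run row-count and completeness checks; fail the batch, never skip rows."""
    results = [
        QualityResult("row_count", len(rows) >= min_rows,
                      f"{len(rows)} rows (min {min_rows})"),
        QualityResult("completeness",
                      all(all(r.get(f) is not None for f in required_fields)
                          for r in rows),
                      f"required fields: {required_fields}"),
    ]
    failures = [r for r in results if not r.passed]
    if failures:
        # Pause the pipeline: surface every failing check to the alerting layer
        # instead of letting partial data through.
        raise QualityGateError("; ".join(f"{r.check}: {r.detail}" for r in failures))
    return results
```

In production this gate would typically sit between ingestion and the warehouse load step, with thresholds (one of the Key Decisions below) supplied from configuration rather than hard-coded defaults.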
Key Decisions
- Data quality check implementation — what to validate, what thresholds trigger alerts
- Warehouse query optimization and partitioning strategy
- Pipeline scheduling and dependency ordering for new sources
- Incident response for pipeline failures (autonomous resolution; escalate to Lead for extended outages)
Required Skills & Technologies
- Python — async data pipelines, background jobs, scripting
- PostgreSQL — schema design, migrations (Alembic), query optimization
- Azure Data Lake Storage + Synapse Analytics
- dbt — transformation, testing, documentation
- Apache Airflow or Azure Data Factory
- Data quality frameworks (Great Expectations, dbt tests, or custom)
- Observability — structured logging, alerting, Azure Monitor or Prometheus/Grafana
- Microsoft Graph API — SharePoint, M365 data extraction
- Redis — queue management, caching
- Docker — containerized pipeline jobs
- SQL — advanced analytical queries, window functions, performance tuning
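For the "Python — async data pipelines" requirement, a minimal sketch of the fan-out pattern the role would use to ingest multiple sources concurrently; the source names and the `extract`/`ingest` helpers are illustrative assumptions, not the client's code:

```python
import asyncio

async def extract(source: str) -> list[dict]:
    # Placeholder extraction; a real pipeline would call the source
    # system's API (e.g. via Microsoft Graph) here.
    await asyncio.sleep(0)
    return [{"source": source, "value": i} for i in range(3)]

async def ingest(sources: list[str]) -> list[dict]:
    # Fan out extraction across all sources concurrently, then flatten
    # the per-source batches into one load-ready list.
    batches = await asyncio.gather(*(extract(s) for s in sources))
    return [row for batch in batches for row in batch]

rows = asyncio.run(ingest(["sharepoint", "pmo", "procurement"]))
```

`asyncio.gather` preserves input order, so per-source batches arrive in a deterministic sequence even though the extractions overlap in time.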
What This Role Does Not Do
- Define platform architecture or engineering standards (Lead Data Platform Engineer)
- Define data models or classification schema (Principal Data Architect)
- Define governance policies (Governance Lead)
- Build API endpoints (Backend Engineer)
- Write AI prompts (AI Engineer)
- Build frontend components (Full Stack Engineer)
Thanks,
Ollie
Job ID: 145962623