Job Role: Databricks Data Platform Engineer
Job Location: Abu Dhabi, UAE
Experience: 5+ Years
Role Summary
Design and build scalable, high-performance, and governed data pipelines on Azure Databricks to support analytics, reporting, and enterprise data intelligence initiatives.
Key Responsibilities
- Develop batch pipelines using Delta Lake with proper table design, partitioning, optimization, and lifecycle management
- Implement incremental ingestion using Auto Loader with schema evolution and checkpointing
- Build Structured Streaming pipelines with watermarking, state management, and late data handling
- Develop declarative pipelines using Lakeflow standards
- Ensure pipelines support idempotency, replayability, and safe backfills
- Optimize Spark workloads (AQE, skew handling, shuffle tuning, join optimization)
- Build curated, analytics-ready datasets for SQL analytics and downstream consumption
- Develop DBSQL views aligned with enterprise semantic standards
- Implement CI/CD deployments using Repos and asset-based packaging
- Add monitoring, logging, and operational observability to pipelines
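The Structured Streaming bullet above hinges on watermarking: events that arrive later than the watermark are dropped rather than reopening closed aggregates. As a rough illustration of that late-data policy, here is a minimal pure-Python model (not PySpark; the `WindowedCounter` class and its behavior are illustrative assumptions, not a Databricks API):

```python
# Minimal model of event-time watermarking and late-data handling.
# Illustrative only -- not a Databricks or Spark API.

class WindowedCounter:
    """Counts events per 10-second event-time window, dropping events
    older than the watermark (max event time minus allowed lateness)."""

    def __init__(self, allowed_lateness_s=30):
        self.allowed_lateness_s = allowed_lateness_s
        self.max_event_time = 0   # highest event time observed so far
        self.windows = {}         # window start -> event count
        self.dropped = 0          # late events discarded

    def ingest(self, event_time_s):
        # The watermark trails the max observed event time by the lateness bound.
        watermark = self.max_event_time - self.allowed_lateness_s
        if event_time_s < watermark:
            self.dropped += 1     # too late: state for this window is closed
            return
        self.max_event_time = max(self.max_event_time, event_time_s)
        window = (event_time_s // 10) * 10
        self.windows[window] = self.windows.get(window, 0) + 1

c = WindowedCounter(allowed_lateness_s=30)
for t in [5, 12, 61, 15, 8]:      # 15 and 8 arrive after event time 61
    c.ingest(t)
# 15 and 8 fall behind the watermark (61 - 30 = 31) and are dropped
```

In Spark terms, the lateness bound corresponds to the duration passed to `withWatermark`; the point is that bounded state, not unbounded buffering, is what makes streaming aggregations sustainable.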
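The idempotency and safe-backfill bullet usually comes down to key-based upserts, the semantics Delta Lake's `MERGE INTO` provides: re-running the same batch must not duplicate rows. A pure-Python sketch of that property (the `upsert` helper is an illustrative assumption, not a Delta API):

```python
# Key-based upsert: re-applying the same batch leaves the target
# unchanged, which is what makes replays and backfills safe.
# Models the semantics of Delta MERGE INTO; `upsert` is illustrative.

def upsert(target, batch, key="id"):
    """Merge `batch` rows into `target`, matching on `key`.
    Matched rows are updated in place; unmatched rows are inserted."""
    by_key = {row[key]: row for row in target}
    for row in batch:
        by_key[row[key]] = row   # update-or-insert on the key
    return list(by_key.values())

target = [{"id": 1, "amount": 10}]
batch = [{"id": 1, "amount": 12}, {"id": 2, "amount": 7}]

once = upsert(target, batch)
twice = upsert(once, batch)      # replaying the same batch is a no-op
```

Appending the batch instead of merging it would make every retry or backfill multiply the data, which is why the role calls out idempotency explicitly.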
Secondary Responsibilities
- Ensure compliance with Unity Catalog access controls and governance policies
- Embed data quality validations as per governance standards
- Support migration of legacy data workloads to Databricks including validation and performance tuning
- Collaborate with DataOps for deployment readiness and production stability
Required Skills & Experience
- Strong hands-on experience with Azure Databricks, Spark, and PySpark
- Deep knowledge of Delta Lake, Auto Loader, and Structured Streaming
- Experience with Lakeflow pipelines and DBSQL development
- Expertise in Spark performance tuning and optimization techniques
- Understanding of data governance, access control, and data quality frameworks
- CI/CD and DevOps experience in data platform environments
- Experience supporting production-grade data platforms