Minimum Qualification
- 5-7 years of overall experience in data engineering and transformation on the cloud
- 3+ years of strong hands-on experience in Azure data engineering and Databricks
- Expertise in developing and supporting enterprise-level data warehouse workloads
- Experience in PySpark is required, including developing and deploying workloads to run on the Spark distributed computing engine
- Candidate must possess at least a bachelor's degree in Computer Science, Information Technology, Engineering (Computer/Telecommunication), or equivalent
- Cloud deployment: preferably Microsoft Azure
- Experience in implementing platform and application monitoring using cloud-native tools
- Experience in implementing application self-healing solutions through proactive and reactive automated measures