- Collaborate with senior management and solution architects to contribute to data-oriented decisions across the organization.
- Assist in the development and maintenance of scalable data pipelines, from data ingestion to analytics and machine learning.
- Contribute to custom data pipelines and extractors, leveraging Python, Spark, and other technologies.
- Help shape business decisions by offering insights into data technologies and architecture.
- Be involved in the lifecycle of our data platforms, from architecture to production.
- Assist in the evolution of data models, focusing on data discovery, mapping, and cleansing.
- Adopt best practices for data extraction, under the guidance of senior team members.
- Work closely with Data Scientists, Machine Learning Engineers, and other stakeholders to assist in operationalizing machine learning models.
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Minimum of 3-5 years of experience in data engineering, with a focus on big data and cloud solutions.
- Solid understanding of ETL design, both batch and real-time, and data modeling.
- Proficiency in Python, Spark, SQL, and modern data processing frameworks.
- Experience with containerization and cloud-native tools (e.g., Docker, Kubernetes) and public/hybrid cloud platforms (e.g., Google Cloud Platform, Azure, AWS) is a plus.
- Familiarity with real-time data processing and event streaming technologies like Kafka.
- Experience working with REST APIs.
- Understanding of DevOps practices and CI/CD methodologies.
- Familiarity with Agile methodologies.
- Saudi nationals only.