
Takamol Holding

Senior Data Engineering Specialist

  • Posted 4 hours ago

Job Description

The Senior Data Engineering Specialist is responsible for building and maintaining scalable, reliable data platforms. A key focus of this role is architecting and implementing end-to-end data solutions, including ingestion, orchestration, ETL/ELT pipelines, transformation, and advanced data modeling.

Key Responsibilities:

1. Data Architecture:

Design, implement, and optimize modern cloud data warehouse and lakehouse architectures (e.g., BigQuery, Snowflake, ClickHouse) to support high-performance analytics. Knowledge of OLTP and NoSQL architectures is desirable.

2. Data Modeling:

Develop and maintain robust data models that support business needs, ensuring scalability and accuracy. Experience with Dimensional Modeling and methodologies like Data Vault is required. Experience with Semantic Layers is highly desirable.
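Dimensional modeling splits denormalized source records into dimension tables (with surrogate keys) and fact tables that reference them. A minimal sketch of that split, with all table and field names purely illustrative:

```python
# Hypothetical sketch: loading raw sales records into a star schema —
# a product dimension keyed by surrogate key, plus a sales fact table.
# All names are illustrative, not taken from the posting.

def load_star(records):
    """Split raw sales records into a product dimension and a sales fact."""
    dim_product = {}  # natural key -> surrogate key
    facts = []
    for rec in records:
        nk = rec["product_code"]
        if nk not in dim_product:
            dim_product[nk] = len(dim_product) + 1  # assign next surrogate key
        facts.append({
            "product_sk": dim_product[nk],  # FK into the dimension
            "quantity": rec["quantity"],
            "amount": rec["amount"],
        })
    return dim_product, facts

dims, facts = load_star([
    {"product_code": "A1", "quantity": 2, "amount": 19.9},
    {"product_code": "B2", "quantity": 1, "amount": 5.0},
    {"product_code": "A1", "quantity": 3, "amount": 29.85},
])
```

In a warehouse this would be SQL (or dbt models) rather than Python, but the surrogate-key/fact-reference structure is the same.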

3. Data Pipeline Development:

Build and manage automated ETL/ELT pipelines to ensure data reliability and availability. Requires strong proficiency in Python, Change Data Capture (CDC) patterns, and dbt.
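At the core of a CDC pattern is a merge step that applies a stream of change events to a target table. A minimal sketch with an assumed event shape (`op`, `id`, `row` — illustrative, not a specific tool's format):

```python
# Hypothetical sketch of a CDC-style merge step: apply a batch of change
# events (insert/update/delete) to a target table held as a dict keyed
# by primary key. The event shape is an assumption for illustration.

def apply_changes(target, events):
    for ev in events:
        if ev["op"] == "delete":
            target.pop(ev["id"], None)  # tolerate deletes for unknown keys
        else:
            target[ev["id"]] = ev["row"]  # inserts and updates both upsert
    return target

table = {1: {"name": "old"}}
apply_changes(table, [
    {"op": "update", "id": 1, "row": {"name": "new"}},
    {"op": "insert", "id": 2, "row": {"name": "added"}},
    {"op": "delete", "id": 99},
])
```

Production pipelines would typically express this as a `MERGE` statement or a dbt incremental model, but the upsert/delete semantics are the same.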

4. Orchestration:

Design and schedule complex data workflows using orchestration tools such as Dagster, Airflow, etc.
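The central idea these orchestrators implement is running tasks in dependency order, i.e., a topological sort of the workflow DAG. A minimal sketch of that idea (not Airflow or Dagster API code; task names are illustrative):

```python
# Hypothetical sketch of the core scheduling idea behind orchestrators
# like Airflow or Dagster: execute tasks in dependency (topological) order.

def run_order(deps):
    """deps maps each task to the set of upstream tasks it waits on."""
    done, order = set(), []
    while len(order) < len(deps):
        ready = [t for t in deps if t not in done and deps[t] <= done]
        if not ready:
            raise ValueError("cycle detected in workflow DAG")
        for t in sorted(ready):  # deterministic tie-break
            done.add(t)
            order.append(t)
    return order

order = run_order({
    "extract": set(),
    "transform": {"extract"},
    "load": {"transform"},
    "report": {"load", "transform"},
})
```

Real orchestrators add scheduling, retries, backfills, and observability on top of this ordering logic.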

5. Data Quality:

Implement automated data quality checks and validation frameworks to ensure consistency and integrity from source to destination.
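A validation framework boils down to a set of named checks applied to every row, with failures collected for reporting — roughly what dbt tests or Great Expectations automate. A minimal sketch with illustrative check names:

```python
# Hypothetical sketch of automated row-level data quality checks.
# Check names and row fields are illustrative assumptions.

CHECKS = {
    "not_null_id": lambda row: row.get("id") is not None,
    "positive_amount": lambda row: row.get("amount", 0) > 0,
}

def validate(rows):
    """Return (row_index, check_name) pairs for every failed check."""
    failures = []
    for i, row in enumerate(rows):
        for name, check in CHECKS.items():
            if not check(row):
                failures.append((i, name))
    return failures

bad = validate([
    {"id": 1, "amount": 10.0},
    {"id": None, "amount": 5.0},
    {"id": 3, "amount": -2.0},
])
```

In practice such checks run as pipeline steps so bad data is caught at the source-to-destination boundary rather than downstream.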

6. Platform Engineering:

Manage infrastructure using Kubernetes (Helm charts) and Infrastructure as Code tools like Terraform.

7. Collaboration:

Collaborate with data scientists, analysts, and business teams to translate functional requirements into technical specifications and actionable data products.

8. Documentation & Best Practices:

Document data models, semantic layer configurations, and architectural decisions to promote transparency and knowledge sharing across the organization.

Required Skills & Qualifications:

  • Bachelor's degree in computer science or equivalent.
  • 4+ years of experience in data modeling and data warehousing.
  • Advanced SQL & Warehousing: Expert-level proficiency in writing and optimizing SQL for modern columnar databases (e.g., BigQuery, Snowflake).
  • Modern Transformation: Extensive experience building modular data transformations using dbt (Data Build Tool), including testing and documentation.
  • Programming: Strong coding skills in Python for data manipulation, scripting, and custom pipeline development.
  • Orchestration: Ability to manage dependencies and schedule workflows using tools such as Airflow or Dagster.
  • Infrastructure as Code: Familiarity with deploying data infrastructure using Terraform and Kubernetes.
  • Cloud Services: Hands-on experience with cloud-native data services (storage, compute, and networking) within AWS, GCP, or Azure.
  • Data Modeling: Strong grasp of data modeling concepts, including Star Schema and Data Vault methodologies.
  • Performance Optimization: Experience tuning data pipelines and queries for speed and cost-efficiency in a distributed environment.
  • CI/CD: Proficiency with version control (Git) and automated deployment pipelines.
  • AI & ML Integration: Ability to collaborate with ML Engineers to industrialize AI projects, supporting feature store development and integrating model inference into production data pipelines.

Job ID: 145837051