ValueLabs

Data Engineer (PySpark + Cloudera)


Job Description

Location: Dubai (work from office)

Notice period: immediate joiners, or candidates currently serving notice of 30 days or less.

Skills: PySpark + Cloudera Data Platform + Banking Domain

Experience: 5+ years

Role: Data Engineer

  • Data Pipeline Development: Design, develop, and maintain highly scalable and optimized ETL pipelines using PySpark on the Cloudera Data Platform, ensuring data integrity and accuracy.
  • Data Ingestion: Implement and manage data ingestion processes from a variety of sources (e.g., relational databases, APIs, file systems) to the data lake or data warehouse on CDP.
  • Data Transformation and Processing: Use PySpark to process, cleanse, and transform large datasets into meaningful formats that support analytical needs and business requirements.
  • Performance Optimization: Conduct performance tuning of PySpark code and Cloudera components, optimizing resource utilization and reducing runtime of ETL processes.
  • Data Quality and Validation: Implement data quality checks, monitoring, and validation routines to ensure data accuracy and reliability throughout the pipeline.
  • Automation and Orchestration: Automate data workflows using tools like Apache Oozie, Airflow, or similar orchestration tools within the Cloudera ecosystem.
  • Monitoring and Maintenance: Monitor pipeline performance, troubleshoot issues, and perform routine maintenance on the Cloudera Data Platform and associated data processes.
  • Collaboration: Work closely with other data engineers, analysts, product managers, and other stakeholders to understand data requirements and support various data-driven initiatives.
  • Documentation: Maintain thorough documentation of data engineering processes, code, and pipeline configurations.

Required Skills:

  • PySpark: Advanced proficiency in PySpark, including working with RDDs, DataFrames, and optimization techniques.
  • Cloudera Data Platform: Strong experience with Cloudera Data Platform (CDP) components, including Cloudera Manager, Hive, Impala, HDFS, and HBase.
  • Data Warehousing: Knowledge of data warehousing concepts, ETL best practices, and experience with SQL-based tools (e.g., Hive, Impala).
  • Big Data Technologies: Familiarity with Hadoop, Kafka, and other distributed computing tools.
  • Orchestration and Scheduling: Experience with Apache Oozie, Airflow, or similar orchestration frameworks.
  • Scripting and Automation: Strong shell scripting skills on Linux.

Note: Only candidates with an official notice period of 30 days or less (or immediate joiners) will be considered.

Interested candidates, please share your CV to [Confidential Information] with the details below:

Total Exp:

Relevant Exp in PySpark:

Relevant Exp in Cloudera:

Relevant Exp in Banking Domain:

Current CTC:

Expected CTC:

Current location:

Preferred location:

Notice period:

Highest Education (full-time):


Job ID: 136916975