
Avrioc Technologies

Senior Data Engineer


  • Posted 2 months ago

Job Description

About Us:

Avrioc Technologies has been over a decade in the making and continues to grow, with innovative products used daily across the GCC market and worldwide.

We invest in infrastructure design and offer excellent career opportunities, strong administrative support, and a friendly environment, enabling us to nurture our company's greatest asset: our employees!

Qualifications & Requirements:

  • 8+ years of experience.
  • Degree in Computer Science, Data Science, Mathematics, IT, or a related field.
  • At least 5 years of experience as a Data Engineer or in a similar role.

Responsibilities:

  • Collaborate with multiple stakeholders to deeply understand the needs of data practitioners and deliver scalable solutions.
  • Define, build, and maintain the Data Platform as a Data Engineer.
  • Work with emerging technologies to build distributed applications.
  • Lead end-to-end development efforts to ensure on-time delivery of high-quality solutions that meet requirements, align with architectural vision, and comply with applicable standards.
  • Present technical solutions, capabilities, considerations, and features in business terms.
  • Support the development of critical initiatives such as Data Discovery, Data Lineage, and Data Quality.
  • Build data systems, pipelines, analytical tools, and programs.
  • Conduct complex data analysis and report on findings.

Skills and Attributes:

  • Strong technical expertise in data modeling, data ingestion, distributed processing, data warehousing, and ETL processes.
  • Experience with Apache Kafka and Spark Streaming.
  • Familiarity with connecting and querying diverse data sources, including NoSQL databases (e.g., MongoDB), MySQL, Elasticsearch, and Kafka topics.
  • Proficiency in SQL, Python, PySpark, Spark SQL, and a solid understanding of the Databricks platform.
  • Ability to build large-scale batch and real-time data pipelines.
  • Knowledge of best practices for optimizing Databricks cluster configurations (size, type, memory) based on data requirements, and the ability to deliver optimized code.
  • Experience in maintaining, monitoring, and handling enhancement requests for data pipelines.
  • Familiarity with visualization tools such as Kibana and QlikView.
  • Exposure to Docker, Kubernetes, and GitLab.
  • Experience with JIRA and Confluence for documentation and project management.


Job ID: 126305571
