
ATOM Insurance

Full Stack Data Engineer


Job Description

INTRODUCTION

ATOM is the end-to-end insurance operating system, enabling any entity within the insurance lifecycle to run the entirety of its activities and to collaborate with third parties. Its unique advantage is that it incorporates not only insurance underwriting and claims handling, but also all associated support and corporate activities.

ATOM Technologies is the team behind the platform. Based in the DIFC Innovation Hub, it is a group of more than 25 passionate and dedicated insurance and software professionals on a mission to upgrade the insurance industry from the inside out.

RESPONSIBILITIES:

The Full Stack Data Engineer is responsible for integrating data engineering, analytics, and data science to drive business intelligence and data-driven decision-making. They will design, implement, and optimize data pipelines, ETL processes, and data models to ensure efficient data flow and accessibility. Collaborating closely with cross-functional teams, they will gather requirements and develop data solutions that support business intelligence, reporting, and analytics needs.

They will leverage their expertise in Tableau BI, AWS, and Python to build scalable data solutions, while applying machine learning techniques to enhance insights and automate data processes. Ensuring data quality, security, and compliance across all initiatives will be key to their success in this role, as they work to deliver reliable, high-performance data solutions aligned with elseco's objectives.

They will be joining the data engineering team to support the department in cultivating the enterprise data supply chain. Their activities will include preparing data for the creation of data products defined by the business, such as embedded analytics, while supporting the overall data strategy to enhance quality, security, and governance throughout the organization.

SPECIFIC DUTIES AND RESPONSIBILITIES:

  • Design, build, and maintain scalable ETL/ELT pipelines for structured and unstructured data (see the sketch after this list).
  • Develop data models and optimize database performance for analytics and reporting.
  • Integrate data from multiple sources (APIs, cloud storage, databases) into a centralized data warehouse.
  • Manage and optimize AWS-based data infrastructure (S3, Redshift, Glue, Lambda, etc.).
  • Ensure data quality, integrity, and security across all stages of data processing.
  • Develop and maintain interactive dashboards and reports using Tableau BI.
  • Work with stakeholders to define key business metrics, KPIs, and reporting requirements.
  • Perform exploratory data analysis (EDA) to uncover trends and insights.
  • Automate reporting and data extraction processes for business teams.
  • Develop and deploy predictive models and machine learning algorithms for business use cases.
  • Conduct statistical analysis, feature engineering, and model optimization.
  • Implement A/B testing frameworks to validate hypotheses and business decisions.
  • Work with big data technologies to process large-scale datasets.
  • Work closely with data analysts, software engineers, and business teams to align data initiatives with business goals.
  • Write clean, reusable, and well-documented Python code for data processing and analysis.
  • Maintain clear documentation for data pipelines, ETL workflows, and ML models.
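For illustration, here is a minimal sketch of the kind of ETL step described above, written in Python with Pandas. The file paths, column names, and cleaning rules are hypothetical placeholders, not details of the actual platform:

    import pandas as pd

    def run_claims_etl(source_path: str, target_path: str) -> pd.DataFrame:
        """Extract a raw CSV export, apply basic cleaning, and write an
        analytics-ready Parquet file. All names here are hypothetical."""
        # Extract: pandas reads local paths, or s3:// URIs when s3fs is installed.
        raw = pd.read_csv(source_path, parse_dates=["claim_date"])

        # Transform: drop incomplete or duplicated records and add a
        # reporting-friendly month column.
        cleaned = (
            raw.dropna(subset=["claim_id", "claim_amount"])
               .drop_duplicates(subset="claim_id")
               .assign(claim_month=lambda df: df["claim_date"].dt.strftime("%Y-%m"))
        )

        # Load: write a columnar file (requires pyarrow or fastparquet) that a
        # warehouse or BI tool can consume.
        cleaned.to_parquet(target_path, index=False)
        return cleaned

    if __name__ == "__main__":
        run_claims_etl("claims_raw.csv", "claims_clean.parquet")

In practice, a step like this would typically be scheduled by an orchestrator such as Airflow and land its output in S3 or Redshift, in line with the duties listed above.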

KNOWLEDGE AND SKILLS

  • Bachelor's degree in Computer Science, Engineering, Data Science, Information Systems, or a related field.
  • Proficient in Python, including libraries such as Pandas, NumPy, PySpark, and Scikit-learn.
  • Strong experience with SQL, ETL processes, and data pipeline development.
  • Expertise in Tableau BI for data visualization and dashboarding.
  • Hands-on experience with AWS services (S3, Redshift, Lambda, Glue, Athena, etc.).
  • Familiarity with relational (PostgreSQL, MySQL) and NoSQL (MongoDB, DynamoDB) databases.
  • Knowledge of supervised and unsupervised learning models, including regression, classification, and clustering (a minimal sketch follows this list).
  • Experience with Git, Docker, and CI/CD pipelines for data workflows.
  • Strong problem-solving and analytical thinking skills.
  • Ability to communicate complex data concepts to non-technical stakeholders.
  • Experience working in agile environments with cross-functional teams.
  • Self-motivated, with the ability to manage multiple projects simultaneously.
  • Experience with Kafka, Airflow, or other data orchestration tools would be advantageous.
  • Knowledge of distributed computing frameworks (Spark, Hadoop) would be advantageous.
  • Exposure to AI/deep learning frameworks (TensorFlow, PyTorch) would be advantageous.
  • Certification in AWS, Tableau, or relevant data technologies would be advantageous.
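As a small illustration of the supervised-learning knowledge listed above, the following Scikit-learn sketch trains a simple classifier on synthetic data; the features and target are invented stand-ins rather than real ATOM data:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for a business dataset (e.g. flagging claims for review).
    rng = np.random.default_rng(42)
    X = rng.normal(size=(1000, 4))  # four numeric features
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

    # Hold out a test set, fit a baseline classifier, and report accuracy.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42
    )
    model = LogisticRegression().fit(X_train, y_train)
    print("Hold-out accuracy:", accuracy_score(y_test, model.predict(X_test)))

The same fit/evaluate pattern extends to the regression and clustering models named above, and to the TensorFlow/PyTorch frameworks listed as advantageous.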
