Data Architect Senior Manager

8-10 Years
Posted 5 hours ago

Job Description

Established in 2008, Geidea epitomizes customer-focused empowerment and commercial success through continuous innovation.

Geidea makes best-in-class digital payment solutions available for all by attracting and leveraging the best creative & entrepreneurial talent in the market.

Our solutions give any business the chance to get ahead and reach for more, no matter their size or maturity.

Our technology mirrors our people - Smart, Innovative & Forward Thinking

www.geidea.net

To maintain a competitive advantage as we grow, we are currently looking for a new Data Architect Senior Manager.

Job purpose:

We are seeking a highly skilled Data Architect to design and lead the development of scalable, secure, and high-performance data platforms across the enterprise. This role is central to shaping the data ecosystem supporting our Fintech services, including real-time financial transactions, credit scoring models, regulatory reporting, and customer analytics.

The Data Architect will work across engineering, analytics, and product teams to build modern data infrastructure incorporating Big Data platforms, data lakes, ELT/ETL pipelines, and data warehouses. Your solutions must ensure high availability, governance, security, and performance, compliant with SAMA and other regulatory frameworks.

Responsibilities:

Data Architecture Design:
  • Lead the end-to-end architecture of enterprise data platforms, including Data Lakes, Lakehouses, and Data Warehouses.
  • Design and maintain canonical data models (conceptual, logical, and physical) for structured, semi-structured (JSON, XML), and unstructured data.
  • Develop architectural blueprints for hybrid and cloud-native solutions leveraging AWS, Azure, or GCP.
  • Standardize data ingestion, transformation, and serving layers for streaming and batch use cases.

Big Data & Distributed Systems Engineering:
  • Architect Big Data processing solutions using Apache Spark, Flink, Presto, Trino, or Databricks for large-scale processing of financial and behavioral data.
  • Implement distributed file systems and object stores (e.g., HDFS, S3, ADLS) and optimize file formats such as Parquet, ORC, and Avro.
  • Ensure scalable compute using EMR, Dataproc, or AKS/Kubernetes-based platforms.

Data Lakes & Lakehouse Architecture:
  • Design and operationalize data lakes as central repositories for raw and curated data assets.
  • Build Delta Lake, Apache Hudi, or Apache Iceberg-based architectures to support ACID transactions, schema evolution, and time travel (see the second illustrative sketch after this section).
  • Define governance standards across the raw, staged, curated, and analytics layers of the lake architecture.

ELT/ETL Frameworks & Pipelines:
  • Develop robust ELT/ETL pipelines using tools such as Apache Airflow, dbt, AWS Glue, Azure Data Factory, or Kafka Connect.
  • Optimize data transformations for performance, reusability, and modular design (e.g., using SQL, Scala, or Python).
  • Ensure orchestration of dependencies, retries, alerting, and logging in a fully observable pipeline ecosystem (see the first illustrative sketch after this section).

Data Warehousing & BI Integration:
  • Architect cloud-based data warehouses such as Snowflake, BigQuery, Redshift, or Synapse Analytics.
  • Define dimensional models (star and snowflake schemas), facts/dimensions, and materialized views optimized for analytics and dashboarding tools (e.g., Power BI, Tableau, Looker).
  • Enable self-service analytics by integrating semantic layers, metadata management, and data cataloging tools.

Security, Governance & Compliance:
  • Implement data encryption (at rest and in transit), tokenization, and row/column-level security mechanisms.
  • Define and enforce data governance policies, including data lineage, classification, and auditing aligned with SAMA, NCA, GDPR, and internal policies.
  • Integrate with data catalogs (e.g., Alation, DataHub, Apache Atlas) and governance tools.

DevOps & Observability:
  • Support CI/CD for data pipelines and infrastructure as code using Terraform, CloudFormation, or Pulumi.
  • Implement observability practices via data quality checks, SLAs/SLIs, and monitoring tools like Prometheus, Grafana, or Datadog.
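
For illustration only, the first sketch below shows the kind of orchestrated, observable pipeline described above, written as a minimal Apache Airflow DAG. The task names, schedule, owner, and alert address are hypothetical placeholders, not a description of Geidea's actual stack.

# Illustrative only: a minimal Airflow DAG showing the orchestration
# concerns listed above (dependencies, retries, alerting, logging).
# All names, schedules, and addresses are hypothetical placeholders.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

default_args = {
    "owner": "data-platform",              # hypothetical team name
    "retries": 3,                          # automatic retries on failure
    "retry_delay": timedelta(minutes=5),
    "email_on_failure": True,              # basic alerting hook
    "email": ["data-alerts@example.com"],  # placeholder address
}

def extract_transactions(**context):
    # Stub: pull one day's raw transactions into the data lake.
    print(f"extracting transactions for {context['ds']}")

def load_to_warehouse(**context):
    # Stub: load curated data into the warehouse.
    print(f"loading curated data for {context['ds']}")

with DAG(
    dag_id="daily_transactions_elt",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                     # Airflow 2.4+ schedule syntax
    catchup=False,
    default_args=default_args,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_transactions)
    load = PythonOperator(task_id="load", python_callable=load_to_warehouse)
    extract >> load                        # explicit task dependency

In a real pipeline, the stubbed callables would hand off to Spark, dbt, or Glue jobs, and failures would surface through the team's monitoring stack (e.g., Prometheus alerts or Datadog monitors) rather than email alone.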
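
Likewise, the second sketch gives a minimal PySpark view of the lakehouse features named above (ACID writes, schema evolution, and time travel), assuming Delta Lake as the table format; the table path and columns are hypothetical.

# Illustrative only: a minimal PySpark + Delta Lake sketch of ACID
# writes, schema evolution, and time travel. Requires the delta-spark
# and pyspark packages; path and column names are hypothetical.
from pyspark.sql import SparkSession
from delta import configure_spark_with_delta_pip

builder = (
    SparkSession.builder.appName("lakehouse-sketch")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
)
spark = configure_spark_with_delta_pip(builder).getOrCreate()

path = "/tmp/demo/transactions"  # hypothetical lake path

# Version 0: initial ACID write of raw transactions.
spark.createDataFrame(
    [("t1", 100.0)], ["txn_id", "amount"]
).write.format("delta").mode("overwrite").save(path)

# Version 1: append a row with an extra column; mergeSchema evolves the schema.
spark.createDataFrame(
    [("t2", 250.0, "SA")], ["txn_id", "amount", "country"]
).write.format("delta").mode("append").option("mergeSchema", "true").save(path)

# Time travel: read the table as it looked at version 0.
v0 = spark.read.format("delta").option("versionAsOf", 0).load(path)
v0.show()

Apache Hudi and Apache Iceberg expose equivalent capabilities through their own APIs, so the same pattern carries over to those table formats.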

Qualifications:

  • Bachelor's or Master's degree in Computer Science, Data Engineering, or a related technical field.
  • 8+ years of experience in data architecture, with significant contributions to production-grade data systems.
  • Proven track record in designing and deploying petabyte-scale data infrastructure in Fintech, Banking, or RegTech environments.

Our values guide how we think and act - They describe what we care about the most

Customer first - It's embedded in our design thinking and customer service approach

Open - Openness allows us to constantly improve and evolve

Real - No jargon and no excuses!

Bold - Constantly challenging ourselves and our way of thinking.

Resilient - If we fail, we bounce back stronger than before.

Collaborative - We know that we can achieve a lot more as a team.

We are changing lives by constantly striving for a better solution.

Job ID: 144568369