Floward
Job Purpose
The Senior Data Engineer is responsible for designing, implementing, and maintaining scalable data infrastructure that empowers analytics, experimentation, and machine learning at Floward. The role requires advanced hands-on expertise in building robust data pipelines, optimizing data architecture, and upholding data governance. Working cross-functionally with product, analytics, and engineering teams, the Senior Data Engineer ensures data is accurate, accessible, and actionable to drive business impact.
Key Responsibilities
Data Infrastructure & Architecture
- Design and implement scalable data infrastructure tailored to Floward's growing data needs.
- Develop and manage high-volume data storage solutions that support business analytics and operational demands.
- Ensure data architecture aligns with both performance and cost-efficiency goals.
ETL & Data Pipeline Development
- Build and maintain batch and real-time ETL pipelines using tools like Airflow and dbt.
- Deliver clean, validated, and business-ready datasets for downstream consumption.
- Support machine learning and experimentation workflows with reliable data inputs.
Best Practices & Standards
- Define and enforce data engineering standards for coding, deployment, and documentation.
- Promote reusability, observability, and maintainability in all engineering work.
- Continuously optimize pipeline performance and system reliability.
Stakeholder Collaboration
- Partner with cross-functional stakeholders to gather data requirements and define models.
- Collaborate with local and global teams to refine data definitions and integrations.
- Align data solutions with strategic business objectives.
Cross-Team Knowledge Sharing
- Foster a culture of collaboration and data literacy through knowledge-sharing sessions.
- Contribute to community practices around data quality, tooling, and governance.
- Mentor team members in best practices and technology adoption.
Data Governance & Quality
- Champion data quality by implementing validation, monitoring, and lineage tracking.
- Support data governance initiatives by applying consistent standards and documentation.
- Ensure compliance with internal data policies and external regulatory requirements.
Code Quality & Documentation
- Maintain well-documented, testable codebases for all data workflows.
- Conduct code reviews and provide constructive feedback.
- Ensure consistent documentation for all pipelines and infrastructure components.
DevOps & Infrastructure Automation
- Manage infrastructure components on AWS and GCP, including Redshift, BigQuery, and Lambda.
- Integrate CI/CD pipelines using tools like GitHub Actions for seamless deployments.
- Implement observability tools to monitor pipeline health and job performance.
Skills and Qualifications
Education
- Bachelor's degree in Computer Science, Data Engineering, or a related technical field.
Experience
- 3–5 years in data engineering, working with cloud-native tools and data platforms.
- Proficiency in Python and SQL; strong experience with data orchestration (e.g., Airflow).
- Familiarity with data modeling, data warehousing, and data lakes.
- Hands-on experience with GCP or AWS environments.
- Experience with dbt, CI/CD processes, and infrastructure-as-code practices is a plus.