Key Responsibilities
- Design, implement, and maintain robust ETL/ELT pipelines using Python, SQL, and AWS services including S3, Glue, CloudFormation, and Lambda.
- Architect scalable data warehouses, data lakes, and real-time data solutions to support analytics and machine learning workflows.
- Collaborate with data scientists, analysts, and product teams to understand requirements and deliver impactful data solutions.
- Lead troubleshooting efforts to resolve complex data engineering challenges.
- Mentor and guide junior engineers, promoting best practices in coding, data governance, and cloud infrastructure.
- Communicate technical concepts clearly to both technical peers and non-technical stakeholders.
- Optimize systems for performance, cost-efficiency, and data quality in cloud environments.
Requirements
Must-Have Skills and Experience
- 5+ years of professional experience in data engineering or related roles.
- Bachelor's degree in Computer Science, Engineering, or a related field preferred; equivalent professional experience is also acceptable.
- Expert proficiency in SQL for complex query writing, data modeling, and performance tuning.
- Strong Python programming skills for developing scalable and maintainable data pipelines.
- Hands-on experience with AWS data services such as S3, Glue, and Lambda, plus familiarity with the broader AWS ecosystem.
- Proven ability to architect and optimize data systems for large-scale data processing.
- Excellent communication skills with the ability to explain technical issues clearly.
- Demonstrated problem-solving mindset, with an emphasis on innovation and technical leadership rather than rote task execution.
- Experience working in Agile or DevOps-driven teams with CI/CD pipelines.