We are seeking a highly skilled Data Engineer to focus on maintaining data streams and ETL pipelines within a cloud-based environment. The ideal candidate will have experience in building, monitoring, and optimizing data pipelines, ensuring data consistency, and proactively collaborating with upstream and downstream teams to enable seamless data flow across the organization.
In this role, you will troubleshoot and resolve pipeline issues, contribute to enhancing the data architecture, implement best practices in data governance and security, and ensure the scalability and performance of data solutions. You will also build an understanding of the business context behind the data, supporting analytics and decision-making in collaboration with data scientists, analysts, and other stakeholders.
This position requires on-site presence at the client’s office in London between 25% and 50% of the time each month.
Education & Experience:
Bachelor's degree in Computer Science, Data Science, or a related field.
Minimum 4 years of experience in data engineering or related roles.
Proficiency with SQL databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB).
Strong programming skills in Python, with experience in building scalable data solutions.
Experience with data pipeline orchestration tools (e.g., Dagster).
Familiarity with cloud platforms (AWS) and data services (S3, Redshift, Snowflake).
Understanding of data warehousing concepts and modern warehousing solutions.
Experience with CI/CD pipelines for data workflows.
Strong problem-solving skills, attention to detail, and a proactive mindset.
Ability to work collaboratively in a fast-paced environment.
Excellent communication skills for translating technical concepts to non-technical stakeholders.
Experience with streaming technologies like Kafka.
Familiarity with Docker and ECS for containerized data workflows.
Experience with BI tools such as Tableau or Power BI.
Understanding of machine learning pipelines.
Come and join our #ParserCommunity.