A leading company in data solutions is seeking a Data Engineer to enhance and maintain scalable data pipelines. This role involves collaborating with analytics teams and ensuring data quality while leveraging technologies such as AWS and Python. The ideal candidate will hold a Bachelor's degree and bring relevant data engineering experience, helping drive data-driven decision-making across the organization.
Job Description
Develop, maintain, and optimize scalable, high-performance data pipelines using Python and AWS services (e.g., S3, Lambda, ECS, EKS, RDS, SNS/SQS, Vector DB)
Build out new integrations, including with AI platforms such as Bedrock and Google, to support continuing increases in data volume and complexity
Collaborate with analytics and business teams to create and improve data models for business intelligence
Take end-to-end ownership of data quality in our core datasets and data pipelines
Participate in code reviews and contribute to DevOps, DataOps, and MLOps practices
Job Requirements
Bachelor's degree in Computer Science, Engineering, or a related field
2-3 years of experience in data engineering or a similar role
Strong programming skills in Python and SQL, plus hands-on experience with AWS and its related tech stack
Experience building scalable data pipelines with technologies such as Glue, Airflow, Kafka, and Spark
Experience with Snowflake, dbt, or Bedrock is a plus
Good understanding of basic machine learning concepts and familiarity with SageMaker