Overview
We are seeking a highly skilled AWS Data Engineer to join our team on a contract basis. The successful candidate will design, develop, and maintain scalable data pipelines and platforms, enabling advanced analytics and AI-driven solutions. This is an exciting opportunity to work on cutting-edge projects in a fully remote environment.
Responsibilities
- Design, develop, and maintain ETL/ELT pipelines for large-scale structured and unstructured data.
- Build and optimize data solutions using AWS services (S3, Lambda, Glue, Step Functions, EMR).
- Implement scalable data models and warehouses in Postgres and Snowflake.
- Develop distributed data workflows using Databricks and PySpark.
- Write clean, reusable, and efficient Python code for data transformation/orchestration.
- Automate infrastructure provisioning and deployments using Terraform (IaC).
- Collaborate with data scientists, ML engineers, and stakeholders to deliver high-quality datasets.
- Monitor, troubleshoot, and optimize pipelines for performance and reliability.
- Stay current with emerging trends in data engineering, cloud computing, and AI/LLMs.
Required Skills & Experience
- Strong programming experience in Python with ETL pipeline development.
- Advanced SQL skills with hands-on experience in Postgres and Snowflake.
- Proven experience with AWS data/compute services (S3, Lambda, Glue, EMR, Step Functions, CloudWatch).
- Proficiency in PySpark and Databricks for distributed data processing.
- Solid experience with Terraform for infrastructure automation.
- Strong understanding of cloud architectures and serverless computing.
- Excellent problem-solving, debugging, and performance optimization skills.
- Effective communication and collaboration in fast-paced, agile teams.
Additional Information
- Seniority Level: Mid-Senior level
- Industry: Oil and Gas; Oil, Gas, and Mining; IT Services and IT Consulting
- Employment Type: Contract
- Job Function: Information Technology
- Skills: Amazon EMR (Elastic MapReduce); Terraform; Amazon Web Services (AWS)