
AWS Data Engineer - Fully Remote - US Only

Scalepex

Plano (TX)

Remote

USD 90,000 - 140,000

Full time



Job summary

Join a forward-thinking company as an AWS Data Engineer, where you will design and optimize data pipelines to support analytics in the utilities industry. This role offers the opportunity to work with cutting-edge AWS technologies and collaborate with top-tier professionals. If you have a passion for data engineering and a desire to make an impact in the utilities sector, this position is perfect for you. Embrace the challenge of building scalable solutions while ensuring data security and compliance in a dynamic environment. Take your career to new heights with this exciting opportunity!

Qualifications

  • 5+ years of experience in data engineering with strong AWS proficiency.
  • Hands-on experience with distributed systems and scalable architectures.

Responsibilities

  • Design and build scalable data pipelines using AWS services.
  • Implement ETL/ELT processes to clean and transform data.

Skills

AWS Services (Step Functions, Lambda, Glue, S3, DynamoDB, Redshift)
Python Programming
Data Engineering
ETL/ELT Processes
Distributed Systems
Data Governance
Analytical Skills

Tools

PySpark
Pandas

Job description

AWS Data Engineer - Fully Remote - US Only


❋ Why Scalepex?
Scalepex is a dynamic services firm specializing in providing solutions for premium brands like Nike, Pepsi, Toyota, Virgin, and Walgreens. Our mission is to connect prominent market leaders with top-tier professionals from around the world, fostering collaboration, efficiency, and growth.

❋ Take your portfolio to the next level by working with one of our fastest growing clients.

Join the Innovation Frontier at Scalepex!

About The Role

We are seeking an experienced AWS Data Engineer with a strong background in building scalable data solutions and expertise in utilities-related datasets. The ideal candidate will have at least 5 years of experience in data engineering, a deep understanding of distributed systems, and proficiency with AWS services and tools like Step Functions, Lambda, Glue, and Redshift. This role will focus on designing, developing, and optimizing data pipelines to support analytics and decision-making in the utilities industry.

Key Responsibilities
  1. Design and build data pipelines: Develop scalable, reliable data pipelines using AWS services (e.g., Glue, S3, Redshift) to process and transform large datasets from utility systems like smart meters or energy grids.
  2. Workflow orchestration: Use AWS Step Functions to orchestrate workflows across data pipelines; experience with Airflow is acceptable, but Step Functions is preferred.
  3. Data integration and transformation: Implement ETL/ELT processes using PySpark, Python, and Pandas to clean, transform, and integrate data from multiple sources into unified datasets (a minimal sketch of this kind of job follows this list).
  4. Distributed systems expertise: Leverage experience with complex distributed systems to ensure reliability, scalability, and performance in handling large-scale utility data.
  5. Serverless application development: Use AWS Lambda functions to build serverless solutions for automating data processing tasks.
  6. Data modeling for analytics: Design data models tailored for utilities use cases (e.g., energy consumption forecasting) to enable advanced analytics.
  7. Optimize data pipelines: Continuously monitor and improve the performance of data pipelines to reduce latency, enhance throughput, and ensure high availability.
  8. Ensure data security and compliance: Implement robust security measures to protect sensitive utility data and ensure compliance with industry regulations.
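
To make the ETL/ELT responsibility above concrete, here is a minimal PySpark sketch of the kind of pipeline this role describes: reading raw smart-meter readings from S3, cleaning them, and writing a partitioned Parquet dataset for downstream analytics. All bucket names, column names, and filter rules are hypothetical illustrations, not details from this posting; a production job would typically run on AWS Glue or EMR and be orchestrated with Step Functions.

    # Hypothetical smart-meter ETL sketch; paths, columns, and thresholds
    # are illustrative assumptions, not specifics from this role.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("smart-meter-etl").getOrCreate()

    # Extract: raw interval readings landed in S3 as JSON.
    raw = spark.read.json("s3://example-utility-raw/meter-readings/")

    # Transform: drop incomplete rows, parse timestamps, discard obvious
    # sensor errors (negative consumption), deduplicate per meter/interval.
    clean = (
        raw.dropna(subset=["meter_id", "reading_ts", "kwh"])
           .withColumn("reading_ts", F.to_timestamp("reading_ts"))
           .filter(F.col("kwh") >= 0)
           .dropDuplicates(["meter_id", "reading_ts"])
           .withColumn("reading_date", F.to_date("reading_ts"))
    )

    # Load: partitioned Parquet, queryable via the Glue catalog or
    # Redshift Spectrum downstream.
    (clean.write.mode("overwrite")
          .partitionBy("reading_date")
          .parquet("s3://example-utility-curated/meter-readings/"))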
Requirements
Required Qualifications
  1. Minimum of 5 years of experience in data engineering.
  2. Proficiency in AWS services such as Step Functions, Lambda, Glue, S3, DynamoDB, and Redshift.
  3. Strong programming skills in Python with experience using PySpark and Pandas for large-scale data processing.
  4. Hands-on experience with distributed systems and scalable architectures.
  5. Knowledge of ETL/ELT processes for integrating diverse datasets into centralized systems.
  6. Familiarity with utilities-specific datasets (e.g., smart meters, energy grids) is highly desirable.
  7. Strong analytical skills with the ability to work on unstructured datasets.
  8. Knowledge of data governance practices to ensure accuracy, consistency, and security of data.
  9. Strong experience in AWS data engineering.
  10. Ability to work independently and with cross-functional teams, including interfacing and communicating with business stakeholders.
  11. Professional oral and written communication skills.
  12. Strong problem solving and troubleshooting skills with mature judgment.
  13. Excellent teamwork and interpersonal skills.
  14. Ability to obtain and maintain the required clearance for this role.
Additional Details
  • Seniority level: Mid-Senior level
  • Employment type: Full-time
  • Job function: Information Technology
  • Industries: IT Services and IT Consulting