
Senior Data Engineer

Jeavio

United States

Remote

USD 100,000 - 130,000

Full time

30+ days ago

Job summary

A leading data engineering company is seeking a Senior Data Engineer to design and maintain high-performance data pipelines and infrastructure. The role requires strong expertise in AWS services, data architecture, and Python, along with a collaborative approach to integrating components across the stack. The ideal candidate has 5+ years of experience and is ready to lead and optimize data-driven solutions.

Qualifications

  • 5+ years of experience in data engineering roles.
  • Strong background in AWS services and infrastructure.
  • Expertise in building data pipelines and automating tasks.

Responsibilities

  • Design, develop, and maintain data pipelines using Airflow and AWS.
  • Implement data warehousing solutions with Databricks and PostgreSQL.
  • Automate tasks using Git/Jenkins and optimize ETL processes.

Skills

Data Engineering
Python
AWS
ETL Processes
Data Warehousing
PostgreSQL
Airflow
Databricks
Jenkins
SQL

Education

Bachelor's or Master's degree in Computer Science or a related field

Job description

We are seeking an experienced Senior Data Engineer to join our team. The ideal candidate will have a strong background in data engineering and AWS infrastructure, with hands-on experience in building and maintaining data pipelines and the necessary infrastructure components. The role will involve using a mix of data engineering tools and AWS services to design, build, and optimize data architecture.

Key Responsibilities:

  • Design, develop, and maintain data pipelines using Airflow and AWS services (a minimal sketch follows this list).
  • Implement and manage data warehousing solutions with Databricks and PostgreSQL.
  • Automate tasks using Git/Jenkins.
  • Develop and optimize ETL processes, leveraging AWS services like S3, Lambda, AppFlow, and DMS.
  • Create and maintain visual dashboards and reports using Looker.
  • Collaborate with cross-functional teams to ensure smooth integration of infrastructure components.
  • Ensure the scalability, reliability, and performance of data platforms.
  • Work with Jenkins for infrastructure automation.
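
As a rough illustration of the pipeline work described above, here is a minimal Airflow sketch that pulls a daily file from S3 and bulk-loads it into PostgreSQL. It is not from the posting: the bucket, key layout, connection string, and staging table are all hypothetical placeholders.

from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def daily_s3_to_postgres():
    @task
    def extract(ds=None):
        # Download the day's raw CSV from S3 (hypothetical bucket/layout).
        import boto3

        path = f"/tmp/events_{ds}.csv"
        boto3.client("s3").download_file(
            "example-data-bucket", f"raw/events/{ds}.csv", path
        )
        return path

    @task
    def load(path: str):
        # Bulk-load the CSV into a staging table via COPY (hypothetical DSN/table).
        import psycopg2

        conn = psycopg2.connect("dbname=analytics")
        with conn, conn.cursor() as cur, open(path) as f:
            cur.copy_expert("COPY staging.events FROM STDIN WITH CSV HEADER", f)

    load(extract())


daily_s3_to_postgres()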

Technical and functional areas of expertise:

  • Working as a senior individual contributor on a data-intensive project.
  • Strong experience building high-performance, resilient, and secure data processing pipelines, preferably on a Python-based stack.
  • Extensive experience building data-intensive applications, with a deep understanding of querying and modeling in relational databases, preferably on time-series data (see the PySpark sketch after this list).
  • Intermediate proficiency in AWS services (S3, Airflow).
  • Proficiency in Python and PySpark.
  • Proficiency with ThoughtSpot or Databricks.
  • Intermediate proficiency in database scripting (SQL).
  • Basic experience with Jenkins for task automation.
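
As a sketch of the time-series point above (again, not from the posting), the following PySpark snippet computes a 7-day rolling average per sensor using a time-range window; the input path and column names (sensor_id, ts, value) are assumptions.

from pyspark.sql import SparkSession, Window, functions as F

spark = SparkSession.builder.appName("rolling-average").getOrCreate()

readings = spark.read.parquet("s3://example-bucket/readings/")  # hypothetical path

# Order by epoch seconds so the window frame is time-based, not row-based.
seven_days = 7 * 24 * 3600
w = (
    Window.partitionBy("sensor_id")
    .orderBy(F.col("ts").cast("long"))
    .rangeBetween(-seven_days, 0)
)

readings.withColumn("avg_7d", F.avg("value").over(w)) \
    .write.mode("overwrite").parquet("s3://example-bucket/rollups/")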

Nice to have:

  • Intermediate proficiency in data analytics tools (Power BI / Tableau / Looker / ThoughtSpot).
  • Experience working with AWS Lambda, Glue, AppFlow, and other AWS transfer services.
  • Exposure to PySpark and data automation tools like Jenkins or CircleCI.
  • Familiarity with Terraform for infrastructure-as-code.
  • Experience in data quality testing to ensure the accuracy and reliability of data pipelines (a minimal check is sketched after this list).
  • Proven experience working directly with U.S. client stakeholders.
  • Ability to work independently and take the lead on tasks.
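
To make the data quality bullet concrete, here is a minimal fail-fast check a pipeline might run after a load; the connection string, table, and column are hypothetical, and a real pipeline would more likely use a framework such as Great Expectations or dbt tests.

import psycopg2


def check_events_quality(dsn: str = "dbname=analytics") -> None:
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        # An empty daily load usually signals an upstream failure.
        cur.execute("SELECT count(*) FROM staging.events")
        (rows,) = cur.fetchone()
        assert rows > 0, "staging.events is empty"

        # Key columns should never be NULL after a clean load.
        cur.execute("SELECT count(*) FROM staging.events WHERE event_id IS NULL")
        (nulls,) = cur.fetchone()
        assert nulls == 0, f"{nulls} rows with NULL event_id"


if __name__ == "__main__":
    check_events_quality()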

Education and experience:

  • Bachelor's or Master's degree in Computer Science or a related field.
  • 5+ years of experience in data engineering roles.

Stack/Skills needed:

  • Databricks
  • PostgreSQL
  • Python & PySpark
  • AWS Stack
  • Power BI / Tableau / Looker / ThoughtSpot
  • Familiarity with Git and/or CI/CD tools
