
AWS Data Engineer

Datavail Infotech

Toronto

On-site

CAD 80,000 - 100,000

Full time


Job summary

A technology services firm in Toronto is seeking a Data Engineer to modernize ETL jobs using AWS technologies and to work within a scrum team. The ideal candidate has over 5 years of experience, is proficient with Spark and AWS, and can effectively communicate technical concepts. This full-time position offers competitive pay in a dynamic work environment.

Qualifications

  • 5+ years of relevant experience in data engineering.
  • Experience with process orchestration tools like ActiveBatch.
  • Data visualization experience.

Responsibilities

  • Work within a scrum team to deliver product stories.
  • Modernize legacy ETL jobs utilizing AWS technologies.
  • Design and implement data pipelines.

Skills

  • Proficient with Spark
  • AWS experience (Redshift, Glue, Step Functions, QuickSight)
  • Ability to communicate technical concepts
  • Ability to design and implement functional code
  • Innovative problem-solver
  • Knowledge of RDBMS
  • Experience with data modeling
  • Experience writing and optimizing AWS Glue jobs
  • Experience with SQL

Tools

  • AWS Glue
  • Informatica
  • Teradata
  • QuickSight

Job description


  • Work within one or more scrum teams to deliver product stories according to priorities set by FCC and the Product Owners.
  • Interact with stakeholders.
  • Work with FCC's data pipeline to modernize legacy ETL jobs utilizing AWS technologies and DataVault 2.0.
  • Relevant Experience: 5 years overall, 7 years preferred.

Skills & Experience:

  • Proficient with Spark.
  • AWS experience (Redshift, Glue, Step Functions, QuickSight). SAS experience (SAS Enterprise Guide, SAS Data Integration, SAS MIP) is a plus.
  • Ability to communicate moderate to complex technical concepts to technical and non-technical personnel.
  • Ability to conceptualize and articulate ideas clearly and concisely.
  • Ability to design and implement functional, easy-to-understand code.
  • Innovative problem-solver and critical thinker with a customer focus.
  • Advocate for smart, clean, and maintainable code.
  • Passion for technology, software, and data development.
  • Knowledge of RDBMS.
  • Experience designing and building data environments to support reporting and analytics, including data integrations and flow between disparate data systems.
  • Experience with data modeling, data engineering, and/or data warehouse building.

Required Experience:

  • Experience writing, troubleshooting, and optimizing AWS Glue jobs (see the sketch after this list).
  • Experience with SQL.
  • Experience designing, implementing, and orchestrating data pipelines.
  • Experience designing and implementing QuickSight reports/dashboards.
  • Experience with Informatica and Teradata is preferred.
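
For context only, here is a minimal sketch of the kind of PySpark-based AWS Glue job this requirement refers to: read a table from the Glue Data Catalog, apply a simple transformation, and write partitioned Parquet to S3. The database, table, and bucket names (legacy_db, orders_raw, s3://example-bucket/...) are placeholders for illustration, not details from this posting.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Standard Glue job boilerplate: resolve the job name and initialize contexts.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glue_context = GlueContext(sc)
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a source table registered in the Glue Data Catalog.
source = glue_context.create_dynamic_frame.from_catalog(
    database="legacy_db",      # placeholder database name
    table_name="orders_raw",   # placeholder table name
)

# Example transformation: drop an unused column and de-duplicate on a key.
orders = (
    source.toDF()
    .drop("batch_id")
    .dropDuplicates(["order_id"])
)

# Write partitioned Parquet for downstream Redshift/QuickSight consumption.
(
    orders.write.mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-bucket/curated/orders/")  # placeholder bucket
)

job.commit()
```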

Nice to have:

  • Experience with process orchestration tools like ActiveBatch.
  • Data visualization experience.

Key Skills

Apache Hive, S3, Hadoop, Redshift, Spark, AWS, Apache Pig, NoSQL, Big Data, Data Warehouse, Kafka, Scala

Employment Type: Full-Time

Experience: 5+ years

Vacancy: 1
