Senior Data Engineer

Edjuster

Kitchener, ON, Canada

On-site

CAD 85,000 - 120,000

Full time

15 days ago

Job summary

A leading company in Kitchener is seeking a Senior Data Engineer to design and implement robust data solutions. The ideal candidate will have strong experience with AWS services, Kafka, and data engineering principles. Join a dynamic team that values innovation and collaboration, and drive impactful projects in the world of data.

Qualifications

  • 3+ years of experience as a data application developer.
  • Experience implementing software applications supporting data lakes and data applications on AWS.
  • Experience leading a small to medium-sized team.

Responsibilities

  • Design, build, and maintain efficient data architecture and code.
  • Collaborate with teams to deliver projects throughout the software development cycle.
  • Migrate data from various sources to AWS databases.

Skills

Python
Shell scripting
SQL
AWS Services
Confluent Kafka

Education

Bachelor’s degree in Computer Science, Software Engineering, or MIS

Tools

Confluent Kafka
AWS
Snowflake

Job description

At Viral Nation, we specialize in building social-first ecosystems for brands to connect with the modern consumer journey. Our integrated solutions align strategy, talent, media, and technology with culturally relevant creativity to scale the world’s fastest-growing digital brands. Viral Nation offers a fluid, creative, and growth-oriented environment that will support your ambitions to apply your talents in an open, collaborative, and fast-paced culture.

What you’ll do here:

  • Design, build, and maintain efficient, reusable, and reliable architecture and code.
  • Lead a team: task delegation, organization, motivation, and feedback.
  • Conduct code reviews and deployment upon task completion.
  • Participate in architecture and system design discussions.
  • Develop data mesh architectures and build big data pipelines.
  • Perform hands-on development/coding and unit testing of the applications.
  • Collaborate with the development and AI teams to build individual components into complex enterprise web systems.
  • Work in a team environment with product, frontend design, production operation, QE/QA and cross-functional teams to deliver projects throughout the software development cycle.
  • Architect and implement CI/CD strategy for EDP.
  • Implement high velocity streaming solutions using Amazon Kinesis, SQS, and Kafka (preferred).
  • Ensure the best possible performance and quality of high scale web applications and services.
  • Identify and resolve any performance issues.
  • Keep up to date with new technology development and implementation.
  • Participate in code reviews to ensure standards and best practices are met.
  • Migrate data from traditional relational databases, file systems, NAS shares to AWS relational databases such as Amazon RDS, Aurora, and Redshift.
  • Migrate data from AWS DynamoDB to relational databases such as PostgreSQL.
  • Migrate data from APIs to AWS data lake (S3) and relational databases such as Amazon RDS, Aurora, and Redshift.
  • Work closely with Data Scientist leads, CTO, Product, Engineering, DevOps and other members of the AI Data Science teams.
  • Collaborate with the product team, share feedback from project implementations and influence the product roadmap.
  • Be comfortable in a highly dynamic, agile environment without sacrificing the quality of work products.

What you need to have:

  • Bachelor’s degree in Computer Science, Software Engineering, or MIS, or an equivalent combination of education and experience.
  • 3+ years of experience as a data application developer.
  • 2+ years of experience with Apache Kafka or Confluent Kafka.
  • Experience in Analysis, Design, Development, and Testing of features utilizing Confluent Kafka.
  • Experience maintaining and enhancing Confluent Kafka architecture, design principles, and CI/CD deployment procedures.
  • Experience with building streaming applications with Confluent Kafka.
  • Experience leading a small to medium-sized team.
  • AWS Solutions Architect or AWS Developer Certification preferred.
  • Experience implementing software applications supporting data lakes, data warehouses and data applications on AWS for large enterprises.
  • Solid programming experience with Python, Shell scripting and SQL.
  • Experience with AWS services such as CloudFormation, S3, Athena, Glue, EMR/Spark, RDS, Redshift, DataSync, DMS, MongoDB, PostgreSQL, Lambda, Step Functions, IAM, KMS, SM, etc.
  • Experience implementing solutions on AWS based data lakes.
  • Experience in system analysis, design, development, and implementation of data ingestion pipeline in AWS.
  • Knowledge of ETL/ELT.
  • Experience with end-to-end data solutions (ingest, storage, integration, processing, access) on AWS.
  • Experience working with Object stores (S3) and JSON.
  • Good experience with AWS services – Glue, Lambda, Step Functions, SQS, DynamoDB, S3, Redshift, RDS, CloudWatch, and ECS.
  • Hands-on experience with Python and Django.
  • Strong knowledge of data science models.
  • Knowledge of Snowflake is a plus.

Nice to have:

  • Experience with AWS AI services such as Rekognition, Comprehend, and Transcribe.
  • Experience with Azure Data services.
  • Experience in building MLOps Pipelines.

Viral Nation is committed to diversity, equity, and inclusion in our agency. We welcome applications from people with visible and non-visible disabilities. Accommodations are available on request for candidates taking part in all aspects of the recruiting and selection process.
