URGENT – Data Engineer (AWS, Python, Apache Spark)

VTRAC Consulting Corporation

Nova Scotia

Hybrid

CAD 80,000 - 110,000

Full time

2 days ago

Job summary

VTRAC Consulting Corporation is seeking a Data Engineer specializing in AWS, Python, and Apache Spark. The successful candidate will design and maintain scalable data systems in a collaborative remote environment. This position offers an opportunity to contribute to cutting-edge data projects and develop data-driven solutions that meet business needs. Ideal applicants will have extensive experience in data engineering and a strong educational background in computer science or engineering.

Qualifications

  • 5+ years of experience in data engineering.
  • Strong experience with data systems and tools.
  • 3+ years of experience with data modeling and architecture.

Responsibilities

  • Design, build, and maintain large-scale data systems.
  • Develop and maintain data pipelines and lakes.
  • Collaborate with teams to enhance data quality.

Skills

Python
Data Engineering
Data Validation
Data Cleansing
Collaboration
Communication

Education

Bachelor’s degree in Computer Science
Bachelor’s degree in Engineering

Tools

Apache Beam
Apache Spark
AWS Glue
Amazon Redshift
Google BigQuery
Snowflake

Job description

VTRAC Consulting Corporation

Intelligent Solutions

Thank you for applying to VTRAC opportunities. Please e-mail your resume in confidence as an MS Word document with the subject line "URGENT – Data Engineer (AWS, Python, Apache Spark)", Attention: samz@vtrac.com, or call (647) 254-0904.

Position #: 251179

Position: URGENT – Data Engineer (AWS, Python, Apache Spark)

Position Type: Contract

No. of Positions: 1

Location: Remote (Nova Scotia)

Description:

An exciting opportunity for a Data Engineer to join a collaborative environment and help build and maintain our data infrastructure. The successful candidate will be responsible for designing, building, and maintaining data systems, including data pipelines, data warehouses, and data lakes. You will work closely with data architects, data scientists, and other stakeholders to ensure that these data systems meet the needs of our business. This is a fully remote opportunity with the potential to become a permanent position.

Key Responsibilities:

  • Design, build, and maintain large-scale data systems.
  • Design and implement data warehouses using tools such as Amazon Redshift, Google BigQuery, and Snowflake.
  • Develop and maintain data pipelines using tools such as Apache Beam, Apache Spark, and AWS Glue.
  • Develop and maintain data lakes using tools such as Apache Hadoop, Apache Spark, and Amazon S3.
  • Work with data architects to design and implement data models and data architectures.
  • Collaborate with data scientists to develop and deploy machine learning models and data products.
  • Ensure data quality and integrity by developing and implementing data validation and data cleansing processes.
  • Collaborate with other teams to ensure that data systems meet the business’s needs.
  • Stay up-to-date with new technologies and trends in data engineering and make recommendations for adoption.

Qualifications:

  • 5+ years of experience in data engineering or a related field
  • 5+ years of experience with programming languages such as Python, Java, and Scala
  • 3+ years of experience with data modeling and data architecture
  • 3+ years of experience with data engineering tools such as Apache Beam, Apache Spark, AWS Glue, Amazon Redshift, Google BigQuery, and Snowflake
  • Strong experience with data warehousing and data lakes
  • Strong experience with data validation and data cleansing
  • Strong collaboration and communication skills
  • Bachelor’s degree in Computer Science, Engineering, or a related field

Nice to Have:

  • Experience with machine learning and data science
  • Experience with cloud-based data platforms such as AWS, GCP, or Azure
  • Experience with containerization using Docker and Kubernetes
  • Experience with agile development methodologies such as Scrum or Kanban
  • Experience with data governance and data security

We thank all candidates in advance. Only candidates selected for interviews will be contacted. For other exciting opportunities, please visit us at www.vtrac.com. VTRAC is an equal-opportunity employer.

Toronto. Houston. New York. Palo Alto.
