Bilingual Data Engineer – GCP

ProViso Staffing

Toronto

On-site

CAD 90,000 - 120,000

Full time

2 days ago

Job summary

A leading staffing agency in Toronto is seeking a Data Engineer to build and maintain data pipelines on Google Cloud Platform. The role demands 8+ years of experience in data engineering and bilingual proficiency in English and Spanish. You will work collaboratively with international teams and contribute to the migration and transformation of data to a unified cloud platform. Knowledge of ETL processes and programming in Python or Java is essential.

Qualifications

  • 8+ years of experience as a Data Engineer, with recent ETL experience.
  • Bilingual in English and Spanish with professional proficiency.
  • 3+ years working on cloud platforms, preferably GCP.
  • 5+ years programming in Python or Java and advanced SQL.

Responsibilities

  • Design, develop, and maintain data pipelines on GCP.
  • Collaborate with cross-functional teams to deliver data solutions.
  • Standardize data from multiple sources for effective analytics.
  • Ensure data quality and integrity through monitoring.

Skills

Data pipeline development
ETL processes
Bilingual in English and Spanish
Cloud technologies
Programming in Python or Java
Advanced SQL
Data modeling
Analytical skills
Problem-solving

Education

Bachelor’s degree in Computer Science, Data Engineering, Information Technology, or related field

Tools

Google Cloud Platform (GCP)
BigQuery
Dataflow
Pub/Sub
Cloud Composer
Airflow

Job description

Story Behind the Need:

• Business group: Technology Program Management & Transformation – the team that manages cloud data integration and governance for International Banking (IB), providing efficient, standardized solutions that enable different business areas to access and use data securely and effectively.
• Project: International Banking – Data Migration and Transformation to Cloud
• Mission: To develop and maintain a cloud platform that integrates data from various sources, standardizes processes, and facilitates data publication for consumers and producers, ensuring quality, security, and accessibility.
• Scope: The primary objective of this project is to design and implement a unified cloud data platform for IB across different countries.
• Impact:
  • Efficient ingestion of data from multiple on-premises sources to the cloud.
  • Data transformation and standardization into organized layers (landing and standardization zones); see the sketch after this list.
  • Rapid data delivery for bank teams requiring reports, indicators, or advanced analysis.
  • Incorporation of reusable data products for transversal consumption across the organization.
• The project prioritizes agility in delivery, enabling tactical solutions (quick, connected to local sources) when necessary, while promoting progressive advancement toward fully cloud-based strategic solutions.
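
Illustrative sketch (not from the posting): a minimal example of promoting records from a landing zone into a standardization zone with the google-cloud-bigquery Python client. All project, dataset, table, and column names are assumptions for illustration.

    # Minimal sketch, assuming hypothetical project/dataset/table names.
    from google.cloud import bigquery

    client = bigquery.Client(project="my-ib-project")  # hypothetical project ID

    # Standardize country-specific raw data into a common schema
    # (landing zone -> standardization zone).
    sql = """
    CREATE OR REPLACE TABLE `my-ib-project.standardization.customers` AS
    SELECT
      CAST(customer_id AS STRING)         AS customer_id,
      UPPER(TRIM(country_code))           AS country_code,
      PARSE_DATE('%Y-%m-%d', opened_date) AS opened_date,  -- normalize date format
      SAFE_CAST(balance AS NUMERIC)       AS balance       -- tolerate malformed rows
    FROM `my-ib-project.landing.customers_raw`
    """

    client.query(sql).result()  # block until the query job finishes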

Candidate Value Proposition:

• The successful candidate will have the opportunity to gain exposure to cloud technologies, international stakeholders, and enterprise big-data tools.

Typical Day in Role:

• Move data from on-premises sources to the cloud, working with teams in Mexico, Chile, and Peru as some of the main stakeholders.
• Each country uses multiple technologies, so the goal is to build a separate data pipeline to ingest each source into the cloud.
• Run processes from repositories into the cloud, covering data management, connections, and patterns, and ensure a clear vision of all data sources; the main challenge is understanding how data is managed in each country, working with the local teams, and organizing and standardizing the data within the repository.
• Design, develop, and maintain data pipelines and infrastructure on Google Cloud Platform (GCP) to support the creation of scalable data products. Reporting to the Senior Data Engineer, this role focuses on implementing robust and efficient data solutions to enable data-driven decision-making and support business objectives.
• Migration is the final goal.
• Receive use cases from the business (e.g., mortgage, AML, analytics) and prioritize data pipelines accordingly.
• Perform data modelling to connect different business needs.
• Standardize data from different sources and countries so that Canadian banking stakeholders can understand it.
• Translate local business processes into global business ones
• Documentation and communication will be in English, though local communication with the remote teams for the respective countries will be in Spanish
• The main platform will be GCP, with BigQuery for data storage.
• Cloud Composer (the GCP-managed counterpart of on-prem Airflow) will be used for data transformation; a minimal DAG sketch follows this list.
• Build and maintain data pipelines and ETL/ELT processes on GCP to ensure reliable and efficient data flow for data products.
• Collaborate with the Senior Data Engineer and cross-functional teams (e.g., Data Scientists, Product Managers) to understand requirements and deliver high-quality data solutions.
• Implement data models, schemas, and transformations to support analytics and reporting needs.
• Ensure data quality, integrity, and performance by monitoring and optimizing data pipelines.
• Adhere to data governance, security, and compliance standards within GCP environments.
• Troubleshoot and resolve issues in data pipelines to minimize downtime and ensure operational efficiency.
• Contribute to the adoption of best practices and tools for data engineering, including documentation and testing.
• Stay updated on GCP services and data engineering trends to enhance pipeline capabilities.
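
Illustrative sketch (not from the posting): a minimal Cloud Composer (Airflow 2.x) DAG showing the land-then-standardize pattern described above. The DAG name, bucket, project, and dataset names are assumptions for illustration.

    # Minimal sketch of a Composer DAG: ingest GCS exports into a BigQuery
    # landing table, then promote them to the standardization layer.
    from datetime import datetime

    from airflow import DAG
    from airflow.providers.google.cloud.operators.bigquery import (
        BigQueryInsertJobOperator,
    )
    from airflow.providers.google.cloud.transfers.gcs_to_bigquery import (
        GCSToBigQueryOperator,
    )

    with DAG(
        dag_id="ib_landing_to_standardization",  # hypothetical DAG name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        # Ingest raw country exports into the landing zone.
        land = GCSToBigQueryOperator(
            task_id="land_customers",
            bucket="ib-country-exports",  # hypothetical bucket
            source_objects=["mx/customers/*.csv"],
            destination_project_dataset_table="my-ib-project.landing.customers_raw",
            source_format="CSV",
            autodetect=True,
            write_disposition="WRITE_TRUNCATE",
        )

        # Promote landed rows into the standardized layer.
        standardize = BigQueryInsertJobOperator(
            task_id="standardize_customers",
            configuration={
                "query": {
                    "query": (
                        "CREATE OR REPLACE TABLE "
                        "`my-ib-project.standardization.customers` AS "
                        "SELECT * FROM `my-ib-project.landing.customers_raw`"
                    ),
                    "useLegacySql": False,
                }
            },
        )

        land >> standardize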

Candidate Requirements/Must Have Skills:

• 8+ years of experience as a Data Engineer, with recent ETL experience
• Bilingual in English and Spanish required (verbal / written professional proficiency in both)
• 3+ years of hands-on expertise in building data pipelines on cloud platforms, with a preference for Google Cloud Platform (GCP)
• 3+ years’ experience with GCP tools such as BigQuery, Dataflow, Pub/Sub, Cloud Composer, or Airflow
• 5+ years’ experience in programming languages like Python or Java and advanced SQL for data processing

Nice-To-Have Skills:

• 3+ years’ experience with data modeling, schema design, and data warehousing concepts.
• Familiarity with version control systems (e.g., Git) and basic CI/CD practices is a plus.
• Understanding of data governance and security practices in cloud environments.
• Experience in financial institutions/banking
• Experience working in an Agile environment

Soft Skills Required:

• Strong problem-solving skills and ability to work collaboratively in a team environment.
• Effective communication skills to translate technical concepts to non-technical stakeholders.

Education:

• Bachelor’s degree in Computer Science, Data Engineering, Information Technology, or a related field.

Best VS. Average Candidate:

• The ideal candidate has solid knowledge of end-to-end pipeline processes and can describe in detail how data is handled, cleaned, organized, and built. Bilingual Spanish is crucial for dealing with international stakeholders; data modelling experience is important for at least one of the hires; experience with cloud technologies is very important.

Possible previous titles:

• Cloud Data Engineer, ETL Data Engineer

Candidate Review & Selection:

• Two rounds of MS Teams video interviews, going through experience and testing language skills; there may be a technical assessment
• 1st – with the Senior Data Engineer on the team, and possibly the Delivery Director – 30 minutes
• 2nd – with the hiring manager (Data Science Director) – 30 minutes

Job Details

Job ID: 13414

Type: Contract

Duration: 2.5 Months

Location: Toronto

