R | V Tech’s Core Data Team is searching for a talented Analytics and Data Engineer or a Senior Analytics and Data Engineer to design and implement scalable Extract, Load, Transform (ELT) pipelines, develop SQL and Python-based transformation logic, and ensure our data infrastructure supports seamless interaction with data across the company.
In this role, the Analytics and Data Engineer will set the standard for R | V Tech’s Data Engineering best practices while also contributing to high-impact projects and enabling teams to leverage data effortlessly.
This joint venture is experiencing significant growth, and this role will focus on selecting and building patterns that enable us to scale efficiently while maintaining high standards of data quality and accessibility.
Responsibilities
Design, build, and maintain scalable Extract, Load, Transform (ELT) pipelines and workflows that transform raw data into actionable insights.
Develop and optimize SQL and Python-based transformation code to process and curate data for analytics and reporting.
Architect data models and schemas that enable self-service analytics and ensure data accuracy and consistency.
Collaborate with cross-functional teams to identify data needs, create reusable data patterns, and implement best practices.
Monitor and improve the performance and reliability of data pipelines, platforms, and infrastructure.
Champion data quality through validation processes, monitoring frameworks, and automated testing.
Partner with platform engineers to integrate new tools, technologies, and frameworks into the data ecosystem.
Act as a mentor and thought leader for the data engineering community within the organization.
Streamline tools and processes, mature existing patterns, and establish new patterns as needed to allow the organization to scale gracefully.
Why Join the Core Data Team?
Real-World Impact: See your ideas drive outcomes across some of the world’s most recognized car brands.
New Challenges: From zonal architecture to in-vehicle intelligence, work on cutting-edge technologies and challenging technical problems to define what’s next for the automotive experience.
Engineering-Led: Thrive in an Engineering-Centric Culture that embraces first principles thinking to build scalable solutions.
Highly Collaborative: Collaborate with teams of talented, passionate Engineers who believe the best work comes from bringing bold and diverse thinkers together.
Opportunities: Professional Development, Career Advancement, and Tuition Reimbursement.
Benefits: Health, Well-Being, Financial, and LGBTQIA2+ Benefits start on Day 1.
Time Away from Work: We’re passionate about creating a better way to explore the world, which includes a variety of ways to give Employees the time they need to rest, recharge, and go exploring.
Cultivating Inclusion and Belonging: Read our 2024 Impact Report to learn more about our beliefs that inclusivity and equity are essential for meaningful and lasting social and environmental solutions.
You might be a great fit if you also happen to be a fan of Data, Food, and Pets.
Minimum Requirements
Office Location Requirement: Ability to work from R | V Tech’s Yaletown Office at least three days per week is required for this role.
Work From Home: Working from home two days per week is also supported.
On-Call: Participation in a rotating on-call schedule, including occasional weekends, late nights, and holidays, to resolve critical production issues is also required.
Education: At least a Bachelor’s Degree is required for this role.
Experience: At least 2 years of professional experience is required for this role in:
Data Engineering and Data Analytics
Building and managing data platforms.
BI and Analytics tools like Tableau, Looker, and Microsoft Power BI, along with their integration into data platforms.
Skills:
Strong communication and collaboration skills are essential to excel in this role, as our Core Data Team partners with and supports colleagues across the business who have varying levels of technical expertise.
Strong problem-solving skills and a passion for creating efficient, maintainable, and scalable data systems.
Expertise in SQL and Python for data transformation and pipeline development.
Proficiency in programming languages for data manipulation, analysis, and visualization.
Preferred:
Bachelor’s Degree with a major in Computer Science, Engineering, Statistics, Mathematics, or a similar discipline is highly preferred.
A Master's Degree or PhD is highly preferred but not required.
Strong experience with Extract, Load, Transform (ELT) patterns and modern data stack tools such as Fivetran, dbt, Databricks (preferred), Snowflake, or BigQuery is highly desirable.
Experience with data quality, data security, and monitoring initiatives is preferred.
Experience with orchestration tools such as Airflow is also preferred.