
Staff Software Engineer - Data Engineering

Lookout Inc

Canada

On-site

CAD 130,000 - 160,000

Full time

Yesterday

Job summary

A leading cloud security company in Canada is seeking a Staff Software Engineer to join their Data Engineering team. The role involves developing scalable data engines, managing data from various services, and ensuring high performance across distributed systems. Candidates should have strong software engineering skills, expertise in big data technologies, and familiarity with AWS services. The position offers a competitive salary and the chance to work on critical security solutions.

Benefits

Competitive salary
Bonus and equity options
Health benefits

Qualifications

  • 5-8+ years of experience developing and maintaining software.
  • Hands-on knowledge of Spark and streaming workloads.
  • Experience in performance tuning of distributed platforms.

Responsibilities

  • Create and maintain code and repositories.
  • Develop and contribute to the test framework.
  • Conduct functional and system testing on data pipelines.
  • Build automation tests for CI/CD pipeline.

Skills

Strong software engineering fundamentals
Experience with data platforms
Broad understanding of cloud architecture
Rich experience with big data platforms
Spark and Kafka experience
Administration of JVM-based systems
Advanced SQL knowledge
Excellent communication skills
Experience in ETL testing
CI/CD tools knowledge

Education

BS degree in Computer Science or similar

Tools

Scala
Java
Python
Gradle
Maven
AWS
Docker

Job description

Lookout, Inc. is the endpoint-to-cloud security company purpose-built for the intersection of enterprise and personal data. We safeguard data across devices, apps, networks and clouds through our unified, cloud-native security platform — a solution that's as fluid and flexible as the modern digital world. By giving organizations and individuals greater control over their data, we enable them to unleash its value and thrive. Lookout is trusted by enterprises of all sizes, government agencies and millions of consumers to protect sensitive data, enabling them to live, work and connect — freely and safely.

As a Staff Software Engineer on the Data Engineering team, you will contribute broadly to the data engines, ETL pipelines, analysis and aggregation services, and other core intellectual property related services at Lookout. The position is an opportunity to tackle the most interesting challenges in the company and join the team that provides and supports the fundamental building blocks of the Security Platform that underlies Lookout’s category-defining personal and enterprise products. This platform is capable of analyzing millions of apps daily, trusted by Google to help identify potentially harmful applications before they enter the Play Store and by leading telecommunications providers to protect their customers. You’ll own product quality and work closely with the development team.

We are looking for motivated engineers who have experience building, monitoring, and maintaining high‑volume, low‑latency distributed SaaS solutions, with emphasis on data management and performance. As part of this team you will be responsible for a massively scalable platform that processes data from a variety of core services, including static analysis and phishing detection.

Our culture is built on agile development, small focused teams, meaningful metrics, rapid feedback, well‑designed APIs, coherent UIs, test‑driven development, automation wherever possible, and making the right decisions.

Requirements for the position:
  • Strong software engineering fundamentals – object‑oriented design, data structures, and algorithms.
  • Experience building frameworks and solutions for data platforms, focusing on BI pipelines.
  • Broad understanding of cloud architecture tools and services such as S3, EMR, Glue, Athena, Kafka, Kubernetes, and Lambda functions. Experience in AWS is highly desirable.
  • Rich experience and deep expertise in big‑data and large‑scale data platforms, especially in Data Lake.
  • Stream processing engines – Spark Structured Streaming/Kafka.
  • Analytical processing on big data using Spark.
  • Hands‑on administration, configuration management, monitoring, and performance tuning of Spark batch and streaming workloads, distributed platforms, and JVM‑based systems.
  • Advanced working knowledge of SQL and experience working with relational databases, query authoring (SQL), and familiarity with a variety of databases.
  • Excellent written and verbal communication skills.
  • Experience in ETL testing (Billing or Data Warehouse) is a plus.
  • Knowledge of automation, CI/CD tools such as Jenkins, CloudBees, Spinnaker, and others.
  • 5‑8+ years of overall experience developing and maintaining large‑scale, distributed production‑class software on public cloud platforms such as AWS.
  • BS degree in Computer Science or similar engineering discipline, or equivalent work experience.
Responsibilities:
  • Create and maintain code and repositories.
  • Create large data sets for functional and performance tests.
  • Develop and contribute to the test framework.
  • Debug test failures and triage production issues.
  • Validate and certify release candidates.
  • Conduct functional, system, and smoke testing on various data pipelines.
  • Build tools for validation and debugging of data pipelines.
  • Work with peer data engineers, developers, and other team members to understand complex systems and develop solutions.
  • Understand business requirements; author designs and PoCs for data engineering projects.
  • Build automation tests that fit into the CI/CD pipeline.
  • Participate in code reviews.
Tools that you will work with:
  • Scala, Java, Python, and others as needed.
  • Build tools: Gradle, Maven, Sbt, and others.
  • AWS primitives and distributed technologies: SWF, EMR, Kafka, EC2, IAM roles, and others.
  • Understanding of Docker is nice to have.

The Canadian base salary range for this full‑time position is available below. We offer base + bonus + equity + benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position across all Canadian locations. Within the range, individual pay is determined by work location and additional factors, including job‑related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process. Please note that the compensation details listed in Canadian role postings reflect the base salary only, and do not include bonus, equity, or benefits.

Remote, Canada

$130,000 - $160,000 CAD
