Cloud Data Engineer

AlphaPoint

Southwestern Ontario

Remote

CAD 90,000 - 120,000

Full time


Job summary

A financial technology company is seeking a skilled engineer to build scalable data infrastructure, implement ETL pipelines, and develop integrations with legacy systems. Candidates should have a computer science degree and 7-10 years of software development experience, with proficiency in Python, Node.js, or Java. This role offers a fully remote work environment, competitive compensation, and opportunities for impactful contributions.

Benefits

100% Remote Work Environment
Competitive compensation
Equity or stock options
A culture of autonomy, experimentation, and learning
Opportunity to make a real impact

Qualifications

  • 7-10 years of experience in software development.
  • Familiarity with distributed computing and data modeling principles.
  • Experience with messaging systems like RabbitMQ.

Responsibilities

  • Build a scalable infrastructure for data processing.
  • Monitor and extract data from various sources.
  • Architect backend solutions to support microservices.

Skills

Python
Node.js
Java
Creativity in data harvesting
Communication skills
Proficiency with Git
Test Driven Development

Education

Bachelor’s degree in computer science or similar discipline

Tools

Apache Airflow
Spark
NoSQL databases
Redis
Kafka
AWS

Job description

About Us

The engineers and AI scientists on AlphaPoint’s AI Labs team solve complex business problems by bridging the gap between transformative breakthroughs in AI technology and increasingly competitive markets.

Our team is developing and applying the latest generative AI, data, and knowledge modeling technologies to large-scale problems, right at the edge of what is possible.

AlphaPoint is a financial technology company powering digital asset exchanges and brokerages worldwide.

The Role

  • Build a scalable and highly performant infrastructure to process batch and real-time workloads
  • Work with the AI engineering team and external engineering teams to monitor and extract data from a vast array of data sources
  • Implement ETL data pipelines
  • Architect backend data solutions to support various microservices
  • Develop third-party integrations with large-scale legacy systems
You

  • Bachelor’s degree in computer science or similar discipline
  • 7-10 years of experience in software development
  • Proficient in Python, Node.js, and/or Java
  • Familiarity with the basic principles of distributed computing and data modeling
  • Experience building ETL pipelines using Apache Airflow and Spark, Databricks, or other pipeline orchestration tools
  • Experience with NoSQL databases such as MongoDB, Cassandra, DynamoDB, or Cosmos DB
  • Experience with real-time stream processing systems like Kafka, AWS Kinesis, or GCP Dataflow
  • Experience with Redis, Elasticsearch, or Solr
  • Experience with messaging systems like RabbitMQ, AWS SQS, or GCP Cloud Tasks
  • Ability to find creative ways to harvest data in unstructured formats by scraping, modeling, and ingesting it into semantic databases and graphs
  • Familiarity with Delta Lake and Parquet files
  • Familiarity with one or more cloud providers: AWS, GCP, or Azure
  • Proficiency with Test Driven Development (TDD)
  • Proficiency with Git using services such as GitHub or Bitbucket
Preferred Qualifications

  • Experience in a production environment with large-scale knowledge systems
  • Great written and verbal communication skills
  • Team player hungry to learn from and teach fellow team members
Benefits

  • 100% Remote Work Environment
  • Competitive compensation
  • Equity or stock options (if applicable)
  • A culture of autonomy, experimentation, and learning
  • Opportunity to make a real impact on company trajectory