Perform is hiring a Data Engineer! This is a hands‑on engineering role focused on building production‑grade pipelines, schemas, and supporting services. You will work closely with other developers in an agile environment and will have opportunities to interact directly with the users of the tools you build.
What You’ll Do
- Design and deliver data infrastructure including warehouse and schema design, pipelines, OLTP components, and ETL workflows.
- Build and maintain supporting services and internal tooling used to manage and maintain data.
- Develop new data pipelines, APIs, and data management tools in an agile delivery environment.
- Support the end‑to‑end lifecycle including design, deployment, testing, operations, monitoring, and ongoing support.
- Collaborate with developers and stakeholders to ensure data solutions meet business and product needs.
- Contribute to discussions and decisions around data quality, testing strategy, and test automation.
Who You Are
- Strong data engineering skills with hands‑on experience in data migrations and data modeling.
- Proficient in Python with a track record of delivering production pipelines.
- Experience with SQL and NoSQL databases in production environments.
- Familiarity with the Azure data ecosystem (Azure Data Factory, Azure Data Lake, Microsoft Fabric, and related tooling).
- Experience with version control using Git and platforms such as GitHub.
- Experience with automated deployment of pipelines and schema into production.
- Comfortable addressing security, testing, and performance concerns, and working with JSON and REST APIs.
- Experienced with agile methods and continuous delivery concepts.
- Strong verbal and written communication skills.
It is an asset if you have
- Node.js experience.
- Experience with SQL Server, Azure Synapse, and MongoDB.
- Exposure to Kafka, Cassandra, or cloud‑based data warehouses.