We build. We create impact.
We are seeking a Senior Data Engineer to support the company's growth and help build our data analytics SaaS platform for the retail industry, as the volume, variety and number of data sources we work with continue to grow.
We work on solutions that:
- Handle large volumes of diverse data from IoT / event streams, computer vision, transactional, operational and supply chain sources
- Enrich AI and Computer Vision capabilities with a business intelligence layer, transforming data into actionable insights for retailers
Your Role
As a member of the Memory Data Team, you will ensure the optimal use of data within our analytics platforms. You will:
- Design, build and manage our data pipelines, ensuring data is seamlessly integrated into our data lakehouse.
- Collaborate with teams across Data Science, Infrastructure, Software, Product and Consulting to understand their data needs and deliver solutions.
- Implement robust and fault‑tolerant systems for data ingestion and processing.
- Contribute to data architecture and data management decisions, drawing on your experience and expertise.
- Ensure the security, integrity, quality and compliance of data according to industry and company standards.
- Prepare data specifications for collection from partners and assist with the data aspects of implementing Memory analytics platforms.
- Assess and continuously improve data models that integrate structured and unstructured data from multiple sources.
Qualifications
- At least 8 years of engineering experience in Data Engineering / DataOps.
- Proficiency in at least one programming language commonly used within Data Engineering such as Python, Scala or Java.
- Experience with distributed processing technologies and frameworks such as Spark / Databricks and distributed storage systems (e.g., HDFS).
- Solid understanding of Spark and the ability to write, debug and optimize Spark code.
- Expertise with an orchestrator such as Airflow, Dagster or Prefect.
- Knowledge of Data Lakehouse architectures or OLAP / Cube systems for advanced analytics.
- Familiarity with Docker & Kubernetes.
- Knowledge of Kafka, Event Hubs or Pub/Sub is a plus.
- Knowledge of dbt is a plus.
- Deep understanding of best practices in data architecture, data management, security and compliance.
- Team spirit and excellent communication skills to collaborate with cross‑functional teams.
By joining our team, you'll have the opportunity to work on exciting projects in the retail sector and to learn and grow in a dynamic environment.