About our Data Engineering role
We are looking for a skilled professional to join our data and AI engineering team.
The Opportunity
- This position involves building and managing ETL/ELT pipelines using tools such as Databricks, dbt, PySpark, and SQL.
- Our ideal candidate will contribute to scalable data platforms across cloud environments such as Azure, AWS, and GCP.
- Implementing and maintaining CI/CD workflows using GitHub Actions or Azure Pipelines is also essential.
Key Responsibilities
- Key Tasks:
- Build and manage large-scale data pipelines.
- Develop and maintain high-quality code in a collaborative environment.
- Collaborate with cross-functional teams to drive business outcomes.
- Desirable Skills:
- Hands-on experience with cloud-native data engineering.
- Comfort working with at least one major cloud platform (Azure, AWS, GCP).
- Expertise in CI/CD automation, especially with GitHub Actions or Azure Pipelines.
Requirements
- A strong background in computer science and software engineering.
- Excellent problem-solving skills and analytical thinking.
- Ability to work independently and collaboratively in a dynamic environment.
Benefits
- Professional Growth Opportunities:
- Opportunities for career development and the chance to help shape the company's future.
- An employee-centric culture directly inspired by employee feedback.
- Flexible Work Arrangements:
- Hybrid work model and a flexible schedule that suits night owls and early birds alike.
What We're Looking For
- We believe diverse perspectives and backgrounds lead to better ideas.
Culture and Benefits
- Employee-Centric Culture:
- An inclusive and supportive work environment.
- Ongoing opportunities for growth and development.
- Friendly Work Environment:
- Collaborative workspace with talented professionals.