About Gemma Analytics
At Gemma, we help organizations activate and operationalize their data using state-of-the-art technology. With our two verticals, GEM.AI for applied AI and GEM.BI for modern business intelligence, we enable clients to make better choices and take full ownership of their data. We are service-driven at our core, focused on delivering practical, high-quality solutions that create long-term impact. You can read more about our data philosophy here. Our clients range from Series A ventures to established SMEs, with company sizes spanning from 30 to over 13,000 employees. Across every engagement, we prioritize honesty, clarity, and a genuinely enjoyable working relationship.
About the job
As one of our Data Engineers, you help clients get value from their data by building and maintaining reliable, scalable data infrastructure. You design pipelines, ensure data quality, and make data easy to access and use. Your work supports confident, data-informed decisions. You also help improve our internal systems and modernize our tech stack for better performance and scalability.
As a senior team member, you act as a sparring partner and coach to your colleagues. You’re someone others turn to for advice on technical challenges, project structure, and best practices. You’re excited to help them grow.
Responsibilities:
- Work across a broad range of technologies in a tooling-agnostic environment, staying current with the evolving data landscape
- Partner with domain experts and client stakeholders to solve complex, high-impact data challenges
- Lead the modernization of our data infrastructure and tech stack to ensure scalability and long-term efficiency
- Define and enforce technical standards for AI and ML projects, ensuring they are production-ready, scalable, and business-relevant
- Mentor team members through code reviews, pair programming, and knowledge sharing
- Drive internal best practices by leading sparring sessions and shaping scalable, maintainable project structures
Who you are
We believe in a good mixture of experience and upside on our team, and we value both equally. For this senior role, however, we expect deeper expertise and a proven trajectory.
- 3–5 years of hands-on experience in data engineering or analytics engineering, with a strong focus on building and maintaining robust data pipelines and analytics-ready data models
- Proficient in Python (or a similar scripting language) for use cases such as API integration, data loading, and automation
- Proficient in SQL and experienced with relational databases, capable of translating complex business logic into clear, maintainable queries
- Experience working with modern data stack tools, such as Snowflake, BigQuery, Airflow, Airbyte/Fivetran, Git, and CI/CD workflows
- Interest in exploring new technologies and staying up to date (e.g. DuckDB/MotherDuck, Apache Iceberg/Delta Lake, and Dagster)
- Strong communication skills in English (written and spoken), with the ability to explain technical decisions and collaborate with both technical and non-technical stakeholders
- Comfortable working in client-facing projects, navigating ambiguity, and delivering high-quality results with minimal oversight
- Experience coaching or mentoring junior team members through code reviews, sparring, and knowledge sharing
- Bonus:
- Docker, Kubernetes
- ML Engineering: deployment and monitoring of prediction models
- Fluency in German
Gemma Perks
We are located in Berlin, close to Nordbahnhof. We are currently a team of 18 colleagues and will grow to 24 in 2026. Other perks include:
- We foster an honest, inclusive work environment and actively nurture it
- We have frequent team events in Berlin: our cultural base
- We encourage workations and even go on them as a company - how often you come to the office, between one and five days a week, is up to you
- We don’t compromise on equipment - a powerful laptop, extra screens, and all the tools you need to be effective
- We will surround you with great people who love to solve (mostly data) riddles
- We believe in efficient working hours rather than long working hours - we focus on the output rather than the input
- We learn and share during meetups, lunch & learn sessions and are open for further initiatives
- We pay a market-friendly salary and we additionally distribute at least 20% of profits to our employees
- We are fast-growing, have technology at our core, yet we do not rely on a VC and operate profitably
- We have a great yearly offsite event that brings us all together for a full week, enjoying good food, having a good time, and of course, solving complex data-related tasks
How you'll get here
- CV Screening
- Initial Conversation
- Take-home hiring test
- Interviews with 2-3 future colleagues
- Reference calls
- Offer + Hired