Senior Data Analytics Engineer

Be among the first applicants.
Berlin
EUR 70.000 - 90.000
Posted yesterday
Job Description

We are Gemma Analytics: a Berlin-based company specializing in generating insights and building high-performance data infrastructure. Gemma was founded in early 2020 by two data enthusiasts. Since then, we have helped over 70 companies become more data-driven and successful. We foster a fun, honest, and inclusive work environment, and we are always looking for data-minded people from whom we can learn.

Tasks

Gemma Analytics is data-driven and helps clients become more data-driven.

As our Senior Data & Analytics Engineer, you will play a critical role in helping our clients unlock business value from their data. You are not just technically strong; you are a Data Magician who uncovers structure in chaos and turns raw data into meaningful, actionable insights. You delve into complex datasets, spot what others overlook, and guide clients toward pragmatic, high-impact solutions.

Beyond client work, as a senior team member, you act as a sparring partner and coach to your colleagues. You are someone others turn to for advice on technical challenges, project structure, and best practices, and you are excited to help them grow.

You will have the opportunity to work on challenging problems while helping startups and SMEs make well-informed decisions. This involves:

  • Our tooling-agnostic approach, touching on multiple technologies and understanding current possibilities in the data landscape
  • Collaborating with domain experts and client stakeholders across various industries to solve data challenges
  • Supporting and mentoring team members through code reviews, pair programming, and knowledge sharing
  • Leading internal sparring sessions and contributing to developing team-wide best practices and scalable project structures

Technologies

We use state-of-the-art technologies while remaining pragmatic. Our workflow follows an ELT philosophy, dividing tasks between Data Engineering and Analytics Engineering.

Data Loading

  • Using schedulers like Apache Airflow or Prefect to run Python DAGs; we also use dlt as a data-loading framework
  • Using connectors like Fivetran or Airbyte Cloud
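Conceptually, what these schedulers do is execute a set of tasks in dependency order. A toy sketch of that idea, using only the Python standard library (the task names and the extract/load/transform shape here are illustrative, not Gemma's actual pipelines):

```python
from graphlib import TopologicalSorter

# Hypothetical tasks standing in for real pipeline steps.
def extract():
    return [{"id": 1, "amount": 40}, {"id": 2, "amount": 60}]

def load(rows):
    # In a real ELT pipeline this would write raw rows to the warehouse.
    return list(rows)

def transform(rows):
    # Downstream transformation, e.g. a dbt model over the raw table.
    return sum(r["amount"] for r in rows)

# Declare the DAG: load depends on extract, transform depends on load.
dag = TopologicalSorter({"load": {"extract"}, "transform": {"load"}})

results = {}
for task in dag.static_order():  # yields tasks in a valid execution order
    if task == "extract":
        results[task] = extract()
    elif task == "load":
        results[task] = load(results["extract"])
    elif task == "transform":
        results[task] = transform(results["load"])

print(results["transform"])  # -> 100
```

Real schedulers add scheduling, retries, and observability on top of this ordering idea, but the dependency graph is the core abstraction.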

Data Warehousing

  • PostgreSQL for smaller datasets
  • Snowflake or BigQuery for larger datasets

Data Transformation

  • Using dbt (data build tool), following best practices for modular, testable, and documented code
  • Working with version control, peer reviews, data testing, and engineering best practices
  • For small businesses or specific needs, recommending PowerBI, Tableau, Looker, Holistics, or ThoughtSpot
  • Exploring new tools like Lightdash, Omni, dlt, and others
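The ELT split above can be sketched in miniature: raw data lands in the warehouse first, and transformation happens afterwards in SQL, the way a dbt model materializes a derived table. This sketch uses an in-memory SQLite database and a made-up "orders" source purely for illustration:

```python
import sqlite3

# Load step: raw rows land in the warehouse untransformed.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE raw_orders (order_id INTEGER, status TEXT, amount REAL)")
con.executemany(
    "INSERT INTO raw_orders VALUES (?, ?, ?)",
    [(1, "paid", 100.0), (2, "refunded", 50.0), (3, "paid", 25.0)],
)

# Transform step: a SELECT over raw data materialized as a new table,
# analogous to a staging/marts model in dbt.
con.execute("""
    CREATE TABLE fct_revenue AS
    SELECT status, SUM(amount) AS total_amount
    FROM raw_orders
    GROUP BY status
""")

print(dict(con.execute("SELECT status, total_amount FROM fct_revenue")))
# -> {'paid': 125.0, 'refunded': 50.0}
```

In a production stack the same pattern runs against Snowflake, BigQuery, or PostgreSQL, with dbt managing the SQL models, tests, and documentation.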

Minimum Requirements

We value a good mix of experience and potential. For this role, we require demonstrated expertise and a clear trajectory of growth.

  • At least 3-4 years of hands-on experience in data or analytics engineering, with a focus on building and maintaining robust data pipelines and analytics-ready data models
  • Proficiency in SQL and experience with relational databases, capable of translating complex business logic into maintainable queries
  • Hands-on experience with dbt (preferably dbt Cloud) in production, following best practices
  • Strong understanding of data modeling techniques (e.g., Kimball, Data Vault, star/snowflake schemas) and warehousing principles
  • Experience with modern data stack tools like Snowflake, BigQuery, Airflow, Airbyte/Fivetran, Git, CI/CD workflows
  • Proficiency in Python or similar scripting languages for API integration, data loading, and automation
  • Excellent communication skills in English, both written and spoken, with the ability to explain technical decisions to stakeholders
  • Comfortable working in client-facing projects, navigating ambiguity, and delivering high-quality results with minimal oversight
  • Experience coaching or mentoring junior team members through code reviews and knowledge sharing
  • Bonus: Familiarity with data visualization tools (e.g., Tableau, Power BI, Looker) and fluency in German

Additional Information

Our office is in Berlin, near Nordbahnhof. We are currently a team of 20 colleagues and expect to grow to 22 this year. Other perks include:

  • Hybrid work model with in-office meetings twice a week
  • Intra-EU workations for up to 3 months per year (and beyond, if permitted)
  • An inclusive, honest work environment that we nurture
  • High-quality equipment: powerful laptops, extra screens, and necessary tools
  • Great colleagues who love solving data riddles
  • Focus on output over long hours, with efficient working practices
  • Learning and sharing through meetups, lunch & learn sessions, and other initiatives
  • Market-competitive salary, with at least 20% profit sharing
  • Profitability without VC reliance
  • Annual offsite events (e.g., Austria 2021, Czech Republic 2022, Spain 2025)

Hiring Process

  • CV screening
  • Initial phone/coffee/tea chat
  • Home-based hiring test
  • Interviews with future colleagues
  • Reference checks
  • Offer and onboarding

Key Skills

SQL, dbt, Python, Snowflake, BigQuery, PostgreSQL, Apache Airflow, Airbyte, Fivetran, Git, CI/CD, Data Modeling, Data Warehouse

Employment Type:

Full Time

Experience:

3-4 years

Vacancy:

1

Position:

Senior Data & Analytics Engineer • Berlin, Germany