
Senior GCP Data Engineer - Future Opening

Xebia

Snowflake (AZ)

Remote

USD 100,000 - 130,000

Full time

Yesterday

Job summary

A leading company is seeking a Senior Data Engineer to work closely with engineering, product, and data teams. The role involves designing and maintaining data platforms, mentoring engineers, and delivering robust data solutions for clients globally. Candidates should have extensive experience with GCP, Apache Airflow, and strong Python skills. This position offers the opportunity to work on innovative projects while collaborating with a talented team.

Qualifications

  • 5+ years in a senior developer role with hands-on experience in building data processing pipelines.
  • Proficiency with GCP services, especially BigQuery and BigQuery SQL.

Responsibilities

  • Designing, building, and maintaining data platforms and pipelines.
  • Integrating data sources and optimizing data processing.
  • Mentoring new engineers and delivering scalable solutions.

Skills

Python
GCP
Apache Airflow
Data Modelling
SQL
NoSQL
English

Tools

Docker
Kubernetes
Terraform
Kafka

Job description

  • On-site, Remote, Hybrid
  • Data

We are Xebia - a place where experts grow. For nearly two decades now, we've been developing digital solutions for clients from many industries and places across the globe. Among the brands we've worked with are UPS, McLaren, Aviva, Deloitte, and many, many more.

We're passionate about Cloud-based solutions. So much so that we have a partnership with three of the largest Cloud providers in the business – Amazon Web Services (AWS), Microsoft Azure & Google Cloud Platform (GCP). We even became the first AWS Premier Consulting Partner in Poland.

Formerly we were known as PGS Software. In 2021, we joined Xebia Group – a family of interlinked companies driven by the desire to make a difference in the world of technology.

Xebia stands for innovation, talented team members, and technological excellence. Xebia means worldwide recognition and thought leadership. This regularly provides us with the opportunity to work on global, innovative projects.

Our mission can be captured in one word: Authority. We want to be recognized as the authority in our field of expertise.

What makes us stand out? It's the little details, like our attitude, dedication to knowledge, and the belief in people's potential - emphasizing every team member's development. Obviously, these things are not easy to present on paper – so make sure to visit us to see it with your own eyes!

Now, we've talked a lot about ourselves – but we'd love to hear more about you.

Send us your resume to start the conversation and join #Xebia.

About the role:

As a Senior Data Engineer at Xebia, you will work closely with engineering, product, and data teams to deliver scalable and robust data solutions to our clients. Your key responsibilities will include designing, building, and maintaining data platforms and pipelines, as well as mentoring new engineers.

You will be:
  • working with various clients globally, delivering software systems and best practices for scalable and robust solutions,
  • engineering data platforms for scale, performance, reliability, and security,
  • integrating data sources and optimizing data processing,
  • proactively addressing challenges, resolving blockers, and driving effective communication across distributed teams,
  • continuously seeking opportunities to enhance data systems and ensuring alignment with evolving business needs.

Job requirements

Your profile:
  • available to start immediately,
  • 5+ years in a senior developer role, with hands-on experience in building data processing pipelines,
  • proficiency with GCP services, especially BigQuery and BigQuery SQL, for large-scale data processing and optimization,
  • extensive experience with Apache Airflow, including DAG creation, triggers, and workflow optimization (see the DAG sketch below),
  • knowledge of data partitioning, batch configuration, and performance tuning for terabyte-scale processing,
  • strong Python proficiency, with expertise in modern data libraries and frameworks (e.g., Databricks, Snowflake, Spark, SQL),
  • experience with unit testing, pre-commit checks, and strict type enforcement for data pipelines,
  • deep understanding of relational and NoSQL databases, data modelling, and data warehousing concepts,
  • excellent command of oral and written English.

Candidates must be based in the European Union region and hold a work permit.
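
For context on the Airflow and BigQuery expectations above, here is a minimal sketch of a daily DAG that runs a partition-pruned BigQuery query. It assumes Airflow 2.x with the Google provider package installed; the DAG id, project, dataset, and table names are hypothetical placeholders, not a description of any actual client environment.

    # Hypothetical sketch only; all project, dataset, and table names are
    # placeholders.
    from datetime import datetime

    from airflow import DAG
    from airflow.providers.google.cloud.operators.bigquery import (
        BigQueryInsertJobOperator,
    )

    with DAG(
        dag_id="daily_events_rollup",   # hypothetical DAG id
        schedule="@daily",              # "schedule_interval" on Airflow < 2.4
        start_date=datetime(2024, 1, 1),
        catchup=False,
    ) as dag:
        # Filtering on the partition column with the run's logical date
        # ({{ ds }}) lets BigQuery prune partitions instead of scanning
        # the whole table.
        rollup_events = BigQueryInsertJobOperator(
            task_id="rollup_events",
            configuration={
                "query": {
                    "query": """
                        SELECT event_date, COUNT(*) AS event_count
                        FROM `my-project.analytics.events`
                        WHERE event_date = DATE('{{ ds }}')
                        GROUP BY event_date
                    """,
                    "useLegacySql": False,
                }
            },
        )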

Nice to have:
  • expertise in optimizing BigQuery performance using tools like Query Profiler and addressing compute resource bottlenecks,
  • prior experience developing or testing custom operators in Apache Airflow (a minimal sketch follows this list),
  • familiarity with Docker, Kubernetes, Helm, Terraform, Kafka, and CI/CD pipelines for data environments.
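
On the custom-operator point, a minimal sketch of what one can look like is below. The class name and the row-count check are invented for illustration; a production operator would typically call a BigQuery hook rather than an injected counting function.

    # Hypothetical custom operator; the class and its check are illustrative.
    from collections.abc import Callable

    from airflow.models import BaseOperator


    class RowCountCheckOperator(BaseOperator):
        """Fail the task if a table holds fewer rows than expected."""

        template_fields = ("table",)  # allows Jinja templating of the name

        def __init__(self, *, table: str, min_rows: int,
                     count_fn: Callable[[str], int], **kwargs):
            super().__init__(**kwargs)
            self.table = table
            self.min_rows = min_rows
            # Injected so the operator can be unit tested without a live
            # BigQuery connection, matching the testing bar named above.
            self.count_fn = count_fn

        def execute(self, context):
            rows = self.count_fn(self.table)
            if rows < self.min_rows:
                raise ValueError(
                    f"{self.table} has {rows} rows, "
                    f"expected at least {self.min_rows}"
                )
            self.log.info("%s passed with %d rows", self.table, rows)
            return rows
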
Recruitment Process:

CV review – HR call – Interview – Client Interview – Decision
