Senior Data Engineer

Box

Warszawa

On-site

PLN 212,000 - 298,000

Full time

Today

Job summary

A leading cloud content management company in Warsaw is seeking a skilled Data Engineer. You will work with data engineers to design and implement data architecture, build pipelines, and optimize data processing. Ideal candidates have 5+ years of experience, strong SQL skills, and familiarity with GCP tools. This role emphasizes collaboration and inclusive values.

Qualifications

  • 5+ years of relevant industry or academic experience working with large amounts of data.
  • Experience building and optimizing scalable data pipelines, architectures, and data sets.
  • Strong analytical skills for working with structured and unstructured datasets.

Responsibilities

  • Identify business opportunities and design scalable data solutions.
  • Build and own data pipelines that clean, transform, and aggregate data.
  • Create and maintain optimal data pipeline architecture.

Skills

Data pipeline optimization
SQL
Scala
Java
Python
BigQuery
Hadoop
Spark
Kafka
Tableau

Tools

GCP
Kubernetes
Docker

Job description

Box is the world’s leading Content Cloud, trusted by over 115K organizations worldwide, including nearly 70% of the Fortune 500. By joining Box, you will have the opportunity to drive our platform forward, bringing intelligence to content management and empowering customers to transform workflows.

Founded in 2005, Box is headquartered in Redwood City, CA, with offices across the United States, Europe, and Asia. We are expanding our Data Engineering initiative and seeking a skilled professional to help build the data platform engineering features and capabilities of our cloud cost management platform.

Responsibilities
  • Work with a team of high-performing data engineers and analysts to identify business opportunities and design scalable data solutions
  • Build and own data pipelines that clean, transform, and aggregate data from disparate sources
  • Create and maintain optimal data pipeline architecture
  • Assemble large, complex data sets that meet functional and non-functional business requirements
  • Identify, design, and implement internal process improvements, automating manual processes and optimizing data delivery
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from various sources using GCP BigQuery and Spark
  • Build analytics tools to provide actionable insights into operational efficiency and key business performance metrics
  • Collaborate with stakeholders, including Executive, Product, Data, and Design teams, to support their data infrastructure needs
Requirements
  • 5+ years of relevant industry or academic experience working with large amounts of data
  • Experience building and optimizing scalable data pipelines, architectures, and data sets
  • Expertise in SQL and at least one programming language: Scala, Java, or Python
  • Experience with GCP (BigQuery, Dataproc, Dataflow/Fusion) and big data tools: Hadoop, Spark, Kafka, etc.
  • Strong analytical skills for working with structured and unstructured datasets
  • Familiarity with virtualization/container abstractions and orchestration (Kubernetes, Docker, etc.) and with visualization software such as Tableau

Box lives its values, with community and in-person collaboration being a core part of our culture. We are an equal opportunity employer; we value diversity and do not discriminate on the basis of any protected characteristic.
