Big Data Developer

Infosys Limited

Warsaw

Hybrid

PLN 180,000 - 240,000

Full time

Job summary

A leading tech company seeks a Data Engineer to join their team in Poland. The successful candidate will work on data modeling, build key elements of the data platform, and optimize data warehousing. This hybrid role requires a strong grasp of AWS and SQL, and previous experience in ETL projects or Big Data. A financial background is a plus but not mandatory.

Qualifications

  • Experience in Data Modeling, ETL projects, and Big Data as a developer or engineer.
  • Good experience in Python, Java, or similar languages.
  • Proven experience with AWS.
  • SQL and data modeling proficiency.
  • Comfortable with Data Warehousing concepts.
  • Experience building robust and reliable data sets.

Responsibilities

  • Support building robust data models in Snowflake.
  • Optimize the Data Warehouse to reduce complexity and cost.
  • Define and manage best practices for the Data Warehouse.
  • Collaborate and set standards for data management.
  • Monitor and improve data quality within the warehouse.
  • Prioritize data governance issues.

Skills

Data Modeling
ETL projects
Big Data
Python
Java
AWS
SQL
Infrastructure as Code (IaC)

Tools

Snowflake
dbt
Airflow
Terraform
AWS CloudFormation
Ansible

Job description

Position: Data Engineer
Location: Krakow, Wroclaw, Warsaw
Type of work: Hybrid (2-3 days per week in the office, obligatory)
Contract: contract of employment only
Salary: base salary + financial bonus

Please note: for this role, we cannot consider candidates who require company support to legally live and work in Poland.

The Role

You’ll be a senior contributor in our Data Engineering team, working across various projects, building out key elements of our data platform using a range of modern tools including Snowflake, dbt, and Airflow. You’ll help us minimize our cloud costs, drive best practices across all our Data Disciplines, and scale and automate our data governance.
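
For a flavor of this stack, the sketch below shows a minimal Airflow DAG that runs and tests a dbt project on a daily schedule; the DAG id, schedule, and project path are illustrative assumptions, not details from this posting.

    # A minimal sketch, assuming Airflow 2.x and a hypothetical dbt project.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    with DAG(
        dag_id="daily_dbt_build",            # hypothetical DAG name
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        # Build dbt models in the warehouse (Snowflake in this stack).
        dbt_run = BashOperator(
            task_id="dbt_run",
            bash_command="dbt run --project-dir /opt/dbt/analytics",  # hypothetical path
        )
        # Test the freshly built models before reporting relies on them.
        dbt_test = BashOperator(
            task_id="dbt_test",
            bash_command="dbt test --project-dir /opt/dbt/analytics",
        )
        dbt_run >> dbt_test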

What you’ll be working on:
  • Help build robust data models downstream of backend services (mostly in Snowflake) that serve internal reporting, financial, and regulatory use cases.
  • Focus on optimization of our Data Warehouse, spotting opportunities to reduce complexity and cost.
  • Help define and manage best practices for our Data Warehouse. This may include payload design of source data, logical data modeling, implementation, metadata, and testing standards.
  • Set standards and ways of working with data across VXBS, working collaboratively with others to make it happen.
  • Take established best practices and standards defined by the team and apply them within other areas of the business.
  • Work with colleagues from other disciplines to investigate, monitor, and improve data quality within the warehouse (see the sketch after this list).
  • Contribute to prioritization of data governance issues.
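
As an illustration of the data-quality work described above, here is a minimal sketch of a null-rate check against a Snowflake table using snowflake-connector-python; the connection details, table, column, and 1% threshold are all hypothetical.

    # A minimal data-quality sketch; credentials, table, and threshold
    # are hypothetical, not taken from this posting.
    import snowflake.connector

    conn = snowflake.connector.connect(
        account="my_account",
        user="my_user",
        password="my_password",
        warehouse="ANALYTICS_WH",
        database="ANALYTICS",
        schema="REPORTING",
    )
    try:
        cur = conn.cursor()
        # Fraction of rows with a NULL customer_id in a reporting table.
        cur.execute(
            "SELECT COUNT_IF(customer_id IS NULL) / NULLIF(COUNT(*), 0) "
            "FROM fct_payments"
        )
        null_rate = cur.fetchone()[0] or 0
        if null_rate > 0.01:
            raise ValueError(f"customer_id null rate too high: {null_rate:.2%}")
    finally:
        conn.close()
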
You should apply if:
  • You have experience in, and a passion for, Data Modeling, ETL projects, and Big Data as a developer or engineer
  • You have good experience in Python, Java, or similar languages
  • You have proven experience with AWS
  • SQL and data modeling are second nature to you
  • You are comfortable with general Data Warehousing concepts
  • You strive for improvement in your work and that of others, proactively identifying issues and opportunities
  • You have experience building robust and reliable data sets requiring a high level of control
  • You have experience working with IaC tools such as Terraform, AWS CloudFormation, or Ansible (see the sketch below)
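
To keep this document's examples in one language, here is a minimal IaC sketch using AWS CDK for Python, which synthesizes CloudFormation (one of the tools named above); the stack and bucket names are hypothetical.

    # A minimal sketch, assuming AWS CDK v2 for Python; all names are
    # hypothetical, not taken from this posting.
    import aws_cdk as cdk
    from aws_cdk import aws_s3 as s3
    from constructs import Construct

    class DataLakeStack(cdk.Stack):
        def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
            super().__init__(scope, construct_id, **kwargs)
            # Versioned, encrypted landing bucket for raw data.
            s3.Bucket(
                self,
                "RawLandingBucket",
                versioned=True,
                encryption=s3.BucketEncryption.S3_MANAGED,
                block_public_access=s3.BlockPublicAccess.BLOCK_ALL,
            )

    app = cdk.App()
    DataLakeStack(app, "data-lake-dev")  # hypothetical stack name
    app.synth()
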
Nice to have:
  • Any experience working within a finance function or knowledge of accounting
  • Experience working in a highly regulated environment (e.g., finance, gaming, food, healthcare)
  • Knowledge of regulatory reporting and treasury operations in retail banking
  • Previous use of dbt, Databricks, or similar tooling
  • Experience working with orchestration frameworks such as Airflow or Prefect
  • Design and implementation knowledge of stream processing frameworks such as Flink or Spark Streaming
  • Familiarity with Agile ways of working (Kanban, Scrum)