Middle Data Engineer (Data, ML, BI & Automation) — BigQuery & Looker

Kyriba

Województwo mazowieckie

On-site

PLN 120,000 - 150,000

Full time

Job summary

A global leader in financial solutions is seeking a Data Engineer in Poland to design and maintain scalable data pipelines and infrastructure. The ideal candidate will have experience with Google Cloud and a strong background in data engineering and machine learning. This role focuses on optimizing data flows for analytics and automation while ensuring data quality and governance.

Qualifications

  • Proven experience as a Data Engineer or similar role.
  • Experience building reliable data pipelines for analytics, ML, BI, and automation use cases.
  • Solid knowledge of GCP data and streaming services.

Responsibilities

  • Design and optimize ELT/ETL pipelines using Google BigQuery.
  • Collaborate with Data Scientists for dataset delivery.
  • Implement data governance and compliance practices.

Skills

Google BigQuery
Python
SQL
GCP services
Machine Learning
Data Engineering
Data Governance

Education

Bachelor’s or Master’s degree in Computer Science or related field

Tools

Google Cloud Storage
Looker
MuleSoft

Job description

It's fun to work in a company where people truly BELIEVE in what they're doing!

We're committed to bringing passion and customer focus to the business.

About Us

Kyriba is a global leader in liquidity performance that empowers CFOs, Treasurers and IT leaders to connect, protect, forecast and optimize their liquidity. As a secure and scalable SaaS solution, Kyriba brings intelligence and financial automation that enables companies and banks of all sizes to improve their financial performance and increase operational efficiency. Kyriba’s real-time data and AI-empowered tools empower its 3,000 customers worldwide to quantify exposures, project cash and liquidity, and take action to protect balance sheets, income statements and cash flows. Kyriba manages more than 3.5 billion bank transactions and $15 trillion in payments annually and gives customers complete visibility and actionability, so they can optimize and fully harness liquidity across the enterprise and outperform their business strategy. For more information, visit www.kyriba.com.

We are seeking a versatile and innovative Data Engineer to design, build, and maintain scalable data pipelines and infrastructure that support analytics, reporting, Machine Learning (ML), Generative AI (GenAI), Business Intelligence (BI), and automation initiatives. The ideal candidate will have practical experience with Google Cloud, BigQuery, and modern data processing, with a keen interest in enabling advanced analytics and automation across the organization.

Key Responsibilities
Data Engineering
  • Design, implement, and optimize robust ELT/ETL pipelines using Google BigQuery, Cloud Storage, and GCP services (e.g., Dataflow, Pub/Sub, Cloud Composer) to support analytics, ML, BI, and automation use cases.
  • Build and maintain data architectures for structured and unstructured data, ensuring data quality, lineage, and security.
  • Integrate data from multiple sources, including external APIs and on-premise systems, to create a unified, well-modeled data environment.
  • Apply BigQuery best practices including partitioning, clustering, materialized views, and cost/performance optimization (sketched in the example below).
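
To make the partitioning and clustering practices concrete, here is a minimal sketch using the google-cloud-bigquery Python client. The project, dataset, table, and column names are placeholders for illustration, not details from this posting.

    from google.cloud import bigquery

    client = bigquery.Client(project="my-project")  # placeholder project id

    # Day-partitioned, clustered table: partition pruning limits the bytes
    # scanned per query, and clustering co-locates rows filtered together.
    schema = [
        bigquery.SchemaField("event_ts", "TIMESTAMP", mode="REQUIRED"),
        bigquery.SchemaField("customer_id", "STRING", mode="REQUIRED"),
        bigquery.SchemaField("amount", "NUMERIC"),
    ]
    table = bigquery.Table("my-project.analytics.transactions", schema=schema)
    table.time_partitioning = bigquery.TimePartitioning(
        type_=bigquery.TimePartitioningType.DAY, field="event_ts"
    )
    table.clustering_fields = ["customer_id"]
    client.create_table(table, exists_ok=True)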
Machine Learning & GenAI
  • Collaborate with Data Scientists and ML Engineers to deliver datasets and features for model training, validation, and inference.
  • Develop and operationalize ML/GenAI pipelines, automating data preprocessing, feature engineering, model deployment, and monitoring using Vertex AI and/or BigQuery ML (see the BigQuery ML sketch after this list).
  • Support the deployment and maintenance of GenAI models and LLMs in production environments, including prompt/feature pipelines and inference orchestration.
  • Stay current on emerging ML and GenAI technologies and best practices across the GCP ecosystem.
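
As a hedged example of the BigQuery ML route, the sketch below trains and queries a simple classifier entirely in SQL submitted from Python; the model, dataset, and feature names are invented for illustration.

    from google.cloud import bigquery

    client = bigquery.Client(project="my-project")  # placeholder project id

    # Train a logistic-regression churn model directly in BigQuery.
    client.query("""
        CREATE OR REPLACE MODEL `my-project.ml.churn_model`
        OPTIONS (model_type = 'LOGISTIC_REG', input_label_cols = ['churned']) AS
        SELECT tenure_months, monthly_spend, support_tickets, churned
        FROM `my-project.analytics.customer_features`
    """).result()  # blocks until training completes

    # Batch inference; the output can be exposed to Looker as a table or view.
    rows = client.query("""
        SELECT customer_id, predicted_churned
        FROM ML.PREDICT(
          MODEL `my-project.ml.churn_model`,
          (SELECT * FROM `my-project.analytics.customer_features`))
    """).result()
    for row in rows:
        print(row.customer_id, row.predicted_churned)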
Business Intelligence & Reporting
  • Partner with BI Developers and Analysts to provide clean, reliable, governed data sources for reporting and dashboarding in Looker (semantic modeling in LookML).
  • Enable data access and transformation for self-service BI; ensure BI solutions are scalable, secure, and performant.
  • Integrate advanced analytics and ML/GenAI outputs into BI datasets, Looks, and Explores for actionable insights.
Automation
  • Partner with Automation Specialists to design and implement data-driven automated workflows using MuleSoft and/or GCP services (e.g., Cloud Functions, Workflows, Cloud Run); a Cloud Functions sketch follows this list.
  • Develop and maintain automation scripts and integrations to streamline data flows, improve operational efficiency, and reduce manual effort.
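
A minimal sketch of such an automated flow, assuming a Cloud Storage-triggered Cloud Function (2nd gen) that appends each newly landed file to a BigQuery table; the bucket, table, and CSV format are assumptions, not requirements of the role.

    import functions_framework
    from google.cloud import bigquery

    client = bigquery.Client()

    @functions_framework.cloud_event
    def load_new_file(cloud_event):
        # Fired when a file is finalized in the configured bucket.
        data = cloud_event.data
        uri = f"gs://{data['bucket']}/{data['name']}"
        job_config = bigquery.LoadJobConfig(
            source_format=bigquery.SourceFormat.CSV,  # assumed CSV input
            skip_leading_rows=1,
            autodetect=True,
            write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
        )
        load_job = client.load_table_from_uri(
            uri, "my-project.analytics.raw_events", job_config=job_config
        )
        load_job.result()  # surface load errors in the function logs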
Governance & Collaboration
  • Implement data governance, security, and compliance best practices across all data assets, leveraging tools such as Dataplex and Data Catalog for lineage and metadata (a small metadata example follows this list).
  • Document data flows, pipelines, and architectures for technical and business stakeholders.
  • Collaborate across teams (data science, BI, business, IT) to align data engineering efforts with strategic objectives and SLAs.
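
As one small, hedged example of governance metadata in practice, the sketch below attaches an owner label and a description to a BigQuery table so it is discoverable and auditable; all names and label values are placeholders.

    from google.cloud import bigquery

    client = bigquery.Client()

    # Document ownership and sensitivity directly on the table's metadata.
    table = client.get_table("my-project.analytics.transactions")  # placeholder
    table.description = "Curated transactions feed; source system: payments API."
    table.labels = {"owner": "data-engineering", "sensitivity": "confidential"}
    client.update_table(table, ["description", "labels"])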
Requirements
  • Bachelor’s or Master’s degree in Computer Science, Engineering, Information Systems, or related field.
  • Proven experience as a Data Engineer or similar role.
  • Expertise with Google BigQuery and Google Cloud Storage; solid knowledge of GCP data and streaming services (Dataflow/Apache Beam, Pub/Sub, Cloud Composer/Airflow).
  • Strong programming skills in Python and SQL.
  • Experience building reliable data pipelines for analytics, ML, BI, and automation use cases.
  • Familiarity with ML frameworks (scikit-learn, TensorFlow, PyTorch), MLOps on GCP (Vertex AI Pipelines/Model Registry) or BigQuery ML, and GenAI libraries/tooling where applicable.
  • Experience supporting BI/reporting solutions, preferably with Looker and LookML.
  • Hands-on experience with automation/integration platforms such as MuleSoft is a strong plus.
  • Understanding of data governance, security, quality, and compliance on cloud platforms.
  • Excellent communication, collaboration, and problem-solving skills.
Nice to Have
  • Experience deploying and operationalizing GenAI/LLM solutions at scale on GCP (Vertex AI, vector search, embeddings).
  • Experience with API development and integration (Cloud Run/Functions, Apigee).
  • Knowledge of DevOps/CI-CD for data solutions (Cloud Build, Git, Infrastructure as Code such as Terraform).
  • Relevant Google Cloud or Looker certifications (e.g., Professional Data Engineer, LookML Developer).