
ETL Developer

Groupe SII

Szczecin

On-site

EUR 40,000 - 60,000

Full time

7 days ago

Job summary

Groupe SII is seeking an ETL Developer/Data Engineer to work on data products for the pharmaceutical industry. This role involves designing high-performance data architectures, implementing ETL processes, and enhancing data processing efficiency. Ideal candidates should have strong programming skills in Python or R, experience with data warehousing, and familiarity with various storage solutions.

Qualifications

  • At least 4 years of experience in data pipeline programming (Python or R).
  • Proficiency in SQL and experience in the Snowflake environment.
  • Knowledge of Pharma data formats is a plus.

Responsibilities

  • Develop and implement scalable, high-performance data architectures.
  • Write, debug, and optimize complex SQL queries and ETL processes.
  • Analyze performance issues to enhance data processing efficiency.

Skills

Data modeling
ETL processes
SQL
Python
Git

Tools

Snowflake
Talend
Airflow
Docker

Job description


We are looking for an ETL Developer/Data Engineer to work on various data products dedicated to the pharmaceutical industry. The solutions you develop will power services and applications that deliver insights from commercially relevant customer-service data and enable proactive maintenance and repair for the client.

Apply to Sii and join our Data & Analytics Competency Center!

Your role

  • Develop and implement scalable, high-performance data architectures
  • Design data models, schema and structures that support business needs
  • Ensure solutions are designed for optimal performance, scalability, and reliability
  • Write, debug, and optimize complex SQL queries and stored procedures
  • Develop ETL (Extract, Transform, Load) processes using tools and technologies compatible with Snowflake
  • Implement and manage data integration solutions
  • Analyze performance issues and implement solutions to enhance data processing efficiency
  • Participate in code reviews and ensure adherence to development standards and best practices

Your skills

  • At least 4 years of experience with programming languages used for data pipelines (Python or R)
  • Experience with data modeling, data warehousing, and ETL processes
  • Proficiency in SQL and experience in the Snowflake cloud environment
  • Previous work with different types of storage (filesystem, relational, MPP, NoSQL) and various kinds of data (structured, unstructured, metrics, logs, etc.)
  • Familiarity with data integration tools (e.g. Talend, Informatica) and cloud platforms (preferably AWS)
  • Exposure to open-source and proprietary cloud data pipeline tools such as Airflow, Glue, DBT, and Dataflow
  • Experience with data architecture concepts in any of the following areas: data modeling, metadata, workflow management, ETL/ELT, real-time streaming, data quality, distributed systems
  • Very good knowledge of relational databases and data serialization formats such as JSON, XML, and YAML
  • Excellent knowledge of Git, Gitflow, and DevOps tools, e.g. Docker, Bamboo, Jenkins, Terraform
  • Capability to conduct performance analysis, troubleshooting, and remediation
  • Knowledge of Pharma data formats is a big plus

Job no. 240702-JNCIR
