Data Engineer

Lorien

London

On-site

GBP 45,000 - 75,000

Full time

30+ days ago

Job summary

An established industry player is seeking a skilled Data Engineer to join their innovative team. In this role, you will design and maintain ETL processes, ensuring efficient data transfer to Snowflake, Teradata, and SQL Server. Your expertise in SQL Server and Snowflake architecture will be crucial as you build scalable data pipelines and collaborate with cross-functional teams to implement data strategies. This position offers an exciting opportunity to work with large datasets and cloud platforms, making a significant impact on data management and performance optimization. If you are passionate about data engineering and eager to contribute to a dynamic environment, this role is perfect for you.

Qualifications

  • Proven experience with SQL Server and Snowflake architecture.
  • Strong knowledge of ETL processes and data warehousing best practices.

Responsibilities

  • Design and maintain ETL processes for data transfer to Snowflake and SQL Server.
  • Optimize database performance and ensure data quality and integrity.

Skills

SQL Server (T-SQL, stored procedures, optimization)
Snowflake architecture
ETL processes
Data modelling
Problem-solving
Communication skills
Data governance

Education

Degree in Computer Science or a related field

Tools

Informatica PowerCenter
AWS
Airflow
GitLab

Job description

We are looking for a skilled Data Engineer with expertise in SQL Server, Teradata, and Snowflake to join our team. The role involves designing, developing, and maintaining ETL processes that transfer data from source systems into Snowflake, Teradata, and SQL Server. You will optimize database performance and build scalable data pipelines for large datasets. Collaboration with data scientists, analysts, and business teams is essential for defining and implementing data strategies. Strong experience with SQL Server (T-SQL, stored procedures, optimization) and with Teradata and Snowflake architectures is required. The role also includes troubleshooting and improving existing data systems for efficiency and performance, and you will handle data migration and integration from legacy systems into Snowflake. Knowledge of cloud platforms (AWS, Azure, Google Cloud) and data warehousing best practices is necessary, as is proficiency in data modelling and ETL tools. A degree in Computer Science or a related field, along with strong communication and problem-solving skills, is preferred.
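
For illustration only, not part of the original posting: a minimal Snowflake SQL sketch of the kind of legacy-to-Snowflake load this description covers. The stage, bucket, and table names are hypothetical.

    -- Hypothetical external stage over an S3 bucket holding legacy extracts
    CREATE STAGE IF NOT EXISTS legacy_stage
      URL = 's3://example-bucket/legacy-exports/'
      FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1);

    -- Bulk-load the extracts into a Snowflake staging table
    COPY INTO staging.customer_accounts
      FROM @legacy_stage
      PATTERN = '.*customer_accounts.*[.]csv'
      ON_ERROR = 'ABORT_STATEMENT';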

Responsibilities:
  • Design, develop, and maintain ETL processes to move data from source systems to Snowflake, Teradata, and SQL Server environments.
  • Build scalable data pipelines for processing and storing large datasets.
  • Collaborate with analysts and business stakeholders to define and implement data strategies.
  • Optimize database performance, including query optimization, indexing, and partitioning across all enterprise data platforms (see the sketch after this list).
  • Work with cross-functional teams to gather requirements and develop effective data solutions.
  • Ensure data quality, consistency, and integrity throughout the data lifecycle.
  • Maintain and improve existing data architectures and workflows to meet business requirements.
  • Create and maintain documentation for data systems, pipelines, and processes.
  • Monitor and troubleshoot data pipeline performance issues and resolve them promptly.
  • Assist in the migration and integration of data from legacy systems into Snowflake and other strategic data stores.
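
As flagged in the optimization bullet above, here is a hedged T-SQL sketch of routine index and tuning work on SQL Server; the table, column, and index names are invented for illustration.

    -- Hypothetical covering index to support a frequent account lookup
    CREATE NONCLUSTERED INDEX IX_Transactions_AccountId_Date
        ON dbo.Transactions (AccountId, TransactionDate)
        INCLUDE (Amount, MerchantId);

    -- Check fragmentation before deciding to rebuild or reorganize
    SELECT ips.index_id, ips.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(
        DB_ID(), OBJECT_ID('dbo.Transactions'), NULL, NULL, 'LIMITED') AS ips;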

Perform data transformation tasks using SQL, Snowflake SQL, and relevant data processing tools.
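
By way of example, and with hypothetical names throughout, a small Snowflake SQL transformation of the sort that line describes: casting and cleansing staged rows, then upserting them into a modelled table.

    -- Upsert cleansed staging rows into a reporting table
    MERGE INTO reporting.daily_balances AS tgt
    USING (
        SELECT account_id,
               TRY_TO_DATE(balance_date, 'YYYY-MM-DD') AS balance_date,
               TRY_TO_NUMBER(balance_amount)           AS balance_amount
        FROM staging.customer_accounts
        WHERE balance_amount IS NOT NULL
    ) AS src
    ON tgt.account_id = src.account_id AND tgt.balance_date = src.balance_date
    WHEN MATCHED THEN UPDATE SET tgt.balance_amount = src.balance_amount
    WHEN NOT MATCHED THEN INSERT (account_id, balance_date, balance_amount)
        VALUES (src.account_id, src.balance_date, src.balance_amount);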

Required Skills and Qualifications:
  • Proven experience working with SQL Server (e.g., T-SQL, stored procedures, indexing, query optimization, system catalog views).
  • Strong experience in Snowflake architecture, including data loading, transformation, and performance tuning.
  • Proficient in ETL processes using tools such as Informatica PowerCenter and Informatica BDM, AutoSys, Airflow, and SQL Server Agent (see the SQL Server Agent sketch after this list).
  • Experience with cloud platforms, preferably AWS.
  • Strong knowledge of AWS cloud services, including EMR, RDS for PostgreSQL, Athena, S3, and IAM.
  • Solid understanding of data warehousing principles and best practices.
  • Strong proficiency in SQL for data manipulation, reporting, and optimization.
  • Knowledge of data modelling and schema design.
  • Experience working with large, complex datasets and implementing scalable data pipelines.
  • Familiarity with version control tools such as GitLab.
  • Experience with data integration, data governance, and security best practices.
  • TSYS experience and knowledge of credit card data.
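
As referenced in the ETL-tooling bullet above, a minimal T-SQL sketch of scheduling an ETL step with SQL Server Agent, one of the orchestration tools the posting lists. The job, step, schedule, and procedure names are hypothetical.

    USE msdb;
    GO
    -- Hypothetical nightly ETL job with a single T-SQL step
    EXEC dbo.sp_add_job @job_name = N'Nightly_Snowflake_Extract';
    EXEC dbo.sp_add_jobstep
        @job_name  = N'Nightly_Snowflake_Extract',
        @step_name = N'Run extract procedure',
        @subsystem = N'TSQL',
        @command   = N'EXEC etl.usp_extract_to_stage;';
    EXEC dbo.sp_add_schedule
        @schedule_name     = N'Nightly_0200',
        @freq_type         = 4,       -- daily
        @freq_interval     = 1,       -- every day
        @active_start_time = 20000;   -- 02:00:00 (HHMMSS)
    EXEC dbo.sp_attach_schedule
        @job_name      = N'Nightly_Snowflake_Extract',
        @schedule_name = N'Nightly_0200';
    EXEC dbo.sp_add_jobserver @job_name = N'Nightly_Snowflake_Extract';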