ERP/CRM Developer

ZipRecruiter

Carlisle

On-site

GBP 50,000 - 70,000

Full time

8 days ago

Job summary

A leading recruitment platform in the UK is seeking an experienced data integration specialist to architect ETL pipelines and analyze SAP data models. This role involves leveraging Azure cloud technologies and requires proficiency in SQL, Python, and PySpark. The ideal candidate will work closely with SAP teams to ensure seamless data integration and reporting capabilities.

Qualifications

  • Experience in architecting and implementing ETL pipelines.
  • Proficient in analyzing SAP internal data models.
  • Familiarity with Azure Data Lake Storage and Power BI.

Responsibilities

  • Design and implement robust ETL pipelines for SAP data.
  • Collaborate with technical teams to understand data models.
  • Build data transformations leveraging SQL, Python, and PySpark.

Skills

ETL pipeline development
SAP data interpretation
Cloud integration (Azure)
SQL
Python
PySpark

Tools

Azure Data Factory
SAP Datasphere
Microsoft cloud solutions
Alteryx

Job description

  • Architect and implement robust ETL pipelines to extract data from SAP ECC, SAP S/4HANA, SAP HANA, and SAP Datasphere using best-practice integration methods (e.g., ODP, CDS views, RFCs, BAPIs).
  • Analyze and interpret SAP’s internal data models (e.g., tables like BKPF, BSEG, MARA, EKPO) and work closely with SAP functional and technical teams.
  • Work with SAP Datasphere (formerly SAP Data Warehouse Cloud) to federate or replicate SAP data for consumption in the EDP (highly desired).
  • Review or interpret ABAP logic when necessary to understand legacy transformation rules and business logic (nice to have).
  • Lead the end-to-end data integration process from SAP ECC, ensuring deep alignment with the EDP’s design and downstream data usage needs.
  • Leverage knowledge of SAP HANA data warehousing and SAP BW to support historical reporting and semantic modeling.
  • Design and build robust, scalable ETL/ELT pipelines to ingest data into the Microsoft cloud using tools such as Azure Data Factory or Alteryx.
  • Automate data movement from SAP into Azure Data Lake Storage / OneLake, enabling clean handoffs for consumption by Power BI, data science models, and APIs.
  • Build data transformations in SQL, Python, and PySpark, leveraging distributed compute (e.g., Synapse or Spark pools); a short PySpark sketch follows this list.
  • Work closely with cloud architects to ensure integration patterns are secure, cost-effective, and meet performance SLAs.
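
To make the PySpark bullet concrete, here is a minimal sketch of the kind of transformation the role describes: joining the BKPF and BSEG accounting tables after an upstream pipeline has landed them in Azure Data Lake Storage, then aggregating them for Power BI. The storage account, container layout, and the assumption that raw extracts arrive as Parquet with native SAP column names are illustrative assumptions, not details from the posting.

```python
# Minimal, illustrative PySpark job (not from the posting): join SAP
# accounting headers (BKPF) with line items (BSEG) previously landed in
# ADLS Gen2 as Parquet, and publish a monthly posting summary.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("sap-posting-summary").getOrCreate()

# Assumed lake layout: one folder per extracted SAP table (hypothetical paths).
RAW = "abfss://raw@examplelake.dfs.core.windows.net/sap_ecc"
CURATED = "abfss://curated@examplelake.dfs.core.windows.net/sap_ecc"

bkpf = spark.read.parquet(f"{RAW}/BKPF")  # accounting document headers
bseg = spark.read.parquet(f"{RAW}/BSEG")  # accounting document line items

# SAP stores amounts unsigned; SHKZG ('S' = debit, 'H' = credit) carries
# the sign, so fold it into a signed local-currency amount.
signed = bseg.withColumn(
    "amount_lc",
    F.when(F.col("SHKZG") == "H", -F.col("DMBTR")).otherwise(F.col("DMBTR")),
)

# Company code + document number + fiscal year is the natural BKPF/BSEG key.
summary = (
    bkpf.join(signed, on=["BUKRS", "BELNR", "GJAHR"], how="inner")
    # Raw DATS fields typically arrive as yyyyMMdd strings (an assumption
    # about the extract format); normalize to a reporting month.
    .withColumn("posting_month",
                F.date_format(F.to_date("BUDAT", "yyyyMMdd"), "yyyy-MM"))
    .groupBy("BUKRS", "posting_month")
    .agg(
        F.sum("amount_lc").alias("net_amount_lc"),
        F.countDistinct("BELNR").alias("document_count"),
    )
)

# Curated layer for handoff to Power BI and downstream models.
(summary.write.mode("overwrite")
    .partitionBy("posting_month")
    .parquet(f"{CURATED}/posting_summary"))
```

The same job runs unchanged on a Synapse Spark pool, which is what the distributed-compute requirement points at.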
