Data Analyst

Recruitech

Durban

On-site

ZAR 40 000 - 85 000

Full time

2 days ago

Job summary

An innovative company is seeking an experienced data engineer to assemble complex datasets and develop scalable data pipelines. In this exciting role, you will build the infrastructure for data extraction, transformation, and loading, ensuring data quality and integrity throughout. You will also identify and implement internal process improvements to increase efficiency. If you are passionate about data and want to work in a dynamic environment, this is the perfect opportunity for you.

Qualifications

  • 5-7 years of experience in database management or data engineering.
  • Strong knowledge of SQL and relational databases.
  • Experience with cloud platforms such as AWS or Azure.

Responsibilities

  • Development and maintenance of scalable data pipelines and ETL processes.
  • Identification and implementation of internal process improvements.
  • Ensuring data quality and integrity across systems.

Skills

Python
SQL
Database Management
Data Engineering
Scala
NoSQL Technologies
Data Integration Patterns
BI Tools

Education

Bachelor's Degree in Computer Science
Bachelor's Degree in Engineering
Bachelor's Degree in Mathematics

Tools

AWS
GCP
Azure
Apache Spark
Docker
Kubernetes
Power BI
Snowflake

Job description

Responsibilities:

  1. Assembling large, complex datasets that meet both non-functional and functional business requirements.
  2. Designing, developing, monitoring, and maintaining scalable data pipelines and ETL processes.
  3. Building infrastructure for optimal extraction, transformation, and loading of data from various sources using integration and SQL technologies, often cloud-based.
  4. Identifying, designing, and implementing internal process improvements, including infrastructure redesign for scalability, optimizing data delivery, and automating manual processes.
  5. Building analytical tools to utilize data pipelines, providing actionable insights into key business metrics.
  6. Ensuring data quality, consistency, integrity, and security across systems.
  7. Driving continuous improvement of data engineering practices and tooling.
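The ETL duties listed above (extract from a source, transform and validate, load into a relational store) can be sketched minimally as follows. This is an illustrative example only, not part of the posting: it uses the standard-library sqlite3 module as a stand-in for the relational databases named later (MSSQL, PostgreSQL, MySQL), and all table and column names are hypothetical.

```python
# Minimal ETL sketch: extract raw rows, transform and validate them,
# then load the cleaned result into a target table.
# sqlite3 is used here as a stand-in for a production relational database;
# the raw_sales / sales_cents tables are illustrative, not from the posting.
import sqlite3


def extract(conn):
    # Pull raw records from a hypothetical source table.
    return conn.execute("SELECT id, amount FROM raw_sales").fetchall()


def transform(rows):
    # Normalise amounts to integer cents and drop invalid records
    # (a stand-in for the data-quality checks mentioned in duty 6).
    cleaned = []
    for row_id, amount in rows:
        if amount is None or amount < 0:
            continue  # reject bad data rather than loading it
        cleaned.append((row_id, int(round(amount * 100))))
    return cleaned


def load(conn, rows):
    # Idempotent load into the target table.
    conn.executemany(
        "INSERT OR REPLACE INTO sales_cents (id, amount_cents) VALUES (?, ?)",
        rows,
    )
    conn.commit()


if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE raw_sales (id INTEGER, amount REAL)")
    conn.execute("CREATE TABLE sales_cents (id INTEGER, amount_cents INTEGER)")
    conn.executemany(
        "INSERT INTO raw_sales VALUES (?, ?)",
        [(1, 19.99), (2, -5.0), (3, 42.5)],
    )
    load(conn, transform(extract(conn)))
    print(conn.execute("SELECT * FROM sales_cents ORDER BY id").fetchall())
```

In a real pipeline these three steps would typically run under an orchestrator and target the cloud warehouses mentioned below (Redshift, BigQuery, Snowflake); the structure of extract/transform/load stays the same.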

Required Skills and Experience:

  1. Bachelor's Degree in Computer Science, Engineering, Mathematics, or related field.
  2. 5-7 years of experience in database management, data engineering, or similar roles.
  3. Proficiency in programming languages such as Python or Scala.
  4. Strong proficiency in SQL and experience with relational databases (e.g., MSSQL, PostgreSQL, MySQL).
  5. Hands-on experience with NoSQL database technologies.
  6. Experience in database optimization and performance tuning.
  7. Good understanding of data integration patterns.
  8. Exposure to BI tools such as Power BI or Yellowfin is advantageous.
  9. Experience setting up MS SQL replication and data archiving strategies.
  10. Experience with cloud platforms (AWS, GCP, Azure) and services like S3, Lambda, Redshift, BigQuery, or Snowflake.
  11. Familiarity with big data technologies like Apache Spark, Databricks, and Hive.
  12. Knowledge of data modeling, warehousing concepts, and data governance.
  13. Exposure to data cleansing and de-duplication techniques is beneficial.

Advantageous:

  1. Experience with stream processing tools (Kafka, Spark Streaming, Flink).
  2. Knowledge of containerization (Docker) and orchestration tools (Kubernetes).
  3. Understanding of CI/CD principles and infrastructure-as-code.
  4. Exposure to machine learning workflows and MLOps.