A law firm based in Sandton is seeking a Data Engineer to join their Data & Analytics team.
Role Summary/Objective
The Data Engineer is responsible for designing, maintaining, and optimizing the firm’s data infrastructure, ensuring that the data pipeline, from collection through storage to processing, functions efficiently so the data team can extract actionable insights. The Data Engineer will work closely with the Head of Data & Analytics and other technical teams to contribute to the design of the firm’s data architecture and long-term data strategy, and to ensure seamless data integration and the secure management of firm-wide data.
Key Responsibilities (include but are not limited to)
Core Engineering Duties
- Build, maintain, and optimize data pipelines for efficient data collection, transformation, and integration.
- Design and manage scalable databases, improving accessibility and performance.
- Ensure compliance with data governance policies and privacy regulations (e.g., GDPR, POPIA).
- Support data analytics teams with infrastructure and troubleshooting.
- Automate workflows using AI/ML techniques for anomaly detection and routine task prediction.
- Implement testing frameworks and perform data quality assessments.
Collaboration & Communication
- Work with cross-functional teams including developers, analysts, and business stakeholders to understand data needs and deliver integrated solutions.
- Document workflows and system configurations to support knowledge sharing and consistency.
Continuous Improvement & Innovation
- Stay informed on emerging technologies and suggest improvements to enhance data engineering practices.
- Contribute to the development and enforcement of data engineering standards for code quality, documentation, and security.
Growth & Team Contribution
- Participate in code reviews and knowledge-sharing sessions to support team development.
- Take ownership of assigned projects and contribute to planning and execution alongside senior team members.
Qualifications and Skills
- 5+ years of experience in data engineering or SQL-based development.
- Degree in Computer Science, Information Systems, or related field.
- Proficiency in SQL and NoSQL databases, and in cloud platforms (AWS, Azure).
- Experience with data pipeline frameworks (e.g., Apache Kafka, Airflow).
- Familiarity with Azure Data Factory, Synapse, or similar.
- Python scripting for ETL or data manipulation advantageous.
- Strong problem-solving skills and attention to detail.
- Ability to work collaboratively with data analysts and business teams.
- Experience in professional services or finance environments preferred.
- Exposure to Aderant or iManage advantageous.