As a Data Engineer, you will design, build and operate scalable data solutions across on-premises and cloud (AWS) platforms, with a strong emphasis on high-performance data processing, data modelling, Oracle database optimisation and the delivery of robust ETL/ELT pipelines. You will work closely with data analytics, BI, data science, and business stakeholders to ensure data availability, integrity, and performance.
- Working Hours: Mon-Fri
- Working Location: Central
- Salary Package: Up to $7,000 (basic) + AWS (Annual Wage Supplement) + VB (Variable Bonus)
Key Responsibilities:
- Architect, build and maintain end-to-end data pipelines (ingest, transform, load) to feed analytics, reporting and model development.
- Design and optimise data warehouses / data marts / data lakes supporting business intelligence, analytics and operational reporting.
- Lead database performance tuning efforts, particularly on Oracle (e.g., query optimisation, indexing, partitioning, statistics, execution plan analysis), to ensure high availability and optimal throughput.
- Manage cloud-based data services on AWS (e.g., Redshift, RDS, Athena, S3, Glue) and integrate them with on-premises systems as needed.
- Collaborate with stakeholders (data scientists, analysts, business users, product teams) to translate business requirements into robust data solutions.
- Implement data quality, data governance and metadata management frameworks to ensure reliability and trust in the data.
- Monitor, troubleshoot and optimise data flows, storage, and compute resources for cost, performance and scalability.
- Mentor and guide junior data engineers, review code, enforce best practices, and contribute to the continuous improvement of the data engineering function.
Required Skills & Experience:
- Bachelor’s or Master’s degree in Computer Science, Information Systems, Engineering or a related discipline.
- 5+ years’ experience in data engineering or related roles, working with large-scale data systems.
- Strong expertise in Oracle database administration and optimisation: SQL performance tuning, indexing strategy, partitioning, execution plans, capacity planning.
- Proven experience with AWS data services (e.g., AWS RDS for Oracle, Redshift, S3, Glue, Athena).
- Solid programming skills in SQL and at least one other language (e.g., Python, Java).
- Experience designing and implementing ETL/ELT workflows, and data modelling for both OLAP and OLTP.
- Good understanding of data warehousing concepts: star/snowflake schemas, slowly changing dimensions (SCDs), fact/dimension modelling.
- Experience with cloud and hybrid architectures, data lakes, metadata management and data governance.
- Strong analytical, problem-solving and troubleshooting skills, especially under performance constraints.
- Excellent stakeholder management, communication skills, and ability to work cross‑functionally in a collaborative environment.
- Prior experience mentoring or guiding junior staff is desirable.
Nice to Have:
- AWS certification (e.g., AWS Certified Data Analytics – Specialty or AWS Certified Database – Specialty).
- Experience with big-data technologies (e.g., Spark, Kafka, Hadoop) or real‑time streaming architectures.
- Exposure to cost‑optimisation of cloud data services.
- Familiarity with on‑premises to cloud migration projects (Oracle to AWS).
What We Offer:
- Competitive salary and bonus structure.
- Flexible/hybrid work arrangements.
- Opportunity to work on strategic, high-impact data platforms across the organisation.
- Professional development support, including certifications and conferences.
- A collaborative team environment with a culture of continuous improvement.
By submitting your resume, you consent to the collection, use, and disclosure of your personal information per ScienTec’s Privacy Policy (scientecconsulting.com/privacy-policy).
This authorises us to:
- Contact you about potential job opportunities.
- Delete personal data that is not required at this application stage.
All applications will be processed with strict confidence. Only shortlisted candidates will be contacted.