Software Engineer - Data Ingestion and Pipelines

Intelmatix

Riyadh

On-site

SAR 40,000 - 80,000

Full-time

8 days ago

Job summary

An innovative firm is on the lookout for a talented Software Engineer specializing in Data Ingestion and Pipelines. In this pivotal role, you will design and optimize data pipelines to ensure high-quality data integration from diverse sources. Collaborating with data scientists and analysts, you'll contribute to building scalable data infrastructure that supports business intelligence and analytics. This position offers an exciting opportunity to work with cutting-edge technologies and make a significant impact on data-driven decision-making. If you are passionate about data and eager to drive improvements in data processing workflows, this role is perfect for you.

Qualifications

  • Minimum of 1 year of experience in software development.
  • Proficiency in programming languages like Python, Java, or Scala.

Responsibilities

  • Design and maintain robust data pipelines and ETL processes.
  • Implement data quality checks to ensure accuracy and reliability.

Skills

Python
Java
Scala
SQL
Data Modeling
Analytical Skills
Communication
Team Player

Education

Bachelor's or Master's degree in Computer Science
Relevant coursework in data engineering

Tools

AWS
Azure
Google Cloud
PostgreSQL
MySQL
Docker
Kubernetes
Apache Airflow
Terraform

Job description

Software Engineer - Data Ingestion and Pipelines

The Data Ingestion and Pipelines (DIP) team is seeking a highly skilled Software Engineer. The ideal candidate will have a strong background in software development with a focus on building and optimizing data pipelines, ensuring data quality, and integrating data from various sources. As a Software Engineer, you will play a key role in designing, developing, and maintaining scalable data infrastructure that supports our business intelligence and analytics efforts.

Key Responsibilities:

  1. Data Pipeline Development: Design, develop, and maintain robust data pipelines and ETL processes to ingest, transform, and load data from diverse sources into our data warehouse.
  2. Data Quality and Governance: Implement and monitor data quality checks, ensuring accuracy, consistency, and reliability of data.
  3. Optimization: Optimize data processing workflows for performance, scalability, and cost-efficiency.
  4. System Monitoring and Maintenance: Monitor and maintain data systems, responding to incidents (SEVs) and other urgent issues to ensure continuous operation.
  5. Collaboration: Work closely with data scientists, analysts, and other engineering teams to understand data requirements and deliver solutions that meet their needs.
  6. Documentation: Maintain comprehensive documentation for data pipelines, systems architecture, and processes.

Qualifications:

  • Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. Relevant coursework or projects in data engineering are a plus.
  • Experience: Minimum of 1 year of experience in software development.
  • Technical Skills:
    • Proficiency in programming languages such as Python, Java, or Scala.
    • Knowledge of data modeling and schema design.
    • Familiarity with SQL and relational databases (e.g., PostgreSQL, MySQL).
    • Familiarity with at least one cloud platform (e.g., AWS, Azure, Google Cloud) and its data services.
  • Analytical Skills: Strong problem-solving skills with a keen eye for detail and a passion for data.
  • Communication: Excellent written and verbal communication skills, with the ability to articulate complex technical concepts to non-technical stakeholders.
  • Team Player: Ability to work effectively in a collaborative team environment, as well as independently.

Preferred Qualifications:

  • Familiarity with big data technologies (e.g., Hadoop, Spark, Kafka).
  • Familiarity with AWS and its data services (e.g., S3, Athena, AWS Glue).
  • Familiarity with data warehousing solutions (e.g., Redshift, BigQuery, Snowflake).
  • Knowledge of containerization and orchestration tools (e.g., Docker, ECS, Kubernetes).
  • Familiarity with data orchestration tools (e.g., Prefect, Apache Airflow).
  • Familiarity with CI/CD pipelines and DevOps practices.
  • Familiarity with infrastructure-as-code tools (e.g., Terraform, AWS CDK).

Company Industry: IT - Software Services

Department / Functional Area: IT Software

Keywords: Software Engineer - Data Ingestion And Pipelines
