Software Engineer - Data Ingestion and Pipelines
The Data Ingestion and Pipelines (DIP) team is seeking a highly skilled Software Engineer. The ideal candidate will have a strong background in software development with a focus on building and optimizing data pipelines, ensuring data quality, and integrating data from various sources. As a Software Engineer, you will play a key role in designing, developing, and maintaining scalable data infrastructure that supports our business intelligence and analytics efforts.
Key Responsibilities:
- Data Pipeline Development: Design, develop, and maintain robust data pipelines and ETL processes to ingest, transform, and load data from diverse sources into our data warehouse.
- Data Quality and Governance: Implement and monitor data quality checks, ensuring accuracy, consistency, and reliability of data.
- Optimization: Optimize data processing workflows for performance, scalability, and cost-efficiency.
- System Monitoring and Maintenance: Monitor and maintain data systems, responding to site events (SEVs) and other urgent issues to ensure continuous operation.
- Collaboration: Work closely with data scientists, analysts, and other engineering teams to understand data requirements and deliver solutions that meet their needs.
- Documentation: Maintain comprehensive documentation for data pipelines, systems architecture, and processes.
Qualifications:
- Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. Relevant coursework or projects in data engineering are a plus.
- Experience: Minimum of 1 year of experience in software development.
- Technical Skills:
  - Proficiency in a programming language such as Python, Java, or Scala.
  - Knowledge of data modeling and schema design.
  - Strong SQL skills and familiarity with relational databases (e.g., PostgreSQL, MySQL).
  - Familiarity with at least one cloud platform (e.g., AWS, Azure, Google Cloud) and its data services.
- Analytical Skills: Strong problem-solving skills with a keen eye for detail and a passion for data.
- Communication: Excellent written and verbal communication skills, with the ability to articulate complex technical concepts to non-technical stakeholders.
- Team Player: Ability to work effectively in a collaborative team environment, as well as independently.
Preferred Qualifications:
- Familiarity with big data technologies (e.g., Hadoop, Spark, Kafka).
- Familiarity with AWS and its data services (e.g., S3, Athena, AWS Glue).
- Familiarity with data warehousing solutions (e.g., Redshift, BigQuery, Snowflake).
- Knowledge of containerization and orchestration tools (e.g., Docker, ECS, Kubernetes).
- Familiarity with data orchestration tools (e.g., Prefect, Apache Airflow).
- Familiarity with CI/CD pipelines and DevOps practices.
- Familiarity with infrastructure-as-code (IaC) tools (e.g., Terraform, AWS CDK).
Company Industry: IT - Software Services
Department / Functional Area: IT Software