Staff/Senior Data Consultant - Snowflake/ETL/AWS/Python/SSIS/SQL

10Pearls

Islamabad

On-site

PKR 40,000 - 75,000

Full time

Job summary

An established industry player is seeking a skilled data engineer with expertise in Snowflake and AWS technologies. This role involves developing and optimizing data pipelines, ensuring data quality, and collaborating with cross-functional teams to support data-driven decision-making. The ideal candidate will have a strong programming background, experience with ETL processes, and a passion for leveraging data to improve business outcomes. Join a dynamic team that values innovation and excellence in data management, and make a significant impact in a fast-paced environment.

Qualifications

  • 5-8 years of programming experience in data technologies.
  • Proficiency in Snowflake, AWS, and ETL pipeline development.
  • Relevant certifications in Snowflake or AWS are desirable.

Responsibilities

  • Develop and maintain optimal data pipeline architecture.
  • Automate tasks and enhance processes using data.
  • Collaborate with teams to address data infrastructure needs.

Skills

Snowflake
AWS
ETL pipelines
SQL
Python
Hadoop
Spark
Java
Data management
Communication skills

Education

Bachelor's degree in Computer Science

Tools

AWS EMR
Apache Hudi
Informatica
Apache Iceberg
Delta Lake
Kafka
Cloudera
Databricks

Job description

Company Overview

10Pearls is an end-to-end digital technology services partner that helps businesses use technology as a competitive advantage. We assist our clients in digitalizing their existing businesses, building innovative new products, and augmenting their teams with high-performing members. Our expertise spans product management, user experience/design, cloud architecture, software development, data insights and intelligence, cyber security, emerging tech, and quality assurance, ensuring solutions that meet business needs. We serve a diverse clientele including large enterprises, SMBs, and high-growth startups across industries such as healthcare, education, energy, communications, financial services, and hi-tech. Our long-term partnerships are built on trust, integrity, and successful delivery.

Requirements

The ideal candidate should have a Bachelor’s degree in Computer Science and 5-8 years of programming experience in Snowflake, AWS, Glue, ETL pipelines, Athena, Informatica, Apache Hudi, SSIS, SQL, Python, and related technologies such as Apache Iceberg and Delta Lake.

Responsibilities
  1. Develop, construct, test, and maintain optimal data pipeline architecture.
  2. Assemble large, complex data sets meeting business requirements.
  3. Improve data reliability, efficiency, and quality.
  4. Prepare data for predictive and prescriptive modeling.
  5. Automate tasks using data to enhance processes.
  6. Build, deploy, and operate ETL pipelines; ensure data quality; and set up monitoring, alerting, CI/CD, governance, and access control.
  7. Design and implement scalable data pipelines for ingestion, transformation, and storage using Snowflake and other relevant tech.
  8. Stay current with industry best practices and emerging technologies in data architecture and modeling.
  9. Identify and implement process improvements, automate manual tasks, optimize data delivery, and redesign infrastructure for scalability.
  10. Monitor process performance and recommend improvements.
  11. Collaborate with stakeholders including executive, product, data, and design teams to address data infrastructure needs.
  12. Create data tools to support analytics and product innovation.
  13. Work with data and analytics experts to enhance system functionality.
  14. Demonstrate proficiency in data management and automation in Spark, Hadoop, and HDFS environments.
  15. Possess knowledge of DS/ML, analytics, or data warehousing, including SSIS and Informatica.

Required Skills
  1. Excellent communication skills.
  2. Experience with Snowflake, Hadoop, Hive, Spark, Airflow, Livy, Scala, Java.
  3. Experience with AWS EMR, S3, HBase, Athena, PySpark, and related technologies.
  4. Experience with streaming technologies like Kafka.
  5. Proficiency in programming languages such as Scala, Java, SQL, Python, R.
  6. Experience managing data in relational databases and developing ETL pipelines.
  7. Exposure to enterprise platforms and tools such as Cloudera, Databricks, AWS, SSIS, and SQL.
  8. Knowledge of AWS data services such as EC2, Kinesis, Lambda, and DynamoDB is a plus.
  9. Relevant certifications in Snowflake, AWS, or data architecture are highly desirable.