# Remote Senior Data Engineer: Snowflake, Azure, SaaS, Python

Design and run cost-efficient Azure and Snowflake pipelines, automate quality, and drive real-time analytics worldwide.

Full-Time · Remote

## MoldStud’s Commitment To Growth And Excellence

At MoldStud, we're dedicated to nurturing talent and fostering a culture where innovation thrives. As a Remote Senior Data Engineer (Snowflake, Azure, SaaS, Python), you'll find yourself at the heart of a collaborative team that values your growth, supports your professional development, and encourages you to explore new horizons in technology. We believe in pushing the boundaries of digital solutions, and we want passionate individuals like you to join us on this exciting journey.

## What You’ll Be Doing As A Remote Senior Data Engineer At MoldStud

Join a forward-thinking technology group dedicated to building production-grade data solutions that scale globally. We're looking for an experienced engineer who can architect, build, and maintain end-to-end data pipelines that serve real-time and batch analytics workloads on cloud platforms.

## Job Overview

This full-time remote role focuses on:

* Designing scalable data pipelines that ingest, transform, and load data into analytical warehouses.
* Automating data quality, monitoring, and alerting to guarantee pipeline reliability.
* Optimizing cloud infrastructure to balance cost, performance, and resilience.

## Key Responsibilities

* Architect and implement serverless data pipelines using Azure Functions, Data Factory, Service Bus, and other Azure services to feed Snowflake data models (see the sketch after this list).
* Build ELT/ETL workflows that blend internal and external sources at scale, supporting both near real-time streaming and scheduled batch ingestion.
* Develop and maintain CI/CD pipelines for data artifacts, enforce version control, and enable rapid feature rollout.
* Define and enforce automated data quality checks, lineage tracking, and observability dashboards.
* Conduct performance tuning of Snowflake queries and transformation logic to reduce compute costs and accelerate load times.
* Collaborate closely with product, analytics, and DevOps teams to translate business requirements into secure, highly available data solutions.
* Explore and integrate third-party data sources, APIs, and streaming platforms (e.g., Kafka, Spark) to enrich data environments.
* Adapt to a fast-paced, ever-evolving environment, leveraging self-motivation and strong problem-solving skills.
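To make the serverless pattern above concrete, here is a minimal sketch (not MoldStud's actual pipeline) of an Azure Function triggered by a Service Bus message that lands a raw event in Snowflake. The table, warehouse, database, and environment-variable names are hypothetical placeholders.

```python
import json
import os

import azure.functions as func    # Azure Functions Python worker
import snowflake.connector        # snowflake-connector-python


def main(msg: func.ServiceBusMessage) -> None:
    """Service Bus-triggered ingestion: one message -> one row in Snowflake."""
    event = json.loads(msg.get_body().decode("utf-8"))

    # Credentials come from app settings / Key Vault references, never code.
    conn = snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        warehouse="INGEST_WH",    # hypothetical warehouse name
        database="ANALYTICS",     # hypothetical database/schema
        schema="RAW",
    )
    try:
        conn.cursor().execute(
            # INSERT ... SELECT lets PARSE_JSON populate a VARIANT column,
            # keeping the raw payload available for downstream dbt models.
            "INSERT INTO EVENTS_RAW (EVENT_ID, PAYLOAD, LOADED_AT) "
            "SELECT %s, PARSE_JSON(%s), CURRENT_TIMESTAMP()",
            (event["id"], json.dumps(event)),
        )
    finally:
        conn.close()
```

In this shape, Service Bus absorbs bursts while the function scales out per message, and batch backfills can reuse the same table through Data Factory.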
## Required Qualifications

* Bachelor's degree in Computer Science, Engineering, or a related field; Master's preferred.
* 5+ years of experience designing, deploying, and operating production data pipelines.
* Proficiency in Python (or a similar language) for building reusable ETL components.
* Deep expertise with Azure data services (Functions, Data Factory, Service Bus, Storage) and Snowflake.
* Solid SQL skills and a proven ability to optimize transformations and query performance.
* Hands-on experience with CI/CD, infrastructure-as-code, and containerization around data platforms.
* Familiarity with data observability tools, dbt, and BI platforms such as Power BI or Grafana.
* Experience with streaming ingestion patterns (Kafka, Spark, Databricks) and batch processing.
* Strong analytical mindset with a track record of driving business insights from complex datasets.
* Excellent communication and collaboration skills.

## Desired Skills

* Knowledge of multi-tenant SaaS architecture and cloud-native design patterns.
* Experience setting up data pipelines for self-service analytics teams.
* Passion for continuous learning and staying ahead of emerging data engineering technologies.
* Background in incident response and designing for high availability.

## Benefits & Perks

* Competitive benefits package including health, dental, vision, and life insurance.
* Generous paid time off and flexible holiday scheduling.
* Remote work flexibility with optional office space available.
* Professional development stipend and access to learning resources.
* Collaborative culture that rewards innovation, creativity, and ownership.

## Our Values

* **Innovation:** Constantly improving processes and tooling to stay ahead of industry standards.
* **Excellence:** Delivering high-quality, reliable data solutions that empower decision-makers.
* **Collaboration:** Working cross-functionally to unify data initiatives across teams.
* **Integrity:** Building solutions that prioritize security, compliance, and ethical data use.
* **Growth:** Providing ample opportunity for personal and professional development.

## We Need You To Have Some Hard Skills

* Design scalable pipelines to ingest, transform, and load data into warehouses.
* Automate data quality checks, monitoring, and alerting for pipeline reliability (see the sketch after this list).
* Optimize Snowflake queries and transforms to reduce compute costs and speed up load times.
* Implement serverless data pipelines using Azure Functions, Data Factory, Service Bus, and Snowflake.
* Develop CI/CD pipelines, enforce version control, and enable rapid feature rollouts.
* Write reusable ETL components in Python, maintaining code quality and documentation.
* Use dbt for transformation modeling and Power BI/Grafana for observability dashboards.
* Integrate streaming sources via Kafka, Spark, or Databricks for near real-time ingestion.
* Design high-availability data architectures, balancing cost, performance, and resilience.
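As one hedged illustration of the automated data quality skill above (not a prescribed tool), the sketch below runs two post-load assertions against a Snowflake table. The warehouse name, thresholds, and the caller's alerting hook are assumptions for the example.

```python
import os

import snowflake.connector  # snowflake-connector-python


def check_freshness_and_nulls(table: str, ts_col: str, key_col: str) -> list[str]:
    """Return human-readable failures for one table (empty list means healthy).

    Identifiers are interpolated, so they must come from trusted pipeline
    config, never from user input.
    """
    conn = snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        warehouse="QA_WH",    # hypothetical warehouse name
        database="ANALYTICS",
        schema="RAW",
    )
    failures: list[str] = []
    try:
        cur = conn.cursor()

        # 1. Freshness: newest row must be under 2 hours old (NULL = empty table).
        cur.execute(
            f"SELECT TIMEDIFF(hour, MAX({ts_col}), CURRENT_TIMESTAMP()) FROM {table}"
        )
        lag_hours = cur.fetchone()[0]
        if lag_hours is None or lag_hours >= 2:
            failures.append(f"{table}: stale, last load {lag_hours} h ago")

        # 2. Completeness: the business key must never be NULL.
        cur.execute(f"SELECT COUNT(*) FROM {table} WHERE {key_col} IS NULL")
        null_rows = cur.fetchone()[0]
        if null_rows:
            failures.append(f"{table}: {null_rows} rows with NULL {key_col}")
    finally:
        conn.close()
    return failures  # caller alerts on a non-empty list (e.g., via a Grafana panel)
```

A check like this would typically run as the last step of each load, with failures surfaced to the observability dashboards mentioned above.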
## We Need You To Have Some Soft Skills

* Communicates complex data concepts clearly to non-technical stakeholders.
* Collaborates closely with product, analytics, and DevOps teams to align goals.
* Leads initiatives to improve pipeline reliability and scalability.
* Adapts quickly to evolving requirements and balances speed with quality.
* Demonstrates ownership, driving solutions from design through production.
* Works transparently, ensuring data quality and security compliance.
* Fosters a culture of continuous learning and knowledge sharing.

## When, With Whom, And How You’ll Be Working

* Primary working timezone: UTC+03:00
* Primary working hours: 09:00-18:00 (UTC+03:00)
* Primary project management methodology: Agile
* Project team size: 75

## What The Recruitment Process Looks Like

1. **Introductory call (30 minutes):** A conversation with our HR Manager or Hiring Manager to discuss your background, experience, and career goals. We'll also provide an overview of the role and company culture, and answer any initial questions you may have.
2. **English assessment (30 minutes):** A conversation with our HR Manager to evaluate your English communication skills. This ensures you're comfortable collaborating with our international team and can effectively participate in meetings, documentation, and daily communications.
3. **Technical assessment (60 minutes):** A session with the Hiring Manager or Tech Lead to evaluate your hands-on coding abilities, problem-solving approach, and technical knowledge relevant to the role through practical questions, code reviews, or live coding exercises.
4. **CTO interview (60 minutes):** An in-depth technical discussion with our CTO to assess your technical expertise, architectural thinking, and alignment with our technology stack and engineering standards. This conversation will also explore how you approach complex technical challenges and your potential contribution to our technical direction.

## Why You Should Apply

Joining MoldStud as a Remote Senior Data Engineer (Snowflake, Azure, SaaS, Python) means becoming part of a vibrant team that values innovation and problem-solving. Here, your expertise will contribute to reliable, high-quality data solutions that empower decision-makers.