Databricks and Azure Data Factory Developer

Eton Technologies

United States

Remote

USD 90,000 - 120,000

Full time

Job summary

A leading company is seeking a Databricks developer to enhance ETL processes and clean up notebooks. This role involves designing data ingestion pipelines, ensuring data quality, and collaborating with project teams. The ideal candidate will have strong experience with big data and Azure technologies, including Databricks and Apache Spark. This is a flexible work-from-home opportunity, available for both full-time and part-time roles.

Qualifications

  • 2+ years of experience preferred.
  • Experience with big data and Databricks required.

Responsibilities

  • Design and implement high-performance data ingestion pipelines.
  • Develop scalable frameworks for data ingestion.
  • Collaborate on APIs and search functionalities.

Skills

Data Management
ETL
Data Warehouse Transformation
Agile Delivery

Education

Microsoft Azure Big Data Architecture certification

Tools

Azure Data Factory
Apache Spark
Azure SQL Data Warehouse
Azure Data Lake
Azure Cosmos DB
Azure Stream Analytics

Job description

We are looking for a Databricks developer to assist in developing ETL processes and cleaning up notebooks. The project involves resolving inconsistencies in notebooks and report design. Solid experience with big data and Databricks, including review and cleanup of notebook design and code, is required.

Experience: 2+ years preferred. This is a work-from-home opportunity, available for both full-time and part-time roles.

Responsibilities:
  1. Design and implement high-performance data ingestion pipelines from multiple sources using Apache Spark and/or Azure Databricks.
  2. Present proofs of concept of key technology components to stakeholders.
  3. Develop scalable, reusable frameworks for data ingestion.
  4. Ensure data quality and consistency across end-to-end data pipelines from source to target repositories.
  5. Work with event-based/streaming technologies for data ingestion and processing.
  6. Collaborate with the project team to support additional components like APIs and search functionalities.
  7. Support Azure CI/CD pipelines and maintain source control with Azure DevOps.
  8. Evaluate tools against customer requirements.
  9. Participate in Agile delivery and DevOps practices for iterative proof of concept and production deployment.

Knowledge and Skills:
  • Strong understanding of Data Management principles.
  • Experience in building ETL and data warehouse transformation processes.
  • Hands-on experience with Azure Data Factory and Apache Spark (preferably Databricks).
  • Experience with geospatial frameworks on Apache Spark.
  • Microsoft Azure Big Data Architecture certification.
  • Experience designing solutions using Azure Data Analytics platform including Azure Storage, Azure SQL Data Warehouse, Azure Data Lake, Azure Cosmos DB, and Azure Stream Analytics.

Minimum Requirements:
  • Experience with big data, Databricks, and notebook review and cleanup.
  • Proven experience in ETL/data warehouse processes.
  • Microsoft Azure Big Data Architecture certification.
  • Knowledge of Azure Storage, Azure SQL Data Warehouse, Azure Data Lake, Azure Cosmos DB, Azure Stream Analytics.