Project Role
Data Platform Architect
Project Role Description
Architects the data platform blueprint and implements the design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must Have Skills
- Databricks Unified Data Analytics Platform
Good to Have Skills
NA
Minimum Experience
5 years of experience in Databricks Unified Data Analytics Platform.
Educational Qualification
15 years of full-time education.
Summary
We are looking for a highly skilled Senior Databricks Developer with extensive experience in building and managing modern data platforms on Azure using Lakehouse architecture. The ideal candidate will have strong hands‑on experience in PySpark, SQL, and Azure Data Services, and a proven track record in developing scalable and efficient data pipelines.
Roles & Responsibilities
- Design, build, and optimize scalable data pipelines using Databricks, PySpark, and SQL on Azure.
- Implement Lakehouse architecture for structured data ingestion, processing, and storage.
- Build and manage Delta Lake tables and perform schema evolution and versioning.
- Work with Azure Data Lake Storage (ADLS), Azure Data Factory (ADF), and Azure Synapse for data integration and transformation.
- Collaborate with data architects, analysts, and business teams to understand requirements and design efficient data solutions.
- Optimize performance of large-scale data pipelines and troubleshoot data quality or latency issues.
- Contribute to best practices around coding, testing, and data engineering workflows.
- Document technical solutions, maintain code repositories, and keep data platform architecture documentation up to date.
Professional & Technical Skills
- Proficiency in the Databricks Unified Data Analytics Platform, including notebooks, jobs, and Delta Lake.
- Strong hands-on experience with PySpark and advanced SQL for large-scale data processing.
- Solid experience with Azure cloud services such as ADLS Gen2, ADF, Azure Synapse, and Key Vault.
- Knowledge of Lakehouse architecture concepts, implementation, and governance.
- Experience with version-control tools such as Git, and with CI/CD pipelines.
- Excellent problem‑solving and debugging skills.
- Strong communication and collaboration abilities across cross‑functional teams.
- Strong understanding of data integration techniques and best practices.
- Experience with cloud‑based data solutions and architectures.
- Familiarity with data governance frameworks and compliance requirements.
- Ability to design scalable and efficient data pipelines.
Good to Have
- Experience in data quality checks and validation frameworks.
- Exposure to DevOps and Infrastructure as Code (IaC) in Azure environments.
- Familiarity with data governance tools such as Unity Catalog or Azure Purview.
- Knowledge of Delta Live Tables (DLT) is a plus.
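As a concrete illustration of the data quality checks mentioned above, here is a minimal row-level validation sketch in plain Python. Field names and rules are hypothetical; in practice these checks would typically live in a framework such as Delta Live Tables expectations or Great Expectations.

```python
def validate_order(row: dict) -> list:
    """Return a list of rule violations for a single record."""
    errors = []
    # Rule 1: the primary key must be present.
    if row.get("order_id") is None:
        errors.append("order_id is required")
    # Rule 2: amount must be a non-negative number.
    amount = row.get("amount")
    if not isinstance(amount, (int, float)) or amount < 0:
        errors.append("amount must be a non-negative number")
    return errors

records = [
    {"order_id": 1, "amount": 10.5},    # clean record
    {"order_id": None, "amount": -2},   # violates both rules
]
# Split records into clean rows and quarantined rows with their violations.
clean = [r for r in records if not validate_order(r)]
quarantined = [(r, validate_order(r)) for r in records if validate_order(r)]
```

Separating clean rows from quarantined ones mirrors the common pattern of writing failing records to a side table for later inspection rather than dropping them silently.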
Additional Information
- The candidate should have a minimum of 5 years of experience in the Databricks Unified Data Analytics Platform.
- This position is based at our Hyderabad office.
- 15 years of full-time education is required.