As a Sr. Specialist Solutions Architect (Sr. SSA), you will guide customers in building big data solutions on Databricks that span a wide variety of use cases. This is a customer-facing role: you will work with and support Solution Architects, which requires hands-on production experience with Apache Spark™ and expertise in other data technologies. SSAs help customers through the design and successful implementation of essential workloads while aligning their technical roadmap for expanding usage of the Databricks Data Intelligence Platform. As a deep go-to expert reporting to the Sr. Manager, Field Engineering (Specialists), you will continue to strengthen your technical skills through mentorship, learning, and internal training programs, and you will establish yourself in an area of specialty - whether that be performance tuning, machine learning, industry expertise, or another domain.
The impact you will have:
- Provide technical leadership that guides strategic customers to successful implementations of big data projects, ranging from architectural design to data engineering to model deployment
- Architect production-level workloads, including end-to-end pipeline load and performance testing and optimisation
- Provide technical expertise in an area such as data management, cloud platforms, data science, machine learning, or architecture
- Assist Solution Architects with more advanced aspects of the technical sale, including custom proof-of-concept content, workload sizing estimates, and custom architectures
- Improve community adoption (through tutorials, training, hackathons, and conference presentations)
- Contribute to the Databricks Community
What we look for:
- You will have experience in a customer-facing technical role with expertise in at least one of the following:
- Software Engineer/Data Engineer: query tuning, performance tuning, troubleshooting, and debugging Spark or other big data solutions.
- Data Scientist/ML Engineer: model selection, model lifecycle, hyperparameter tuning, model serving, deep learning.
- Data Applications Engineer: building data-driven use cases such as risk modelling, fraud detection, and customer lifetime value.
- Experience with design and implementation of big data technologies such as Spark/Delta, Hadoop, NoSQL, MPP, OLTP, and OLAP.
- Experience maintaining and extending production data systems to evolve with complex needs.
- Production programming experience in Python, R, Scala, or Java
- Deep specialty expertise in at least one of the following areas:
- Experience scaling big data workloads so that they are performant and cost-effective.
- Experience with development tools for CI/CD, unit and integration testing, automation and orchestration, REST APIs, BI tools, and SQL interfaces.
- Experience designing data solutions on cloud infrastructure and services, such as AWS, Azure, or GCP, using best practices in cloud security and networking.
- Experience with ML concepts covering model tracking, model serving, and other aspects of productionizing ML pipelines in distributed data processing environments like Apache Spark, using tools such as MLflow.
- Degree in a quantitative discipline (e.g., Computer Science, Applied Mathematics, Operations Research)