General Purpose
- We are looking for a highly skilled Machine Learning Engineer to join our team and contribute to the development of AI-powered solutions. The candidate should have expertise in Python programming, REST API development, cloud infrastructure, and AI model integration. This role requires strong problem-solving skills, a deep understanding of AI technologies, and the ability to optimize and deploy AI applications efficiently. The ideal candidate will have hands-on experience in building, deploying, and maintaining machine learning models and GenAI solutions using modern cloud platforms and MLOps practices. You will work closely with cross-functional teams to design scalable AI pipelines and integrate LLM-based solutions into production environments. You will play a critical role in building scalable, resilient, and secure data platforms that power analytics, AI, and data-driven innovation at Wynn.
Nature & Scope
Essential Duties & Tasks
- Design, implement, and maintain end-to-end ML pipelines using Databricks notebooks, Delta Lake, and MLflow for experiment tracking, model versioning, and lifecycle management (a minimal MLflow sketch follows this list).
- Leverage Databricks AutoML for rapid prototyping and efficient model selection in early-stage experimentation.
- Implement ML model deployment workflows using Databricks Jobs and manage serving endpoints with low-latency requirements.
- Design, develop, and deploy ML and GenAI solutions using Python and LLM frameworks.
- Build and deploy ML production systems, contributing to their design and ongoing maintenance.
- Develop and maintain ML pipelines, manage the data lifecycle, and ensure data quality and consistency throughout.
- Ensure robust implementation of ML guardrails and manage all aspects of service monitoring.
- Develop and deploy accessible endpoints, including web applications and REST APIs, while maintaining strict data privacy and adherence to security best practices and regulations.
- Embrace agile development practices, valuing constant iteration, improvement, and effective problem-solving in complex and ambiguous scenarios.
- Collaborate with cross-functional teams including architects, data engineers, analysts, and business leaders to deliver robust data solutions.
- Write and maintain infrastructure as code using YAML; implement and manage CI/CD pipelines for ML workflows.
- Ensure model governance and compliance, and support interoperability standards such as the Model Context Protocol (MCP) and the Agent2Agent (A2A) protocol.
- Produce comprehensive documentation including architectural diagrams, data flow maps, runbooks, and lineage tracking artifacts.
- Continuously improve ML pipeline efficiency by identifying performance bottlenecks and reducing compute and storage costs.
- Containerize applications using Docker and manage deployments on Azure.
- Implement DevOps best practices in the data lifecycle, including CI/CD for pipelines, automated testing, and version control.
- Facilitate internal knowledge sharing via technical workshops, peer reviews, and training sessions for continuous team development.
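To illustrate the MLflow-based lifecycle work referenced above, here is a minimal sketch of experiment tracking and model registration; the dataset, model choice, experiment name, and the "candidate_demo_model" registry name are placeholders for illustration, not a prescribed implementation.

```python
# Minimal sketch: track a training run with MLflow and register the resulting model.
# All names (experiment, registry model) are hypothetical placeholders.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("candidate-demo")  # on Databricks this would be a workspace path

with mlflow.start_run() as run:
    params = {"n_estimators": 200, "max_depth": 6}
    model = RandomForestRegressor(**params, random_state=42).fit(X_train, y_train)

    mlflow.log_params(params)  # track hyperparameters
    mlflow.log_metric("mae", mean_absolute_error(y_test, model.predict(X_test)))  # track evaluation metric
    mlflow.sklearn.log_model(model, artifact_path="model")  # store the model artifact with the run

    # Register this run's model so it can be versioned and promoted through the registry.
    mlflow.register_model(f"runs:/{run.info.run_id}/model", "candidate_demo_model")
```

On Databricks with Unity Catalog, the registered model name would typically use the three-level catalog.schema.model form, and serving endpoints would be created from the registered version.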
Education
- A bachelor's degree in computer science, information technology, or a related field is required; a master's degree is preferred but not mandatory.
- Minimum age 21.
Experience
- A minimum of 3-5 years of hands-on experience in Python software development with a focus on modular, scalable, and efficient coding for AI/LLM-based applications.
- Proven track record of building, scaling, and leading complex ML engineering platforms in enterprise environments.
- Strong experience designing and deploying cloud-native ML solutions on Azure Databricks, with an emphasis on scalable model training, model management using MLflow, and real-time inference.
- Demonstrated success in implementing AI agents or autonomous workflows using tools such as LangGraph, CrewAI, or similar frameworks.
- Experience in real-time or near-real-time inference systems and low-latency model serving.
Skills / Knowledge
- Hands-on experience with Databricks Machine Learning environment, including MLflow tracking, model registry, and production deployment workflows.
- Proficiency with Databricks Feature Store, AutoML, and real-time model inference via Databricks Model Serving or external endpoints.
- Familiarity with Databricks Unity Catalog and access control for secure ML asset management.
- Advanced proficiency in Python and LLM frameworks, with strong skills in performance tuning, modular coding, and automation scripting.
- Experience deploying GenAI use cases and related frameworks (e.g., LangChain, LLMOps, PaLM) within the Snowflake ecosystem.
- Ability to develop, optimize, and maintain scalable machine learning workflows and models using Databricks notebooks and MLflow.
- Understanding of how MCP-based tooling can complement structured Snowflake data to deliver rich narrative insights, automated summaries, and customer engagement tools.
- Ability to build AI-driven applications using Snowflake Cortex, Document AI, and Snowpark ML.
- Experience integrating GenAI tools such as OpenAI GPT, Claude (Anthropic), and xAI’s Grok to enhance unstructured data processing and generate intelligent summaries or decision recommendations (a minimal sketch follows this list).
- Experience with experimentation, feature engineering, model registration, deployment, and monitoring, including AutoML usage with cost and quota management.
- Strong written and verbal communication skills in English, with the ability to influence, mentor, and align technical teams and stakeholders.
- Candidates with coursework or academic research in deep learning, reinforcement learning, or natural language processing are highly encouraged to apply.
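As a hedged illustration of the GenAI integration skills listed above, the sketch below calls a hosted LLM to summarize unstructured text before the result is used alongside structured data; the model name, prompt, and summarize_reviews helper are assumptions for illustration, not a prescribed stack.

```python
# Minimal sketch: summarize free-text feedback with a hosted LLM (OpenAI Python SDK v1).
# The model name and helper function are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_reviews(reviews: list[str], model: str = "gpt-4o-mini") -> str:
    """Return a short thematic summary of free-text guest feedback."""
    prompt = "Summarize the key themes in these guest reviews:\n\n" + "\n".join(reviews)
    response = client.chat.completions.create(
        model=model,  # swap for whichever approved provider/model the team standardizes on
        messages=[{"role": "user", "content": prompt}],
        temperature=0.2,
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(summarize_reviews(["Great service at check-in.", "Room was ready late."]))
```

The same pattern applies when the call is made from Snowpark or a Databricks job: the summary is returned as plain text and can be written back next to the structured records it describes.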
Certifications Required/Preferred
- Databricks Certified Machine Learning Professional
- Snowflake SnowPro Core Certification
- Azure Data Scientist Associate
Work Conditions
This is an office-based position with regular working hours.