
Technical Team Lead

Dew Softech Inc

Harrisburg (Dauphin County)

Remote

USD 140,000 - 200,000

Full time

Today

Job summary

A leading tech company is hiring a Technical Capability Owner to drive their AI/ML initiatives. You will design reference architectures and tools, curate solutions on AWS and Databricks, and lead community workshops. The ideal candidate has extensive experience in data engineering and MLOps, strong communication skills, and a commitment to security. This role is fully remote with a 12+ month contract.

Qualifications

  • 8-12+ years in data/ML platform engineering or similar; 3+ years on AWS and Databricks.
  • Proven success in defining reference architectures and reusable accelerators.
  • Track record of organizing large groups and influencing without authority.

Responsibilities

  • Own the technical capability roadmap for AI/ML CoE.
  • Design and maintain reference architectures on AWS and Databricks.
  • Lead cross-functional workshops and create documentation for teams.

Skills

Data/ML platform engineering
MLOps experience
Excellent presentation and communication
Security-by-design mindset

Tools

AWS
Databricks
MLflow

Job description

Overview

Technical Team Lead

Location: 100% remote

Duration: 12+ months

Interview: Web

The client is evolving from product-by-product AI delivery to an AI Center of Excellence (CoE) model that democratizes AI/ML for both business and technical users in FY26. We’re hiring a Technical Capability Owner to define our technical “golden paths,” reference architectures, and persona-approved toolsets across AWS and Databricks. You’ll be the connective tissue between enterprise architecture, data science, security, and business units, designing frameworks, enabling scaled adoption, and presenting compellingly to audiences from engineering guilds to executives. You will democratize AI/ML for technical users, giving developers the tools, frameworks, guidance, and training to develop AI/ML solutions on their own. The AI/ML Technical Capability Owner will also measure the value of technical tools and of the products developed on those tools.

We’re building an AI Center of Excellence to democratize AI/ML across the company. As our AI/ML Technical Capability Owner, you’ll define the architectures, tools, and guardrails that help teams ship reliable ML and GenAI solutions at scale on AWS and Databricks. If you love creating golden paths, enabling large technical communities, and presenting your vision from deep dives to the boardroom, this role is for you.

What you’ll do

Strategy & Ownership

  • Own the technical capability roadmap for the AI/ML CoE; understand technical user needs on AI capabilities, align with the Business Capability Owner on outcomes, funding, chargeback model, governance, and adoption plans.
  • Translate company goals into technical guardrails, accelerators, and “opinionated defaults” for AI/ML delivery.

Reference Architectures & Frameworks

  • Design and maintain end-to-end reference architectures on AWS and Databricks (batch/streaming, feature stores, training/serving, RAG/GenAI, Agentic AI).
  • Publish reusable blueprints (modules, templates, starter repos, CI/CD pipelines) and define golden paths for each persona (Data Scientist, ML Engineer, Data Engineer, Analytics Engineer, Software Engineer, TE Citizen AI/ML Developer).

Persona-Approved Tools & Platforms

  • Curate the best-fit suite of tools across data, ML, GenAI, and MLOps/LLMOps (e.g., Databricks Lakehouse, Unity Catalog, MLflow, Feature Store, Model Serving; AWS S3, EKS/ECS, Lambda, Step Functions, CloudWatch, IAM/KMS; Bedrock for GenAI; vector technologies as appropriate).
  • Run evaluations/POCs and vendor assessments; set selection criteria, SLAs, and TCO models.

Governance, Risk & Compliance

  • Define technical guardrails for data security (Structured and Unstructured Data), lineage, access control, PII handling, and model risk management in accordance with TE’s AI policy.
  • Identify enhancements or improvements to TE’s AI Policy based on user feedback.
  • Establish standards for experiment tracking, model registry, approvals, monitoring, and incident response.

Enablement & Community

  • Lead large cross-functional workshops; organize engineering guilds, office hours, and “train-the-trainer” programs.
  • Create documentation, hands-on labs, and internal courses to upskill teams on the golden paths.

Delivery Acceleration

  • Partner with platform and product teams to stand up shared services (feature store, model registry, inference gateways, evaluation harnesses).
  • Advise solution teams on architecture reviews; unblock complex programs and ensure alignment to standards.

Evangelism & Communication

  • Present roadmaps and deep-dive tech talks to execs and engineering communities; produce clear decision memos and design docs.
  • Showcase ROI and adoption wins through demos, KPIs, and case studies.

What you’ll bring

Must-have

  • 8-12+ years in data/ML platform engineering, ML architecture, or similar; 3+ years designing on AWS and Databricks at enterprise scale.
  • Proven experience defining reference architectures, golden paths, and reusable accelerators.
  • Strong MLOps experience: experiment tracking (MLflow), CI/CD for ML, feature stores, model serving, observability (data & model), drift/quality, A/B or shadow testing.
  • GenAI experience: RAG patterns, vector search, prompt orchestration, safety/guardrails, evaluation.
  • Security-by-design mindset (IAM/KMS, network segmentation, data classification, secrets, compliance frameworks).
  • Track record organizing large groups (guilds, communities of practice, multi-team workshops) and influencing without authority.
  • Excellent presenter and communicator to both technical and executive audiences.

Nice-to-have

  • AWS certifications (e.g., Solutions Architect, Machine Learning Specialty); Databricks Lakehouse/ML certifications.
  • Experience with Kubernetes/EKS, IaC (Terraform), Delta Live Tables/Workflows, Unity Catalog policies.
  • Background in manufacturing, industrial IoT, or edge computing is helpful.

Success metrics (first 12 months)

  • Adoption: 70% of AI/ML initiatives using CoE golden paths and persona-approved tooling.
  • Time-to-value: 30-50% reduction in time to first production model or GenAI workload.
  • Quality & Risk: 90% compliance with model governance controls; measurable reduction in incidents.
  • Enablement: 4+ reusable blueprints and 2+ shared services in production; 6+ enablement sessions/quarter.

30/60/90 plan

  • 30 days: Inventory current tools/initiatives; draft capability heatmap and initial reference architecture; publish near-term guardrails.
  • 60 days: Deliver first golden path (e.g., Databricks-centric MLOps with MLflow/UC); run 2 enablement workshops; select initial GenAI stack (incl. Bedrock stance).
  • 90 days: Launch shared services (feature store/model registry + eval harness); formalize governance checks; publish KPI dashboard and FY26 roadmap.