
1388 - Lead AWS/AI Engineer

Strategic Data Systems, Inc.

Cincinnati (OH)

Remote

Full time

4 days ago

Job summary

A leading software consultancy is seeking a Lead AI AWS Engineer to develop AI/ML applications in the cloud. The role involves hands-on development of retrieval-augmented generation systems and AWS-native microservices, ensuring scalability and governance. Join a dynamic team pushing the boundaries of cloud-native AI, where your contributions are valued. This is not an entry-level position; we seek candidates with proven experience in building production-ready AI services.

Benefits

Medical, dental, and vision insurance coverage
401(k) with company match
Paid vacation time
Training allowance
Referral bonuses

Qualifications

  • 7-10 years of software engineering experience.
  • Strong background in document tokenization and embeddings.
  • Hands-on development of RAG and LLM-augmented applications.

Responsibilities

  • Develop and maintain modular AI services on AWS.
  • Contribute to end-to-end development of RAG pipelines.
  • Create and maintain technical documentation.

Skills

AWS services
Python
Golang
Natural Language Processing
Retrieval Augmented Generation

Tools

Terraform
PyTorch
TensorFlow
LangChain

Job description

W-2 only. Remote. $85-$92/hr plus benefits. Sponsorship available.

9 May 2025

For more than three decades, Strategic Data Systems (SDS) has been a software consultancy firm specializing in strategy, technology, and business transformation for Fortune 100 companies, mid-sized firms, and startups. At SDS, we empower our development teams to address our clients’ critical business challenges by leveraging cutting edge technologies. If you seek a workplace where your contributions are truly appreciated, then SDS is the company for you. Join us today to work alongside fellow development specialists and become a crucial part of our dynamic and cohesive community.

What You’ll Do
TECHNICAL SKILLS

Must Have

  • AWS services: Bedrock, SageMaker, ECS, and Lambda
  • Demonstrated proficiency in Python and Golang
  • Large language models (LLMs)
  • Natural Language Processing (NLP)
  • Retrieval-Augmented Generation (RAG)

Nice To Have
  • Exposure to FinOps and cloud cost optimization
  • Node.js
  • Policy as Code development (e.g., Terraform Sentinel)
We are hiring a Lead AI AWS Engineer who has actually built AI/ML applications in the cloud, not just read about them. This role centers on hands-on development of retrieval-augmented generation (RAG) systems, fine-tuning LLMs, and AWS-native microservices that drive automation, insight, and governance in an enterprise environment. You'll design and deliver scalable, secure services that bring large language models into real operational use, connecting them to live infrastructure data, internal documentation, and system telemetry. You'll be part of a high-impact team pushing the boundaries of cloud-native AI in a real-world enterprise setting.

This is not a prompt-engineering sandbox or a resume keyword trap. If you've merely dabbled in SageMaker, mentioned RAG on LinkedIn, or read about vector search, this isn't the right fit. We're looking for candidates who have architected, developed, and supported AI/ML services in production environments.

This is a builder's role within our Public Cloud AWS Engineering team. We aren't hiring buzzword lists or conference attendees. If you've built something you're proud of, especially if it involved real infrastructure, real data, and real users, we'd love to talk. If you're still learning, that's great too, but this isn't an entry-level role or a theory-only position.
DUTIES AND RESPONSIBILITIES:
  • Develop and maintain modular AI services on AWS using Lambda, SageMaker, Bedrock, S3, and related components—built for scale, governance, and cost-efficiency.
  • Contribute to the end-to-end development of RAG pipelines that connect internal datasets (e.g., logs, S3 docs, structured records) to inference endpoints using vector embeddings.
  • Fine-tune LLM-based applications, including Retrieval-Augmented Generation (RAG) using LangChain and other frameworks.
  • Tune retrieval performance using semantic search techniques, proper metadata handling, and prompt injection patterns.
  • Work within the software release lifecycle, including CI/CD pipelines, GitHub-based SDLC, and infrastructure as code (Terraform).
  • Support the development and evolution of reusable platform components for AI/ML operations.
  • Create and maintain technical documentation for the team to reference and share with our internal customers.
  • Communicate clearly, both verbally and in writing, in English with the team and internal customers.
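As a rough illustration of the retrieval step behind the RAG pipelines described above, here is a toy sketch in plain Python. The `embed`, `cosine`, and `retrieve` functions and the sample documents are all illustrative assumptions; a production pipeline on AWS would instead call a managed embedding model (e.g., one hosted on Bedrock or SageMaker) and a real vector store.

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding': a sparse term-frequency vector.
    A real pipeline would call a managed embedding model instead."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=2):
    """Rank internal documents by similarity to the query and return the
    top-k, i.e. the context that would be injected into the LLM prompt."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

# Stand-ins for internal datasets (logs, S3 docs, structured records).
docs = [
    "lambda timeout errors in the payments service",
    "terraform module for the s3 logging bucket",
    "sagemaker endpoint autoscaling runbook",
]
print(retrieve("why is the payments lambda timing out", docs, k=1))
```

In a real system the retrieved passages would then be concatenated into the model prompt before calling the inference endpoint; the ranking step is the same idea at much larger scale.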
SUPERVISORY RESPONSIBILITIES: None
REQUIRED KNOWLEDGE, SKILLS, AND ABILITIES:
  • 7-10 years of proven software engineering experience with a strong focus on Python and Golang.
  • Must have a strong background in document tokenization, embeddings, various word models (such as Word2Vec, FastText, TF-IDF, BERT, GPT, ELMo, LDA, Transformers), and experience with NLP pipelines.
  • Direct, hands-on development of RAG, semantic search, or LLM-augmented applications, and using frameworks and ML tooling like Transformers, PyTorch, TensorFlow, and LangChain—not just experimentation in a notebook.
  • Deep expertise with AWS services, especially Bedrock, SageMaker, ECS, and Lambda.
  • Proven experience fine-tuning large language models, building datasets, and deploying ML models to production.
  • Demonstrated experience with AWS Organizations and policy guardrails (SCPs, AWS Config).
  • Demonstrated experience with Infrastructure as Code best practices and with building Terraform modules for AWS.
  • Strong background in Git-based version control, code reviews, and DevOps workflows.
  • Demonstrated success delivering production-ready software with release pipeline integration.
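To ground the tokenization and word-model background listed above, here is a minimal TF-IDF sketch in plain Python. This is illustrative only (the corpus, `tokenize`, and `tf_idf` are made-up examples); real NLP pipelines use library tokenizers and learned embeddings such as BERT rather than hand-rolled weighting.

```python
import math

corpus = [
    "deploy the model to production",
    "the model serves inference requests",
    "terraform manages cloud infrastructure",
]

def tokenize(text):
    """Naive whitespace tokenization; real pipelines use subword tokenizers."""
    return text.lower().split()

def tf_idf(term, doc, docs):
    """Term frequency x inverse document frequency for one term in one doc.
    Common terms (high document frequency) are down-weighted."""
    tokens = tokenize(doc)
    tf = tokens.count(term) / len(tokens)
    df = sum(1 for d in docs if term in tokenize(d))
    idf = math.log(len(docs) / df) if df else 0.0
    return tf * idf

# 'the' appears in 2 of 3 docs, so it scores lower than the rarer 'terraform'.
print(tf_idf("terraform", corpus[2], corpus))
print(tf_idf("the", corpus[0], corpus))
```

The same down-weighting intuition carries over to the dense embedding models named in the requirements, where it is learned from data instead of computed from counts.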
NICE-TO-HAVES:
  • AWS or relevant cloud certifications.
  • Policy as Code development (e.g., Terraform Sentinel).
  • Experience optimizing cost-performance in AI systems (FinOps mindset).
  • Data science background or experience working with structured/unstructured data.
  • Awareness of data privacy and compliance best practices (e.g., PII handling, secure model deployment).
  • Experience with Node.js.

What You’ll Get

SDS, Inc. provides equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, gender, sexual orientation, national origin, age, disability, genetic information, marital status, amnesty, or status as a covered veteran in accordance with applicable federal, state, and local laws.

  • Competitive base salary
  • Medical, dental, and vision insurance coverage
  • Optional life and disability insurance provided
  • 401(k) with a company match and optional profit sharing
  • Paid vacation time
  • Paid bench time
  • Training allowance offering
  • You’ll be eligible to earn referral bonuses!