Description
We are seeking an experienced Edge AI/CV Orchestration Engineer to enhance and scale our core engine that manages AI workloads on resource-constrained edge devices.
Your primary mission is to improve the "brain" of our system: the orchestration engine that decides which AI tasks run, when they run, and how they perform on hardware like the NVIDIA Jetson Orin Nano. You will be the bridge between our backend infrastructure and our deployed physical hardware, ensuring our edge devices operate efficiently, reliably, and at scale.
Requirements
Required Qualifications (Must-Haves)
- Proven Experience: 3-5+ years of experience in backend development, systems programming, or DevOps, with a focus on distributed or embedded systems.
- Expert Python: Strong proficiency in Python, especially for building backend services, writing asynchronous code (e.g., with asyncio), and system-level scripting (a brief illustrative sketch follows this list).
- NVIDIA Jetson Expertise: Demonstrable, hands‑on experience working with the NVIDIA Jetson platform (e.g., Orin Nano, Xavier NX, AGX). You should be comfortable with the JetPack SDK and its tools.
- Deep Linux Knowledge: Strong command of the Linux operating system. You must understand system administration, process management, shell scripting, networking, and performance tuning (e.g., using top, htop, perf).
- Backend Systems: Solid experience designing, building, and maintaining APIs and microservices.
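To give a flavor of the asynchronous Python work mentioned above, here is a minimal, hypothetical sketch of concurrently probing a handful of edge devices with asyncio. The device IDs and the simulated probe are illustrative placeholders, not part of our actual codebase.

```python
import asyncio
import random

# Hypothetical device IDs; in practice these would come from a fleet registry.
DEVICE_IDS = ["jetson-001", "jetson-002", "jetson-003"]

async def check_device(device_id: str, limit: asyncio.Semaphore) -> tuple[str, str]:
    """Simulate a non-blocking health probe against one edge device."""
    async with limit:
        # Placeholder for a real network call (HTTP/gRPC) to the device agent.
        await asyncio.sleep(random.uniform(0.1, 0.5))
        return device_id, "healthy"

async def main() -> None:
    limit = asyncio.Semaphore(2)  # cap concurrent probes to avoid flooding the network
    results = await asyncio.gather(*(check_device(d, limit) for d in DEVICE_IDS))
    for device_id, status in results:
        print(f"{device_id}: {status}")

if __name__ == "__main__":
    asyncio.run(main())
```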
⭐ Preferred Qualifications (Nice-to-Haves)
- Containerization: Experience with Docker, containerd, and/or edge-focused Kubernetes (K3s, MicroK8s).
- NVIDIA Stack: Direct experience with NVIDIA's software stack is a major plus, especially DeepStream, TensorRT, or Triton Inference Server.
- AI/CV Exposure: A good understanding of the challenges in deploying computer vision or machine learning models.
- Other Languages: Experience with C++ or Go for performance‑critical components.
- Fleet Management: Experience with IoT fleet management tools (e.g., AWS IoT Greengrass, Azure IoT Edge, or Balena).
Key Responsibilities
- Engine Enhancement: Design, develop, and optimize new features for our Python‑based backend orchestration engine.
- System Management: Implement robust solutions for resource allocation (CPU, GPU, memory), task scheduling, and priority management across a fleet of edge devices (see the scheduling sketch after this list).
- Performance Optimization: Profile and fine-tune the performance of AI/CV pipelines on the Jetson platform, diving deep into the Linux environment to resolve bottlenecks.
- Backend & API Development: Build and maintain scalable backend services and APIs (e.g., REST, gRPC) that allow for remote deployment, monitoring, and updating of tasks.
- Deployment & CI/CD: Improve automated deployment pipelines for pushing new AI models and orchestration logic to devices in the field.
- Troubleshooting: Act as the key troubleshooter for complex system‑level issues that span AI models, our orchestration code, and the underlying Linux OS on the Jetson hardware.
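To illustrate what priority-aware task scheduling can look like in Python (one of the core responsibilities above), here is a minimal, hypothetical sketch using asyncio.PriorityQueue. The task names and priorities are invented for illustration and do not reflect the design of our actual engine.

```python
import asyncio
from dataclasses import dataclass, field

@dataclass(order=True)
class InferenceTask:
    # Lower number = higher priority; the name is excluded from ordering.
    priority: int
    name: str = field(compare=False)

async def worker(queue: "asyncio.PriorityQueue[InferenceTask]") -> None:
    """Drain tasks in priority order, simulating dispatch to a CV pipeline."""
    while not queue.empty():
        task = await queue.get()
        # Placeholder for launching the actual pipeline on the device.
        print(f"running {task.name} (priority {task.priority})")
        await asyncio.sleep(0.1)
        queue.task_done()

async def main() -> None:
    queue: "asyncio.PriorityQueue[InferenceTask]" = asyncio.PriorityQueue()
    for priority, name in [(2, "object-detection"), (1, "intrusion-alert"), (3, "batch-reindex")]:
        queue.put_nowait(InferenceTask(priority, name))
    await worker(queue)

if __name__ == "__main__":
    asyncio.run(main())
```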