AI Model Serving Specialist
United States - Remote • December 18, 2025
Role Purpose
Enable enterprise customers to operationalize AI workloads by deploying and optimizing model-serving platforms (e.g., NVIDIA Triton, vLLM, KServe) within Rackspace’s Private Cloud and Hybrid environments. This role bridges AI engineering and platform operations, ensuring secure, scalable, and cost-efficient inference services.
Key Responsibilities
- Model Deployment & Optimization: Package and deploy ML/LLM models on Triton, vLLM, or KServe within Kubernetes clusters. Tune performance (batching, KV-cache, TensorRT optimizations) to meet latency and throughput SLAs.
- Platform Integration: Work with VMware VCF9, NSX-T, and vSAN ESA to ensure GPU resource allocation and multi-tenancy. Implement RBAC, encryption, and compliance controls for sovereign/private cloud customers.
- API & Service Enablement: Integrate models with Rackspace’s Unified Inference API and API Gateway for multi-tenant routing. Support RAG and agentic workflows by connecting to vector databases and context stores.
- Observability & FinOps: Configure telemetry for GPU utilization, request tracing, and error monitoring. Collaborate with FinOps to enable usage metering and chargeback reporting.
- Customer Engineering Support: Assist solution architects in onboarding customers, creating reference patterns for BFSI, Healthcare, and other verticals. Provide troubleshooting and performance-benchmarking guidance.
- Continuous Improvement: Stay current with emerging model-serving frameworks and GPU acceleration techniques. Contribute to reusable Helm charts, operators, and automation scripts.
Required Skills & Experience
- Hands-on experience with NVIDIA Triton, vLLM, or similar serving stacks.
- Strong knowledge of Kubernetes, GPU scheduling, and CUDA/MIG.
- Familiarity with VMware VCF9, NSX-T networking, and vSAN storage classes.
- Proficiency in Python and containerization (Docker).
- Understanding of observability stacks (Prometheus, Grafana) and FinOps principles.
- Exposure to RAG architectures, vector DBs, and secure multi-tenant environments.
- Excellent problem-solving and customer-facing communication skills.
Preferred Certifications
- NVIDIA Certified Professional (AI/ML)
- Kubernetes Administrator (CKA)
- VMware VCF Specialist
- Rackspace AI Foundations (internal)
KPIs
- Model deployment success rate and SLA compliance.
- Latency/throughput benchmarks per SKU.
- Customer satisfaction (NPS) for AI services.
- Efficiency in GPU utilization and cost optimization.
Physical Demands
- General office environment: no special physical demands required.
- May require long periods of sitting and viewing a computer monitor.
- Schedule flexibility, including weekends, evenings, and holidays as required by the business for 24/7 operations.
Travel
- As per business needs
Sponsorship
- This role is not eligible for sponsorship
- Candidates must be legally authorized to work in the US for any employer
