Remote Opportunity: MLOps Engineer (Remote, USA)
Email: [email protected]

From: Satnam Singh, SPAR Information Systems ([email protected])
Reply to: [email protected]

Hello All,

Hope you are doing great. Please go through the job description below and let me know your interest.

Role: MLOps Engineer
Location: Remote
Duration: Long-Term Contract
Skills: TensorFlow, TFServing, CUDA

Mandatory Areas / Must-Have Skills:
- Kubernetes (on-prem/cloud): 5+ years of experience
- Docker: 5+ years of experience
- Programming languages (Python, Node, Golang, or bash; at least 2): 5+ years of experience
- Seldon Core, MLflow, Istio, Jaeger, Ambassador, Triton, PyTorch, TensorFlow/TFServing, or similar tools (4 of these): 4+ years of experience

MLOps Engineer in eCommerce Analytics

Description:
The client is looking for a highly energetic and collaborative MLOps Engineer with experience building enterprise solutions on web/cloud platforms. The ideal candidate has experience with Seldon Core, MLflow, Istio, Jaeger, Ambassador, Triton, PyTorch, and TensorFlow/TFServing, as well as with distributed computing and deep learning technologies such as Apache MXNet, CUDA, cuDNN, and TensorRT. The candidate should be a proven self-starter with a demonstrated ability to make decisions and accept responsibility and risk. Excellent written and verbal communication skills, along with the ability to collaborate effectively with domain experts and the IT leadership team, are key to success in this role.

The client uses Kubernetes (K8s) for MLOps pipeline orchestration. Kubernetes is a powerful and intricate system with many moving parts, and it requires knowledge of related technologies such as Docker, container networking, load balancing, and more. Hands-on practice is essential: the role involves deploying and managing containerized applications, creating Kubernetes objects, configuring networking and storage, and troubleshooting issues that arise in the system.

Key Responsibilities:
- Work with the client's AI/ML Platform Enablement team within the eCommerce Analytics team. The broader team is currently on a transformation path, and this role will be instrumental in enabling that vision.
- Work closely with data scientists to help productionize models and maintain them in production.
- Deploy and configure Kubernetes components for the production cluster, including API Gateway, Ingress, model serving, logging, monitoring, cron jobs, etc. (see the sketch after this list).
- Improve the model deployment process for MLEs, with faster builds and simplified workflows.
- Be a technical leader on projects across platforms and a hands-on contributor to the entire platform's architecture.
- System administration, security compliance, and internal tech audits.
- Lead operational excellence initiatives in the AI/ML space, including efficient use of resources, identifying optimization opportunities, and forecasting capacity.
- Design and implement different flavors of architecture to deliver better system performance and resiliency.
- Develop capability requirements and a transition plan for the next generation of AI/ML enablement technology, tools, and processes to enable Walmart to efficiently improve performance with scale.
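To give a concrete flavor of the Kubernetes work described above (creating Kubernetes objects, model serving, resource configuration), here is a minimal sketch using the official Python Kubernetes client to create a TensorFlow Serving Deployment with labels and CPU/memory requests and limits. The namespace, image tag, replica count, and label values are illustrative assumptions, not the client's actual configuration.

```python
# Minimal sketch: create a TensorFlow Serving Deployment with the official
# Python Kubernetes client. Namespace, image tag, and labels are illustrative.
from kubernetes import client, config


def create_tfserving_deployment(namespace: str = "ml-serving") -> None:
    # Load credentials from ~/.kube/config; inside a cluster you would use
    # config.load_incluster_config() instead.
    config.load_kube_config()

    labels = {"app": "tfserving", "team": "ml-platform"}  # assumed labels
    container = client.V1Container(
        name="tfserving",
        image="tensorflow/serving:2.14.1",        # illustrative tag
        ports=[client.V1ContainerPort(container_port=8501)],  # TFServing REST port
        resources=client.V1ResourceRequirements(
            # Requests keep the pod schedulable; limits keep it bounded.
            requests={"cpu": "500m", "memory": "1Gi"},
            limits={"cpu": "2", "memory": "4Gi"},
        ),
    )
    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="tfserving", labels=labels),
        spec=client.V1DeploymentSpec(
            replicas=2,
            selector=client.V1LabelSelector(match_labels=labels),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels=labels),
                spec=client.V1PodSpec(containers=[container]),
            ),
        ),
    )
    client.AppsV1Api().create_namespaced_deployment(namespace=namespace, body=deployment)


if __name__ == "__main__":
    create_tfserving_deployment()
```

In practice this kind of object creation would typically be driven by the CI/CD or MLOps pipeline rather than run by hand; the sketch only shows the shape of the API objects involved.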
Tools/Skills (hands-on experience is a must):
- Administering Kubernetes: ability to create, maintain, scale, and debug production Kubernetes clusters as a Kubernetes administrator, with in-depth knowledge of Docker.
- Ability to transform designs from the ground up and lead innovation in system design.
- Deep understanding of data center architectures, networking, storage solutions, and scaling system performance.
- Experience with at least one Kubernetes cloud offering (EKS/GKE/AKS) or on-prem Kubernetes (native Kubernetes, Gravity, MetalK8s).
- Programming experience in Python, Node, Golang, or bash.
- Ability to use observability tools (Splunk, Prometheus, and Grafana) to inspect logs and metrics and diagnose issues within the system.
- Experience with Seldon Core, MLflow, Istio, Jaeger, Ambassador, Triton, PyTorch, or TensorFlow/TFServing is a plus.
- Experience with distributed computing and deep learning technologies such as Apache MXNet, CUDA, cuDNN, and TensorRT.
- Experience hardening a production-level Kubernetes environment (memory/CPU/GPU limits, node taints, annotations/labels, etc.); see the sketch after this list.
- Experience with Kubernetes cluster networking and Linux host networking.
- Experience scaling infrastructure to support high-throughput, data-intensive applications.
- Background with automation and monitoring platforms, MLOps, and configuration management platforms.
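To illustrate the hardening items above (memory/CPU/GPU limits, node taints, labels), here is a minimal sketch, again with the Python Kubernetes client, of a pod spec that requests a GPU and tolerates a tainted GPU node pool. The taint key, node-selector label, and Triton image tag are assumptions, not the client's actual setup.

```python
# Minimal sketch: pod-level hardening settings (GPU limit plus a toleration for
# a tainted GPU node pool). Taint key, node label, and image are assumptions.
from kubernetes import client


def gpu_pod_spec() -> client.V1PodSpec:
    container = client.V1Container(
        name="triton",
        image="nvcr.io/nvidia/tritonserver:24.01-py3",  # illustrative tag
        resources=client.V1ResourceRequirements(
            # GPUs must be requested via limits; CPU/memory limits bound the pod.
            limits={"nvidia.com/gpu": "1", "memory": "8Gi", "cpu": "4"},
            requests={"memory": "4Gi", "cpu": "2"},
        ),
    )
    return client.V1PodSpec(
        containers=[container],
        node_selector={"gpu": "true"},  # assumed node label on the GPU pool
        tolerations=[
            client.V1Toleration(
                key="nvidia.com/gpu",   # assumed taint applied to GPU nodes
                operator="Exists",
                effect="NoSchedule",
            )
        ],
    )
```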
Education & Experience:
- 5+ years of relevant experience in roles with responsibility over data platforms and data operations.

Thanks & Regards,
Satnam Singh
Direct: 201 623 3660
Email: [email protected]

Keywords: artificial intelligence, machine learning, information technology, golang