Machine Learning Operations Engineer - Remote, USA
Email: [email protected]
From: Sandeep Bisht, Key Infotek ([email protected])
Reply to: [email protected]

MLOps Engineer Level 2 - Ohio, Remote

Job Description

The client is currently looking for an MLOps Engineer to take a pivotal role in managing the installation, modification, and support of Linux applications on their supernode platform. The platform comprises a cluster of high-performance computers designed to streamline AI model training and testing. The ideal candidate will bring expertise in Kubernetes, Slurm, JupyterHub, and Linux administration to keep this environment running smoothly. Candidates with experience in AI development, video analytics, and related technical disciplines will find this role particularly engaging, as the platform's capabilities extend beyond AI. This position offers a unique opportunity to contribute to the broader technological advancement of the organization.

Requirements:
- Proficiency in Linux administration; deep expertise in Linux environments is strongly preferred. Windows experience is acceptable, but a solid grasp of Linux is essential.
- Demonstrated ability to install, modify, and support Linux applications.
- Experience with JupyterHub is a plus.
- Familiarity with cluster management, particularly with negotiating resources across multiple computers simultaneously. Knowledge of the Bright cluster management software is highly desirable.
- Proficiency in Slurm for job scheduling; any prior experience is an advantage.
- Competence in container management, including expertise with Docker for containerization and for pushing and pulling containers.
- Knowledge of maintaining High-Performance Computing (HPC) systems and the various components that make up this sophisticated infrastructure.

Key Responsibilities:
- Collaborate with the AI team to customize the environment, ensuring it is optimized for AI development.
- Work closely with the infrastructure team to configure and manage physical hardware and the underlying operating system.
- Implement and manage partitioning on the supernode, allocating resources for different environments (Jupyter, Slurm, Linux shell, Docker containers, etc.); see the configuration sketch after this posting.
- Provide support and administration for Kubernetes, aiding in the integration of various providers.
- Continuously evolve processes and ways of working to maximize the platform's efficiency, ultimately reducing the need for external support.

Keywords: artificial intelligence, information technology
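As a rough illustration of the partitioning responsibility above (wiring JupyterHub to a Slurm partition on a shared cluster), the following is a minimal jupyterhub_config.py sketch. It assumes the third-party batchspawner package is installed and that a Slurm partition named "jupyter" exists; the partition name and resource values are illustrative assumptions, not details taken from this posting.

    # jupyterhub_config.py - minimal sketch; partition name and resource
    # requests below are hypothetical examples, not values from the posting.
    c = get_config()  # noqa: F821 - provided by JupyterHub when it loads this file

    # Launch each user's notebook server as a Slurm batch job via the
    # third-party batchspawner package (assumed installed on the hub node).
    c.JupyterHub.spawner_class = "batchspawner.SlurmSpawner"

    # Route notebook jobs to a dedicated Slurm partition carved out on the
    # supernode, keeping interactive sessions separate from training jobs.
    c.SlurmSpawner.req_partition = "jupyter"

    # Illustrative per-user resource requests for the spawned batch job.
    c.SlurmSpawner.req_memory = "8G"
    c.SlurmSpawner.req_nprocs = "4"
    c.SlurmSpawner.req_runtime = "08:00:00"

In practice, heavier training workloads would be submitted to separate Slurm partitions (for example via sbatch), so interactive Jupyter sessions and batch AI jobs stay isolated on the same cluster; the exact partition layout is a design choice for the team and is not specified here.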
Mon Sep 18 20:19:00 UTC 2023