Permanent

MLOps Engineer

Posted on 09 April 2025 by Vitalii Moudrak

  • Remote, Canada
  • $90,000 - $120,000 per year

Job Description

About the Role
Our client is a leader in AI-driven media, building cutting-edge solutions in lip synchronization and synthetic voice. Their real-time systems rely on advanced machine learning and deep learning infrastructure, and they are actively seeking an MLOps Engineer to help scale and operationalize their AI models across production environments.

The Ideal Candidate
This role is ideal for someone with a DevOps background who has transitioned into machine learning, or an MLOps engineer who sits at the intersection of ML workflows and operations. You should be well-versed in building scalable, cloud-native solutions and deploying models via APIs. If you enjoy working on real-time pipelines, model optimization, and cloud automation, this role is a great fit.

What You’ll Be Doing

  • Develop and manage full ML lifecycle workflows using MLflow, Kubeflow, or similar tools.

  • Deploy and monitor machine learning models with platforms like SageMaker, Vertex AI, or other model tracking solutions.

  • Build automated pipelines for training, validation, and production inference.

  • Containerize and deploy models using Docker, Kubernetes, and serverless frameworks.

  • Work on API-based model deployment, potentially including ingestion and processing of real-time time-series data.

  • Optimize real-time model performance using ONNX Runtime, TensorRT, and other inference tools.

  • Implement CI/CD pipelines (GitHub Actions, Jenkins, ArgoCD) to automate the ML workflow.

  • Monitor models for drift, data integrity, and performance issues in production.

  • Collaborate with cross-functional teams including ML researchers, software engineers, and product teams.

  • Ensure secure and ethical handling of data and models across environments.

What You’ll Bring

  • 3+ years of experience in MLOps, DevOps with ML focus, or AI infrastructure.

  • Strong proficiency in Python, with experience developing ML pipelines and working with ML frameworks like PyTorch and TensorFlow.

  • Solid hands-on experience with SageMaker, Vertex AI, or a comparable model deployment and tracking platform.

  • Expertise in ML pipeline orchestration using MLflow, Kubeflow, or similar tools.

  • Deep understanding of containerized deployments, Kubernetes, and cloud-native architectures.

  • Strong experience with AWS (S3, ECS, SageMaker, Lambda, etc.); experience with Azure and/or GCP is a bonus.

  • Understanding of real-time inference systems, including data ingestion and latency optimization.

  • Experience with GPU acceleration, distributed systems, and model optimization workflows.

  • Knowledge of CI/CD practices and automated testing for ML models.

  • Strong troubleshooting, automation, and problem-solving abilities.

  • Self-directed and comfortable managing infrastructure at scale.

Bonus Points For

  • Experience with multi-cloud deployments or migration between platforms.

  • Familiarity with LLMs or generative AI models.

  • Understanding of networking fundamentals and edge AI deployment (e.g., Triton, CoreML, TFLite).

  • Exposure to distributed computing frameworks (e.g., Ray, Spark, Horovod).

Job Information

Rate / Salary

$90,000 - $120,000 per year

Sector

Media and Publishing

Category

Not Specified

Skills / Experience

Not Specified

Benefits

Not Specified

Our Reference

JOB-21538
