MLOps isn't that hard: Modular Stack with Open-Source Tools

This talk is part of the AI Engineering Job Fair. Signing up for this event will give you access to all the talks on the agenda, including the job fair.

About the talk

Picture this: a machine learning system that's less of a riddle and more of an open book.

With the inherent intricacies of code, models, and data, these systems pose a unique challenge. Building, managing, and deploying them requires an interplay of processes, practices, and tools that spans the entire machine learning solution lifecycle.

This talk takes it back to basics, building an end-to-end ML pipeline from the ground up. But we won't stop at the blueprint: we'll scale it up into a production-ready stack.

This talk will cover

  • Creating a complete ML pipeline that handles the intricacies of code, models, and data, and overcomes common deployment challenges.
  • Surveying data validators, experiment trackers, and model-serving tools that make up an efficient, production-ready ML stack.
  • Scaling the same pipeline from local development to cloud infrastructure with minimal changes.
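The "end-to-end pipeline of modular steps" the talk describes can be sketched as a chain of small functions. This stdlib-only toy is illustrative (the step names, data, and trivial "model" are invented, not taken from the talk); a framework like ZenML would replace the hand-wired `pipeline()` call with an orchestrated, tracked run on configurable infrastructure:

```python
def load_data() -> list[float]:
    """Ingest step: return raw training data (hard-coded here)."""
    return [1.0, 2.0, 3.0, 4.0]

def validate_data(data: list[float]) -> list[float]:
    """Data-validation step: fail fast on bad inputs, like a
    data validator in a production stack would."""
    assert data, "dataset must not be empty"
    assert all(x >= 0 for x in data), "values must be non-negative"
    return data

def train_model(data: list[float]) -> float:
    """'Training' step: fit a trivial model (here, just the mean)."""
    return sum(data) / len(data)

def pipeline() -> float:
    """Wire the steps together. A real orchestrator would also
    version artifacts and log each step to an experiment tracker."""
    return train_model(validate_data(load_data()))

print(pipeline())  # 2.5
```

Because each step has a single, typed responsibility, swapping in a real data validator, model, or serving target changes one function rather than the whole pipeline.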

About the speaker

Hamza Tahir

Drawing on his experience deploying ML in production for predictive-maintenance use cases at his previous startup, Hamza co-created ZenML, an open-source MLOps framework for easily creating production-grade ML pipelines on any infrastructure stack.

