7 re:Invent 2022 sessions for MLOps-focused folks
The biggest annual AWS conference, re:Invent 2022, is starting soon. Are you well prepared?
MLOps is an engineering culture that facilitates rapid iteration and stable ML workloads. If you’re building machine learning systems, changing your mindset to adhere to MLOps principles will quickly pay off. Your experiments will become repeatable and shareable, your models reproducible and explainable, your deployments scalable and reliable, and your team productive and business-focused.
AWS re:Invent covers a very broad range of topics and, for a couple of years now, MLOps has been one of them. Here’s my guide on how you can maximise your re:Invent MLOps learnings by attending these carefully picked re:Invent sessions. Starting with a history walk, moving through experimentation- and automation-related sessions, and ending with deep-dive deployment sessions, you will get a detailed overview of how AWS services streamline implementing MLOps principles.
Evolution of the machine learning development environment (BOA203-R)
Those of you who have been building machine learning applications for a few years now definitely remember how immature the ML ecosystem was back in the day. All we had were notebooks, scripts and some big data Hadoop-related frameworks to automate them. MLOps was not even a thing. This talk will take you on an interesting walk through history and outline all the great improvements that have been made over recent years. Thankfully, the days of Hadoop-related pipelines are over!
Presented on Wednesday, November 30, 1:00 PM - 2:00 PM
Transform ML development through team-based collaboration (AIM323)
Facilitating rapid iteration in ML projects means letting your team collaborate and share their insights in a safe, reproducible environment. This session should be a great introduction to SageMaker Studio and SageMaker Studio Notebooks, tools designed precisely to streamline the experimentation phase of every machine learning project. Folks from Amazon will share their real-world experiences and emphasize how crucial collaboration within a team is.
Presented on Wednesday, November 30, 5:30 PM - 6:30 PM
Productionize ML workloads using Amazon SageMaker MLOps, feat. NatWest (AIM321)
This is a no-brainer for anyone who wants to see SageMaker MLOps capabilities in action. Witness how leveraging SageMaker Pipelines, SageMaker Projects, SageMaker Experiments, SageMaker Model Registry and SageMaker Model Monitor lets you build reliable, production-grade ML systems in days. See how organizations as large as the bank NatWest use SageMaker to standardize their ML development process across many teams.
Presented on Wednesday, November 30, 4:45 PM - 5:45 PM
Scalable Kubernetes ML applications with Kubeflow on AWS & Amazon SageMaker (AIM335)
As an alternative to the session above, you may prefer to see how running an open-source MLOps platform on AWS (an alternative to SageMaker goodies) lets you remain vendor-independent while still leveraging a solid AWS backbone. Kubeflow is an established, leading open-source ML platform. With its notebook, pipeline and deployment capabilities, adhering to MLOps guidelines with Kubeflow is straightforward. This talk shows how AWS’s own distribution, Kubeflow on AWS, lets you get Kubeflow up and running in minutes.
Presented on Wednesday, November 30, 4:45 PM - 5:45 PM
Deploy ML models for inference at high performance & low cost, feat. AT&T (AIM302)
Making machine learning model deployment a reliable, scalable and repeatable process is one of the most crucial MLOps goals. SageMaker Inference has a plethora of options to help you with it. From Serverless Inference through Asynchronous Inference and Batch Transform all the way to Real-time Inference (with all its flavours, such as the Multi-Model and Multi-Container deployment options), even the most sophisticated deployment requirements can be satisfied. Join this session to get an overview of all these features and learn how giants such as AT&T use SageMaker Inference.
Presented on Monday, November 28, 4:45 PM - 5:45 PM
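To give a taste of one of these options, here is a minimal sketch of the payload a Serverless Inference endpoint configuration takes. The model name and sizing values are illustrative assumptions, and the actual boto3 call is shown commented out since it requires AWS credentials:

```python
# Sketch: the production variant payload for a SageMaker Serverless Inference
# endpoint configuration. Model name and sizing below are illustrative assumptions.
serverless_variant = {
    "VariantName": "AllTraffic",
    "ModelName": "my-model",  # assumed: a model already created in SageMaker
    "ServerlessConfig": {
        "MemorySizeInMB": 2048,  # memory allocated per invocation
        "MaxConcurrency": 5,     # max concurrent invocations before throttling
    },
}

# With AWS credentials configured, this payload would be passed to
# the CreateEndpointConfig API, e.g.:
# import boto3
# sm = boto3.client("sagemaker")
# sm.create_endpoint_config(
#     EndpointConfigName="my-serverless-config",
#     ProductionVariants=[serverless_variant],
# )
```

Because there is no instance type in the variant, AWS provisions and scales the compute for you, and you pay only for invocation time.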
Minimizing the production impact of ML model updates with shadow testing (AIM343)
If AIM323 seems too simple for you, you might want to dive deeper into deployment problems instead. When other systems start depending on your deployed models, even the tiniest change in their behaviour can have a drastic impact on downstream solutions. This session will show you how you can safely test new versions of your ML models without impacting other systems. A must-see session, complementary to AIM302!
Presented on Wednesday, November 30, 5:30 PM - 6:30 PM
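The general idea behind shadow testing can be sketched in a few lines of plain Python. This is a conceptual illustration only, not the SageMaker API; the models and logging shape are illustrative assumptions:

```python
# Conceptual sketch of shadow testing: every request is sent to both the
# production model and the shadow (candidate) model, but callers only ever
# receive the production response. The shadow response is recorded for
# offline comparison, so a misbehaving candidate cannot impact downstream systems.

def invoke_with_shadow(request, prod_model, shadow_model, comparison_log):
    prod_response = prod_model(request)
    shadow_response = shadow_model(request)  # same input, zero caller impact
    comparison_log.append(
        {"request": request, "prod": prod_response, "shadow": shadow_response}
    )
    return prod_response  # downstream systems never see the shadow output

# Hypothetical models, purely for illustration
current_model = lambda x: round(x * 0.9, 2)
candidate_model = lambda x: round(x * 0.85, 2)

log = []
result = invoke_with_shadow(10.0, current_model, candidate_model, log)
# result is the production output (9.0); the candidate's 8.5 lands only in the log
```

SageMaker's managed shadow testing applies this pattern at the endpoint level, mirroring a configurable fraction of live traffic to the shadow variant.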
Deploying neural networks at scale on AWS (AIM409)
Last but not least, an advanced session for MLOps experts. This one covers a broad range of AWS services and shows an example implementation of an MLOps platform for multimodal data and large-scale neural network training. If AWS and SageMaker are no strangers to you, you cannot miss this exciting deep dive.
Presented on Thursday, December 1, 1:15 PM - 2:15 PM
There are also workshops, such as AIM308 and STP303-R, you can attend to not only hear about AWS MLOps capabilities but also build them yourself and see them in action.
And as a final word, don’t forget to attend both the Adam Selipsky and Swami Sivasubramanian keynotes! I personally can’t wait to see all the freshly revealed AWS and SageMaker improvements.