Amazon SageMaker Pipelines: model building made easy, deployment not included
...and that's essentially the thing to keep in mind when evaluating this SageMaker service.
SageMaker Pipelines, announced at re:Invent 2020, let you automate all the steps necessary to produce a machine learning model. You take the usual building blocks (such as Processing or Training Jobs) and wrap them in Steps from the sagemaker.workflow package, resulting in a DAG similar to the ones created in technologies such as Kubeflow, Airflow or Prefect.
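A minimal sketch of what such a pipeline definition can look like. The role ARN, S3 URI, script names and pipeline name are all illustrative assumptions, and this uses the classic processor=/estimator= step style rather than the newer PipelineSession one:

```python
from sagemaker.inputs import TrainingInput
from sagemaker.processing import ProcessingInput, ProcessingOutput
from sagemaker.sklearn.estimator import SKLearn
from sagemaker.sklearn.processing import SKLearnProcessor
from sagemaker.workflow.pipeline import Pipeline
from sagemaker.workflow.steps import ProcessingStep, TrainingStep

role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # assumed
input_s3_uri = "s3://my-bucket/raw-data"                        # assumed

# A Processing Job wrapped in a ProcessingStep
processor = SKLearnProcessor(
    framework_version="1.2-1",
    role=role,
    instance_type="ml.m5.xlarge",
    instance_count=1,
)
step_process = ProcessingStep(
    name="PreprocessData",
    processor=processor,
    inputs=[ProcessingInput(source=input_s3_uri, destination="/opt/ml/processing/input")],
    outputs=[ProcessingOutput(output_name="train", source="/opt/ml/processing/train")],
    code="preprocess.py",  # assumed: your preprocessing script
)

# A Training Job wrapped in a TrainingStep; its input references the previous step's output
estimator = SKLearn(
    entry_point="train.py",  # assumed: your training script
    framework_version="1.2-1",
    role=role,
    instance_type="ml.m5.xlarge",
    instance_count=1,
)
step_train = TrainingStep(
    name="TrainModel",
    estimator=estimator,
    inputs={
        "train": TrainingInput(
            s3_data=step_process.properties.ProcessingOutputConfig.Outputs["train"].S3Output.S3Uri
        )
    },
)

# The DAG is inferred from the data dependencies between the steps
pipeline = Pipeline(name="MyModelBuildingPipeline", steps=[step_process, step_train])
pipeline.upsert(role_arn=role)
```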
Afterwards, rebuilding your model takes a single click or a single CLI command. SageMaker Pipelines will automatically spin up all the necessary jobs in the proper order, passing data and artifacts from one step to the next.
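Continuing the sketch above, kicking off a (re)build looks like this:

```python
# Start a new execution of the pipeline defined earlier and wait for it to finish.
execution = pipeline.start()
execution.wait()

# The CLI equivalent:
#   aws sagemaker start-pipeline-execution --pipeline-name MyModelBuildingPipeline
```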
However, their full name is SageMaker Model Building Pipelines (really), meaning they are not meant to provide a deployment mechanism. In most cases, the final step is supposed to be a ModelStep, which registers your model in the SageMaker Model Registry. From there, a CD system such as GitLab CI/CD, GitHub Actions, Jenkins, AWS CodePipeline or similar should pick the model up and deploy it to an environment of your choice.
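A sketch of such a final ModelStep, continuing from the training step above. Note that model.register() only returns step arguments (instead of registering immediately) when the model is built with a PipelineSession; the model package group name is an assumption:

```python
from sagemaker.model import Model
from sagemaker.workflow.model_step import ModelStep
from sagemaker.workflow.pipeline_context import PipelineSession

pipeline_session = PipelineSession()

# Wrap the trained artifacts in a Model bound to a PipelineSession
model = Model(
    image_uri=estimator.training_image_uri(),
    model_data=step_train.properties.ModelArtifacts.S3ModelArtifacts,
    role=role,
    sagemaker_session=pipeline_session,
)

step_register = ModelStep(
    name="RegisterModel",
    step_args=model.register(
        content_types=["text/csv"],
        response_types=["text/csv"],
        inference_instances=["ml.m5.large"],
        transform_instances=["ml.m5.large"],
        model_package_group_name="MyModelPackageGroup",  # assumed group name
        approval_status="PendingManualApproval",
    ),
)
```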
This kind of setup is exactly what you get with SageMaker Projects: the examples provided by AWS use CodePipeline, Jenkins or other third-party providers to perform the deployment.
Two notes. First, this limitation applies only to SageMaker real-time inference: batch inference is covered, because a TransformStep can run a Batch Transform job directly inside the pipeline. Second, there is also a LambdaStep available, allowing you to run arbitrary code. In theory, you could invoke AWS API calls from a Lambda function that deploys your model while staying inside the SageMaker Pipelines workflow, as sketched below. But that is hardly a native solution, rather a workaround. It looks like AWS deliberately left deployment out and does not want you to do it via SageMaker Pipelines.
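For completeness, here is what that workaround could look like. The Lambda ARN is hypothetical, and the function itself (which would call, e.g., create_model and create_endpoint for the model package it receives) is assumed to exist already:

```python
from sagemaker.lambda_helper import Lambda
from sagemaker.workflow.lambda_step import LambdaStep

# Hypothetical, pre-existing Lambda function that deploys the model package
# passed to it (e.g. via create_model / create_endpoint calls).
deploy_lambda = Lambda(
    function_arn="arn:aws:lambda:us-east-1:123456789012:function:deploy-model"
)

step_deploy = LambdaStep(
    name="DeployModelWorkaround",
    lambda_func=deploy_lambda,
    inputs={"model_package_arn": step_register.properties.ModelPackageArn},
)
```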
Fighting the cloud’s design is rarely a good idea.