
Deployment models on AWS SageMaker with ML_DSL

Anna Safonova edited this page Jun 29, 2020 · 1 revision

There are two ways to deploy a model on AWS SageMaker with ML DSL: using the API, or using Jupyter magic functions.

Using API

from com.griddynamics.dsl.ml.executors.executors import SageMakerExecutor
from com.griddynamics.dsl.ml.settings.profiles import SageMakerProfile
from com.griddynamics.dsl.ml.jobs.builder import JobBuilder
from com.griddynamics.dsl.ml.sessions import SessionFactory
from com.griddynamics.dsl.ml.settings.description import Platform
from com.griddynamics.dsl.ml.settings.arguments import Arguments
from sagemaker.pytorch import PyTorchModel

Define a Profile for the job:

profile = SageMakerProfile(bucket='test-bucket', cluster='test-cluster',
                           region='us-east-1', job_prefix='mldsl_test',
                           container=PyTorchModel,       
                           framework_version='1.4.0',         
                           instance_type='ml.m4.xlarge')

Set the model data location and the Python script:

profile.model_data = 's3://sagemaker-.../pytorch-model/model.tar.gz'
script_name = 'pytorch_script.py'
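The entry script referenced above is not shown on this page. A minimal sketch of what it might contain is below, following the SageMaker PyTorch serving convention (`model_fn`, `input_fn`, `predict_fn`); the file name `model.pt` and the payload format are assumptions for illustration, not part of ML DSL:

```python
# Hypothetical minimal pytorch_script.py entry script.
# model_fn/input_fn/predict_fn are the hook names the SageMaker
# PyTorch serving container looks for; everything else is assumed.
import json
import os


def model_fn(model_dir):
    """Load the model from model_dir (SageMaker extracts model.tar.gz here)."""
    import torch  # imported lazily so the script can be inspected without torch
    model = torch.jit.load(os.path.join(model_dir, 'model.pt'))
    model.eval()
    return model


def input_fn(request_body, content_type='application/json'):
    """Deserialize the request payload into a model input tensor."""
    import torch
    data = json.loads(request_body)
    return torch.tensor(data)


def predict_fn(input_data, model):
    """Run inference and return a JSON-serializable result."""
    import torch
    with torch.no_grad():
        return model(input_data).tolist()
```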

Create a Session instance for submitting the deployment job to SageMaker:

session = SessionFactory(platform=Platform.AWS).build_session(
    job_bucket=profile.bucket,
    job_region=profile.region,
    cluster=profile.cluster,
    job_project_id=profile.region,
    ml_region=profile.ai_region)

Create an Executor instance for submitting the deployment job to SageMaker:

executor = SageMakerExecutor(session, profile, mode='deploy', 
                             py_script_name=script_name, args={})

Submit the deployment job; it returns a predictor instance and information about the endpoint:

predictor, response = executor.submit_deploy_model_job()

After deployment, the predictor can be used to run inference against the SageMaker endpoint.
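A sketch of how the returned predictor might be exercised is below. `predictor.predict` and `predictor.delete_endpoint` are standard methods on the SageMaker Python SDK predictor; the helper names and the payload shape are assumptions for illustration and depend on how the entry script deserializes input:

```python
# Sketch: exercising the deployed model through the returned predictor.
# run_inference/tear_down are illustrative helper names, not ML DSL API.
def run_inference(predictor, sample):
    """Send one payload to the endpoint and return the deserialized result."""
    return predictor.predict(sample)


def tear_down(predictor):
    """Delete the endpoint when finished, to stop incurring charges."""
    predictor.delete_endpoint()
```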

Using Magic Functions

from com.griddynamics.dsl.ml.settings.profiles import SageMakerProfile
from com.griddynamics.dsl.ml.settings.description import Platform
from sagemaker.pytorch import PyTorch

Define a Profile for the job:

profile = SageMakerProfile(bucket='test-bucket', cluster='test-cluster',
                           region='us-east-1', job_prefix='mldsl_test',
                           container=PyTorch, root_path='scripts/',
                           framework_version='1.4.0',
                           instance_type='ml.m4.xlarge',
                           use_cloud_engine_credentials='SageMakerRoleMlDsl')

profile.model_data = 's3://sagemaker-.../pytorch-model/model.tar.gz'

Register the profile and set the platform:

Profile.set('SMProfile', profile)
platform = Platform.AWS

Open or load the task script using one of the magic functions %py_script, %py_script_open, or %py_load:

%%py_script_open --name pytorch_script.py --path scripts

Deploy the model using the %py_deploy magic function:

%py_deploy -n pytorch_script.py -p SMProfile -pm $platform