Deploy Model with Conda Runtime
*******************************

Once you have an ADS model object, you can call its ``deploy`` method to deploy the model and generate an endpoint. Here is an example of deploying a LightGBM model:

.. code-block:: python3

    import lightgbm as lgb
    import tempfile
    from ads.common.model_metadata import UseCaseType
    from ads.model.framework.lightgbm_model import LightGBMModel
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split

    # Load the dataset and prepare the train/test split
    iris = load_iris()
    X, y = iris.data, iris.target
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)

    # Train a LightGBM classifier
    train = lgb.Dataset(X_train, label=y_train)
    param = {
        "objective": "multiclass",
        "num_class": 3,
    }
    lightgbm_estimator = lgb.train(param, train)

    # Instantiate ads.model.LightGBMModel using the trained LightGBM model
    lightgbm_model = LightGBMModel(estimator=lightgbm_estimator, artifact_dir=tempfile.mkdtemp())

    # Autogenerate score.py, pickled model, runtime.yaml, input_schema.json and output_schema.json
    lightgbm_model.prepare(
        inference_conda_env="generalml_p38_cpu_v1",
        X_sample=X_train,
        y_sample=y_train,
        use_case_type=UseCaseType.MULTINOMIAL_CLASSIFICATION,
    )

    # Verify the generated artifacts
    lightgbm_model.verify(X_test)

    # Register the LightGBM model
    model_id = lightgbm_model.save()

    # Deploy the LightGBM model
    lightgbm_model.deploy(
        display_name="LightGBM Model",
        deployment_log_group_id="ocid1.loggroup.oc1.xxx.xxxxx",
        deployment_access_log_id="ocid1.log.oc1.xxx.xxxxx",
        deployment_predict_log_id="ocid1.log.oc1.xxx.xxxxx",
        # Shape config details are mandatory for flexible shapes:
        # deployment_instance_shape="VM.Standard.E4.Flex",
        # deployment_ocpus=,
        # deployment_memory_in_gbs=,
    )

    # Get the endpoint of the deployed model
    model_deployment_url = lightgbm_model.model_deployment.url

    # Generate predictions by invoking the deployed endpoint
    lightgbm_model.predict(X_test)["prediction"]

Here is an example of retrieving predictions from the
model deployment endpoint using the OCI CLI:

.. code-block:: bash

    export model_deployment_url=<model_deployment_url>/predict

    oci raw-request --http-method POST \
        --target-uri $model_deployment_url \
        --request-body '{"data": [[5.6, 2.7, 4.2, 1.3]]}'

Find more information about the ``oci raw-request`` command in the OCI CLI documentation.

Deploy
------

.. include:: _template/deploy.rst

Predict
-------

.. include:: _template/predict.rst

Observability
-------------

Use ``tail`` or ``head`` to view logs generated by the model deployment instances:

.. code-block:: python3

    lightgbm_model.model_deployment.logs().tail()

You can also call ``.watch()`` on the model deployment instance to stream the logs:

.. code-block:: python3

    lightgbm_model.model_deployment.watch()

Update Model Deployment
-----------------------

You can update an existing model deployment by using the ``.update_deployment()`` method. See the `API documentation <../../ads.model.html#ads.model.generic_model.GenericModel.update_deployment>`__ for more details.

.. code-block:: python3

    lightgbm_model.update_deployment(
        wait_for_completion=True,
        access_log={
            "log_id": "ocid1.log.oc1.xxx.xxxxx",
        },
        description="Description for Custom Model",
        freeform_tags={"key": "value"},
    )
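The request body sent with ``oci raw-request`` above is plain JSON, so it can be built, and a response parsed, with the Python standard library alone. Here is a minimal sketch; the response text and the class-name mapping are assumptions shown for illustration only, not output from a live deployment:

.. code-block:: python3

    import json

    # A hypothetical feature row, matching the iris example above
    row = [5.6, 2.7, 4.2, 1.3]

    # Build the request body expected by the /predict endpoint
    body = json.dumps({"data": [row]})

    # A response comes back as JSON; the predicted class indices can be
    # mapped to the iris species names (assumed response for illustration)
    response_text = '{"prediction": [1]}'
    prediction = json.loads(response_text)["prediction"]
    target_names = ["setosa", "versicolor", "virginica"]
    labels = [target_names[i] for i in prediction]

This keeps the payload format in one place, so the same body can be sent with the CLI, ``requests``, or any other HTTP client.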