ads.model package#

Subpackages#

Submodules#

ads.model.artifact module#

exception ads.model.artifact.AritfactFolderStructureError(required_files: Tuple[str])[source]#

Bases: Exception

exception ads.model.artifact.ArtifactNestedFolderError(folder: str)[source]#

Bases: Exception

exception ads.model.artifact.ArtifactRequiredFilesError(required_files: Tuple[str])[source]#

Bases: Exception

class ads.model.artifact.ModelArtifact(artifact_dir: str, model_file_name: str | None = None, reload: bool | None = False, ignore_conda_error: bool | None = False, local_copy_dir: str | None = None, auth: dict | None = None)[source]#

Bases: object

The class that represents model artifacts. It is designed to help generate and manage model artifacts.

Initializes a ModelArtifact instance.

Parameters:
  • artifact_dir (str) – The artifact folder to store the files needed for deployment.

  • model_file_name ((str, optional). Defaults to None.) – The file name of the serialized model.

  • reload ((bool, optional). Defaults to False.) – Determines whether to reload the model into the environment.

  • ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore error when collecting conda information.

  • local_copy_dir ((str, optional). Defaults to None.) – The local backup directory for the model artifacts.

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.

Returns:

A ModelArtifact instance.

Return type:

ModelArtifact

Raises:

ValueError – If artifact_dir is not provided.
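
Example

A minimal constructor sketch; the directory and file names below are placeholders, not part of the API:

>>> from ads.model.artifact import ModelArtifact
>>> artifact = ModelArtifact(
...     artifact_dir="/tmp/model_artifacts",   # hypothetical local folder
...     model_file_name="model.pkl",           # hypothetical serialized model name
...     reload=False,
... )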

classmethod from_uri(uri: str, artifact_dir: str, model_file_name: str | None = None, force_overwrite: bool | None = False, auth: Dict | None = None, ignore_conda_error: bool | None = False)[source]#

Constructs a ModelArtifact object from the existing model artifacts.

Parameters:
  • uri (str) – The URI of the source artifact folder or archive. Can be local path or OCI object storage URI.

  • artifact_dir (str) – The local artifact folder to store the files needed for deployment.

  • model_file_name ((str, optional). Defaults to None) – The file name of the serialized model.

  • force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files or not.

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.

Returns:

A ModelArtifact instance

Return type:

ModelArtifact

Raises:

ValueError – If uri is equal to artifact_dir and it does not exist, or if artifact_dir is not provided.
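
Example

A sketch of restoring artifacts from Object Storage; the URI and local path below are placeholders:

>>> from ads.model.artifact import ModelArtifact
>>> artifact = ModelArtifact.from_uri(
...     uri="oci://<bucket_name>@<namespace>/prefix/artifacts.zip",  # placeholder archive location
...     artifact_dir="/tmp/model_artifacts",
...     force_overwrite=True,
... )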

prepare_runtime_yaml(inference_conda_env: str, inference_python_version: str | None = None, training_conda_env: str | None = None, training_python_version: str | None = None, force_overwrite: bool = False, namespace: str = 'id19sfcrra6z', bucketname: str = 'service-conda-packs', auth: dict | None = None, ignore_conda_error: bool = False) None[source]#

Generate a runtime yaml file and save it to the artifact directory.

Parameters:
  • inference_conda_env ((str, optional). Defaults to None.) – The object storage path of conda pack which will be used in deployment. Can be either slug or object storage path of the conda pack. You can only pass in slugs if the conda pack is a service pack.

  • inference_python_version ((str, optional). Defaults to None.) – The python version which will be used in deployment.

  • training_conda_env ((str, optional). Defaults to None.) – The object storage path of conda pack used during training. Can be either slug or object storage path of the conda pack. You can only pass in slugs if the conda pack is a service pack.

  • training_python_version ((str, optional). Defaults to None.) – The python version used during training.

  • force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files.

  • namespace ((str, optional)) – The namespace of the region. Defaults to environment variable CONDA_BUCKET_NS.

  • bucketname ((str, optional)) – The bucket name of the service pack. Defaults to environment variable CONDA_BUCKET_NAME.

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.

Raises:

ValueError – If neither slug nor conda_env_uri is provided.

Returns:

A RuntimeInfo instance.

Return type:

RuntimeInfo
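
Example

A sketch assuming the artifact object from above and a custom conda pack published to Object Storage (the path and python version are placeholders):

>>> artifact.prepare_runtime_yaml(
...     inference_conda_env="oci://<bucket_name>@<namespace>/conda_environments/cpu/mypack/1.0/mypack_v1",
...     inference_python_version="3.8",
...     force_overwrite=True,
... )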

prepare_score_py(jinja_template_filename: str, model_file_name: str | None = None, **kwargs)[source]#

Prepares score.py file.

Parameters:
  • jinja_template_filename (str.) – The jinja template file name.

  • model_file_name ((str, optional). Defaults to None.) – The file name of the serialized model.

  • **kwargs ((dict)) –

    use_torch_script: bool

    data_deserializer: str

Return type:

None

Raises:

ValueError – If model_file_name is not provided.
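
Example

A sketch only; the template name below is hypothetical, and in practice this method is usually driven by GenericModel.prepare() rather than called directly:

>>> artifact.prepare_score_py(
...     jinja_template_filename="score",   # hypothetical template name
...     model_file_name="model.pkl",       # placeholder serialized model name
... )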

reload()[source]#

Syncs the score.py to reload the model and predict function.

Returns:

Nothing

Return type:

None

ads.model.artifact_downloader module#

class ads.model.artifact_downloader.ArtifactDownloader(dsc_model: OCIDataScienceModel, target_dir: str, force_overwrite: bool | None = False)[source]#

Bases: ABC

The abstract class to download model artifacts.

Initializes ArtifactDownloader instance.

Parameters:
  • dsc_model (OCIDataScienceModel) – The data science model instance.

  • target_dir (str) – The target location of the model after download.

  • force_overwrite (bool) – Overwrite target_dir if it exists.

PROGRESS_STEPS_COUNT = 1#
download()[source]#

Downloads model artifacts.

Return type:

None

Raises:

ValueError – If target directory does not exist.

class ads.model.artifact_downloader.LargeArtifactDownloader(dsc_model: OCIDataScienceModel, target_dir: str, auth: Dict | None = None, force_overwrite: bool | None = False, region: str | None = None, bucket_uri: str | None = None, overwrite_existing_artifact: bool | None = True, remove_existing_artifact: bool | None = True)[source]#

Bases: ArtifactDownloader

Initializes LargeArtifactDownloader instance.

Parameters:
  • dsc_model (OCIDataScienceModel) – The data science model instance.

  • target_dir (str) – The target location of the model after download.

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.

  • force_overwrite ((bool, optional). Defaults to False.) – Overwrite target_dir if it exists.

  • region ((str, optional). Defaults to None.) – The destination Object Storage bucket region. By default the value will be extracted from the OCI_REGION_METADATA environment variables.

  • bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for downloading large artifacts whose size is greater than 2GB. Example: oci://<bucket_name>@<namespace>/prefix/.

  • overwrite_existing_artifact ((bool, optional). Defaults to True.) – Overwrite target bucket artifact if exists.

  • remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to the object storage bucket need to be removed or not.

PROGRESS_STEPS_COUNT = 4#
class ads.model.artifact_downloader.SmallArtifactDownloader(dsc_model: OCIDataScienceModel, target_dir: str, force_overwrite: bool | None = False)[source]#

Bases: ArtifactDownloader

Initializes ArtifactDownloader instance.

Parameters:
  • dsc_model (OCIDataScienceModel) – The data science model instance.

  • target_dir (str) – The target location of the model after download.

  • force_overwrite (bool) – Overwrite target_dir if it exists.

PROGRESS_STEPS_COUNT = 3#
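
Example

A download sketch, assuming dsc_model is an OCIDataScienceModel instance obtained elsewhere; the target path is a placeholder:

>>> from ads.model.artifact_downloader import SmallArtifactDownloader
>>> downloader = SmallArtifactDownloader(
...     dsc_model=dsc_model,                     # an existing OCIDataScienceModel instance
...     target_dir="/tmp/downloaded_artifacts",  # placeholder target folder
...     force_overwrite=True,
... )
>>> downloader.download()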

ads.model.artifact_uploader module#

class ads.model.artifact_uploader.ArtifactUploader(dsc_model: OCIDataScienceModel, artifact_path: str)[source]#

Bases: ABC

The abstract class to upload model artifacts.

Initializes ArtifactUploader instance.

Parameters:
  • dsc_model (OCIDataScienceModel) – The data science model instance.

  • artifact_path (str) – The model artifact location.

PROGRESS_STEPS_COUNT = 3#
upload()[source]#

Uploads model artifacts.

class ads.model.artifact_uploader.LargeArtifactUploader(dsc_model: OCIDataScienceModel, artifact_path: str, bucket_uri: str, auth: Dict | None = None, region: str | None = None, overwrite_existing_artifact: bool | None = True, remove_existing_artifact: bool | None = True)[source]#

Bases: ArtifactUploader

Initializes LargeArtifactUploader instance.

Parameters:
  • dsc_model (OCIDataScienceModel) – The data science model instance.

  • artifact_path (str) – The model artifact location.

  • bucket_uri (str) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for uploading large artifacts whose size is greater than 2GB. Example: oci://<bucket_name>@<namespace>/prefix/.

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.

  • region ((str, optional). Defaults to None.) – The destination Object Storage bucket region. By default the value will be extracted from the OCI_REGION_METADATA environment variables.

  • overwrite_existing_artifact ((bool, optional). Defaults to True.) – Overwrite target bucket artifact if exists.

  • remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to the object storage bucket need to be removed or not.

PROGRESS_STEPS_COUNT = 4#
class ads.model.artifact_uploader.SmallArtifactUploader(dsc_model: OCIDataScienceModel, artifact_path: str)[source]#

Bases: ArtifactUploader

Initializes ArtifactUploader instance.

Parameters:
  • dsc_model (OCIDataScienceModel) – The data science model instance.

  • artifact_path (str) – The model artifact location.

PROGRESS_STEPS_COUNT = 1#
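
Example

An upload sketch, assuming dsc_model is an OCIDataScienceModel instance obtained elsewhere; the artifact path is a placeholder:

>>> from ads.model.artifact_uploader import SmallArtifactUploader
>>> uploader = SmallArtifactUploader(
...     dsc_model=dsc_model,                   # an existing OCIDataScienceModel instance
...     artifact_path="/tmp/model_artifacts",  # placeholder local artifact folder
... )
>>> uploader.upload()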

ads.model.base_properties module#

class ads.model.base_properties.BaseProperties[source]#

Bases: Serializable

Represents base properties class.

with_prop(name: str, value: Any) BaseProperties[source]#

Sets property value.

with_dict(obj_dict: Dict) BaseProperties[source]#

Populates properties values from dict.

with_env() BaseProperties[source]#

Populates properties values from environment variables.

to_dict() Dict[source]#

Serializes instance of class into a dictionary.

with_config(config: ads.config.ConfigSection) BaseProperties[source]#

Sets properties values from the config profile.

from_dict(obj_dict: Dict[str, Any]) BaseProperties[source]#

Creates an instance of the properties class from a dictionary.

from_config(uri: str, profile: str, auth: Dict | None = None) BaseProperties[source]#

Loads properties from the config file.

to_config(uri: str, profile: str, force_overwrite: bool | None = False, auth: Dict | None = None) None[source]#

Saves properties to the config file.

classmethod from_config(uri: str, profile: str, auth: Dict | None = None) BaseProperties[source]#

Loads properties from the config file.

Parameters:
  • uri (str) – The URI of the config file. Can be local path or OCI object storage URI.

  • profile (str) – The config profile name.

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.

Returns:

Instance of the BaseProperties.

Return type:

BaseProperties

classmethod from_dict(obj_dict: Dict[str, Any]) BaseProperties[source]#

Creates an instance of the properties class from a dictionary.

Parameters:

obj_dict (Dict[str, Any]) – List of properties and values in dictionary format.

Returns:

Instance of the BaseProperties.

Return type:

BaseProperties

to_config(uri: str, profile: str, force_overwrite: bool | None = False, auth: Dict | None = None) None[source]#

Saves properties to the config file.

Parameters:
  • uri (str) – The URI of the config file. Can be local path or OCI object storage URI.

  • profile (str) – The config profile name.

  • force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files or not.

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.

Returns:

Nothing

Return type:

None

to_dict(**kwargs)[source]#

Serializes instance of class into a dictionary.

Returns:

A dictionary.

Return type:

Dict

with_config(config: ConfigSection) BaseProperties[source]#

Sets properties values from the config profile.

Returns:

Instance of the BaseProperties.

Return type:

BaseProperties

with_dict(obj_dict: Dict[str, Any]) BaseProperties[source]#

Sets properties from a dict.

Parameters:

obj_dict (Dict[str, Any]) – List of properties and values in dictionary format.

Returns:

Instance of the BaseProperties.

Return type:

BaseProperties

Raises:

TypeError – If input object has a wrong type.

with_env() BaseProperties[source]#

Sets properties values from environment variables.

Returns:

Instance of the BaseProperties.

Return type:

BaseProperties

with_prop(name: str, value: Any) BaseProperties[source]#

Sets property value.

Parameters:
  • name (str) – Property name.

  • value – Property value.

Returns:

Instance of the BaseProperties.

Return type:

BaseProperties
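
Example

A sketch of the chained builder pattern, using ModelProperties (a BaseProperties subclass) for illustration; the values are placeholders:

>>> from ads.model.model_properties import ModelProperties
>>> props = (
...     ModelProperties()
...     .with_dict({"inference_conda_env": "oci://<bucket_name>@<namespace>/path/to/conda_pack"})
...     .with_env()                            # overlay values found in environment variables
...     .with_prop("inference_python_version", "3.8")
... )
>>> props.to_dict()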

ads.model.generic_model module#

class ads.model.generic_model.DataScienceModelType[source]#

Bases: str

MODEL = 'datasciencemodel'#
MODEL_DEPLOYMENT = 'datasciencemodeldeployment'#
class ads.model.generic_model.FrameworkSpecificModel(estimator: Callable | None = None, artifact_dir: str | None = None, properties: ModelProperties | None = None, auth: Dict | None = None, serialize: bool = True, model_save_serializer: SERDE | None = None, model_input_serializer: SERDE | None = None, **kwargs: dict)[source]#

Bases: GenericModel

GenericModel Constructor.

Parameters:
  • estimator ((Callable).) – Trained model.

  • artifact_dir ((str, optional). Defaults to None.) – Artifact directory to store the files needed for deployment.

  • properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.

  • serialize ((bool, optional). Defaults to True.) – Whether to serialize the model to pkl file by default. If False, you need to serialize the model manually, save it under artifact_dir and update the score.py manually.

  • model_save_serializer ((SERDE or str, optional). Defaults to None.) – Instance of ads.model.SERDE. Used for serialize/deserialize model.

  • model_input_serializer ((SERDE or str, optional). Defaults to None.) – Instance of ads.model.SERDE. Used for serialize/deserialize model input.

predict(data: Any | None = None, auto_serialize_data: bool = True, **kwargs) Dict[str, Any][source]#

Returns prediction of input data run against the model deployment endpoint.

Examples

>>> uri = "https://github.com/pytorch/hub/raw/master/images/dog.jpg"
>>> prediction = model.predict(image=uri)['prediction']
>>> # examples on storage options
>>> prediction = model.predict(
...        image="oci://<bucket>@<tenancy>/myimage.png",
...        storage_options=ads.auth.default_signer()
... )['prediction']
Parameters:
  • data (Any) – Data for the prediction. For ONNX models and the local serialization method, data can be any of the data types that each framework supports.

  • auto_serialize_data (bool.) – Whether to auto serialize input data. Defaults to False for GenericModel, and True for other frameworks. data is required to be JSON serializable if auto_serialize_data=False. If auto_serialize_data is set to True, data will be serialized before being sent to the model deployment endpoint.

  • kwargs

    content_type: str, used to indicate the media type of the resource.

    image: PIL.Image Object or uri for the image.

    A valid string path for image file can be local path, http(s), oci, s3, gs.

    storage_options: dict

    Passed to fsspec.open for a particular storage connection. Please see fsspec (https://filesystem-spec.readthedocs.io/en/latest/api.html#fsspec.open) for more details.

Returns:

Dictionary with the predicted values.

Return type:

Dict[str, Any]

Raises:
  • NotActiveDeploymentError – If model deployment process was not started or not finished yet.

  • ValueError – If model is not deployed yet or the endpoint information is not available.

verify(data: Any | None = None, reload_artifacts: bool = True, auto_serialize_data: bool = True, **kwargs) Dict[str, Any][source]#

Test if deployment works in local environment.

Examples

>>> uri = "https://github.com/pytorch/hub/raw/master/images/dog.jpg"
>>> prediction = model.verify(image=uri)['prediction']
>>> # examples on storage options
>>> prediction = model.verify(
...        image="oci://<bucket>@<tenancy>/myimage.png",
...        storage_options=ads.auth.default_signer()
... )['prediction']
Parameters:
  • data (Any) – Data used to test if deployment works in local environment.

  • reload_artifacts (bool. Defaults to True.) – Whether to reload artifacts or not.

  • auto_serialize_data (bool.) – Whether to auto serialize input data. Defaults to False for GenericModel, and True for other frameworks. data is required to be JSON serializable if auto_serialize_data=False. If auto_serialize_data is set to True, data will be serialized before being sent to the model deployment endpoint.

  • kwargs

    content_type: str, used to indicate the media type of the resource.

    image: PIL.Image Object or uri for the image.

    A valid string path for image file can be local path, http(s), oci, s3, gs.

    storage_options: dict

    Passed to fsspec.open for a particular storage connection. Please see fsspec (https://filesystem-spec.readthedocs.io/en/latest/api.html#fsspec.open) for more details.

Returns:

A dictionary which contains prediction results.

Return type:

Dict

class ads.model.generic_model.GenericModel(estimator: Callable | None = None, artifact_dir: str | None = None, properties: ModelProperties | None = None, auth: Dict | None = None, serialize: bool = True, model_save_serializer: SERDE | None = None, model_input_serializer: SERDE | None = None, **kwargs: dict)[source]#

Bases: MetadataMixin, Introspectable, EvaluatorMixin

Generic Model class which is the base class for all the frameworks including the unsupported frameworks.

algorithm#

The algorithm of the model.

Type:

str

artifact_dir#

Artifact directory to store the files needed for deployment.

Type:

str

auth#

Default authentication is set using the ads.set_auth API. To override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create an authentication signer to instantiate an IdentityClient object.

Type:

Dict

estimator#

Any model object generated by sklearn framework

Type:

Callable

framework#

The framework of the model.

Type:

str

hyperparameter#

The hyperparameters of the estimator.

Type:

dict

metadata_custom#

The model custom metadata.

Type:

ModelCustomMetadata

metadata_provenance#

The model provenance metadata.

Type:

ModelProvenanceMetadata

metadata_taxonomy#

The model taxonomy metadata.

Type:

ModelTaxonomyMetadata

model_artifact#

This is built by calling prepare.

Type:

ModelArtifact

model_deployment#

A ModelDeployment instance.

Type:

ModelDeployment

model_file_name#

Name of the serialized model.

Type:

str

model_id#

The model ID.

Type:

str

model_input_serializer#

Instance of ads.model.SERDE. Used for serialize/deserialize data.

Type:

SERDE

properties#

ModelProperties object required to save and deploy model.

Type:

ModelProperties

runtime_info#

A RuntimeInfo instance.

Type:

RuntimeInfo

schema_input#

Schema describes the structure of the input data.

Type:

Schema

schema_output#

Schema describes the structure of the output data.

Type:

Schema

serialize#

Whether to serialize the model to pkl file by default. If False, you need to serialize the model manually, save it under artifact_dir and update the score.py manually.

Type:

bool

version#

The framework version of the model.

Type:

str

delete_deployment(...)[source]#

Deletes the current model deployment.

deploy(..., **kwargs)[source]#

Deploys a model.

from_model_artifact(uri, ..., **kwargs)[source]#

Loads model from the specified folder, or zip/tar archive.

from_model_catalog(model_id, ..., **kwargs)[source]#

Loads model from model catalog.

from_model_deployment(model_deployment_id, ..., **kwargs)[source]#

Loads model from model deployment.

update_deployment(model_deployment_id, ..., **kwargs)[source]#

Updates a model deployment.

from_id(ocid, ..., **kwargs)[source]#

Loads model from model OCID or model deployment OCID.

introspect(...)[source]#

Runs model introspection.

predict(data, ...)[source]#

Returns prediction of input data run against the model deployment endpoint.

prepare(..., **kwargs)[source]#

Prepare and save the score.py, serialized model and runtime.yaml file.

prepare_save_deploy(..., **kwargs)[source]#

Shortcut for prepare, save and deploy steps.

reload(...)[source]#

Reloads the model artifact files: score.py and the runtime.yaml.

restart_deployment(...)[source]#

Restarts the model deployment.

save(..., **kwargs)[source]#

Saves model artifacts to the model catalog.

set_model_input_serializer(serde)[source]#

Registers serializer used for serializing data passed in verify/predict.

summary_status(...)[source]#

Gets a summary table of the current status.

verify(data, ...)[source]#

Tests if deployment works in local environment.

upload_artifact(...)[source]#

Uploads model artifacts to the provided uri.

Examples

>>> import tempfile
>>> from ads.model.generic_model import GenericModel
>>> class Toy:
...     def predict(self, x):
...         return x ** 2
>>> estimator = Toy()
>>> model = GenericModel(estimator=estimator, artifact_dir=tempfile.mkdtemp())
>>> model.summary_status()
>>> model.prepare(
...     inference_conda_env="dbexp_p38_cpu_v1",
...     inference_python_version="3.8",
...     model_file_name="toy_model.pkl",
...     training_id=None,
...     force_overwrite=True
... )
>>> model.verify(2)
>>> model.save()
>>> model.deploy()
>>> # Update access log id, freeform tags and description for the model deployment
>>> model.update_deployment(
...     properties=ModelDeploymentProperties(
...         access_log_id=<log_ocid>,
...         description="Description for Custom Model",
...         freeform_tags={"key": "value"},
...     )
... )
>>> model.predict(2)
>>> # Uncomment the line below to delete the model and the associated model deployment
>>> # model.delete(delete_associated_model_deployment = True)

GenericModel Constructor.

Parameters:
  • estimator ((Callable).) – Trained model.

  • artifact_dir ((str, optional). Defaults to None.) – Artifact directory to store the files needed for deployment.

  • properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.

  • serialize ((bool, optional). Defaults to True.) – Whether to serialize the model to pkl file by default. If False, you need to serialize the model manually, save it under artifact_dir and update the score.py manually.

  • model_save_serializer ((SERDE or str, optional). Defaults to None.) – Instance of ads.model.SERDE. Used for serialize/deserialize model.

  • model_input_serializer ((SERDE or str, optional). Defaults to None.) – Instance of ads.model.SERDE. Used for serialize/deserialize model input.

classmethod delete(model_id: str | None = None, delete_associated_model_deployment: bool | None = False, delete_model_artifact: bool | None = False, artifact_dir: str | None = None, **kwargs: Dict) None[source]#

Deletes a model from Model Catalog.

Parameters:
  • model_id ((str, optional). Defaults to None.) – The model OCID to be deleted. If the method is called at the instance level, self.model_id will be used.

  • delete_associated_model_deployment ((bool, optional). Defaults to False.) – Whether associated model deployments need to be deleted or not.

  • delete_model_artifact ((bool, optional). Defaults to False.) – Whether associated model artifacts need to be deleted or not.

  • artifact_dir ((str, optional). Defaults to None) – The local path to the model artifacts folder. If the method is called at the instance level, self.artifact_dir will be used by default.

Return type:

None

Raises:

ValueError – If model_id is not provided.
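
Example

A sketch of deleting a model by OCID; the OCID below is a placeholder:

>>> from ads.model.generic_model import GenericModel
>>> GenericModel.delete(
...     model_id="ocid1.datasciencemodel.oc1..<unique_id>",  # placeholder model OCID
...     delete_associated_model_deployment=True,
... )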

delete_deployment(wait_for_completion: bool = True) None[source]#

Deletes the current deployment.

Parameters:

wait_for_completion ((bool, optional). Defaults to True.) – Whether to wait till completion.

Return type:

None

Raises:

ValueError – If there is no deployment attached yet.

deploy(wait_for_completion: bool | None = True, display_name: str | None = None, description: str | None = None, deployment_instance_shape: str | None = None, deployment_instance_subnet_id: str | None = None, deployment_instance_count: int | None = None, deployment_bandwidth_mbps: int | None = None, deployment_log_group_id: str | None = None, deployment_access_log_id: str | None = None, deployment_predict_log_id: str | None = None, deployment_memory_in_gbs: float | None = None, deployment_ocpus: float | None = None, deployment_image: str | None = None, **kwargs: Dict) ModelDeployment[source]#

Deploys a model. The model needs to be saved to the model catalog first. You can deploy the model on either conda or container runtime. The customized runtime allows you to bring your own service container. To deploy the model on container runtime, make sure to build the container and push it to OCIR. For more information, see https://docs.oracle.com/en-us/iaas/data-science/using/mod-dep-byoc.htm.

Example

>>> # This is an example to deploy model on container runtime
>>> model = GenericModel(estimator=estimator, artifact_dir=tempfile.mkdtemp())
>>> model.summary_status()
>>> model.prepare(
...     model_file_name="toy_model.pkl",
...     ignore_conda_error=True, # set ignore_conda_error=True for container runtime
...     force_overwrite=True
... )
>>> model.verify()
>>> model.save()
>>> model.deploy(
...     deployment_image="iad.ocir.io/<namespace>/<image>:<tag>",
...     entrypoint=["python", "/opt/ds/model/deployed_model/api.py"],
...     server_port=5000,
...     health_check_port=5000,
...     environment_variables={"key":"value"}
... )
Parameters:
  • wait_for_completion ((bool, optional). Defaults to True.) – Flag set for whether to wait for deployment to complete before proceeding.

  • display_name ((str, optional). Defaults to None.) – The name of the model. If a display_name is not provided in kwargs, a randomly generated, easy-to-remember name with a timestamp will be used, like ‘strange-spider-2022-08-17-23:55.02’.

  • description ((str, optional). Defaults to None.) – The description of the model.

  • deployment_instance_shape ((str, optional). Default to VM.Standard2.1.) – The shape of the instance used for deployment.

  • deployment_instance_subnet_id ((str, optional). Default to None.) – The subnet id of the instance used for deployment.

  • deployment_instance_count ((int, optional). Defaults to 1.) – The number of instances used for deployment.

  • deployment_bandwidth_mbps ((int, optional). Defaults to 10.) – The bandwidth limit on the load balancer in Mbps.

  • deployment_memory_in_gbs ((float, optional). Defaults to None.) – Specifies the size of the memory of the model deployment instance in GBs.

  • deployment_ocpus ((float, optional). Defaults to None.) – Specifies the ocpus count of the model deployment instance.

  • deployment_log_group_id ((str, optional). Defaults to None.) – The oci logging group id. The access log and predict log share the same log group.

  • deployment_access_log_id ((str, optional). Defaults to None.) – The access log OCID for the access logs. https://docs.oracle.com/en-us/iaas/data-science/using/model_dep_using_logging.htm

  • deployment_predict_log_id ((str, optional). Defaults to None.) – The predict log OCID for the predict logs. https://docs.oracle.com/en-us/iaas/data-science/using/model_dep_using_logging.htm

  • deployment_image ((str, optional). Defaults to None.) – The OCIR path of docker container image. Required for deploying model on container runtime.

  • kwargs

    project_id: (str, optional).

    Project OCID. If not specified, the value will be taken from the environment variables.

    compartment_id(str, optional).

    Compartment OCID. If not specified, the value will be taken from the environment variables.

    max_wait_time(int, optional). Defaults to 1200 seconds.

    Maximum amount of time to wait in seconds. Negative implies infinite wait time.

    poll_interval(int, optional). Defaults to 10 seconds.

    Poll interval in seconds.

    freeform_tags: (Dict[str, str], optional). Defaults to None.

    Freeform tags of the model deployment.

    defined_tags: (Dict[str, dict[str, object]], optional). Defaults to None.

    Defined tags of the model deployment.

    image_digest: (str, optional). Defaults to None.

    The digest of docker container image.

    cmd: (List, optional). Defaults to empty.

    The command line arguments for running docker container image.

    entrypoint: (List, optional). Defaults to empty.

    The entrypoint for running docker container image.

    server_port: (int, optional). Defaults to 8080.

    The server port for docker container image.

    health_check_port: (int, optional). Defaults to 8080.

    The health check port for docker container image.

    deployment_mode: (str, optional). Defaults to HTTPS_ONLY.

    The deployment mode. Allowed values are: HTTPS_ONLY and STREAM_ONLY.

    input_stream_ids: (List, optional). Defaults to empty.

    The input stream ids. Required for STREAM_ONLY mode.

    output_stream_ids: (List, optional). Defaults to empty.

    The output stream ids. Required for STREAM_ONLY mode.

    environment_variables: (Dict, optional). Defaults to empty.

    The environment variables for model deployment.

    Also can be any keyword argument for initializing the ads.model.deployment.ModelDeploymentProperties. See ads.model.deployment.ModelDeploymentProperties() for details.

Returns:

The ModelDeployment instance.

Return type:

ModelDeployment

Raises:

ValueError – If model_id is not specified.

classmethod from_id(ocid: str, model_file_name: str | None = None, artifact_dir: str | None = None, auth: Dict | None = None, force_overwrite: bool | None = False, properties: ModelProperties | Dict | None = None, bucket_uri: str | None = None, remove_existing_artifact: bool | None = True, ignore_conda_error: bool | None = False, **kwargs) Self[source]#

Loads model from model OCID or model deployment OCID.

Parameters:
  • ocid (str) – The model OCID or model deployment OCID.

  • model_file_name ((str, optional). Defaults to None.) – The name of the serialized model.

  • artifact_dir ((str, optional). Defaults to None.) – The artifact directory to store the files needed for deployment. Will be created if not exists.

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.

  • force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files or not.

  • properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.

  • bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for downloading large artifacts whose size is greater than 2GB. Example: oci://<bucket_name>@<namespace>/prefix/.

  • remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to the object storage bucket need to be removed or not.

  • kwargs

    compartment_id(str, optional)

    Compartment OCID. If not specified, the value will be taken from the environment variables.

    timeout(int, optional). Defaults to 10 seconds.

    The connection timeout in seconds for the client.

Returns:

An instance of GenericModel class.

Return type:

Self
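
Example

A sketch of restoring a model from its OCID; the OCID and local directory below are placeholders:

>>> model = GenericModel.from_id(
...     ocid="ocid1.datasciencemodel.oc1..<unique_id>",  # a model deployment OCID also works here
...     artifact_dir="/tmp/restored_artifacts",
...     force_overwrite=True,
... )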

classmethod from_model_artifact(uri: str, model_file_name: str | None = None, artifact_dir: str | None = None, auth: Dict | None = None, force_overwrite: bool | None = False, properties: ModelProperties | None = None, ignore_conda_error: bool | None = False, **kwargs: dict) Self[source]#

Loads model from a folder, or zip/tar archive.

Parameters:
  • uri (str) – The folder path, ZIP file path, or TAR file path. It can contain a serialized model (required) as well as any other files needed for deployment, such as runtime.yaml and score.py. The content of the folder will be copied to the artifact_dir folder.

  • model_file_name ((str, optional). Defaults to None.) – The serialized model file name. Will be extracted from artifacts if not provided.

  • artifact_dir ((str, optional). Defaults to None.) – The artifact directory to store the files needed for deployment. Will be created if not exists.

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.

  • force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files or not.

  • properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.

  • ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore error when collecting conda information.

Returns:

An instance of GenericModel class.

Return type:

Self

Raises:

ValueError – If model_file_name is not provided.

classmethod from_model_catalog(model_id: str, model_file_name: str | None = None, artifact_dir: str | None = None, auth: Dict | None = None, force_overwrite: bool | None = False, properties: ModelProperties | Dict | None = None, bucket_uri: str | None = None, remove_existing_artifact: bool | None = True, ignore_conda_error: bool | None = False, **kwargs) Self[source]#

Loads model from model catalog.

Parameters:
  • model_id (str) – The model OCID.

  • model_file_name ((str, optional). Defaults to None.) – The name of the serialized model.

  • artifact_dir ((str, optional). Defaults to None.) – The artifact directory to store the files needed for deployment. Will be created if not exists.

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.

  • force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files or not.

  • properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.

  • bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for downloading large artifacts whose size is greater than 2GB. Example: oci://<bucket_name>@<namespace>/prefix/.

  • remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to the object storage bucket need to be removed or not.

  • ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore error when collecting conda information.

  • kwargs

    compartment_id(str, optional)

    Compartment OCID. If not specified, the value will be taken from the environment variables.

    timeout(int, optional). Defaults to 10 seconds.

    The connection timeout in seconds for the client.

    region: (str, optional). Defaults to None.

    The destination Object Storage bucket region. By default the value will be extracted from the OCI_REGION_METADATA environment variables.

Returns:

An instance of GenericModel class.

Return type:

Self
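
Example

A sketch of loading from the model catalog; the OCID, directory, and bucket URI are placeholders, and bucket_uri is only needed for artifacts larger than 2GB:

>>> model = GenericModel.from_model_catalog(
...     model_id="ocid1.datasciencemodel.oc1..<unique_id>",
...     artifact_dir="/tmp/restored_artifacts",
...     bucket_uri="oci://<bucket_name>@<namespace>/prefix/",
...     force_overwrite=True,
... )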

classmethod from_model_deployment(model_deployment_id: str, model_file_name: str | None = None, artifact_dir: str | None = None, auth: Dict | None = None, force_overwrite: bool | None = False, properties: ModelProperties | Dict | None = None, bucket_uri: str | None = None, remove_existing_artifact: bool | None = True, ignore_conda_error: bool | None = False, **kwargs) Self[source]#

Loads model from model deployment.

Parameters:
  • model_deployment_id (str) – The model deployment OCID.

  • model_file_name ((str, optional). Defaults to None.) – The name of the serialized model.

  • artifact_dir ((str, optional). Defaults to None.) – The artifact directory to store the files needed for deployment. Will be created if not exists.

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.

  • force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files or not.

  • properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.

  • bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for downloading large artifacts whose size is greater than 2GB. Example: oci://<bucket_name>@<namespace>/prefix/.

  • remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to the object storage bucket need to be removed or not.

  • ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore error when collecting conda information.

  • kwargs

    compartment_id(str, optional)

    Compartment OCID. If not specified, the value will be taken from the environment variables.

    timeout(int, optional). Defaults to 10 seconds.

    The connection timeout in seconds for the client.

    region: (str, optional). Defaults to None.

    The destination Object Storage bucket region. By default the value will be extracted from the OCI_REGION_METADATA environment variables.

Returns:

An instance of GenericModel class.

Return type:

Self

get_data_serializer()[source]#

Gets data serializer.

Returns:

object

Return type:

ads.model.Serializer object.

get_model_serializer()[source]#

Gets model serializer.

introspect() DataFrame[source]#

Conducts introspection.

Returns:

A pandas DataFrame which contains the introspection results.

Return type:

pandas.DataFrame

property metadata_custom#
property metadata_provenance#
property metadata_taxonomy#
property model_deployment_id#
property model_id#
model_input_serializer_type#

alias of ModelInputSerializerType

model_save_serializer_type#

alias of ModelSerializerType

predict(data: Any | None = None, auto_serialize_data: bool = False, local: bool = False, **kwargs) Dict[str, Any][source]#

Returns prediction of input data run against the model deployment endpoint.

Examples

>>> uri = "https://github.com/pytorch/hub/raw/master/images/dog.jpg"
>>> prediction = model.predict(image=uri)['prediction']
>>> # examples on storage options
>>> prediction = model.predict(
...        image="oci://<bucket>@<tenancy>/myimage.png",
...        storage_options=ads.auth.default_signer()
... )['prediction']
Parameters:
  • data (Any) – Data for the prediction. For ONNX models and the local serialization method, data can be any of the data types that each framework supports.

  • auto_serialize_data (bool.) – Whether to auto serialize input data. Defaults to False for GenericModel, and True for other frameworks. data is required to be JSON serializable if auto_serialize_data=False. If auto_serialize_data is set to True, data will be serialized before being sent to the model deployment endpoint.

  • local (bool.) – Whether to invoke the prediction locally. Default to False.

  • kwargs

    content_type: str, used to indicate the media type of the resource.

    image: PIL.Image Object or uri for the image.

    A valid string path for image file can be local path, http(s), oci, s3, gs.

    storage_options: dict

    Passed to fsspec.open for a particular storage connection. Please see fsspec (https://filesystem-spec.readthedocs.io/en/latest/api.html#fsspec.open) for more details.

Returns:

Dictionary with the predicted values.

Return type:

Dict[str, Any]

Raises:
  • NotActiveDeploymentError – If model deployment process was not started or not finished yet.

  • ValueError – If model is not deployed yet or the endpoint information is not available.

prepare(inference_conda_env: str | None = None, inference_python_version: str | None = None, training_conda_env: str | None = None, training_python_version: str | None = None, model_file_name: str | None = None, as_onnx: bool = False, initial_types: List[Tuple] | None = None, force_overwrite: bool = False, namespace: str = 'id19sfcrra6z', use_case_type: str | None = None, X_sample: list | tuple | DataFrame | Series | ndarray | None = None, y_sample: list | tuple | DataFrame | Series | ndarray | None = None, training_script_path: str | None = None, training_id: str | None = None, ignore_pending_changes: bool = True, max_col_num: int = 2000, ignore_conda_error: bool = False, score_py_uri: str | None = None, **kwargs: Dict) GenericModel[source]#

Prepare and save the score.py, serialized model and runtime.yaml file.

Parameters:
  • inference_conda_env ((str, optional). Defaults to None.) – Can be either slug or object storage path of the conda pack. You can only pass in slugs if the conda pack is a service pack.

  • inference_python_version ((str, optional). Defaults to None.) – Python version which will be used in deployment.

  • training_conda_env ((str, optional). Defaults to None.) – Can be either slug or object storage path of the conda pack. You can only pass in slugs if the conda pack is a service pack. If training_conda_env is not provided, it defaults to the value of inference_conda_env.

  • training_python_version ((str, optional). Defaults to None.) – Python version used during training.

  • model_file_name ((str, optional). Defaults to None.) – Name of the serialized model. Will be auto generated if not provided.

  • as_onnx ((bool, optional). Defaults to False.) – Whether to serialize as onnx model.

  • initial_types ((list[Tuple], optional).) – Defaults to None. Only used for SklearnModel, LightGBMModel and XGBoostModel. Each element is a tuple of a variable name and a type. Check this link http://onnx.ai/sklearn-onnx/api_summary.html#id2 for more explanation and examples for initial_types.

  • force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files.

  • namespace ((str, optional).) – Namespace of region. This is used for identifying which region the service pack is from when you pass a slug to inference_conda_env and training_conda_env.

  • use_case_type (str) – The use case type of the model. Use it through the UseCaseType class or a string provided in UseCaseType. For example, use_case_type=UseCaseType.BINARY_CLASSIFICATION or use_case_type=”binary_classification”. Check with the UseCaseType class to see all supported types.

  • X_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of input data that will be used to generate input schema.

  • y_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of output data that will be used to generate output schema.

  • training_script_path (str. Defaults to None.) – Training script path.

  • training_id ((str, optional). Defaults to value from environment variables.) – The training OCID for model. Can be notebook session or job OCID.

  • ignore_pending_changes (bool. Defaults to True.) – Whether to ignore the pending changes in the git repository.

  • max_col_num ((int, optional). Defaults to utils.DATA_SCHEMA_MAX_COL_NUM.) – Do not generate the input schema if the input has more than this number of features(columns).

  • ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore error when collecting conda information.

  • score_py_uri ((str, optional). Defaults to None.) – The URI of the customized score.py, which can be a local path or OCI object storage URI. When provided with this attribute, the score.py will not be auto generated, and the provided score.py will be added into artifact_dir.

  • kwargs

    impute_values: (dict, optional).

    The dictionary where the key is the column index (or column name, for a pandas dataframe) and the value is the impute value for the corresponding column.

Raises:
  • FileExistsError – If files already exist but force_overwrite is False.

  • ValueError – If inference_python_version is not provided and cannot be found through the manifest file.

Returns:

An instance of GenericModel class.

Return type:

GenericModel

prepare_save_deploy(inference_conda_env: str | None = None, inference_python_version: str | None = None, training_conda_env: str | None = None, training_python_version: str | None = None, model_file_name: str | None = None, as_onnx: bool = False, initial_types: List[Tuple] | None = None, force_overwrite: bool = False, namespace: str = 'id19sfcrra6z', use_case_type: str | None = None, X_sample: list | tuple | DataFrame | Series | ndarray | None = None, y_sample: list | tuple | DataFrame | Series | ndarray | None = None, training_script_path: str | None = None, training_id: str | None = None, ignore_pending_changes: bool = True, max_col_num: int = 2000, ignore_conda_error: bool = False, model_display_name: str | None = None, model_description: str | None = None, model_freeform_tags: dict | None = None, model_defined_tags: dict | None = None, ignore_introspection: bool | None = False, wait_for_completion: bool | None = True, deployment_display_name: str | None = None, deployment_description: str | None = None, deployment_instance_shape: str | None = None, deployment_instance_subnet_id: str | None = None, deployment_instance_count: int | None = None, deployment_bandwidth_mbps: int | None = None, deployment_log_group_id: str | None = None, deployment_access_log_id: str | None = None, deployment_predict_log_id: str | None = None, deployment_memory_in_gbs: float | None = None, deployment_ocpus: float | None = None, deployment_image: str | None = None, bucket_uri: str | None = None, overwrite_existing_artifact: bool | None = True, remove_existing_artifact: bool | None = True, model_version_set: str | ModelVersionSet | None = None, version_label: str | None = None, **kwargs: Dict) ModelDeployment[source]#

Shortcut for prepare, save and deploy steps.

Parameters:
  • inference_conda_env ((str, optional). Defaults to None.) – Can be either slug or object storage path of the conda pack. You can only pass in slugs if the conda pack is a service pack.

  • inference_python_version ((str, optional). Defaults to None.) – Python version which will be used in deployment.

  • training_conda_env ((str, optional). Defaults to None.) – Can be either slug or object storage path of the conda pack. You can only pass in slugs if the conda pack is a service pack. If training_conda_env is not provided, it defaults to the value of inference_conda_env.

  • training_python_version ((str, optional). Defaults to None.) – Python version used during training.

  • model_file_name ((str, optional). Defaults to None.) – Name of the serialized model.

  • as_onnx ((bool, optional). Defaults to False.) – Whether to serialize as onnx model.

  • initial_types ((list[Tuple], optional).) – Defaults to None. Only used for SklearnModel, LightGBMModel and XGBoostModel. Each element is a tuple of a variable name and a type. Check this link http://onnx.ai/sklearn-onnx/api_summary.html#id2 for more explanation and examples for initial_types.

  • force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files.

  • namespace ((str, optional).) – Namespace of region. This is used for identifying which region the service pack is from when you pass a slug to inference_conda_env and training_conda_env.

  • use_case_type (str) – The use case type of the model. Use it through the UseCaseType class or a string provided in UseCaseType. For example, use_case_type=UseCaseType.BINARY_CLASSIFICATION or use_case_type=”binary_classification”. Check with the UseCaseType class to see all supported types.

  • X_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of input data that will be used to generate input schema.

  • y_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of output data that will be used to generate output schema.

  • training_script_path (str. Defaults to None.) – Training script path.

  • training_id ((str, optional). Defaults to value from environment variables.) – The training OCID for model. Can be notebook session or job OCID.

  • ignore_pending_changes (bool. Defaults to True.) – Whether to ignore the pending changes in the git repository.

  • max_col_num ((int, optional). Defaults to utils.DATA_SCHEMA_MAX_COL_NUM.) – Do not generate the input schema if the input has more than this number of features(columns).

  • ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore error when collecting conda information.

  • model_display_name ((str, optional). Defaults to None.) – The name of the model. If a model_display_name is not provided in kwargs, a randomly generated, easy-to-remember name with a timestamp will be used, like ‘strange-spider-2022-08-17-23:55.02’.

  • model_description ((str, optional). Defaults to None.) – The description of the model.

  • model_freeform_tags (Dict(str, str), Defaults to None.) – Freeform tags for the model.

  • model_defined_tags ((Dict(str, dict(str, object)), optional). Defaults to None.) – Defined tags for the model.

  • ignore_introspection ((bool, optional). Defaults to None.) – Determine whether to ignore the result of model introspection or not. If set to True, the save will ignore all model introspection errors.

  • wait_for_completion ((bool, optional). Defaults to True.) – Flag set for whether to wait for deployment to complete before proceeding.

  • deployment_display_name ((str, optional). Defaults to None.) – The name of the model deployment. If a deployment_display_name is not provided in kwargs, a randomly generated, easy-to-remember name with a timestamp will be used, like ‘strange-spider-2022-08-17-23:55.02’.

  • deployment_description ((str, optional). Defaults to None.) – The description of the model deployment.

  • deployment_instance_shape ((str, optional). Default to VM.Standard2.1.) – The shape of the instance used for deployment.

  • deployment_instance_subnet_id ((str, optional). Default to None.) – The subnet id of the instance used for deployment.

  • deployment_instance_count ((int, optional). Defaults to 1.) – The number of instances used for deployment.

  • deployment_bandwidth_mbps ((int, optional). Defaults to 10.) – The bandwidth limit on the load balancer in Mbps.

  • deployment_log_group_id ((str, optional). Defaults to None.) – The oci logging group id. The access log and predict log share the same log group.

  • deployment_access_log_id ((str, optional). Defaults to None.) – The access log OCID for the access logs. https://docs.oracle.com/en-us/iaas/data-science/using/model_dep_using_logging.htm

  • deployment_predict_log_id ((str, optional). Defaults to None.) – The predict log OCID for the predict logs. https://docs.oracle.com/en-us/iaas/data-science/using/model_dep_using_logging.htm

  • deployment_memory_in_gbs ((float, optional). Defaults to None.) – Specifies the size of the memory of the model deployment instance in GBs.

  • deployment_ocpus ((float, optional). Defaults to None.) – Specifies the ocpus count of the model deployment instance.

  • deployment_image ((str, optional). Defaults to None.) – The OCIR path of docker container image. Required for deploying model on container runtime.

  • bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for downloading large artifacts whose size is greater than 2GB. Example: oci://<bucket_name>@<namespace>/prefix/.

  • overwrite_existing_artifact ((bool, optional). Defaults to True.) – Overwrite target bucket artifact if exists.

  • remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to the object storage bucket need to be removed or not.

  • model_version_set ((Union[str, ModelVersionSet], optional). Defaults to None.) – The Model version set OCID, or name, or ModelVersionSet instance.

  • version_label ((str, optional). Defaults to None.) – The model version label.

  • kwargs

    impute_values: (dict, optional).

    The dictionary where the key is the column index (or column name, for a pandas dataframe) and the value is the impute value for the corresponding column.

    project_id: (str, optional).

    Project OCID. If not specified, the value will be taken either from the environment variables or model properties.

    compartment_id(str, optional).

    Compartment OCID. If not specified, the value will be taken either from the environment variables or model properties.

    image_digest: (str, optional). Defaults to None.

    The digest of docker container image.

    cmd: (List, optional). Defaults to empty.

    The command line arguments for running docker container image.

    entrypoint: (List, optional). Defaults to empty.

    The entrypoint for running docker container image.

    server_port: (int, optional). Defaults to 8080.

    The server port for docker container image.

    health_check_port: (int, optional). Defaults to 8080.

    The health check port for docker container image.

    deployment_mode: (str, optional). Defaults to HTTPS_ONLY.

    The deployment mode. Allowed values are: HTTPS_ONLY and STREAM_ONLY.

    input_stream_ids: (List, optional). Defaults to empty.

    The input stream ids. Required for STREAM_ONLY mode.

    output_stream_ids: (List, optional). Defaults to empty.

    The output stream ids. Required for STREAM_ONLY mode.

    environment_variables: (Dict, optional). Defaults to empty.

    The environment variables for model deployment.

    timeout: (int, optional). Defaults to 10 seconds.

    The connection timeout in seconds for the client.

    max_wait_time(int, optional). Defaults to 1200 seconds.

    Maximum amount of time to wait in seconds. Negative implies infinite wait time.

    poll_interval(int, optional). Defaults to 10 seconds.

    Poll interval in seconds.

    freeform_tags: (Dict[str, str], optional). Defaults to None.

    Freeform tags of the model deployment.

    defined_tags: (Dict[str, dict[str, object]], optional). Defaults to None.

    Defined tags of the model deployment.

    region: (str, optional). Defaults to None.

    The destination Object Storage bucket region. By default the value will be extracted from the OCI_REGION_METADATA environment variables.

    Can also be any keyword argument for initializing the ads.model.deployment.ModelDeploymentProperties. See ads.model.deployment.ModelDeploymentProperties() for details.

Returns:

The ModelDeployment instance.

Return type:

ModelDeployment

Raises:
  • FileExistsError – If files already exist but force_overwrite is False.

  • ValueError – If inference_python_version is not provided and cannot be found in the manifest file.
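A minimal usage sketch for the logging and shape options listed above, assuming the enclosing method is GenericModel.prepare_save_deploy() (whose signature precedes this parameter list), that model is a prepared model instance, and using hypothetical placeholder OCIDs:

>>> model.prepare_save_deploy(
...     deployment_log_group_id="ocid1.loggroup.oc1..<unique_id>",   # hypothetical OCID
...     deployment_access_log_id="ocid1.log.oc1..<unique_id>",       # hypothetical OCID
...     deployment_predict_log_id="ocid1.log.oc1..<unique_id>",      # hypothetical OCID
...     deployment_ocpus=1,
...     deployment_memory_in_gbs=16,
... )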

reload() GenericModel[source]#

Reloads the model artifact files: score.py and the runtime.yaml.

Returns:

An instance of GenericModel class.

Return type:

GenericModel

reload_runtime_info() None[source]#

Reloads the model artifact file: runtime.yaml.

Returns:

Nothing.

Return type:

None

restart_deployment(max_wait_time: int = 1200, poll_interval: int = 10) ModelDeployment[source]#

Restarts the current deployment.

Parameters:
  • max_wait_time ((int, optional). Defaults to 1200 seconds.) – Maximum amount of time to wait for activate or deactivate in seconds. The total amount of time to wait for the restart is twice this value. Negative implies infinite wait time.

  • poll_interval ((int, optional). Defaults to 10 seconds.) – Poll interval in seconds.

Returns:

The ModelDeployment instance.

Return type:

ModelDeployment
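For example, assuming model has an active deployment, the restart can be run with a shorter wait window (the values below are illustrative):

>>> model.restart_deployment(max_wait_time=600, poll_interval=5)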

save(display_name: str | None = None, description: str | None = None, freeform_tags: dict | None = None, defined_tags: dict | None = None, ignore_introspection: bool | None = False, bucket_uri: str | None = None, overwrite_existing_artifact: bool | None = True, remove_existing_artifact: bool | None = True, model_version_set: str | ModelVersionSet | None = None, version_label: str | None = None, **kwargs) str[source]#

Saves model artifacts to the model catalog.

Parameters:
  • display_name ((str, optional). Defaults to None.) – The name of the model. If a display_name is not provided in kwargs, a randomly generated, easy-to-remember name with a timestamp will be used, for example ‘strange-spider-2022-08-17-23:55.02’.

  • description ((str, optional). Defaults to None.) – The description of the model.

  • freeform_tags ((Dict(str, str), optional). Defaults to None.) – Freeform tags for the model.

  • defined_tags ((Dict(str, dict(str, object)), optional). Defaults to None.) – Defined tags for the model.

  • ignore_introspection ((bool, optional). Defaults to False.) – Determines whether to ignore the result of model introspection or not. If set to True, the save will ignore all model introspection errors.

  • bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for uploading large artifacts whose size is greater than 2GB. Example: oci://<bucket_name>@<namespace>/prefix/.

  • overwrite_existing_artifact ((bool, optional). Defaults to True.) – Overwrite target bucket artifact if exists.

  • remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to the object storage bucket need to be removed or not.

  • model_version_set ((Union[str, ModelVersionSet], optional). Defaults to None.) – The model version set OCID, or model version set name, or ModelVersionSet instance.

  • version_label ((str, optional). Defaults to None.) – The model version label.

  • kwargs

    project_id: (str, optional).

    Project OCID. If not specified, the value will be taken either from the environment variables or model properties.

    compartment_id: (str, optional).

    Compartment OCID. If not specified, the value will be taken either from the environment variables or model properties.

    region: (str, optional). Defaults to None.

    The destination Object Storage bucket region. By default the value will be extracted from the OCI_REGION_METADATA environment variables.

    timeout: (int, optional). Defaults to 10 seconds.

    The connection timeout in seconds for the client.

    Can also be any attribute that oci.data_science.models.Model accepts.

Raises:

RuntimeInfoInconsistencyError – When .runtime_info is not synchronized with the runtime.yaml file.

Returns:

The model id.

Return type:

str
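A short sketch of a typical save call; the display name, tags, version set name, and label below are illustrative placeholders:

>>> model_id = model.save(
...     display_name="my-model",
...     description="Demo model",
...     freeform_tags={"project": "demo"},
...     model_version_set="my-experiment",
...     version_label="v1",
... )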

property schema_input#
property schema_output#
serialize_model(as_onnx: bool = False, initial_types: List[Tuple] | None = None, force_overwrite: bool = False, X_sample: any | None = None, **kwargs)[source]#

Serializes and saves the model using ONNX or a model-specific method.

Parameters:
  • as_onnx ((boolean, optional)) – If set to True, convert the model into ONNX format.

  • initial_types ((List[Tuple], optional)) – A Python list. Each element is a tuple of a variable name and a data type.

  • force_overwrite ((boolean, optional)) – If set to True, overwrite the serialized model if it exists.

  • X_sample ((any, optional). Defaults to None.) – Contains model inputs such that model(X_sample) is a valid invocation of the model; used to validate the model input type.

Returns:

Nothing

Return type:

None

set_model_input_serializer(model_input_serializer: str | SERDE)[source]#

Registers the serializer used for serializing the data passed to verify/predict.

Examples

>>> generic_model.set_model_input_serializer(GenericModel.model_input_serializer_type.CLOUDPICKLE)
>>> # Register serializer by passing the name of it.
>>> generic_model.set_model_input_serializer("cloudpickle")
>>> # Example of creating a customized model input serializer and registering it.
>>> from ads.model import SERDE
>>> from ads.model.generic_model import GenericModel
>>> class MySERDE(SERDE):
...     def __init__(self):
...         super().__init__()
...     def serialize(self, data):
...         serialized_data = 1
...         return serialized_data
...     def deserialize(self, data):
...         deserialized_data = 2
...         return deserialized_data
>>> class Toy:
...     def predict(self, x):
...         return x ** 2
>>> generic_model = GenericModel(
...    estimator=Toy(),
...    artifact_dir=tempfile.mkdtemp(),
...    model_input_serializer=MySERDE()
... )
>>> # Or register the serializer after creating model instance.
>>> generic_model.set_model_input_serializer(MySERDE())
Parameters:

model_input_serializer ((str, or ads.model.SERDE)) – name of the serializer, or instance of SERDE.

set_model_save_serializer(model_save_serializer: str | SERDE)[source]#

Registers serializer used for saving model.

Examples

>>> generic_model.set_model_save_serializer(GenericModel.model_save_serializer_type.CLOUDPICKLE)
>>> # Register serializer by passing the name of it.
>>> generic_model.set_model_save_serializer("cloudpickle")
>>> # Example of creating a customized model save serializer and registering it.
>>> from ads.model import SERDE
>>> from ads.model.generic_model import GenericModel
>>> class MySERDE(SERDE):
...     def __init__(self):
...         super().__init__()
...     def serialize(self, data):
...         serialized_data = 1
...         return serialized_data
...     def deserialize(self, data):
...         deserialized_data = 2
...         return deserialized_data
>>> class Toy:
...     def predict(self, x):
...         return x ** 2
>>> generic_model = GenericModel(
...    estimator=Toy(),
...    artifact_dir=tempfile.mkdtemp(),
...    model_save_serializer=MySERDE()
... )
>>> # Or register the serializer after creating model instance.
>>> generic_model.set_model_save_serializer(MySERDE())
Parameters:

model_save_serializer ((ads.model.SERDE or str)) – name of the serializer or instance of SERDE.

summary_status() DataFrame[source]#

A summary table of the current status.

Returns:

The summary table of the current status.

Return type:

pd.DataFrame

update(**kwargs) GenericModel[source]#

Updates model metadata in the Model Catalog. Updates only metadata information. The model artifacts are immutable and cannot be updated.

Parameters:

kwargs

display_name: (str, optional). Defaults to None.

The name of the model.

description: (str, optional). Defaults to None.

The description of the model.

freeform_tags: (Dict(str, str), optional). Defaults to None.

Freeform tags for the model.

defined_tags: (Dict(str, dict(str, object)), optional). Defaults to None.

Defined tags for the model.

version_label: (str, optional). Defaults to None.

The model version label.

Additional kwargs arguments. Can be any attribute that oci.data_science.models.Model accepts.

Returns:

An instance of GenericModel (self).

Return type:

GenericModel

Raises:

ValueError – If the model has not been saved to the Model Catalog.
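For example, assuming model has already been saved to the Model Catalog, its display name and version label can be refreshed as follows (values are illustrative):

>>> model.update(
...     display_name="my-model-v2",
...     version_label="v2",
... )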

classmethod update_deployment(model_deployment_id: str | None = None, properties: ModelDeploymentProperties | dict | None = None, wait_for_completion: bool = True, max_wait_time: int = 1200, poll_interval: int = 10, **kwargs) ModelDeployment[source]#

Updates a model deployment.

You can update model_deployment_configuration_details and change instance_shape and model_id when the model deployment is in the ACTIVE lifecycle state. The bandwidth_mbps or instance_count can only be updated while the model deployment is in the INACTIVE state. Changes to the bandwidth_mbps or instance_count will take effect the next time the ActivateModelDeployment action is invoked on the model deployment resource.

Examples

>>> # Update access log id, freeform tags and description for the model deployment
>>> model.update_deployment(
>>>     properties=ModelDeploymentProperties(
>>>         access_log_id=<log_ocid>,
>>>         description="Description for Custom Model",
>>>         freeform_tags={"key": "value"},
>>>     )
>>> )
Parameters:
  • model_deployment_id (str.) – The model deployment OCID. Defaults to None. If the method is called at the instance level, then self.model_deployment.model_deployment_id will be used.

  • properties (ModelDeploymentProperties or dict) – The properties for updating the deployment.

  • wait_for_completion (bool) – Flag set for whether to wait for deployment to complete before proceeding. Defaults to True.

  • max_wait_time (int) – Maximum amount of time to wait in seconds (Defaults to 1200). Negative implies infinite wait time.

  • poll_interval (int) – Poll interval in seconds (Defaults to 10).

  • kwargs

    auth: (Dict, optional). Defaults to None.

    The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.

Returns:

An instance of ModelDeployment class.

Return type:

ModelDeployment

upload_artifact(uri: str, auth: Dict | None = None, force_overwrite: bool | None = False) None[source]#

Uploads model artifacts to the provided uri. The artifacts will be zipped before uploading.

Parameters:
  • uri (str) –

    The destination location for the model artifacts, which can be a local path or OCI object storage URI. Examples:

    >>> upload_artifact(uri="/some/local/folder/")
    >>> upload_artifact(uri="oci://bucket@namespace/prefix/")
    

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.

  • force_overwrite (bool) – Overwrite the target location if it already exists.

verify(data: Any | None = None, reload_artifacts: bool = True, auto_serialize_data: bool = False, **kwargs) Dict[str, Any][source]#

Tests if the deployment works in the local environment.

Examples

>>> uri = "https://github.com/pytorch/hub/raw/master/images/dog.jpg"
>>> prediction = model.verify(image=uri)['prediction']
>>> # examples on storage options
>>> prediction = model.verify(
...        image="oci://<bucket>@<tenancy>/myimage.png",
...        storage_options=ads.auth.default_signer()
... )['prediction']
Parameters:
  • data (Any) – Data used to test if deployment works in local environment.

  • reload_artifacts (bool. Defaults to True.) – Whether to reload artifacts or not.

  • is_json_payload (bool) – Defaults to False. Indicates whether to send the data with an application/json MIME type.

  • auto_serialize_data (bool.) – Whether to auto serialize input data. Defaults to False for GenericModel, and True for other frameworks. The data is required to be JSON serializable if auto_serialize_data=False. If auto_serialize_data is set to True, the data will be serialized before being sent to the model deployment endpoint.

  • kwargs

    content_type: (str, optional). Used to indicate the media type of the resource.

    image: (PIL.Image object or str). The image, or a URI of the image. A valid string path for the image file can be a local path, http(s), oci, s3, or gs.

    storage_options: dict

    Passed to fsspec.open for a particular storage connection. Please see fsspec (https://filesystem-spec.readthedocs.io/en/latest/api.html#fsspec.open) for more details.

Returns:

A dictionary which contains prediction results.

Return type:

Dict

class ads.model.generic_model.ModelDeploymentRuntimeType[source]#

Bases: object

CONDA = 'conda'#
CONTAINER = 'container'#
class ads.model.generic_model.ModelState(value)[source]#

Bases: Enum

An enumeration.

AVAILABLE = 'Available'#
DONE = 'Done'#
NEEDSACTION = 'Needs Action'#
NOTAVAILABLE = 'Not Available'#
exception ads.model.generic_model.NotActiveDeploymentError(state: str)[source]#

Bases: Exception

exception ads.model.generic_model.RuntimeInfoInconsistencyError[source]#

Bases: Exception

exception ads.model.generic_model.SerializeInputNotImplementedError[source]#

Bases: NotImplementedError

exception ads.model.generic_model.SerializeModelNotImplementedError[source]#

Bases: NotImplementedError

class ads.model.generic_model.SummaryStatus[source]#

Bases: object

SummaryStatus class, which tracks the status of the model frameworks.

update_action(detail: str, action: str) None[source]#

Updates the action of the summary status table of the corresponding detail.

Parameters:
  • detail ((str)) – Value of the detail in the Details column. Used to locate which row to update.

  • action ((str)) – The new action to be set for the row specified by detail.

Returns:

Nothing.

Return type:

None

update_status(detail: str, status: str) None[source]#

Updates the status of the summary status table of the corresponding detail.

Parameters:
  • detail ((str)) – Value of the detail in the Details column. Used to locate which row to update.

  • status ((str)) – New status to be updated for the row specified by detail.

Returns:

Nothing.

Return type:

None

ads.model.model_introspect module#

The module that helps to minimize the number of errors in the model post-deployment process. The module provides a simple testing harness to ensure that model artifacts are thoroughly tested before being saved to the model catalog.

Classes#

ModelIntrospect

Class to introspect model artifacts.

Examples

>>> model_introspect = ModelIntrospect(artifact=model_artifact)
>>> model_introspect()
... Test key         Test name            Result              Message
... ----------------------------------------------------------------------------
... test_key_1       test_name_1          Passed              test passed
... test_key_2       test_name_2          Not passed          some error occurred
>>> model_introspect.status
... Passed
class ads.model.model_introspect.Introspectable[source]#

Bases: ABC

Base class that represents an introspectable object.

exception ads.model.model_introspect.IntrospectionNotPassed[source]#

Bases: ValueError

class ads.model.model_introspect.ModelIntrospect(artifact: Introspectable)[source]#

Bases: object

Class to introspect model artifacts.

Parameters:
  • status (str) – Returns the current status of model introspection. The possible variants: Passed, Not passed, Not tested.

  • failures (int) – Returns the number of failures in the introspection result.

run(self) None[source]#

Invokes model artifacts introspection.

to_dataframe(self) pd.DataFrame[source]#

Serializes model introspection result into a DataFrame.

Examples

>>> model_introspect = ModelIntrospect(artifact=model_artifact)
>>> result = model_introspect()
... Test key         Test name            Result              Message
... ----------------------------------------------------------------------------
... test_key_1       test_name_1          Passed              test passed
... test_key_2       test_name_2          Not passed          some error occurred

Initializes the Model Introspect.

Parameters:

artifact (Introspectable) – The instance of ModelArtifact object.

Raises:
  • ValueError – If the model artifact object is not provided.

  • TypeError – If the provided input parameter is not a ModelArtifact instance.

property failures: int#

Calculates the number of failures.

Returns:

The number of failures.

Return type:

int

run() DataFrame[source]#

Invokes introspection.

Returns:

The introspection result in a DataFrame format.

Return type:

pd.DataFrame

property status: str#

Gets the current status of model introspection.

to_dataframe() DataFrame[source]#

Serializes model introspection result into a DataFrame.

Returns:

The model introspection result in a DataFrame representation.

Return type:

pandas.DataFrame

class ads.model.model_introspect.PrintItem(key: str = '', case: str = '', result: str = '', message: str = '')[source]#

Bases: object

Class represents the model introspection print item.

case: str = ''#
key: str = ''#
message: str = ''#
result: str = ''#
to_list() List[str][source]#

Converts instance to a list representation.

Returns:

The instance in a list representation.

Return type:

List[str]

class ads.model.model_introspect.TEST_STATUS[source]#

Bases: str

NOT_PASSED = 'Failed'#
NOT_TESTED = 'Skipped'#
PASSED = 'Passed'#

ads.model.model_metadata module#

class ads.model.model_metadata.Framework[source]#

Bases: str

BERT = 'bert'#
CUML = 'cuml'#
EMCEE = 'emcee'#
ENSEMBLE = 'ensemble'#
FLAIR = 'flair'#
GENSIM = 'gensim'#
H20 = 'h2o'#
KERAS = 'keras'#
LIGHT_GBM = 'lightgbm'#
MXNET = 'mxnet'#
NLTK = 'nltk'#
ORACLE_AUTOML = 'oracle_automl'#
OTHER = 'other'#
PROPHET = 'prophet'#
PYMC3 = 'pymc3'#
PYOD = 'pyod'#
PYSTAN = 'pystan'#
PYTORCH = 'pytorch'#
SCIKIT_LEARN = 'scikit-learn'#
SKTIME = 'sktime'#
SPACY = 'spacy'#
SPARK = 'pyspark'#
STATSMODELS = 'statsmodels'#
TENSORFLOW = 'tensorflow'#
TRANSFORMERS = 'transformers'#
WORD2VEC = 'word2vec'#
XGBOOST = 'xgboost'#
class ads.model.model_metadata.MetadataCustomCategory[source]#

Bases: str

OTHER = 'Other'#
PERFORMANCE = 'Performance'#
TRAINING_AND_VALIDATION_DATASETS = 'Training and Validation Datasets'#
TRAINING_ENV = 'Training Environment'#
TRAINING_PROFILE = 'Training Profile'#
class ads.model.model_metadata.MetadataCustomKeys[source]#

Bases: str

CLIENT_LIBRARY = 'ClientLibrary'#
CONDA_ENVIRONMENT = 'CondaEnvironment'#
CONDA_ENVIRONMENT_PATH = 'CondaEnvironmentPath'#
ENVIRONMENT_TYPE = 'EnvironmentType'#
MODEL_ARTIFACTS = 'ModelArtifacts'#
MODEL_FILE_NAME = 'ModelFileName'#
MODEL_SERIALIZATION_FORMAT = 'ModelSerializationFormat'#
SLUG_NAME = 'SlugName'#
TRAINING_DATASET = 'TrainingDataset'#
TRAINING_DATASET_NUMBER_OF_COLS = 'TrainingDatasetNumberOfCols'#
TRAINING_DATASET_NUMBER_OF_ROWS = 'TrainingDatasetNumberOfRows'#
TRAINING_DATASET_SIZE = 'TrainingDatasetSize'#
VALIDATION_DATASET = 'ValidationDataset'#
VALIDATION_DATASET_NUMBER_OF_COLS = 'ValidationDataSetNumberOfCols'#
VALIDATION_DATASET_NUMBER_OF_ROWS = 'ValidationDatasetNumberOfRows'#
VALIDATION_DATASET_SIZE = 'ValidationDatasetSize'#
class ads.model.model_metadata.MetadataCustomPrintColumns[source]#

Bases: str

CATEGORY = 'Category'#
DESCRIPTION = 'Description'#
KEY = 'Key'#
VALUE = 'Value'#
exception ads.model.model_metadata.MetadataDescriptionTooLong(key: str, length: int)[source]#

Bases: ValueError

Maximum allowed length of metadata description has been exceeded. See https://docs.oracle.com/en-us/iaas/data-science/using/models_saving_catalog.htm for more details.

exception ads.model.model_metadata.MetadataSizeTooLarge(size: int)[source]#

Bases: ValueError

Maximum allowed size for model metadata has been exceeded. See https://docs.oracle.com/en-us/iaas/data-science/using/models_saving_catalog.htm for more details.

class ads.model.model_metadata.MetadataTaxonomyKeys[source]#

Bases: str

ALGORITHM = 'Algorithm'#
ARTIFACT_TEST_RESULT = 'ArtifactTestResults'#
FRAMEWORK = 'Framework'#
FRAMEWORK_VERSION = 'FrameworkVersion'#
HYPERPARAMETERS = 'Hyperparameters'#
USE_CASE_TYPE = 'UseCaseType'#
class ads.model.model_metadata.MetadataTaxonomyPrintColumns[source]#

Bases: str

KEY = 'Key'#
VALUE = 'Value'#
exception ads.model.model_metadata.MetadataValueTooLong(key: str, length: int)[source]#

Bases: ValueError

Maximum allowed length of metadata value has been exceeded. See https://docs.oracle.com/en-us/iaas/data-science/using/models_saving_catalog.htm for more details.

class ads.model.model_metadata.ModelCustomMetadata[source]#

Bases: ModelMetadata

Class that represents Model Custom Metadata.

get(self, key: str) ModelCustomMetadataItem#

Returns the model metadata item by provided key.

reset(self) None#

Resets all model metadata items to empty values.

to_dataframe(self) pd.DataFrame[source]#

Returns the model metadata list in a data frame format.

size(self) int#

Returns the size of the model metadata in bytes.

validate(self) bool#

Validates metadata.

to_dict(self)#

Serializes model metadata into a dictionary.

from_dict(cls) ModelCustomMetadata[source]#

Constructs model metadata from dictionary.

to_yaml(self)#

Serializes model metadata into a YAML.

add(self, key: str, value: str, description: str = '', category: str = MetadataCustomCategory.OTHER, replace: bool = False) None:[source]#

Adds a new model metadata item. Replaces existing one if replace flag is True.

remove(self, key: str) None[source]#

Removes a model metadata item by key.

clear(self) None[source]#

Removes all metadata items.

isempty(self) bool[source]#

Checks if metadata is empty.

to_json(self)#

Serializes model metadata into a JSON.

to_json_file(self, file_path: str, storage_options: dict = None) None#

Saves the metadata to a local file or object storage.

Examples

>>> metadata_custom = ModelCustomMetadata()
>>> metadata_custom.add(key="format", value="pickle")
>>> metadata_custom.add(key="note", value="important note", description="some description")
>>> metadata_custom["format"].description = "some description"
>>> metadata_custom.to_dataframe()
                    Key              Value         Description      Category
----------------------------------------------------------------------------
0                format             pickle    some description  user defined
1                  note     important note    some description  user defined
>>> metadata_custom
    metadata:
    - category: user defined
      description: some description
      key: format
      value: pickle
    - category: user defined
      description: some description
      key: note
      value: important note
>>> metadata_custom.remove("format")
>>> metadata_custom
    metadata:
    - category: user defined
      description: some description
      key: note
      value: important note
>>> metadata_custom.to_dict()
    {'metadata': [{
            'key': 'note',
            'value': 'important note',
            'category': 'user defined',
            'description': 'some description'
        }]}
>>> metadata_custom.reset()
>>> metadata_custom
    metadata:
    - category: None
      description: None
      key: note
      value: None
>>> metadata_custom.clear()
>>> metadata_custom.to_dataframe()
                    Key              Value         Description      Category
----------------------------------------------------------------------------

Initializes custom model metadata.

add(key: str, value: str, description: str = '', category: str = 'Other', replace: bool = False) None[source]#

Adds a new model metadata item. Overrides the existing one if replace flag is True.

Parameters:
  • key (str) – The metadata item key.

  • value (str) – The metadata item value.

  • description (str) – The metadata item description.

  • category (str) – The metadata item category.

  • replace (bool) – Overrides the existing metadata item if replace flag is True.

Returns:

Nothing.

Return type:

None

Raises:
  • TypeError – If the provided key is not a string. If the provided description is not a string.

  • ValueError – If the provided key is empty. If the provided value is empty. If the provided value cannot be serialized to JSON. If an item with the provided key is already registered and the replace flag is False. If the provided category is not supported.

  • MetadataValueTooLong – If the length of the provided value exceeds 255 characters.

  • MetadataDescriptionTooLong – If the length of the provided description exceeds 255 characters.

clear() None[source]#

Removes all metadata items.

Returns:

Nothing.

Return type:

None

classmethod from_dict(data: Dict) ModelCustomMetadata[source]#

Constructs an instance of ModelCustomMetadata from a dictionary.

Parameters:

data (Dict) – Model metadata in a dictionary format.

Returns:

An instance of model custom metadata.

Return type:

ModelCustomMetadata

Raises:

ValueError – In case of the wrong input data format.
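As a quick sketch, metadata serialized with to_dict() can be reconstructed with from_dict():

>>> metadata_custom = ModelCustomMetadata()
>>> metadata_custom.add(key="format", value="pickle")
>>> restored = ModelCustomMetadata.from_dict(metadata_custom.to_dict())
>>> restored.get("format").value
'pickle'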

isempty() bool[source]#

Checks if metadata is empty.

Returns:

True if metadata is empty, False otherwise.

Return type:

bool

remove(key: str) None[source]#

Removes a model metadata item.

Parameters:

key (str) – The key of the metadata item that should be removed.

Returns:

Nothing.

Return type:

None

set_training_data(path: str, data_size: str | None = None)[source]#

Adds training_data path and data size information into model custom metadata.

Parameters:
  • path (str) – The path where the training_data is stored.

  • data_size (str) – The size of the training_data.

Returns:

Nothing.

Return type:

None

set_validation_data(path: str, data_size: str | None = None)[source]#

Adds validation_data path and data size information into model custom metadata.

Parameters:
  • path (str) – The path where the validation_data is stored.

  • data_size (str) – The size of the validation_data.

Returns:

Nothing.

Return type:

None
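For example, training and validation data references can be recorded as follows (the Object Storage paths and the size string below are hypothetical placeholders):

>>> metadata_custom = ModelCustomMetadata()
>>> metadata_custom.set_training_data(
...     path="oci://bucket_name@namespace/train.csv",   # hypothetical path
...     data_size="(500, 12)",                          # hypothetical size description
... )
>>> metadata_custom.set_validation_data(
...     path="oci://bucket_name@namespace/valid.csv",   # hypothetical path
... )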

to_dataframe() DataFrame[source]#

Returns the model metadata list in a data frame format.

Returns:

The model metadata in a dataframe format.

Return type:

pandas.DataFrame

class ads.model.model_metadata.ModelCustomMetadataItem(key: str, value: str | None = None, description: str | None = None, category: str | None = None)[source]#

Bases: ModelTaxonomyMetadataItem

Class that represents model custom metadata item.

key#

The model metadata item key.

Type:

str

value#

The model metadata item value.

Type:

str

description#

The model metadata item description.

Type:

str

category#

The model metadata item category.

Type:

str

reset(self) None[source]#

Resets model metadata item.

to_dict(self) dict#

Serializes model metadata item to dictionary.

from_dict(cls) ModelCustomMetadataItem#

Constructs model metadata item from dictionary.

to_yaml(self)#

Serializes model metadata item to YAML.

size(self) int#

Returns the size of the metadata in bytes.

update(self, value: str = '', description: str = '', category: str = '') None[source]#

Updates metadata item information.

to_json(self) JSON#

Serializes metadata item into a JSON.

to_json_file(self, file_path: str, storage_options: dict = None) None#

Saves the metadata item value to a local file or object storage.

validate(self) bool[source]#

Validates metadata item.

property category: str#
property description: str#
reset() None[source]#

Resets model metadata item.

Resets value, description and category to None.

Returns:

Nothing.

Return type:

None

update(value: str, description: str, category: str) None[source]#

Updates metadata item.

Parameters:
  • value (str) – The value of model metadata item.

  • description (str) – The description of model metadata item.

  • category (str) – The category of model metadata item.

Returns:

Nothing.

Return type:

None
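A short sketch of updating a single item; the values are illustrative, and the category must be one of the supported MetadataCustomCategory values:

>>> item = ModelCustomMetadataItem(key="format", value="pickle")
>>> item.update(value="onnx", description="serialization format", category="Other")
>>> item.value
'onnx'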

validate() bool[source]#

Validates metadata item.

Returns:

True if validation passed.

Return type:

bool

class ads.model.model_metadata.ModelMetadata[source]#

Bases: ABC

The base abstract class representing model metadata.

get(self, key: str) ModelMetadataItem[source]#

Returns the model metadata item by provided key.

reset(self) None[source]#

Resets all model metadata items to empty values.

to_dataframe(self) pd.DataFrame[source]#

Returns the model metadata list in a data frame format.

size(self) int[source]#

Returns the size of the model metadata in bytes.

validate(self) bool[source]#

Validates metadata.

to_dict(self)[source]#

Serializes model metadata into a dictionary.

from_dict(cls) ModelMetadata[source]#

Constructs model metadata from dictionary.

to_yaml(self)[source]#

Serializes model metadata into a YAML.

to_json(self)[source]#

Serializes model metadata into a JSON.

to_json_file(self, file_path: str, storage_options: dict = None) None[source]#

Saves the metadata to a local file or object storage.

Initializes Model Metadata.

abstract classmethod from_dict(data: Dict) ModelMetadata[source]#

Constructs an instance of ModelMetadata from a dictionary.

Parameters:

data (Dict) – Model metadata in a dictionary format.

Returns:

An instance of model metadata.

Return type:

ModelMetadata

get(key: str) ModelMetadataItem[source]#

Returns the model metadata item by provided key.

Parameters:

key (str) – The key of model metadata item.

Returns:

The model metadata item.

Return type:

ModelMetadataItem

Raises:

ValueError – If provided key is empty or metadata item not found.

property keys: Tuple[str]#

Returns all registered metadata keys.

Returns:

The list of metadata keys.

Return type:

Tuple[str]

reset() None[source]#

Resets all model metadata items to empty values.

Resets value, description and category to None for every metadata item.

size() int[source]#

Returns the size of the model metadata in bytes.

Returns:

The size of model metadata in bytes.

Return type:

int

abstract to_dataframe() DataFrame[source]#

Returns the model metadata list in a data frame format.

Returns:

The model metadata in a dataframe format.

Return type:

pandas.DataFrame

to_dict()[source]#

Serializes model metadata into a dictionary.

Returns:

The model metadata in a dictionary representation.

Return type:

Dict

to_json()[source]#

Serializes model metadata into a JSON.

Returns:

The model metadata in a JSON representation.

Return type:

JSON

to_json_file(file_path: str, storage_options: dict | None = None) None[source]#

Saves the metadata to a local file or object storage.

Parameters:
  • file_path (str) – The file path to store the data. “oci://bucket_name@namespace/folder_name/” “oci://bucket_name@namespace/folder_name/metadata.json” “path/to/local/folder” “path/to/local/folder/metadata.json”

  • storage_options (dict. Default None) – Parameters passed on to the backend filesystem class. Defaults to options set using DatasetFactory.set_default_storage().

Returns:

Nothing.

Return type:

None

Examples

>>> metadata = ModelTaxonomyMetadataItem()
>>> storage_options = {"config": oci.config.from_file(os.path.join("~/.oci", "config"))}
>>> storage_options
{'log_requests': False,
    'additional_user_agent': '',
    'pass_phrase': None,
    'user': '<user-id>',
    'fingerprint': '05:15:2b:b1:46:8a:32:ec:e2:69:5b:32:01:**:**:**)',
    'tenancy': '<tenancy-id>',
    'region': 'us-ashburn-1',
    'key_file': '/home/datascience/.oci/oci_api_key.pem'}
>>> metadata.to_json_file(file_path = 'oci://bucket_name@namespace/folder_name/metadata_taxonomy.json', storage_options=storage_options)
>>> metadata_item.to_json_file("path/to/local/folder/metadata_taxonomy.json")
to_yaml()[source]#

Serializes model metadata into a YAML.

Returns:

The model metadata in a YAML representation.

Return type:

Yaml

validate() bool[source]#

Validates model metadata.

Returns:

True if metadata is valid.

Return type:

bool

validate_size() bool[source]#

Validates model metadata size.

Validates the size of the metadata. Throws an error if the size of the metadata exceeds the expected value.

Returns:

True if metadata size is valid.

Return type:

bool

Raises:

MetadataSizeTooLarge – If the size of the metadata exceeds the expected value.

class ads.model.model_metadata.ModelMetadataItem[source]#

Bases: ABC

The base abstract class representing model metadata item.

to_dict(self) Dict[source]#

Serializes model metadata item to dictionary.

from_dict(cls, data: Dict) ModelMetadataItem[source]#

Constructs an instance of ModelMetadataItem from a dictionary.

to_yaml(self)[source]#

Serializes model metadata item to YAML.

size(self) int[source]#

Returns the size of the metadata in bytes.

to_json(self) JSON[source]#

Serializes metadata item to JSON.

to_json_file(self, file_path: str, storage_options: dict = None) None[source]#

Saves the metadata item value to a local file or object storage.

validate(self) bool[source]#

Validates metadata item.

classmethod from_dict(data: Dict) ModelMetadataItem[source]#

Constructs an instance of ModelMetadataItem from a dictionary.

Parameters:

data (Dict) – Metadata item in a dictionary format.

Returns:

An instance of model metadata item.

Return type:

ModelMetadataItem

size() int[source]#

Returns the size of the model metadata in bytes.

Returns:

The size of model metadata in bytes.

Return type:

int

to_dict() dict[source]#

Serializes model metadata item to dictionary.

Returns:

The dictionary representation of model metadata item.

Return type:

dict

to_json()[source]#

Serializes metadata item into a JSON.

Returns:

The metadata item in a JSON representation.

Return type:

JSON

to_json_file(file_path: str, storage_options: dict | None = None) None[source]#

Saves the metadata item value to a local file or object storage.

Parameters:
  • file_path (str) – The file path to store the data. “oci://bucket_name@namespace/folder_name/” “oci://bucket_name@namespace/folder_name/result.json” “path/to/local/folder” “path/to/local/folder/result.json”

  • storage_options (dict. Default None) – Parameters passed on to the backend filesystem class. Defaults to options set using DatasetFactory.set_default_storage().

Returns:

Nothing.

Return type:

None

Examples

>>> metadata_item = ModelCustomMetadataItem(key="key1", value="value1")
>>> storage_options = {"config": oci.config.from_file(os.path.join("~/.oci", "config"))}
>>> storage_options
{'log_requests': False,
    'additional_user_agent': '',
    'pass_phrase': None,
    'user': '<user-id>',
    'fingerprint': '05:15:2b:b1:46:8a:32:ec:e2:69:5b:32:01:**:**:**)',
    'tenancy': '<tenancy-id>',
    'region': 'us-ashburn-1',
    'key_file': '/home/datascience/.oci/oci_api_key.pem'}
>>> metadata_item.to_json_file(file_path = 'oci://bucket_name@namespace/folder_name/file.json', storage_options=storage_options)
>>> metadata_item.to_json_file("path/to/local/folder/file.json")
to_yaml()[source]#

Serializes model metadata item to YAML.

Returns:

The model metadata item in a YAML representation.

Return type:

Yaml

abstract validate() bool[source]#

Validates metadata item.

Returns:

True if validation passed.

Return type:

bool

class ads.model.model_metadata.ModelProvenanceMetadata(repo: str | None = None, git_branch: str | None = None, git_commit: str | None = None, repository_url: str | None = None, training_script_path: str | None = None, training_id: str | None = None, artifact_dir: str | None = None)[source]#

Bases: DataClassSerializable

ModelProvenanceMetadata class.

Examples

>>> provenance_metadata = ModelProvenanceMetadata.fetch_training_code_details()
ModelProvenanceMetadata(repo=<git.repo.base.Repo '/home/datascience/.git'>, git_branch='master', git_commit='99ad04c31803f1d4ffcc3bf4afbd6bcf69a06af2', repository_url='file:///home/datascience', "", "")
>>> provenance_metadata.assert_path_not_dirty("your_path", ignore=False)
artifact_dir: str = None#
assert_path_not_dirty(path: str, ignore: bool)[source]#

Checks whether all the changes in this path have been committed.

Parameters:
  • path ((str)) – path.

  • ignore ((bool)) – Whether to ignore the changes or not.

Raises:

ChangesNotCommitted – If there are changes that have not been committed.

Returns:

Nothing.

Return type:

None

classmethod fetch_training_code_details(training_script_path: str | None = None, training_id: str | None = None, artifact_dir: str | None = None)[source]#

Fetches the training code details: repo, git_branch, git_commit, repository_url, training_script_path and training_id.

Parameters:
  • training_script_path ((str, optional). Defaults to None.) – Training script path.

  • training_id ((str, optional). Defaults to None.) – The training OCID for model.

  • artifact_dir (str) – artifact directory to store the files needed for deployment.

Returns:

A ModelProvenanceMetadata instance.

Return type:

ModelProvenanceMetadata

classmethod from_dict(data: Dict[str, str]) ModelProvenanceMetadata[source]#

Constructs an instance of ModelProvenanceMetadata from a dictionary.

Parameters:

data (Dict[str,str]) – Model provenance metadata in dictionary format.

Returns:

An instance of ModelProvenanceMetadata.

Return type:

ModelProvenanceMetadata

git_branch: str = None#
git_commit: str = None#
repo: str = None#
repository_url: str = None#
to_dict() dict[source]#

Serializes model provenance metadata into a dictionary.

Returns:

The dictionary representation of the model provenance metadata.

Return type:

Dict

training_id: str = None#
training_script_path: str = None#
class ads.model.model_metadata.ModelTaxonomyMetadata[source]#

Bases: ModelMetadata

Class that represents Model Taxonomy Metadata.

get(self, key: str) ModelTaxonomyMetadataItem#

Returns the model metadata item by provided key.

reset(self) None#

Resets all model metadata items to empty values.

to_dataframe(self) pd.DataFrame[source]#

Returns the model metadata list in a data frame format.

size(self) int#

Returns the size of the model metadata in bytes.

validate(self) bool#

Validates metadata.

to_dict(self)#

Serializes model metadata into a dictionary.

from_dict(cls) ModelTaxonomyMetadata[source]#

Constructs model metadata from dictionary.

to_yaml(self)#

Serializes model metadata into a YAML.

to_json(self)#

Serializes model metadata into a JSON.

to_json_file(self, file_path: str, storage_options: dict = None) None#

Saves the metadata to a local file or object storage.

Examples

>>> metadata_taxonomy = ModelTaxonomyMetadata()
>>> metadata_taxonomy.to_dataframe()
                Key                   Value
--------------------------------------------
0        UseCaseType   binary_classification
1          Framework                 sklearn
2   FrameworkVersion                   0.2.2
3          Algorithm               algorithm
4    Hyperparameters                      {}
>>> metadata_taxonomy.reset()
>>> metadata_taxonomy.to_dataframe()
                Key                    Value
--------------------------------------------
0        UseCaseType                    None
1          Framework                    None
2   FrameworkVersion                    None
3          Algorithm                    None
4    Hyperparameters                    None
>>> metadata_taxonomy
    metadata:
    - key: UseCaseType
      category: None
      description: None
      value: None

Initializes Model Metadata.

classmethod from_dict(data: Dict) ModelTaxonomyMetadata[source]#

Constructs an instance of ModelTaxonomyMetadata from a dictionary.

Parameters:

data (Dict) – Model metadata in a dictionary format.

Returns:

An instance of model taxonomy metadata.

Return type:

ModelTaxonomyMetadata

Raises:

ValueError – In case of the wrong input data format.

to_dataframe() DataFrame[source]#

Returns the model metadata list in a data frame format.

Returns:

The model metadata in a dataframe format.

Return type:

pandas.DataFrame

class ads.model.model_metadata.ModelTaxonomyMetadataItem(key: str, value: str | None = None)[source]#

Bases: ModelMetadataItem

Class that represents model taxonomy metadata item.

key#

The model metadata item key.

Type:

str

value#

The model metadata item value.

Type:

str

reset(self) None[source]#

Resets model metadata item.

to_dict(self) Dict#

Serializes model metadata item to dictionary.

from_dict(cls) ModelTaxonomyMetadataItem#

Constructs model metadata item from dictionary.

to_yaml(self)#

Serializes model metadata item to YAML.

size(self) int#

Returns the size of the metadata in bytes.

update(self, value: str = '') None[source]#

Updates metadata item information.

to_json(self) JSON#

Serializes metadata item into a JSON.

to_json_file(self, file_path: str, storage_options: dict = None) None#

Saves the metadata item value to a local file or object storage.

validate(self) bool[source]#

Validates metadata item.

property key: str#
reset() None[source]#

Resets model metadata item.

Resets value to None.

Returns:

Nothing.

Return type:

None

update(value: str) None[source]#

Updates metadata item value.

Parameters:

value (str) – The value of model metadata item.

Returns:

Nothing.

Return type:

None

validate() bool[source]#

Validates metadata item.

Returns:

True if validation passed.

Return type:

bool

Raises:

ValueError – If invalid UseCaseType provided. If invalid Framework provided.

property value: str#
class ads.model.model_metadata.UseCaseType[source]#

Bases: str

ANOMALY_DETECTION = 'anomaly_detection'#
BINARY_CLASSIFICATION = 'binary_classification'#
CLUSTERING = 'clustering'#
DIMENSIONALITY_REDUCTION = 'dimensionality_reduction/representation'#
IMAGE_CLASSIFICATION = 'image_classification'#
MULTINOMIAL_CLASSIFICATION = 'multinomial_classification'#
NER = 'ner'#
OBJECT_LOCALIZATION = 'object_localization'#
OTHER = 'other'#
RECOMMENDER = 'recommender'#
REGRESSION = 'regression'#
SENTIMENT_ANALYSIS = 'sentiment_analysis'#
TIME_SERIES_FORECASTING = 'time_series_forecasting'#
TOPIC_MODELING = 'topic_modeling'#

ads.model.model_metadata_mixin module#

class ads.model.model_metadata_mixin.MetadataMixin[source]#

Bases: object

MetadataMixin class which populates the custom metadata, taxonomy metadata, input/output schema and provenance metadata.

populate_metadata(use_case_type: str | None = None, data_sample: ADSData | None = None, X_sample: list | tuple | DataFrame | Series | ndarray | None = None, y_sample: list | tuple | DataFrame | Series | ndarray | None = None, training_script_path: str | None = None, training_id: str | None = None, ignore_pending_changes: bool = True, max_col_num: int = 2000, ignore_conda_error: bool = False, **kwargs)[source]#

Populates the input schema and output schema. If a schema exceeds the 32 KB limit, it is saved as JSON files in the artifact directory.

Parameters:
  • use_case_type ((str, optional). Defaults to None.) – The use case type of the model.

  • data_sample ((ADSData, optional). Defaults to None.) – A sample of the data that will be used to generate input_schema and output_schema.

  • X_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of input data that will be used to generate input schema.

  • y_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of output data that will be used to generate output schema.

  • training_script_path (str. Defaults to None.) – Training script path.

  • training_id ((str, optional). Defaults to None.) – The training model OCID.

  • ignore_pending_changes (bool. Defaults to True.) – Ignore the pending changes in git.

  • max_col_num ((int, optional). Defaults to utils.DATA_SCHEMA_MAX_COL_NUM.) – The maximum number of columns allowed in auto generated schema.

Returns:

Nothing.

Return type:

None
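A minimal sketch of populating metadata from sample data, assuming model is a framework model instance and X_test/y_test are hypothetical sample datasets; UseCaseType comes from ads.model.model_metadata:

>>> from ads.model.model_metadata import UseCaseType
>>> model.populate_metadata(
...     use_case_type=UseCaseType.BINARY_CLASSIFICATION,
...     X_sample=X_test[:5],   # hypothetical sample of input data
...     y_sample=y_test[:5],   # hypothetical sample of output data
... )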

populate_schema(data_sample: ADSData | None = None, X_sample: List | Tuple | DataFrame | Series | ndarray | None = None, y_sample: List | Tuple | DataFrame | Series | ndarray | None = None, max_col_num: int = 2000, **kwargs)[source]#

Populates the input and output schemas. If a schema exceeds the 32 KB limit, it is saved as JSON files in the artifact directory.

Parameters:
  • data_sample (ADSData) – A sample of the data that will be used to generate input_schema and output_schema.

  • X_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]) – A sample of input data that will be used to generate the input schema.

  • y_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]) – A sample of output data that will be used to generate the output schema.

  • max_col_num ((int, optional). Defaults to utils.DATA_SCHEMA_MAX_COL_NUM.) – The maximum number of columns allowed in auto generated schema.

ads.model.model_properties module#

class ads.model.model_properties.ModelProperties(inference_conda_env: str | None = None, inference_python_version: str | None = None, training_conda_env: str | None = None, training_python_version: str | None = None, training_resource_id: str | None = None, training_script_path: str | None = None, training_id: str | None = None, compartment_id: str | None = None, project_id: str | None = None, bucket_uri: str | None = None, remove_existing_artifact: bool | None = None, overwrite_existing_artifact: bool | None = None, deployment_instance_shape: str | None = None, deployment_instance_subnet_id: str | None = None, deployment_instance_count: int | None = None, deployment_bandwidth_mbps: int | None = None, deployment_log_group_id: str | None = None, deployment_access_log_id: str | None = None, deployment_predict_log_id: str | None = None, deployment_memory_in_gbs: float | int | None = None, deployment_ocpus: float | int | None = None, deployment_image: str | None = None)[source]#

Bases: BaseProperties

Represents properties required to save and deploy model.

bucket_uri: str = None#
compartment_id: str = None#
deployment_access_log_id: str = None#
deployment_bandwidth_mbps: int = None#
deployment_image: str = None#
deployment_instance_count: int = None#
deployment_instance_shape: str = None#
deployment_instance_subnet_id: str = None#
deployment_log_group_id: str = None#
deployment_memory_in_gbs: float | int = None#
deployment_ocpus: float | int = None#
deployment_predict_log_id: str = None#
inference_conda_env: str = None#
inference_python_version: str = None#
overwrite_existing_artifact: bool = None#
project_id: str = None#
remove_existing_artifact: bool = None#
training_conda_env: str = None#
training_id: str = None#
training_python_version: str = None#
training_resource_id: str = None#
training_script_path: str = None#
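A brief sketch of constructing ModelProperties with a subset of fields; the conda environment path and OCIDs below are hypothetical placeholders:

>>> from ads.model.model_properties import ModelProperties
>>> properties = ModelProperties(
...     inference_conda_env="oci://bucket_name@namespace/conda/env",   # hypothetical path
...     compartment_id="ocid1.compartment.oc1..<unique_id>",           # hypothetical OCID
...     project_id="ocid1.datascienceproject.oc1..<unique_id>",        # hypothetical OCID
...     deployment_instance_shape="VM.Standard2.1",
... )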

ads.model.model_version_set module#

class ads.model.model_version_set.ModelVersionSet(spec: Dict | None = None, **kwargs)[source]#

Bases: Builder

Represents Model Version Set.

id#

Model version set OCID.

Type:

str

project_id#

Project OCID.

Type:

str

compartment_id#

Compartment OCID.

Type:

str

name#

Model version set name.

Type:

str

description#

Model version set description.

Type:

str

freeform_tags#

Model version set freeform tags.

Type:

Dict[str, str]

defined_tags#

Model version set defined tags.

Type:

Dict[str, Dict[str, object]]

details_link#

Link to details page in OCI console.

Type:

str

create(self, \*\*kwargs) 'ModelVersionSet'[source]#

Creates a model version set.

update(self, \*\*kwargs) 'ModelVersionSet'[source]#

Updates a model version set.

delete(self, delete_model: bool | None = False) "ModelVersionSet":[source]#

Removes a model version set.

to_dict(self) dict[source]#

Serializes model version set to a dictionary.

from_id(cls, id: str) 'ModelVersionSet'[source]#

Gets an existing model version set by OCID.

from_ocid(cls, ocid: str) 'ModelVersionSet'[source]#

Gets an existing model version set by OCID.

from_name(cls, name: str) 'ModelVersionSet'[source]#

Gets an existing model version set by name.

from_dict(cls, config: dict) 'ModelVersionSet'[source]#

Load a model version set instance from a dictionary of configurations.

Examples

>>> mvs = (ModelVersionSet()
...    .with_compartment_id(os.environ["PROJECT_COMPARTMENT_OCID"])
...    .with_project_id(os.environ["PROJECT_OCID"])
...    .with_name("test_experiment")
...    .with_description("Experiment number one"))
>>> mvs.create()
>>> mvs.model_add(model_ocid, version_label="Version label 1")
>>> mvs.model_list()
>>> mvs.details_link
... https://console.<region>.oraclecloud.com/data-science/model-version-sets/<ocid>
>>> mvs.delete()

Initializes a model version set.

Parameters:
  • spec ((Dict, optional). Defaults to None.) – Object specification.

  • kwargs (Dict) –

    Specification as keyword arguments. If ‘spec’ contains the same key as the one in kwargs, the value from kwargs will be used.

    • project_id: str

    • compartment_id: str

    • name: str

    • description: str

    • defined_tags: Dict[str, Dict[str, object]]

    • freeform_tags: Dict[str, str]

CONST_COMPARTMENT_ID = 'compartmentId'#
CONST_DEFINED_TAG = 'definedTags'#
CONST_DESCRIPTION = 'description'#
CONST_FREEFORM_TAG = 'freeformTags'#
CONST_ID = 'id'#
CONST_NAME = 'name'#
CONST_PROJECT_ID = 'projectId'#
LIFECYCLE_STATE_ACTIVE = 'ACTIVE'#
LIFECYCLE_STATE_DELETED = 'DELETED'#
LIFECYCLE_STATE_DELETING = 'DELETING'#
LIFECYCLE_STATE_FAILED = 'FAILED'#
attribute_map = {'compartmentId': 'compartment_id', 'definedTags': 'defined_tags', 'description': 'description', 'freeformTags': 'freeform_tags', 'id': 'id', 'name': 'name', 'projectId': 'project_id'}#
property compartment_id: str#
create(**kwargs) ModelVersionSet[source]#

Creates a model version set.

Parameters:

kwargs – Additional keyword arguments.

Returns:

The ModelVersionSet instance (self)

Return type:

ModelVersionSet

property defined_tags: Dict[str, Dict[str, object]]#
delete(delete_model: bool | None = False) ModelVersionSet[source]#

Removes a model version set.

Parameters:

delete_model ((bool, optional). Defaults to False.) – By default, this parameter is false. A model version set can only be deleted if all the models associated with it are already in the DELETED state. You can optionally set the deleteRelatedModels boolean query parameter to true, which deletes all associated models for you.

Returns:

The ModelVersionSet instance (self).

Return type:

ModelVersionSet

property description: str#
property details_link: str#

Link to details page in OCI console.

Returns:

Link to details page in OCI console.

Return type:

str

property freeform_tags: Dict[str, str]#
classmethod from_dict(config: dict) ModelVersionSet[source]#

Load a model version set instance from a dictionary of configurations.

Parameters:

config (dict) – A dictionary of configurations.

Returns:

The model version set instance.

Return type:

ModelVersionSet

classmethod from_dsc_model_version_set(dsc_model_version_set: DataScienceModelVersionSet) ModelVersionSet[source]#

Initialize a ModelVersionSet instance from a DataScienceModelVersionSet.

Parameters:

dsc_model_version_set (DataScienceModelVersionSet) – An instance of DataScienceModelVersionSet.

Returns:

An instance of ModelVersionSet.

Return type:

ModelVersionSet

classmethod from_id(id: str) ModelVersionSet[source]#

Gets an existing model version set by OCID.

Parameters:

id (str) – The model version set OCID.

Returns:

An instance of ModelVersionSet.

Return type:

ModelVersionSet

classmethod from_name(name: str, compartment_id: str | None = None) ModelVersionSet[source]#

Gets an existing model version set by name.

Parameters:
  • name (str) – The model version set name.

  • compartment_id ((str, optional). Defaults to None.) – Compartment OCID of the OCI resources. If compartment_id is not specified, the value will be taken from environment variables.

Returns:

An instance of ModelVersionSet.

Return type:

ModelVersionSet

classmethod from_ocid(ocid: str) ModelVersionSet[source]#

Gets an existing model version set by OCID.

Parameters:

id (str) – The model version set OCID.

Returns:

An instance of ModelVersionSet.

Return type:

ModelVersionSet

property id: str | None#

The OCID of the model version set.

property kind: str#

The kind of the object as showing in YAML.

Returns:

“modelVersionSet”

Return type:

str

classmethod list(compartment_id: str | None = None, **kwargs) List[ModelVersionSet][source]#

List model version sets in a given compartment.

Parameters:
  • compartment_id (str) – The OCID of compartment.

  • kwargs – Additional keyword arguments for filtering model version sets.

Returns:

The list of model version sets.

Return type:

List[ModelVersionSet]

model_add(model_id: str, version_label: str | None = None, **kwargs) None[source]#

Adds new model to model version set.

Parameters:
  • model_id (str) – The OCID of the model which needs to be associated with the model version set.

  • version_label (str) – The model version label.

  • kwargs – Additional keyword arguments.

Returns:

Nothing.

Return type:

None

Raises:

ModelVersionSetNotSaved – If the model version set has not been saved yet.

models(**kwargs) List[DataScienceModel][source]#

Gets list of models associated with a model version set.

Parameters:

kwargs

project_id: str

Project OCID.

lifecycle_state: str

Filter results by the specified lifecycle state. Must be a valid state for the resource type. Allowed values are: “ACTIVE”, “DELETED”, “FAILED”, “INACTIVE”

Can be any attribute that oci.data_science.data_science_client.DataScienceClient.list_models accepts.

Returns:

List of models associated with the model version set.

Return type:

List[DataScienceModel]

Raises:

ModelVersionSetNotSaved – If the model version set has not been saved yet.

property name: str#
property project_id: str#
property status: str | None#

Status of the model version set.

Returns:

Status of the model version set.

Return type:

str

to_dict() dict[source]#

Serializes model version set to a dictionary.

Returns:

The model version set serialized as a dictionary.

Return type:

dict

update() ModelVersionSet[source]#

Updates a model version set.

Returns:

The ModelVersionSet instance (self).

Return type:

ModelVersionSet

with_compartment_id(compartment_id: str) ModelVersionSet[source]#

Sets the compartment OCID.

Parameters:

compartment_id (str) – The compartment OCID.

Returns:

The ModelVersionSet instance (self)

Return type:

ModelVersionSet

with_defined_tags(**kwargs: Dict[str, Dict[str, object]]) ModelVersionSet[source]#

Sets defined tags.

Returns:

The ModelVersionSet instance (self)

Return type:

ModelVersionSet

with_description(description: str) ModelVersionSet[source]#

Sets the description.

Parameters:

description (str) – The description of the model version set.

Returns:

The ModelVersionSet instance (self)

Return type:

ModelVersionSet

with_freeform_tags(**kwargs: Dict[str, str]) ModelVersionSet[source]#

Sets freeform tags.

Returns:

The ModelVersionSet instance (self)

Return type:

ModelVersionSet

with_name(name: str) ModelVersionSet[source]#

Sets the name of the model version set.

Parameters:

name (str) – The name of the model version set.

Returns:

The ModelVersionSet instance (self)

Return type:

ModelVersionSet

with_project_id(project_id: str) ModelVersionSet[source]#

Sets the project OCID.

Parameters:

project_id (str) – The project OCID.

Returns:

The ModelVersionSet instance (self)

Return type:

ModelVersionSet

ads.model.model_version_set.experiment(name: str, create_if_not_exists: bool | None = True, **kwargs: Dict)[source]#

Context manager helping to operate with model version set.

Parameters:
  • name (str) – The name of the model version set.

  • create_if_not_exists ((bool, optional). Defaults to True.) – Creates the model version set if it does not exist.

  • kwargs ((Dict, optional).) –

    compartment_id: (str, optional). Defaults to value from the environment variables.

    The compartment OCID.

    project_id: (str, optional). Defaults to value from the environment variables.

    The project OCID.

    description: (str, optional). Defaults to None.

    The description of the model version set.

Yields:

ModelVersionSet – The model version set object.
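A minimal sketch of the context manager; the experiment name is a hypothetical placeholder, and mvs is the ModelVersionSet yielded for the duration of the block:

>>> from ads.model.model_version_set import experiment
>>> with experiment(name="my-experiment", create_if_not_exists=True) as mvs:
...     # models saved inside this block are typically associated with the version set
...     print(mvs.name)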

Module contents#

class ads.model.AutoMLModel(estimator: Callable, artifact_dir: str, properties: ModelProperties | None = None, auth: Dict = None, model_save_serializer: SERDE | None = None, model_input_serializer: SERDE | None = None, **kwargs)[source]#

Bases: FrameworkSpecificModel

AutoMLModel class for estimators from AutoML framework.

algorithm#

“ensemble”, the algorithm name of the model.

Type:

str

artifact_dir#

Artifact directory to store the files needed for deployment.

Type:

str

auth#

Default authentication is set using the ads.set_auth API. To override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create an authentication signer to instantiate an IdentityClient object.

Type:

Dict

estimator#

A trained AutoML estimator/model built using Oracle AutoML.

Type:

Callable

framework#

“oracle_automl”, the framework name of the estimator.

Type:

str

hyperparameter#

The hyperparameters of the estimator.

Type:

dict

metadata_custom#

The model custom metadata.

Type:

ModelCustomMetadata

metadata_provenance#

The model provenance metadata.

Type:

ModelProvenanceMetadata

metadata_taxonomy#

The model taxonomy metadata.

Type:

ModelTaxonomyMetadata

model_artifact#

This is built by calling prepare.

Type:

ModelArtifact

model_deployment#

A ModelDeployment instance.

Type:

ModelDeployment

model_file_name#

Name of the serialized model. Defaults to “model.pkl”.

Type:

str

model_id#

The model ID.

Type:

str

properties#

ModelProperties object required to save and deploy model. For more details, check https://accelerated-data-science.readthedocs.io/en/latest/ads.model.html#module-ads.model.model_properties.

Type:

ModelProperties

runtime_info#

A RuntimeInfo instance.

Type:

RuntimeInfo

schema_input#

Schema describes the structure of the input data.

Type:

Schema

schema_output#

Schema describes the structure of the output data.

Type:

Schema

serialize#

Whether to serialize the model to pkl file by default. If False, you need to serialize the model manually, save it under artifact_dir and update the score.py manually.

Type:

bool

version#

The framework version of the model.

Type:

str

delete_deployment(...)#

Deletes the current model deployment.

deploy(..., \*\*kwargs)#

Deploys a model.

from_model_artifact(uri, model_file_name, artifact_dir, ..., \*\*kwargs)#

Loads model from the specified folder, or zip/tar archive.

from_model_catalog(model_id, model_file_name, artifact_dir, ..., \*\*kwargs)#

Loads model from model catalog.

introspect(...)#

Runs model introspection.

predict(data, ...)#

Returns prediction of input data run against the model deployment endpoint.

prepare(..., \*\*kwargs)#

Prepare and save the score.py, serialized model and runtime.yaml file.

reload(...)#

Reloads the model artifact files: score.py and the runtime.yaml.

save(..., \*\*kwargs)#

Saves model artifacts to the model catalog.

summary_status(...)#

Gets a summary table of the current status.

verify(data, ...)#

Tests if deployment works in local environment.

Examples

>>> import tempfile
>>> import logging
>>> import warnings
>>> from ads.automl.driver import AutoML
>>> from ads.automl.provider import OracleAutoMLProvider
>>> from ads.dataset.dataset_browser import DatasetBrowser
>>> from ads.model.framework.automl_model import AutoMLModel
>>> from ads.model.model_metadata import UseCaseType
>>> ds = DatasetBrowser.sklearn().open("wine").set_target("target")
>>> train, test = ds.train_test_split(test_size=0.1, random_state = 42)
>>> ml_engine = OracleAutoMLProvider(n_jobs=-1, loglevel=logging.ERROR)
>>> oracle_automl = AutoML(train, provider=ml_engine)
>>> model, baseline = oracle_automl.train(
...                model_list=['LogisticRegression', 'DecisionTreeClassifier'],
...                random_state = 42,
...                time_budget = 500
...        )
>>> automl_model = AutoMLModel(estimator=model, artifact_dir=tempfile.mkdtemp())
>>> automl_model.prepare(inference_conda_env=inference_conda_env, force_overwrite=True)
>>> automl_model.verify(...)
>>> automl_model.save()
>>> model_deployment = automl_model.deploy(wait_for_completion=False)

Initializes an AutoMLModel instance.

Parameters:
  • estimator (Callable) – Any model object generated by automl framework.

  • artifact_dir (str) – Directory where the artifacts are generated.

  • properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.

  • model_save_serializer ((SERDE or str, optional). Defaults to None.) – Instance of ads.model.SERDE. Used to serialize/deserialize the model.

  • model_input_serializer ((SERDE, optional). Defaults to None.) – Instance of ads.model.SERDE. Used to serialize/deserialize the data.

Returns:

AutoMLModel instance.

Return type:

AutoMLModel

Raises:

TypeError – If the input model is not an AutoML model.

class ads.model.DataScienceModel(spec: Dict | None = None, **kwargs)[source]#

Bases: Builder

Represents a Data Science Model.

id#

Model ID.

Type:

str

project_id#

Project OCID.

Type:

str

compartment_id#

Compartment OCID.

Type:

str

name#

Model name.

Type:

str

description#

Model description.

Type:

str

freeform_tags#

Model freeform tags.

Type:

Dict[str, str]

defined_tags#

Model defined tags.

Type:

Dict[str, Dict[str, object]]

input_schema#

Model input schema.

Type:

ads.feature_engineering.Schema

output_schema#

Model output schema.

Type:

ads.feature_engineering.Schema, Dict

defined_metadata_list#

Model defined metadata.

Type:

ModelTaxonomyMetadata

custom_metadata_list#

Model custom metadata.

Type:

ModelCustomMetadata

provenance_metadata#

Model provenance metadata.

Type:

ModelProvenanceMetadata

artifact#

The artifact location. Can be either path to folder with artifacts or path to zip archive.

Type:

str

status#

Model status.

Type:

Union[str, None]

model_version_set_id#

Model version set ID

Type:

str

version_label#

Model version label

Type:

str

create(self, \*\*kwargs) 'DataScienceModel'[source]#

Creates model.

delete(self, delete_associated_model_deployment: bool | None = False) "DataScienceModel":[source]#

Removes model.

to_dict(self) dict[source]#

Serializes model to a dictionary.

from_id(cls, id: str) 'DataScienceModel'[source]#

Gets an existing model by OCID.

from_dict(cls, config: dict) 'DataScienceModel'[source]#

Loads model instance from a dictionary of configurations.

upload_artifact(self, ...) None[source]#

Uploads model artifacts to the model catalog.

download_artifact(self, ...) None[source]#

Downloads model artifacts from the model catalog.

update(self, \*\*kwargs) 'DataScienceModel'[source]#

Updates datascience model in model catalog.

list(cls, compartment_id: str = None, \*\*kwargs) List['DataScienceModel'][source]#

Lists datascience models in a given compartment.

sync(self):

Syncs up a datascience model with the OCI datascience model.

with_project_id(self, project_id: str) 'DataScienceModel'[source]#

Sets the project ID.

with_description(self, description: str) 'DataScienceModel'[source]#

Sets the description.

with_compartment_id(self, compartment_id: str) 'DataScienceModel'[source]#

Sets the compartment ID.

with_display_name(self, name: str) 'DataScienceModel'[source]#

Sets the name.

with_freeform_tags(self, \*\*kwargs: Dict[str, str]) 'DataScienceModel'[source]#

Sets freeform tags.

with_defined_tags(self, \*\*kwargs: Dict[str, Dict[str, object]]) 'DataScienceModel'[source]#

Sets defined tags.

with_input_schema(self, schema: Schema | Dict) 'DataScienceModel'[source]#

Sets the model input schema.

with_output_schema(self, schema: Schema | Dict) 'DataScienceModel'[source]#

Sets the model output schema.

with_defined_metadata_list(self, metadata: ModelTaxonomyMetadata | Dict) 'DataScienceModel'[source]#

Sets model taxonomy (defined) metadata.

with_custom_metadata_list(self, metadata: ModelCustomMetadata | Dict) 'DataScienceModel'[source]#

Sets model custom metadata.

with_provenance_metadata(self, metadata: ModelProvenanceMetadata | Dict) 'DataScienceModel'[source]#

Sets model provenance metadata.

with_artifact(self, uri: str)[source]#

Sets the artifact location. Can be a local path or a ZIP archive.

with_model_version_set_id(self, model_version_set_id: str):

Sets the model version set ID.

with_version_label(self, version_label: str):

Sets the model version label.

Examples

>>> ds_model = (DataScienceModel()
...    .with_compartment_id(os.environ["NB_SESSION_COMPARTMENT_OCID"])
...    .with_project_id(os.environ["PROJECT_OCID"])
...    .with_display_name("TestModel")
...    .with_description("Testing the test model")
...    .with_freeform_tags(tag1="val1", tag2="val2")
...    .with_artifact("/path/to/the/model/artifacts/"))
>>> ds_model.create()
>>> ds_model.status()
>>> ds_model.with_description("new description").update()
>>> ds_model.download_artifact("/path/to/dst/folder/")
>>> ds_model.delete()
>>> DataScienceModel.list()

Initializes datascience model.

Parameters:
  • spec ((Dict, optional). Defaults to None.) – Object specification.

  • kwargs (Dict) –

    Specification as keyword arguments. If ‘spec’ contains the same key as the one in kwargs, the value from kwargs will be used.

    • project_id: str

    • compartment_id: str

    • name: str

    • description: str

    • defined_tags: Dict[str, Dict[str, object]]

    • freeform_tags: Dict[str, str]

    • input_schema: Union[ads.feature_engineering.Schema, Dict]

    • output_schema: Union[ads.feature_engineering.Schema, Dict]

    • defined_metadata_list: Union[ModelTaxonomyMetadata, Dict]

    • custom_metadata_list: Union[ModelCustomMetadata, Dict]

    • provenance_metadata: Union[ModelProvenanceMetadata, Dict]

    • artifact: str

CONST_ARTIFACT = 'artifact'#
CONST_COMPARTMENT_ID = 'compartmentId'#
CONST_CUSTOM_METADATA = 'customMetadataList'#
CONST_DEFINED_METADATA = 'definedMetadataList'#
CONST_DEFINED_TAG = 'definedTags'#
CONST_DESCRIPTION = 'description'#
CONST_DISPLAY_NAME = 'displayName'#
CONST_FREEFORM_TAG = 'freeformTags'#
CONST_ID = 'id'#
CONST_INPUT_SCHEMA = 'inputSchema'#
CONST_MODEL_VERSION_LABEL = 'versionLabel'#
CONST_MODEL_VERSION_SET_ID = 'modelVersionSetId'#
CONST_OUTPUT_SCHEMA = 'outputSchema'#
CONST_PROJECT_ID = 'projectId'#
CONST_PROVENANCE_METADATA = 'provenanceMetadata'#
property artifact: str#
attribute_map = {'artifact': 'artifact', 'compartmentId': 'compartment_id', 'customMetadataList': 'custom_metadata_list', 'definedMetadataList': 'defined_metadata_list', 'definedTags': 'defined_tags', 'description': 'description', 'displayName': 'display_name', 'freeformTags': 'freeform_tags', 'id': 'id', 'inputSchema': 'input_schema', 'modelVersionSetId': 'model_version_set_id', 'outputSchema': 'output_schema', 'projectId': 'project_id', 'provenanceMetadata': 'provenance_metadata', 'versionLabel': 'version_label'}#
property compartment_id: str#
create(**kwargs) DataScienceModel[source]#

Creates datascience model.

Parameters:

kwargs

Additional keyword arguments. Can be any attribute that oci.data_science.models.Model accepts.

In addition, the attributes listed below can also be provided.

bucket_uri: (str, optional). Defaults to None.

The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for uploading large artifacts whose size is greater than 2GB. Example: oci://<bucket_name>@<namespace>/prefix/.

overwrite_existing_artifact: (bool, optional). Defaults to True.

Overwrite target bucket artifact if exists.

remove_existing_artifact: (bool, optional). Defaults to True.

Whether artifacts uploaded to the object storage bucket need to be removed or not.

region: (str, optional). Defaults to None.

The destination Object Storage bucket region. By default the value will be extracted from the OCI_REGION_METADATA environment variable.

auth: (Dict, optional). Defaults to None.

The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.

timeout: (int, optional). Defaults to 10 seconds.

The connection timeout in seconds for the client.

Returns:

The DataScienceModel instance (self)

Return type:

DataScienceModel

Raises:

ValueError – If compartment id not provided. If project id not provided.
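
Example (an illustrative sketch of the large-artifact path; the OCIDs, paths, and bucket URI are placeholders):

>>> ds_model = (DataScienceModel()
...     .with_compartment_id("ocid1.compartment.oc1..<unique_id>")
...     .with_project_id("ocid1.datascienceproject.oc1..<unique_id>")
...     .with_display_name("LargeModel")
...     .with_artifact("/path/to/large/model/artifacts/"))
>>> ds_model.create(
...     bucket_uri="oci://<bucket_name>@<namespace>/prefix/",
...     remove_existing_artifact=True,
...     timeout=600,
... )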

property custom_metadata_list: ModelCustomMetadata#

Returns model custom metadata.

property defined_metadata_list: ModelTaxonomyMetadata#

Returns model taxonomy (defined) metadata.

property defined_tags: Dict[str, Dict[str, object]]#
delete(delete_associated_model_deployment: bool | None = False) DataScienceModel[source]#

Removes model from the model catalog.

Parameters:

delete_associated_model_deployment ((bool, optional). Defaults to False.) – Whether associated model deployments need to be deleted or not.

Returns:

The DataScienceModel instance (self).

Return type:

DataScienceModel

property description: str#
property display_name: str#
download_artifact(target_dir: str, auth: Dict | None = None, force_overwrite: bool | None = False, bucket_uri: str | None = None, region: str | None = None, overwrite_existing_artifact: bool | None = True, remove_existing_artifact: bool | None = True, timeout: int | None = None)[source]#

Downloads model artifacts from the model catalog.

Parameters:
  • target_dir (str) – The target location of model artifacts.

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.

  • force_overwrite ((bool, optional). Defaults to False.) – Overwrite target directory if exists.

  • bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for uploading large artifacts whose size is greater than 2GB. Example: oci://<bucket_name>@<namespace>/prefix/.

  • region ((str, optional). Defaults to None.) – The destination Object Storage bucket region. By default the value will be extracted from the OCI_REGION_METADATA environment variables.

  • overwrite_existing_artifact ((bool, optional). Defaults to True.) – Overwrite target bucket artifact if exists.

  • remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to the object storage bucket need to be removed or not.

  • timeout ((int, optional). Defaults to 10 seconds.) – The connection timeout in seconds for the client.

Raises:

ModelArtifactSizeError – If the model artifact size is greater than 2GB and a temporary Object Storage bucket URI is not provided.
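
Example (a hedged sketch; the OCID and target directory are placeholders, and bucket_uri is only needed for artifacts larger than 2GB):

>>> ds_model = DataScienceModel.from_id("ocid1.datasciencemodel.oc1..<unique_id>")
>>> ds_model.download_artifact(
...     target_dir="/tmp/model_artifacts/",
...     force_overwrite=True,
... )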

property freeform_tags: Dict[str, str]#
classmethod from_dict(config: Dict) DataScienceModel[source]#

Loads model instance from a dictionary of configurations.

Parameters:

config (Dict) – A dictionary of configurations.

Returns:

The model instance.

Return type:

DataScienceModel

classmethod from_id(id: str) DataScienceModel[source]#

Gets an existing model by OCID.

Parameters:

id (str) – The model OCID.

Returns:

An instance of DataScienceModel.

Return type:

DataScienceModel

property id: str | None#

The model OCID.

property input_schema: Schema#

Returns model input schema.

Returns:

Model input schema.

Return type:

ads.feature_engineering.Schema

property kind: str#

The kind of the object as showing in a YAML.

classmethod list(compartment_id: str | None = None, project_id: str | None = None, **kwargs) List[DataScienceModel][source]#

Lists datascience models in a given compartment.

Parameters:
  • compartment_id ((str, optional). Defaults to None.) – The compartment OCID.

  • project_id ((str, optional). Defaults to None.) – The project OCID.

  • kwargs – Additional keyword arguments for filtering models.

Returns:

The list of the datascience models.

Return type:

List[DataScienceModel]

classmethod list_df(compartment_id: str | None = None, project_id: str | None = None, **kwargs) DataFrame[source]#

Lists datascience models in a given compartment.

Parameters:
  • compartment_id ((str, optional). Defaults to None.) – The compartment OCID.

  • project_id ((str, optional). Defaults to None.) – The project OCID.

  • kwargs – Additional keyword arguments for filtering models.

Returns:

The list of the datascience models in a pandas dataframe format.

Return type:

pandas.DataFrame
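
Example (illustrative; the compartment and project OCIDs are placeholders):

>>> models = DataScienceModel.list(compartment_id="ocid1.compartment.oc1..<unique_id>")
>>> df = DataScienceModel.list_df(
...     compartment_id="ocid1.compartment.oc1..<unique_id>",
...     project_id="ocid1.datascienceproject.oc1..<unique_id>",
... )
>>> df.head()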

property model_version_set_id: str#
property output_schema: Schema#

Returns model output schema.

Returns:

Model output schema.

Return type:

ads.feature_engineering.Schema

property project_id: str#
property provenance_metadata: ModelProvenanceMetadata#

Returns model provenance metadata.

property status: str | None#

Status of the model.

Returns:

Status of the model.

Return type:

str

sync()[source]#

Syncs up a datascience model with the OCI datascience model.

to_dict() Dict[source]#

Serializes model to a dictionary.

Returns:

The model serialized as a dictionary.

Return type:

dict

update(**kwargs) DataScienceModel[source]#

Updates datascience model in model catalog.

Parameters:

kwargs – Additional keyword arguments. Can be any attribute that oci.data_science.models.Model accepts.

Returns:

The DataScienceModel instance (self).

Return type:

DataScienceModel

upload_artifact(bucket_uri: str | None = None, auth: Dict | None = None, region: str | None = None, overwrite_existing_artifact: bool | None = True, remove_existing_artifact: bool | None = True, timeout: int | None = None) None[source]#

Uploads model artifacts to the model catalog.

Parameters:
  • bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for uploading large artifacts whose size is greater than 2GB. Example: oci://<bucket_name>@<namespace>/prefix/.

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.

  • region ((str, optional). Defaults to None.) – The destination Object Storage bucket region. By default the value will be extracted from the OCI_REGION_METADATA environment variables.

  • overwrite_existing_artifact ((bool, optional). Defaults to True.) – Overwrite target bucket artifact if exists.

  • remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to the object storage bucket need to be removed or not.

  • timeout ((int, optional). Defaults to 10 seconds.) – The connection timeout in seconds for the client.
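
Example (a minimal sketch, assuming ds_model has already been created and its artifact location set with with_artifact(); the bucket URI is a placeholder and is only required for artifacts larger than 2GB):

>>> ds_model.upload_artifact(
...     bucket_uri="oci://<bucket_name>@<namespace>/prefix/",
...     timeout=600,
... )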

property version_label: str#
with_artifact(uri: str)[source]#

Sets the artifact location. Can be a local path or a ZIP archive.

Parameters:

uri (str) – Path to the artifact directory or to the ZIP archive. It could contain a serialized model (required) as well as any files needed for deployment. The content of the source folder will be zipped and uploaded to the model catalog.

Examples

>>> .with_artifact(uri="./model1/")
>>> .with_artifact(uri="./model1.zip")
with_compartment_id(compartment_id: str) DataScienceModel[source]#

Sets the compartment ID.

Parameters:

compartment_id (str) – The compartment ID.

Returns:

The DataScienceModel instance (self)

Return type:

DataScienceModel

with_custom_metadata_list(metadata: ModelCustomMetadata | Dict) DataScienceModel[source]#

Sets model custom metadata.

Parameters:

metadata (Union[ModelCustomMetadata, Dict]) – The custom metadata.

Returns:

The DataScienceModel instance (self)

Return type:

DataScienceModel

with_defined_metadata_list(metadata: ModelTaxonomyMetadata | Dict) DataScienceModel[source]#

Sets model taxonomy (defined) metadata.

Parameters:

metadata (Union[ModelTaxonomyMetadata, Dict]) – The defined metadata.

Returns:

The DataScienceModel instance (self)

Return type:

DataScienceModel

with_defined_tags(**kwargs: Dict[str, Dict[str, object]]) DataScienceModel[source]#

Sets defined tags.

Returns:

The DataScienceModel instance (self)

Return type:

DataScienceModel

with_description(description: str) DataScienceModel[source]#

Sets the description.

Parameters:

description (str) – The description of the model.

Returns:

The DataScienceModel instance (self)

Return type:

DataScienceModel

with_display_name(name: str) DataScienceModel[source]#

Sets the name.

Parameters:

name (str) – The name.

Returns:

The DataScienceModel instance (self)

Return type:

DataScienceModel

with_freeform_tags(**kwargs: Dict[str, str]) DataScienceModel[source]#

Sets freeform tags.

Returns:

The DataScienceModel instance (self)

Return type:

DataScienceModel

with_input_schema(schema: Schema | Dict) DataScienceModel[source]#

Sets the model input schema.

Parameters:

schema (Union[ads.feature_engineering.Schema, Dict]) – The model input schema.

Returns:

The DataScienceModel instance (self)

Return type:

DataScienceModel

with_model_version_set_id(model_version_set_id: str)[source]#

Sets the model version set ID.

Parameters:

model_version_set_id (str) – The model version set OCID.

with_output_schema(schema: Schema | Dict) DataScienceModel[source]#

Sets the model output schema.

Parameters:

schema (Union[ads.feature_engineering.Schema, Dict]) – The model output schema.

Returns:

The DataScienceModel instance (self)

Return type:

DataScienceModel

with_project_id(project_id: str) DataScienceModel[source]#

Sets the project ID.

Parameters:

project_id (str) – The project ID.

Returns:

The DataScienceModel instance (self)

Return type:

DataScienceModel

with_provenance_metadata(metadata: ModelProvenanceMetadata | Dict) DataScienceModel[source]#

Sets model provenance metadata.

Parameters:

metadata (Union[ModelProvenanceMetadata, Dict]) – The provenance metadata.

Returns:

The DataScienceModel instance (self)

Return type:

DataScienceModel

with_version_label(version_label: str)[source]#

Sets the model version label.

Parameters:

version_label (str) – The model version label.

class ads.model.GenericModel(estimator: Callable | None = None, artifact_dir: str | None = None, properties: ModelProperties | None = None, auth: Dict | None = None, serialize: bool = True, model_save_serializer: SERDE | None = None, model_input_serializer: SERDE | None = None, **kwargs: dict)[source]#

Bases: MetadataMixin, Introspectable, EvaluatorMixin

Generic Model class which is the base class for all the frameworks including the unsupported frameworks.

algorithm#

The algorithm of the model.

Type:

str

artifact_dir#

Artifact directory to store the files needed for deployment.

Type:

str

auth#

Default authentication is set using the ads.set_auth API. To override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create an authentication signer to instantiate an IdentityClient object.

Type:

Dict

estimator#

Any trained model object.

Type:

Callable

framework#

The framework of the model.

Type:

str

hyperparameter#

The hyperparameters of the estimator.

Type:

dict

metadata_custom#

The model custom metadata.

Type:

ModelCustomMetadata

metadata_provenance#

The model provenance metadata.

Type:

ModelProvenanceMetadata

metadata_taxonomy#

The model taxonomy metadata.

Type:

ModelTaxonomyMetadata

model_artifact#

This is built by calling prepare.

Type:

ModelArtifact

model_deployment#

A ModelDeployment instance.

Type:

ModelDeployment

model_file_name#

Name of the serialized model.

Type:

str

model_id#

The model ID.

Type:

str

model_input_serializer#

Instance of ads.model.SERDE. Used to serialize/deserialize data.

Type:

SERDE

properties#

ModelProperties object required to save and deploy model.

Type:

ModelProperties

runtime_info#

A RuntimeInfo instance.

Type:

RuntimeInfo

schema_input#

Schema describes the structure of the input data.

Type:

Schema

schema_output#

Schema describes the structure of the output data.

Type:

Schema

serialize#

Whether to serialize the model to pkl file by default. If False, you need to serialize the model manually, save it under artifact_dir and update the score.py manually.

Type:

bool

version#

The framework version of the model.

Type:

str

delete_deployment(...)[source]#

Deletes the current model deployment.

deploy(..., \*\*kwargs)[source]#

Deploys a model.

from_model_artifact(uri, ..., \*\*kwargs)[source]#

Loads model from the specified folder, or zip/tar archive.

from_model_catalog(model_id, ..., \*\*kwargs)[source]#

Loads model from model catalog.

from_model_deployment(model_deployment_id, ..., \*\*kwargs)[source]#

Loads model from model deployment.

update_deployment(model_deployment_id, ..., \*\*kwargs)[source]#

Updates a model deployment.

from_id(ocid, ..., \*\*kwargs)[source]#

Loads model from model OCID or model deployment OCID.

introspect(...)[source]#

Runs model introspection.

predict(data, ...)[source]#

Returns prediction of input data run against the model deployment endpoint.

prepare(..., \*\*kwargs)[source]#

Prepare and save the score.py, serialized model and runtime.yaml file.

prepare_save_deploy(..., \*\*kwargs)[source]#

Shortcut for prepare, save and deploy steps.

reload(...)[source]#

Reloads the model artifact files: score.py and the runtime.yaml.

restart_deployment(...)[source]#

Restarts the model deployment.

save(..., \*\*kwargs)[source]#

Saves model artifacts to the model catalog.

set_model_input_serializer(serde)[source]#

Registers serializer used for serializing data passed in verify/predict.

summary_status(...)[source]#

Gets a summary table of the current status.

verify(data, ...)[source]#

Tests if deployment works in local environment.

upload_artifact(...)[source]#

Uploads model artifacts to the provided uri.

Examples

>>> import tempfile
>>> from ads.model.generic_model import GenericModel
>>> class Toy:
...     def predict(self, x):
...         return x ** 2
>>> estimator = Toy()
>>> model = GenericModel(estimator=estimator, artifact_dir=tempfile.mkdtemp())
>>> model.summary_status()
>>> model.prepare(
...     inference_conda_env="dbexp_p38_cpu_v1",
...     inference_python_version="3.8",
...     model_file_name="toy_model.pkl",
...     training_id=None,
...     force_overwrite=True
... )
>>> model.verify(2)
>>> model.save()
>>> model.deploy()
>>> # Update access log id, freeform tags and description for the model deployment
>>> model.update_deployment(
>>>     properties=ModelDeploymentProperties(
>>>         access_log_id=<log_ocid>,
>>>         description="Description for Custom Model",
>>>         freeform_tags={"key": "value"},
>>>     )
>>> )
>>> model.predict(2)
>>> # Uncomment the line below to delete the model and the associated model deployment
>>> # model.delete(delete_associated_model_deployment = True)

GenericModel Constructor.

Parameters:
  • estimator ((Callable).) – Trained model.

  • artifact_dir ((str, optional). Defaults to None.) – Artifact directory to store the files needed for deployment.

  • properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.

  • serialize ((bool, optional). Defaults to True.) – Whether to serialize the model to pkl file by default. If False, you need to serialize the model manually, save it under artifact_dir and update the score.py manually.

  • model_save_serializer ((SERDE or str, optional). Defaults to None.) – Instance of ads.model.SERDE. Used to serialize/deserialize the model.

  • model_input_serializer ((SERDE or str, optional). Defaults to None.) – Instance of ads.model.SERDE. Used to serialize/deserialize the model input.

classmethod delete(model_id: str | None = None, delete_associated_model_deployment: bool | None = False, delete_model_artifact: bool | None = False, artifact_dir: str | None = None, **kwargs: Dict) None[source]#

Deletes a model from Model Catalog.

Parameters:
  • model_id ((str, optional). Defaults to None.) – The model OCID to be deleted. If the method called on instance level, then self.model_id will be used.

  • delete_associated_model_deployment ((bool, optional). Defaults to False.) – Whether associated model deployments need to be deleted or not.

  • delete_model_artifact ((bool, optional). Defaults to False.) – Whether associated model artifacts need to be deleted or not.

  • artifact_dir ((str, optional). Defaults to None) – The local path to the model artifacts folder. If the method called on instance level, the self.artifact_dir will be used by default.

Return type:

None

Raises:

ValueError – If model_id not provided.
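
Example (illustrative; the model OCID and the artifact directory are placeholders):

>>> GenericModel.delete(
...     model_id="ocid1.datasciencemodel.oc1..<unique_id>",
...     delete_associated_model_deployment=True,
...     delete_model_artifact=True,
...     artifact_dir="/tmp/model_artifacts/",
... )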

delete_deployment(wait_for_completion: bool = True) None[source]#

Deletes the current deployment.

Parameters:

wait_for_completion ((bool, optional). Defaults to True.) – Whether to wait till completion.

Return type:

None

Raises:

ValueError – If there is no deployment attached yet.

deploy(wait_for_completion: bool | None = True, display_name: str | None = None, description: str | None = None, deployment_instance_shape: str | None = None, deployment_instance_subnet_id: str | None = None, deployment_instance_count: int | None = None, deployment_bandwidth_mbps: int | None = None, deployment_log_group_id: str | None = None, deployment_access_log_id: str | None = None, deployment_predict_log_id: str | None = None, deployment_memory_in_gbs: float | None = None, deployment_ocpus: float | None = None, deployment_image: str | None = None, **kwargs: Dict) ModelDeployment[source]#

Deploys a model. The model needs to be saved to the model catalog at first. You can deploy the model on either conda or container runtime. The customized runtime allows you to bring your own service container. To deploy model on container runtime, make sure to build the container and push it to OCIR. For more information, see https://docs.oracle.com/en-us/iaas/data-science/using/mod-dep-byoc.htm.

Example

>>> # This is an example to deploy model on container runtime
>>> model = GenericModel(estimator=estimator, artifact_dir=tempfile.mkdtemp())
>>> model.summary_status()
>>> model.prepare(
...     model_file_name="toy_model.pkl",
...     ignore_conda_error=True, # set ignore_conda_error=True for container runtime
...     force_overwrite=True
... )
>>> model.verify()
>>> model.save()
>>> model.deploy(
...     deployment_image="iad.ocir.io/<namespace>/<image>:<tag>",
...     entrypoint=["python", "/opt/ds/model/deployed_model/api.py"],
...     server_port=5000,
...     health_check_port=5000,
...     environment_variables={"key":"value"}
... )
Parameters:
  • wait_for_completion ((bool, optional). Defaults to True.) – Flag set for whether to wait for deployment to complete before proceeding.

  • display_name ((str, optional). Defaults to None.) – The name of the model. If a display_name is not provided in kwargs, a randomly generated, easy-to-remember name with a timestamp will be used, like ‘strange-spider-2022-08-17-23:55.02’.

  • description ((str, optional). Defaults to None.) – The description of the model.

  • deployment_instance_shape ((str, optional). Defaults to VM.Standard2.1.) – The shape of the instance used for deployment.

  • deployment_instance_subnet_id ((str, optional). Defaults to None.) – The subnet id of the instance used for deployment.

  • deployment_instance_count ((int, optional). Defaults to 1.) – The number of instances used for deployment.

  • deployment_bandwidth_mbps ((int, optional). Defaults to 10.) – The bandwidth limit on the load balancer in Mbps.

  • deployment_memory_in_gbs ((float, optional). Defaults to None.) – Specifies the size of the memory of the model deployment instance in GBs.

  • deployment_ocpus ((float, optional). Defaults to None.) – Specifies the ocpus count of the model deployment instance.

  • deployment_log_group_id ((str, optional). Defaults to None.) – The oci logging group id. The access log and predict log share the same log group.

  • deployment_access_log_id ((str, optional). Defaults to None.) – The access log OCID for the access logs. https://docs.oracle.com/en-us/iaas/data-science/using/model_dep_using_logging.htm

  • deployment_predict_log_id ((str, optional). Defaults to None.) – The predict log OCID for the predict logs. https://docs.oracle.com/en-us/iaas/data-science/using/model_dep_using_logging.htm

  • deployment_image ((str, optional). Defaults to None.) – The OCIR path of docker container image. Required for deploying model on container runtime.

  • kwargs

    project_id: (str, optional).

    Project OCID. If not specified, the value will be taken from the environment variables.

    compartment_id(str, optional).

    Compartment OCID. If not specified, the value will be taken from the environment variables.

    max_wait_time(int, optional). Defaults to 1200 seconds.

    Maximum amount of time to wait in seconds. Negative implies infinite wait time.

    poll_interval(int, optional). Defaults to 10 seconds.

    Poll interval in seconds.

    freeform_tags: (Dict[str, str], optional). Defaults to None.

    Freeform tags of the model deployment.

    defined_tags: (Dict[str, dict[str, object]], optional). Defaults to None.

    Defined tags of the model deployment.

    image_digest: (str, optional). Defaults to None.

    The digest of docker container image.

    cmd: (List, optional). Defaults to empty.

    The command line arguments for running docker container image.

    entrypoint: (List, optional). Defaults to empty.

    The entrypoint for running docker container image.

    server_port: (int, optional). Defaults to 8080.

    The server port for docker container image.

    health_check_port: (int, optional). Defaults to 8080.

    The health check port for docker container image.

    deployment_mode: (str, optional). Defaults to HTTPS_ONLY.

    The deployment mode. Allowed values are: HTTPS_ONLY and STREAM_ONLY.

    input_stream_ids: (List, optional). Defaults to empty.

    The input stream ids. Required for STREAM_ONLY mode.

    output_stream_ids: (List, optional). Defaults to empty.

    The output stream ids. Required for STREAM_ONLY mode.

    environment_variables: (Dict, optional). Defaults to empty.

    The environment variables for model deployment.

    Also can be any keyword argument for initializing the ads.model.deployment.ModelDeploymentProperties. See ads.model.deployment.ModelDeploymentProperties() for details.

Returns:

The ModelDeployment instance.

Return type:

ModelDeployment

Raises:

ValueError – If model_id is not specified.

classmethod from_id(ocid: str, model_file_name: str | None = None, artifact_dir: str | None = None, auth: Dict | None = None, force_overwrite: bool | None = False, properties: ModelProperties | Dict | None = None, bucket_uri: str | None = None, remove_existing_artifact: bool | None = True, ignore_conda_error: bool | None = False, **kwargs) Self[source]#

Loads model from model OCID or model deployment OCID.

Parameters:
  • ocid (str) – The model OCID or model deployment OCID.

  • model_file_name ((str, optional). Defaults to None.) – The name of the serialized model.

  • artifact_dir ((str, optional). Defaults to None.) – The artifact directory to store the files needed for deployment. Will be created if not exists.

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.

  • force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files or not.

  • properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.

  • bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for downloading large artifacts whose size is greater than 2GB. Example: oci://<bucket_name>@<namespace>/prefix/.

  • remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to the object storage bucket need to be removed or not.

  • kwargs

    compartment_id(str, optional)

    Compartment OCID. If not specified, the value will be taken from the environment variables.

    timeout(int, optional). Defaults to 10 seconds.

    The connection timeout in seconds for the client.

Returns:

An instance of GenericModel class.

Return type:

Self
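
Example (a hedged sketch of reloading a saved model by OCID; the OCID and directory are placeholders):

>>> model = GenericModel.from_id(
...     ocid="ocid1.datasciencemodel.oc1..<unique_id>",
...     artifact_dir="/tmp/reloaded_model/",
...     force_overwrite=True,
... )
>>> model.summary_status()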

classmethod from_model_artifact(uri: str, model_file_name: str | None = None, artifact_dir: str | None = None, auth: Dict | None = None, force_overwrite: bool | None = False, properties: ModelProperties | None = None, ignore_conda_error: bool | None = False, **kwargs: dict) Self[source]#

Loads model from a folder, or zip/tar archive.

Parameters:
  • uri (str) – The folder path, ZIP file path, or TAR file path. It could contain a serialized model (required) as well as any files needed for deployment, including the serialized model, runtime.yaml, score.py, etc. The content of the folder will be copied to the artifact_dir folder.

  • model_file_name ((str, optional). Defaults to None.) – The serialized model file name. Will be extracted from artifacts if not provided.

  • artifact_dir ((str, optional). Defaults to None.) – The artifact directory to store the files needed for deployment. Will be created if not exists.

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.

  • force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files or not.

  • properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.

  • ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore error when collecting conda information.

Returns:

An instance of GenericModel class.

Return type:

Self

Raises:

ValueError – If model_file_name not provided.
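
Example (illustrative; the paths and file name are placeholders):

>>> model = GenericModel.from_model_artifact(
...     uri="/path/to/model_artifacts/",
...     model_file_name="model.pkl",
...     artifact_dir="/tmp/model_artifacts/",
...     force_overwrite=True,
... )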

classmethod from_model_catalog(model_id: str, model_file_name: str | None = None, artifact_dir: str | None = None, auth: Dict | None = None, force_overwrite: bool | None = False, properties: ModelProperties | Dict | None = None, bucket_uri: str | None = None, remove_existing_artifact: bool | None = True, ignore_conda_error: bool | None = False, **kwargs) Self[source]#

Loads model from model catalog.

Parameters:
  • model_id (str) – The model OCID.

  • model_file_name ((str, optional). Defaults to None.) – The name of the serialized model.

  • artifact_dir ((str, optional). Defaults to None.) – The artifact directory to store the files needed for deployment. Will be created if not exists.

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.

  • force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files or not.

  • properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.

  • bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for downloading large artifacts whose size is greater than 2GB. Example: oci://<bucket_name>@<namespace>/prefix/.

  • remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to the object storage bucket need to be removed or not.

  • ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore error when collecting conda information.

  • kwargs

    compartment_id(str, optional)

    Compartment OCID. If not specified, the value will be taken from the environment variables.

    timeout(int, optional). Defaults to 10 seconds.

    The connection timeout in seconds for the client.

    region: (str, optional). Defaults to None.

    The destination Object Storage bucket region. By default the value will be extracted from the OCI_REGION_METADATA environment variables.

Returns:

An instance of GenericModel class.

Return type:

Self

classmethod from_model_deployment(model_deployment_id: str, model_file_name: str | None = None, artifact_dir: str | None = None, auth: Dict | None = None, force_overwrite: bool | None = False, properties: ModelProperties | Dict | None = None, bucket_uri: str | None = None, remove_existing_artifact: bool | None = True, ignore_conda_error: bool | None = False, **kwargs) Self[source]#

Loads model from model deployment.

Parameters:
  • model_deployment_id (str) – The model deployment OCID.

  • model_file_name ((str, optional). Defaults to None.) – The name of the serialized model.

  • artifact_dir ((str, optional). Defaults to None.) – The artifact directory to store the files needed for deployment. Will be created if not exists.

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.

  • force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files or not.

  • properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.

  • bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for downloading large artifacts whose size is greater than 2GB. Example: oci://<bucket_name>@<namespace>/prefix/.

  • remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to the object storage bucket need to be removed or not.

  • ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore error when collecting conda information.

  • kwargs

    compartment_id(str, optional)

    Compartment OCID. If not specified, the value will be taken from the environment variables.

    timeout(int, optional). Defaults to 10 seconds.

    The connection timeout in seconds for the client.

    region: (str, optional). Defaults to None.

    The destination Object Storage bucket region. By default the value will be extracted from the OCI_REGION_METADATA environment variables.

Returns:

An instance of GenericModel class.

Return type:

Self
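
Example (illustrative; the deployment OCID, directory, and sample payload are placeholders):

>>> model = GenericModel.from_model_deployment(
...     model_deployment_id="ocid1.datasciencemodeldeployment.oc1..<unique_id>",
...     artifact_dir="/tmp/model_artifacts/",
...     force_overwrite=True,
... )
>>> model.predict(data=[[1, 2, 3]])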

get_data_serializer()[source]#

Gets data serializer.

Returns:

object

Return type:

ads.model.Serializer object.

get_model_serializer()[source]#

Gets model serializer.

introspect() DataFrame[source]#

Conducts introspection.

Returns:

A pandas DataFrame which contains the introspection results.

Return type:

pandas.DataFrame

property metadata_custom#
property metadata_provenance#
property metadata_taxonomy#
property model_deployment_id#
property model_id#
model_input_serializer_type#

alias of ModelInputSerializerType

model_save_serializer_type#

alias of ModelSerializerType

predict(data: Any | None = None, auto_serialize_data: bool = False, local: bool = False, **kwargs) Dict[str, Any][source]#

Returns prediction of input data run against the model deployment endpoint.

Examples

>>> uri = "https://github.com/pytorch/hub/raw/master/images/dog.jpg"
>>> prediction = model.predict(image=uri)['prediction']
>>> # examples on storage options
>>> prediction = model.predict(
...        image="oci://<bucket>@<tenancy>/myimage.png",
...        storage_options=ads.auth.default_signer()
... )['prediction']
Parameters:
  • data (Any) – Data for the prediction. For ONNX models and the local serialization method, data can be any of the data types that each framework supports.

  • auto_serialize_data (bool.) – Whether to auto serialize input data. Defaults to False for GenericModel, and True for other frameworks. data is required to be JSON serializable if auto_serialize_data=False. If auto_serialize_data is set to True, data will be serialized before sending to the model deployment endpoint.

  • local (bool.) – Whether to invoke the prediction locally. Default to False.

  • kwargs

    content_type: str, used to indicate the media type of the resource.

    image: PIL.Image Object or uri for the image.

    A valid string path for the image file can be a local path, http(s), oci, s3, or gs.

    storage_options: dict

    Passed to fsspec.open for a particular storage connection. Please see fsspec (https://filesystem-spec.readthedocs.io/en/latest/api.html#fsspec.open) for more details.

Returns:

Dictionary with the predicted values.

Return type:

Dict[str, Any]

Raises:
  • NotActiveDeploymentError – If model deployment process was not started or not finished yet.

  • ValueError – If model is not deployed yet or the endpoint information is not available.

prepare(inference_conda_env: str | None = None, inference_python_version: str | None = None, training_conda_env: str | None = None, training_python_version: str | None = None, model_file_name: str | None = None, as_onnx: bool = False, initial_types: List[Tuple] | None = None, force_overwrite: bool = False, namespace: str = 'id19sfcrra6z', use_case_type: str | None = None, X_sample: list | tuple | DataFrame | Series | ndarray | None = None, y_sample: list | tuple | DataFrame | Series | ndarray | None = None, training_script_path: str | None = None, training_id: str | None = None, ignore_pending_changes: bool = True, max_col_num: int = 2000, ignore_conda_error: bool = False, score_py_uri: str | None = None, **kwargs: Dict) GenericModel[source]#

Prepare and save the score.py, serialized model and runtime.yaml file.

Parameters:
  • inference_conda_env ((str, optional). Defaults to None.) – Can be either slug or object storage path of the conda pack. You can only pass in slugs if the conda pack is a service pack.

  • inference_python_version ((str, optional). Defaults to None.) – Python version which will be used in deployment.

  • training_conda_env ((str, optional). Defaults to None.) – Can be either slug or object storage path of the conda pack. You can only pass in slugs if the conda pack is a service pack. If training_conda_env is not provided, it will default to the value of inference_conda_env.

  • training_python_version ((str, optional). Defaults to None.) – Python version used during training.

  • model_file_name ((str, optional). Defaults to None.) – Name of the serialized model. Will be auto generated if not provided.

  • as_onnx ((bool, optional). Defaults to False.) – Whether to serialize as onnx model.

  • initial_types ((list[Tuple], optional).) – Defaults to None. Only used for SklearnModel, LightGBMModel and XGBoostModel. Each element is a tuple of a variable name and a type. Check this link http://onnx.ai/sklearn-onnx/api_summary.html#id2 for more explanation and examples for initial_types.

  • force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files.

  • namespace ((str, optional).) – Namespace of region. This is used for identifying which region the service pack is from when you pass a slug to inference_conda_env and training_conda_env.

  • use_case_type (str) – The use case type of the model. Use it through the UseCaseType class or a string provided in UseCaseType. For example, use_case_type=UseCaseType.BINARY_CLASSIFICATION or use_case_type=”binary_classification”. Check with UseCaseType class to see all supported types.

  • X_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of input data that will be used to generate input schema.

  • y_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of output data that will be used to generate output schema.

  • training_script_path (str. Defaults to None.) – Training script path.

  • training_id ((str, optional). Defaults to value from environment variables.) – The training OCID for model. Can be notebook session or job OCID.

  • ignore_pending_changes (bool. Defaults to True.) – Whether to ignore the pending changes in git.

  • max_col_num ((int, optional). Defaults to utils.DATA_SCHEMA_MAX_COL_NUM.) – Do not generate the input schema if the input has more than this number of features(columns).

  • ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore error when collecting conda information.

  • score_py_uri ((str, optional). Defaults to None.) – The uri of the customized score.py, which can be a local path or an OCI object storage URI. When provided with this attribute, the score.py will not be auto-generated, and the provided score.py will be added into artifact_dir.

  • kwargs

    impute_values: (dict, optional).

The dictionary where the key is the column index (or column name for a pandas dataframe) and the value is the impute value for the corresponding column.

Raises:
  • FileExistsError – If files already exist but force_overwrite is False.

  • ValueError – If inference_python_version is not provided, but also cannot be found through manifest file.

Returns:

An instance of GenericModel class.

Return type:

GenericModel
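
Example (an illustrative sketch of supplying a custom score.py and sample data for schema generation; the conda path, file names, and X_train/y_train are placeholders):

>>> model.prepare(
...     inference_conda_env="oci://<bucket_name>@<namespace>/conda_environments/cpu/mycustom/1.0/mycustom_v1_0",
...     inference_python_version="3.8",
...     model_file_name="model.pkl",
...     score_py_uri="/path/to/custom_score.py",
...     X_sample=X_train[:5],
...     y_sample=y_train[:5],
...     force_overwrite=True,
... )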

prepare_save_deploy(inference_conda_env: str | None = None, inference_python_version: str | None = None, training_conda_env: str | None = None, training_python_version: str | None = None, model_file_name: str | None = None, as_onnx: bool = False, initial_types: List[Tuple] | None = None, force_overwrite: bool = False, namespace: str = 'id19sfcrra6z', use_case_type: str | None = None, X_sample: list | tuple | DataFrame | Series | ndarray | None = None, y_sample: list | tuple | DataFrame | Series | ndarray | None = None, training_script_path: str | None = None, training_id: str | None = None, ignore_pending_changes: bool = True, max_col_num: int = 2000, ignore_conda_error: bool = False, model_display_name: str | None = None, model_description: str | None = None, model_freeform_tags: dict | None = None, model_defined_tags: dict | None = None, ignore_introspection: bool | None = False, wait_for_completion: bool | None = True, deployment_display_name: str | None = None, deployment_description: str | None = None, deployment_instance_shape: str | None = None, deployment_instance_subnet_id: str | None = None, deployment_instance_count: int | None = None, deployment_bandwidth_mbps: int | None = None, deployment_log_group_id: str | None = None, deployment_access_log_id: str | None = None, deployment_predict_log_id: str | None = None, deployment_memory_in_gbs: float | None = None, deployment_ocpus: float | None = None, deployment_image: str | None = None, bucket_uri: str | None = None, overwrite_existing_artifact: bool | None = True, remove_existing_artifact: bool | None = True, model_version_set: str | ModelVersionSet | None = None, version_label: str | None = None, **kwargs: Dict) ModelDeployment[source]#

Shortcut for prepare, save and deploy steps.

Parameters:
  • inference_conda_env ((str, optional). Defaults to None.) – Can be either slug or object storage path of the conda pack. You can only pass in slugs if the conda pack is a service pack.

  • inference_python_version ((str, optional). Defaults to None.) – Python version which will be used in deployment.

  • training_conda_env ((str, optional). Defaults to None.) – Can be either slug or object storage path of the conda pack. You can only pass in slugs if the conda pack is a service pack. If training_conda_env is not provided, it will default to the value of inference_conda_env.

  • training_python_version ((str, optional). Defaults to None.) – Python version used during training.

  • model_file_name ((str, optional). Defaults to None.) – Name of the serialized model.

  • as_onnx ((bool, optional). Defaults to False.) – Whether to serialize as onnx model.

  • initial_types ((list[Tuple], optional).) – Defaults to None. Only used for SklearnModel, LightGBMModel and XGBoostModel. Each element is a tuple of a variable name and a type. Check this link http://onnx.ai/sklearn-onnx/api_summary.html#id2 for more explanation and examples for initial_types.

  • force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files.

  • namespace ((str, optional).) – Namespace of region. This is used for identifying which region the service pack is from when you pass a slug to inference_conda_env and training_conda_env.

  • use_case_type (str) – The use case type of the model. Use it through the UseCaseType class or a string provided in UseCaseType. For example, use_case_type=UseCaseType.BINARY_CLASSIFICATION or use_case_type=”binary_classification”. Check with UseCaseType class to see all supported types.

  • X_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of input data that will be used to generate input schema.

  • y_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of output data that will be used to generate output schema.

  • training_script_path (str. Defaults to None.) – Training script path.

  • training_id ((str, optional). Defaults to value from environment variables.) – The training OCID for model. Can be notebook session or job OCID.

  • ignore_pending_changes (bool. Defaults to True.) – Whether to ignore the pending changes in git.

  • max_col_num ((int, optional). Defaults to utils.DATA_SCHEMA_MAX_COL_NUM.) – Do not generate the input schema if the input has more than this number of features(columns).

  • ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore error when collecting conda information.

  • model_display_name ((str, optional). Defaults to None.) – The name of the model. If a model_display_name is not provided in kwargs, a randomly generated, easy-to-remember name with a timestamp will be used, like ‘strange-spider-2022-08-17-23:55.02’.

  • model_description ((str, optional). Defaults to None.) – The description of the model.

  • model_freeform_tags (Dict(str, str), Defaults to None.) – Freeform tags for the model.

  • model_defined_tags ((Dict(str, dict(str, object)), optional). Defaults to None.) – Defined tags for the model.

  • ignore_introspection ((bool, optional). Defaults to False.) – Determines whether to ignore the result of model introspection or not. If set to True, the save will ignore all model introspection errors.

  • wait_for_completion ((bool, optional). Defaults to True.) – Flag set for whether to wait for deployment to complete before proceeding.

  • deployment_display_name ((str, optional). Defaults to None.) – The name of the model deployment. If a deployment_display_name is not provided in kwargs, a randomly generated, easy-to-remember name with a timestamp will be used, like ‘strange-spider-2022-08-17-23:55.02’.

  • deployment_description ((str, optional). Defaults to None.) – The description of the model deployment.

  • deployment_instance_shape ((str, optional). Defaults to VM.Standard2.1.) – The shape of the instance used for deployment.

  • deployment_instance_subnet_id ((str, optional). Defaults to None.) – The subnet id of the instance used for deployment.

  • deployment_instance_count ((int, optional). Defaults to 1.) – The number of instances used for deployment.

  • deployment_bandwidth_mbps ((int, optional). Defaults to 10.) – The bandwidth limit on the load balancer in Mbps.

  • deployment_log_group_id ((str, optional). Defaults to None.) – The oci logging group id. The access log and predict log share the same log group.

  • deployment_access_log_id ((str, optional). Defaults to None.) – The access log OCID for the access logs. https://docs.oracle.com/en-us/iaas/data-science/using/model_dep_using_logging.htm

  • deployment_predict_log_id ((str, optional). Defaults to None.) – The predict log OCID for the predict logs. https://docs.oracle.com/en-us/iaas/data-science/using/model_dep_using_logging.htm

  • deployment_memory_in_gbs ((float, optional). Defaults to None.) – Specifies the size of the memory of the model deployment instance in GBs.

  • deployment_ocpus ((float, optional). Defaults to None.) – Specifies the ocpus count of the model deployment instance.

  • deployment_image ((str, optional). Defaults to None.) – The OCIR path of docker container image. Required for deploying model on container runtime.

  • bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for uploading large artifacts whose size is greater than 2 GB. Example: oci://<bucket_name>@<namespace>/prefix/.

  • overwrite_existing_artifact ((bool, optional). Defaults to True.) – Overwrite the target bucket artifact if it exists.

  • remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to the object storage bucket should be removed.

  • model_version_set ((Union[str, ModelVersionSet], optional). Defaults to None.) – The Model version set OCID, or name, or ModelVersionSet instance.

  • version_label ((str, optional). Defaults to None.) – The model version label.

  • kwargs

    impute_values: (dict, optional).

    The dictionary where the key is the column index (or column name for a pandas DataFrame) and the value is the impute value for the corresponding column.

    project_id: (str, optional).

    Project OCID. If not specified, the value will be taken either from the environment variables or model properties.

    compartment_id: (str, optional).

    Compartment OCID. If not specified, the value will be taken either from the environment variables or model properties.

    image_digest: (str, optional). Defaults to None.

    The digest of docker container image.

    cmd: (List, optional). Defaults to empty.

    The command line arguments for running docker container image.

    entrypoint: (List, optional). Defaults to empty.

    The entrypoint for running docker container image.

    server_port: (int, optional). Defaults to 8080.

    The server port for docker container image.

    health_check_port: (int, optional). Defaults to 8080.

    The health check port for docker container image.

    deployment_mode: (str, optional). Defaults to HTTPS_ONLY.

    The deployment mode. Allowed values are: HTTPS_ONLY and STREAM_ONLY.

    input_stream_ids: (List, optional). Defaults to empty.

    The input stream ids. Required for STREAM_ONLY mode.

    output_stream_ids: (List, optional). Defaults to empty.

    The output stream ids. Required for STREAM_ONLY mode.

    environment_variables: (Dict, optional). Defaults to empty.

    The environment variables for model deployment.

    timeout: (int, optional). Defaults to 10 seconds.

    The connection timeout in seconds for the client.

    max_wait_time: (int, optional). Defaults to 1200 seconds.

    Maximum amount of time to wait in seconds. Negative implies infinite wait time.

    poll_interval: (int, optional). Defaults to 10 seconds.

    Poll interval in seconds.

    freeform_tags: (Dict[str, str], optional). Defaults to None.

    Freeform tags of the model deployment.

    defined_tags: (Dict[str, dict[str, object]], optional). Defaults to None.

    Defined tags of the model deployment.

    region: (str, optional). Defaults to None.

    The destination Object Storage bucket region. By default the value will be extracted from the OCI_REGION_METADATA environment variables.

    Can also be any keyword argument for initializing ads.model.deployment.ModelDeploymentProperties. See ads.model.deployment.ModelDeploymentProperties() for details.

Returns:

The ModelDeployment instance.

Return type:

ModelDeployment

Raises:
  • FileExistsError – If files already exist but force_overwrite is False.

  • ValueError – If inference_python_version is not provided and cannot be found through the manifest file.
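
The following is a minimal, illustrative sketch of how this combined prepare/save/deploy parameter set is typically supplied. It assumes the parameters above belong to a GenericModel prepare_save_deploy-style call; the Toy estimator, conda slug, and shape below are placeholders rather than recommendations.

>>> import tempfile
>>> from ads.model.generic_model import GenericModel
>>> class Toy:
...     def predict(self, x):
...         return x ** 2
>>> model = GenericModel(estimator=Toy(), artifact_dir=tempfile.mkdtemp())
>>> # Prepare the artifact, save it to the model catalog, and deploy in one call.
>>> model.prepare_save_deploy(
...     inference_conda_env="generalml_p37_cpu_v1",
...     use_case_type="binary_classification",
...     model_display_name="demo-model",
...     deployment_instance_shape="VM.Standard2.1",
...     deployment_instance_count=1,
...     wait_for_completion=False,
... )
>>> model.predict(2)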

reload() GenericModel[source]#

Reloads the model artifact files: score.py and the runtime.yaml.

Returns:

An instance of GenericModel class.

Return type:

GenericModel

reload_runtime_info() None[source]#

Reloads the model artifact file: runtime.yaml.

Returns:

Nothing.

Return type:

None

restart_deployment(max_wait_time: int = 1200, poll_interval: int = 10) ModelDeployment[source]#

Restarts the current deployment.

Parameters:
  • max_wait_time ((int, optional). Defaults to 1200 seconds.) – Maximum amount of time to wait for activate or deactivate in seconds. Total amount of time to wait for restart deployment is twice as the value. Negative implies infinite wait time.

  • poll_interval ((int, optional). Defaults to 10 seconds.) – Poll interval in seconds.

Returns:

The ModelDeployment instance.

Return type:

ModelDeployment
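
For example, assuming model is a model instance with an existing deployment, a restart with a longer wait window might look like:

>>> # Wait up to 40 minutes total for deactivate + activate, polling every 30 seconds.
>>> model.restart_deployment(max_wait_time=2400, poll_interval=30)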

save(display_name: str | None = None, description: str | None = None, freeform_tags: dict | None = None, defined_tags: dict | None = None, ignore_introspection: bool | None = False, bucket_uri: str | None = None, overwrite_existing_artifact: bool | None = True, remove_existing_artifact: bool | None = True, model_version_set: str | ModelVersionSet | None = None, version_label: str | None = None, **kwargs) str[source]#

Saves model artifacts to the model catalog.

Parameters:
  • display_name ((str, optional). Defaults to None.) – The name of the model. If a display_name is not provided in kwargs, a randomly generated, easy-to-remember name with a timestamp will be used, like ‘strange-spider-2022-08-17-23:55.02’.

  • description ((str, optional). Defaults to None.) – The description of the model.

  • freeform_tags (Dict(str, str), Defaults to None.) – Freeform tags for the model.

  • defined_tags ((Dict(str, dict(str, object)), optional). Defaults to None.) – Defined tags for the model.

  • ignore_introspection ((bool, optional). Defaults to None.) – Determines whether to ignore the result of model introspection. If set to True, the save will ignore all model introspection errors.

  • bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for uploading large artifacts whose size is greater than 2 GB. Example: oci://<bucket_name>@<namespace>/prefix/.

  • overwrite_existing_artifact ((bool, optional). Defaults to True.) – Overwrite the target bucket artifact if it exists.

  • remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to the object storage bucket should be removed.

  • model_version_set ((Union[str, ModelVersionSet], optional). Defaults to None.) – The model version set OCID, or model version set name, or ModelVersionSet instance.

  • version_label ((str, optional). Defaults to None.) – The model version label.

  • kwargs

    project_id: (str, optional).

    Project OCID. If not specified, the value will be taken either from the environment variables or model properties.

    compartment_id: (str, optional).

    Compartment OCID. If not specified, the value will be taken either from the environment variables or model properties.

    region: (str, optional). Defaults to None.

    The destination Object Storage bucket region. By default the value will be extracted from the OCI_REGION_METADATA environment variables.

    timeout: (int, optional). Defaults to 10 seconds.

    The connection timeout in seconds for the client.

    Also can be any attribute that oci.data_science.models.Model accepts.

Raises:

RuntimeInfoInconsistencyError – When .runtime_info is not synced with the runtime.yaml file.

Returns:

The model id.

Return type:

str
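
A short, illustrative sketch of a typical save call; model is assumed to be an already prepared model instance, and the display name, tags, bucket URI, and version set name are placeholders.

>>> model_id = model.save(
...     display_name="demo-model",
...     freeform_tags={"project": "demo"},
...     bucket_uri="oci://<bucket_name>@<namespace>/prefix/",  # only needed for artifacts over 2GB
...     model_version_set="demo-version-set",
...     version_label="v1",
... )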

property schema_input#
property schema_output#
serialize_model(as_onnx: bool = False, initial_types: List[Tuple] | None = None, force_overwrite: bool = False, X_sample: any | None = None, **kwargs)[source]#

Serialize and save model using ONNX or model specific method.

Parameters:
  • as_onnx ((boolean, optional)) – If set as True, convert into ONNX model.

  • initial_types ((List[Tuple], optional)) – a python list. Each element is a tuple of a variable name and a data type.

  • force_overwrite ((boolean, optional)) – If set as True, overwrite serialized model if exists.

  • X_sample ((any, optional). Defaults to None.) – Contains model inputs such that model(X_sample) is a valid invocation of the model; used to validate the model input type.

Returns:

Nothing

Return type:

None

set_model_input_serializer(model_input_serializer: str | SERDE)[source]#

Registers serializer used for serializing data passed in verify/predict.

Examples

>>> generic_model.set_model_input_serializer(GenericModel.model_input_serializer_type.CLOUDPICKLE)
>>> # Register serializer by passing the name of it.
>>> generic_model.set_model_input_serializer("cloudpickle")
>>> # Example of creating a customized model input serializer and registering it.
>>> from ads.model import SERDE
>>> from ads.model.generic_model import GenericModel
>>> class MySERDE(SERDE):
...     def __init__(self):
...         super().__init__()
...     def serialize(self, data):
...         serialized_data = 1
...         return serialized_data
...     def deserialize(self, data):
...         deserialized_data = 2
...         return deserialized_data
>>> class Toy:
...     def predict(self, x):
...         return x ** 2
>>> generic_model = GenericModel(
...    estimator=Toy(),
...    artifact_dir=tempfile.mkdtemp(),
...    model_input_serializer=MySERDE()
... )
>>> # Or register the serializer after creating model instance.
>>> generic_model.set_model_input_serializer(MySERDE())
Parameters:

model_input_serializer ((str, or ads.model.SERDE)) – name of the serializer, or instance of SERDE.

set_model_save_serializer(model_save_serializer: str | SERDE)[source]#

Registers serializer used for saving model.

Examples

>>> generic_model.set_model_save_serializer(GenericModel.model_save_serializer_type.CLOUDPICKLE)
>>> # Register serializer by passing the name of it.
>>> generic_model.set_model_save_serializer("cloudpickle")
>>> # Example of creating a customized model save serializer and registering it.
>>> from ads.model import SERDE
>>> from ads.model.generic_model import GenericModel
>>> class MySERDE(SERDE):
...     def __init__(self):
...         super().__init__()
...     def serialize(self, data):
...         serialized_data = 1
...         return serialized_data
...     def deserialize(self, data):
...         deserialized_data = 2
...         return deserialized_data
>>> class Toy:
...     def predict(self, x):
...         return x ** 2
>>> generic_model = GenericModel(
...    estimator=Toy(),
...    artifact_dir=tempfile.mkdtemp(),
...    model_save_serializer=MySERDE()
... )
>>> # Or register the serializer after creating model instance.
>>> generic_model.set_model_save_serializer(MySERDE())
Parameters:

model_save_serializer ((ads.model.SERDE or str)) – name of the serializer or instance of SERDE.

summary_status() DataFrame[source]#

A summary table of the current status.

Returns:

The summary table of the current status.

Return type:

pd.DataFrame

update(**kwargs) GenericModel[source]#

Updates model metadata in the Model Catalog. Updates only metadata information. The model artifacts are immutable and cannot be updated.

Parameters:

kwargs

display_name: (str, optional). Defaults to None.

The name of the model.

description: (str, optional). Defaults to None.

The description of the model.

freeform_tags: (Dict(str, str), optional). Defaults to None.

Freeform tags for the model.

defined_tags: (Dict(str, dict(str, object)), optional). Defaults to None.

Defined tags for the model.

version_label: (str, optional). Defaults to None.

The model version label.

Additional kwargs arguments. Can be any attribute that oci.data_science.models.Model accepts.

Returns:

An instance of GenericModel (self).

Return type:

GenericModel

Raises:

ValueError – if model not saved to the Model Catalog.
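
A short sketch of updating catalog metadata after the model has been saved; the display name, description, and tags below are placeholders.

>>> model.update(
...     display_name="demo-model-v2",
...     description="Retrained on the latest data.",
...     freeform_tags={"stage": "production"},
... )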

classmethod update_deployment(model_deployment_id: str | None = None, properties: ModelDeploymentProperties | dict | None = None, wait_for_completion: bool = True, max_wait_time: int = 1200, poll_interval: int = 10, **kwargs) ModelDeployment[source]#

Updates a model deployment.

You can update model_deployment_configuration_details and change instance_shape and model_id when the model deployment is in the ACTIVE lifecycle state. The bandwidth_mbps or instance_count can only be updated while the model deployment is in the INACTIVE state. Changes to the bandwidth_mbps or instance_count will take effect the next time the ActivateModelDeployment action is invoked on the model deployment resource.

Examples

>>> # Update access log id, freeform tags and description for the model deployment
>>> model.update_deployment(
>>>     properties=ModelDeploymentProperties(
>>>         access_log_id=<log_ocid>,
>>>         description="Description for Custom Model",
>>>         freeform_tags={"key": "value"},
>>>     )
>>> )
Parameters:
  • model_deployment_id (str.) – The model deployment OCID. Defaults to None. If the method is called at the instance level, then self.model_deployment.model_deployment_id is used.

  • properties (ModelDeploymentProperties or dict) – The properties for updating the deployment.

  • wait_for_completion (bool) – Flag set for whether to wait for deployment to complete before proceeding. Defaults to True.

  • max_wait_time (int) – Maximum amount of time to wait in seconds (Defaults to 1200). Negative implies infinite wait time.

  • poll_interval (int) – Poll interval in seconds (Defaults to 10).

  • kwargs

    auth: (Dict, optional). Defaults to None.

    The default authentication is set using the ads.set_auth API. If you need to override the default, use ads.common.auth.api_keys or ads.common.auth.resource_principal to create an appropriate authentication signer and the kwargs required to instantiate an IdentityClient object.

Returns:

An instance of ModelDeployment class.

Return type:

ModelDeployment

upload_artifact(uri: str, auth: Dict | None = None, force_overwrite: bool | None = False) None[source]#

Uploads model artifacts to the provided uri. The artifacts will be zipped before uploading.

Parameters:
  • uri (str) –

    The destination location for the model artifacts, which can be a local path or OCI object storage URI. Examples:

    >>> upload_artifact(uri="/some/local/folder/")
    >>> upload_artifact(uri="oci://bucket@namespace/prefix/")
    

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using the ads.set_auth API. If you need to override the default, use ads.common.auth.api_keys or ads.common.auth.resource_principal to create an appropriate authentication signer and the kwargs required to instantiate an IdentityClient object.

  • force_overwrite (bool) – Overwrite the target directory if it exists.

verify(data: Any | None = None, reload_artifacts: bool = True, auto_serialize_data: bool = False, **kwargs) Dict[str, Any][source]#

Test if deployment works in local environment.

Examples

>>> uri = "https://github.com/pytorch/hub/raw/master/images/dog.jpg"
>>> prediction = model.verify(image=uri)['prediction']
>>> # examples on storage options
>>> prediction = model.verify(
...        image="oci://<bucket>@<tenancy>/myimage.png",
...        storage_options=ads.auth.default_signer()
... )['prediction']
Parameters:
  • data (Any) – Data used to test if deployment works in local environment.

  • reload_artifacts (bool. Defaults to True.) – Whether to reload artifacts or not.

  • is_json_payload (bool) – Defaults to False. Indicates whether to send the data with an application/json MIME type.

  • auto_serialize_data (bool.) – Whether to auto serialize input data. Defaults to False for GenericModel, and True for other frameworks. data must be JSON serializable if auto_serialize_data=False. If auto_serialize_data is set to True, data will be serialized before being sent to the model deployment endpoint.

  • kwargs

    content_type: str. Used to indicate the media type of the resource.

    image: PIL.Image object or URI for the image. A valid string path for the image file can be a local path, http(s), oci, s3, or gs.

    storage_options: dict

    Passed to fsspec.open for a particular storage connection. Please see fsspec (https://filesystem-spec.readthedocs.io/en/latest/api.html#fsspec.open) for more details.

Returns:

A dictionary which contains prediction results.

Return type:

Dict

class ads.model.HuggingFacePipelineModel(estimator: Callable, artifact_dir: str | None = None, properties: ModelProperties | None = None, auth: Dict = None, model_save_serializer: SERDE | None = 'huggingface', model_input_serializer: SERDE | None = 'cloudpickle', **kwargs)[source]#

Bases: FrameworkSpecificModel

HuggingFacePipelineModel class for estimators from HuggingFace framework.

algorithm#

The algorithm of the model.

Type:

str

artifact_dir#

Artifact directory to store the files needed for deployment.

Type:

str

auth#

Default authentication is set using the ads.set_auth API. To override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create an authentication signer to instantiate an IdentityClient object.

Type:

Dict

estimator#

A trained HuggingFace Pipeline using transformers.

Type:

Callable

framework#

“transformers”, the framework name of the model.

Type:

str

hyperparameter#

The hyperparameters of the estimator.

Type:

dict

metadata_custom#

The model custom metadata.

Type:

ModelCustomMetadata

metadata_provenance#

The model provenance metadata.

Type:

ModelProvenanceMetadata

metadata_taxonomy#

The model taxonomy metadata.

Type:

ModelTaxonomyMetadata

model_artifact#

This is built by calling prepare.

Type:

ModelArtifact

model_deployment#

A ModelDeployment instance.

Type:

ModelDeployment

model_file_name#

Name of the serialized model.

Type:

str

model_id#

The model ID.

Type:

str

properties#

ModelProperties object required to save and deploy model.

Type:

ModelProperties

runtime_info#

A RuntimeInfo instance.

Type:

RuntimeInfo

schema_input#

Schema describes the structure of the input data.

Type:

Schema

schema_output#

Schema describes the structure of the output data.

Type:

Schema

serialize#

Whether to serialize the model to pkl file by default. If False, you need to serialize the model manually, save it under artifact_dir and update the score.py manually.

Type:

bool

version#

The framework version of the model.

Type:

str

delete_deployment(...)#

Deletes the current model deployment.

deploy(..., \*\*kwargs)#

Deploys a model.

from_model_artifact(uri, model_file_name, artifact_dir, ..., \*\*kwargs)#

Loads model from the specified folder, or zip/tar archive.

from_model_catalog(model_id, model_file_name, artifact_dir, ..., \*\*kwargs)#

Loads model from model catalog.

introspect(...)#

Runs model introspection.

predict(data, ...)#

Returns prediction of input data run against the model deployment endpoint.

prepare(..., \*\*kwargs)#

Prepare and save the score.py, serialized model and runtime.yaml file.

reload(...)#

Reloads the model artifact files: score.py and the runtime.yaml.

save(..., \*\*kwargs)#

Saves model artifacts to the model catalog.

summary_status(...)#

Gets a summary table of the current status.

verify(data, ...)#

Tests if deployment works in local environment.

Examples

>>> # Image Classification
>>> from transformers import pipeline
>>> import tempfile
>>> import PIL.Image
>>> import ads
>>> import requests
>>> import cloudpickle
>>> ## Download image data
>>> image_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
>>> image = PIL.Image.open(requests.get(image_url, stream=True).raw)
>>> image_bytes = cloudpickle.dumps(image) # convert image to bytes
>>> ## Download a pretrained model
>>> vision_classifier = pipeline(model="google/vit-base-patch16-224")
>>> preds = vision_classifier(images=image)
>>> ## Initiate a HuggingFacePipelineModel instance
>>> vision_model = HuggingFacePipelineModel(vision_classifier, artifact_dir=tempfile.mkdtemp())
>>> ## Prepare
>>> vision_model.prepare(inference_conda_env="pytorch110_p38_cpu_v1", force_overwrite=True)
>>> ## Verify
>>> vision_model.verify(image)
>>> vision_model.verify(image_bytes)
>>> ## Save
>>> vision_model.save()
>>> ## Deploy
>>> log_group_id = "<log_group_id>"
>>> log_id = "<log_id>"
>>> vision_model.deploy(deployment_bandwidth_mbps=1000,
...                wait_for_completion=False,
...                deployment_log_group_id = log_group_id,
...                deployment_access_log_id = log_id,
...                deployment_predict_log_id = log_id)
>>> ## Predict from endpoint
>>> vision_model.predict(image)
>>> vision_model.predict(image_bytes)
>>> ### Invoke the model
>>> auth = ads.common.auth.default_signer()['signer']
>>> endpoint = vision_model.model_deployment.url + "/predict"
>>> headers = {"Content-Type": "application/octet-stream"}
>>> requests.post(endpoint, data=image_bytes, auth=auth, headers=headers).json()

Examples

>>> # Image Segmentation
>>> from transformers import pipeline
>>> import tempfile
>>> import PIL.Image
>>> import ads
>>> import requests
>>> import cloudpickle
>>> ## Download image data
>>> image_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
>>> image = PIL.Image.open(requests.get(image_url, stream=True).raw)
>>> image_bytes = cloudpickle.dumps(image) # convert image to bytes
>>> ## Download pretrained model
>>> segmenter = pipeline(task="image-segmentation")
>>> preds = segmenter(image)
>>> ## Initiate a HuggingFacePipelineModel instance
>>> segmentation_model = HuggingFacePipelineModel(segmenter, artifact_dir=tempfile.mkdtemp())
>>> ## Prepare
>>> conda = "oci://bucket@namespace/path/to/conda/pack"
>>> python_version = "3.8"
>>> segmentation_model.prepare(inference_conda_env=conda, inference_python_version = python_version, force_overwrite=True)
>>> ## Verify
>>> segmentation_model.verify(data=image)
>>> segmentation_model.verify(data=image_bytes)
>>> ## Save
>>> segmentation_model.save()
>>> log_group_id = "<log_group_id>"
>>> log_id = "<log_id>"
>>> ## Deploy
>>> segmentation_model.deploy(deployment_bandwidth_mbps=1000,
...                 wait_for_completion=False,
...                 deployment_log_group_id = log_group_id,
...                 deployment_access_log_id = log_id,
...                 deployment_predict_log_id = log_id)
>>> ## Predict from endpoint
>>> segmentation_model.predict(image)
>>> segmentation_model.predict(image_bytes)
>>> ## Invoke the model
>>> auth = ads.common.auth.default_signer()['signer']
>>> endpoint = segmentation_model.model_deployment.url + "/predict"
>>> headers = {"Content-Type": "application/octet-stream"}
>>> requests.post(endpoint, data=image_bytes, auth=auth, headers=headers).json()

Examples

>>> # Zero Shot Image Classification
>>> from transformers import pipeline
>>> import tempfile
>>> import PIL.Image
>>> import ads
>>> import requests
>>> import cloudpickle
>>> ## Download the image data
>>> image_url = "https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png"
>>> image = PIL.Image.open(requests.get(image_url, stream=True).raw)
>>> image_bytes = cloudpickle.dumps(image)
>>> ## Download a pretrained model
>>> classifier = pipeline(model="openai/clip-vit-large-patch14")
>>> classifier(
...         images=image,
...         candidate_labels=["animals", "humans", "landscape"],
...     )
>>> ## Initiate a HuggingFacePipelineModel instance
>>> zero_shot_image_classification_model = HuggingFacePipelineModel(classifier, artifact_dir=tempfile.mkdtemp())
>>> conda = "oci://bucket@namespace/path/to/conda/pack"
>>> python_version = "3.8"
>>> ## Prepare
>>> zero_shot_image_classification_model.prepare(inference_conda_env=conda, inference_python_version = python_version, force_overwrite=True)
>>> data = {"images": image, "candidate_labels": ["animals", "humans", "landscape"]}
>>> body = cloudpickle.dumps(data) # convert image to bytes
>>> ## Verify
>>> zero_shot_image_classification_model.verify(data=data)
>>> zero_shot_image_classification_model.verify(data=body)
>>> ## Save
>>> zero_shot_image_classification_model.save()
>>> ## Deploy
>>> log_group_id = "<log_group_id>"
>>> log_id = "<log_id>"
>>> zero_shot_image_classification_model.deploy(deployment_bandwidth_mbps=1000,
...                 wait_for_completion=False,
...                 deployment_log_group_id = log_group_id,
...                 deployment_access_log_id = log_id,
...                 deployment_predict_log_id = log_id)
>>> ## Predict from endpoint
>>> zero_shot_image_classification_model.predict(image)
>>> zero_shot_image_classification_model.predict(body)
>>> ### Invoke the model
>>> auth = ads.common.auth.default_signer()['signer']
>>> endpoint = zero_shot_image_classification_model.model_deployment.url + "/predict"
>>> headers = {"Content-Type": "application/octet-stream"}
>>> requests.post(endpoint, data=body, auth=auth, headers=headers).json()

Initiates a HuggingFacePipelineModel instance.

Parameters:
  • estimator (Callable) – HuggingFace Pipeline model.

  • artifact_dir (str) – Directory for the generated artifact.

  • properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model. For more details, check https://accelerated-data-science.readthedocs.io/en/latest/ads.model.html#module-ads.model.model_properties.

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using the ads.set_auth API. If you need to override the default, use ads.common.auth.api_keys or ads.common.auth.resource_principal to create an appropriate authentication signer and the kwargs required to instantiate an IdentityClient object.

  • model_save_serializer ((SERDE or str, optional). Defaults to None.) – Instance of ads.model.SERDE. Used to serialize/deserialize the model.

  • model_input_serializer ((SERDE, optional). Defaults to None.) – Instance of ads.model.SERDE. Used to serialize/deserialize data.

Returns:

HuggingFacePipelineModel instance.

Return type:

HuggingFacePipelineModel

Examples

>>> from transformers import pipeline
>>> import tempfile
>>> import PIL.Image
>>> import ads
>>> import requests
>>> import cloudpickle
>>> ## download the image
>>> image_url = "https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png"
>>> image = PIL.Image.open(requests.get(image_url, stream=True).raw)
>>> image_bytes = cloudpickle.dumps(image)
>>> ## download the pretrained model
>>> classifier = pipeline(model="openai/clip-vit-large-patch14")
>>> classifier(
...         images=image,
...         candidate_labels=["animals", "humans", "landscape"],
...     )
>>> ## Initiate a HuggingFacePipelineModel instance
>>> zero_shot_image_classification_model = HuggingFacePipelineModel(classifier, artifact_dir=tempfile.mkdtemp())
>>> ## Prepare a model artifact
>>> conda = "oci://bucket@namespace/path/to/conda/pack"
>>> python_version = "3.8"
>>> zero_shot_image_classification_model.prepare(inference_conda_env=conda, inference_python_version = python_version, force_overwrite=True)
>>> ## Test data
>>> data = {"images": image, "candidate_labels": ["animals", "humans", "landscape"]}
>>> body = cloudpickle.dumps(data) # convert image to bytes
>>> ## Verify
>>> zero_shot_image_classification_model.verify(data=data)
>>> zero_shot_image_classification_model.verify(data=body)
>>> ## Save
>>> zero_shot_image_classification_model.save()
>>> ## Deploy
>>> log_group_id = "<log_group_id>"
>>> log_id = "<log_id>"
>>> zero_shot_image_classification_model.deploy(deployment_bandwidth_mbps=100,
...                 wait_for_completion=False,
...                 deployment_log_group_id = log_group_id,
...                 deployment_access_log_id = log_id,
...                 deployment_predict_log_id = log_id)
>>> zero_shot_image_classification_model.predict(image)
>>> zero_shot_image_classification_model.predict(body)
>>> ### Invoke the model by sending bytes
>>> auth = ads.common.auth.default_signer()['signer']
>>> endpoint = zero_shot_image_classification_model.model_deployment.url + "/predict"
>>> headers = {"Content-Type": "application/octet-stream"}
>>> requests.post(endpoint, data=body, auth=auth, headers=headers).json()
model_save_serializer_type#

alias of HuggingFaceSerializerType

serialize_model(as_onnx: bool = False, force_overwrite: bool = False, X_sample: Dict | str | List | Image | None = None, **kwargs) None[source]#

Serialize and save HuggingFace model using model specific method.

Parameters:
  • as_onnx ((bool, optional). Defaults to False.) – If set as True, convert into ONNX model.

  • force_overwrite ((bool, optional). Defaults to False.) – If set as True, overwrite serialized model if exists.

  • X_sample (Union[Dict, str, List, PIL.Image.Image]. Defaults to None.) – A sample of input data that will be used to generate input schema and detect onnx_args.

Returns:

Nothing.

Return type:

None

class ads.model.LightGBMModel(estimator: Callable, artifact_dir: str | None = None, properties: ModelProperties | None = None, auth: Dict | None = None, model_save_serializer: SERDE | None = None, model_input_serializer: SERDE | None = None, **kwargs)[source]#

Bases: FrameworkSpecificModel

LightGBMModel class for estimators from Lightgbm framework.

algorithm#

The algorithm of the model.

Type:

str

artifact_dir#

Artifact directory to store the files needed for deployment.

Type:

str

auth#

Default authentication is set using the ads.set_auth API. To override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create an authentication signer to instantiate an IdentityClient object.

Type:

Dict

estimator#

A trained lightgbm estimator/model using Lightgbm.

Type:

Callable

framework#

“lightgbm”, the framework name of the model.

Type:

str

hyperparameter#

The hyperparameters of the estimator.

Type:

dict

metadata_custom#

The model custom metadata.

Type:

ModelCustomMetadata

metadata_provenance#

The model provenance metadata.

Type:

ModelProvenanceMetadata

metadata_taxonomy#

The model taxonomy metadata.

Type:

ModelTaxonomyMetadata

model_artifact#

This is built by calling prepare.

Type:

ModelArtifact

model_deployment#

A ModelDeployment instance.

Type:

ModelDeployment

model_file_name#

Name of the serialized model.

Type:

str

model_id#

The model ID.

Type:

str

properties#

ModelProperties object required to save and deploy model. For more details, check https://accelerated-data-science.readthedocs.io/en/latest/ads.model.html#module-ads.model.model_properties.

Type:

ModelProperties

runtime_info#

A RuntimeInfo instance.

Type:

RuntimeInfo

schema_input#

Schema describes the structure of the input data.

Type:

Schema

schema_output#

Schema describes the structure of the output data.

Type:

Schema

serialize#

Whether to serialize the model to pkl file by default. If False, you need to serialize the model manually, save it under artifact_dir and update the score.py manually.

Type:

bool

version#

The framework version of the model.

Type:

str

delete_deployment(...)#

Deletes the current model deployment.

deploy(..., \*\*kwargs)#

Deploys a model.

from_model_artifact(uri, model_file_name, artifact_dir, ..., \*\*kwargs)#

Loads model from the specified folder, or zip/tar archive.

from_model_catalog(model_id, model_file_name, artifact_dir, ..., \*\*kwargs)#

Loads model from model catalog.

introspect(...)#

Runs model introspection.

predict(data, ...)#

Returns prediction of input data run against the model deployment endpoint.

prepare(..., \*\*kwargs)#

Prepare and save the score.py, serialized model and runtime.yaml file.

reload(...)#

Reloads the model artifact files: score.py and the runtime.yaml.

save(..., \*\*kwargs)#

Saves model artifacts to the model catalog.

summary_status(...)#

Gets a summary table of the current status.

verify(data, ...)#

Tests if deployment works in local environment.

Examples

>>> import lightgbm as lgb
>>> import tempfile
>>> from sklearn.model_selection import train_test_split
>>> from sklearn.datasets import load_iris
>>> from ads.model.framework.lightgbm_model import LightGBMModel
>>> iris = load_iris()
>>> X, y = iris.data, iris.target
>>> X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)
>>> train = lgb.Dataset(X_train, label=y_train)
>>> param = {
...        'objective': 'multiclass', 'num_class': 3,
...        }
>>> lightgbm_estimator = lgb.train(param, train)
>>> lightgbm_model = LightGBMModel(estimator=lightgbm_estimator,
... artifact_dir=tempfile.mkdtemp())
>>> lightgbm_model.prepare(inference_conda_env="generalml_p37_cpu_v1", force_overwrite=True)
>>> lightgbm_model.reload()
>>> lightgbm_model.verify(X_test)
>>> lightgbm_model.save()
>>> model_deployment = lightgbm_model.deploy(wait_for_completion=False)
>>> lightgbm_model.predict(X_test)

Initiates a LightGBMModel instance. This class wraps the LightGBM model as an estimator. Its primary purpose is to hold the trained model and perform serialization.

Parameters:
  • estimator – Any model object generated by the LightGBM framework.

  • artifact_dir (str) – Directory for the generated artifact.

  • properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy the model.

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using the ads.set_auth API. If you need to override the default, use ads.common.auth.api_keys or ads.common.auth.resource_principal to create an appropriate authentication signer and the kwargs required to instantiate an IdentityClient object.

  • model_save_serializer ((SERDE or str, optional). Defaults to None.) – Instance of ads.model.SERDE. Used to serialize/deserialize the model.

  • model_input_serializer ((SERDE, optional). Defaults to None.) – Instance of ads.model.SERDE. Used to serialize/deserialize data.

Returns:

LightGBMModel instance.

Return type:

LightGBMModel

Raises:

TypeError – If the input model is not a LightGBM model or is not supported for serialization.

Examples

>>> import lightgbm as lgb
>>> import tempfile
>>> from sklearn.model_selection import train_test_split
>>> from sklearn.datasets import load_iris
>>> from ads.model.framework.lightgbm_model import LightGBMModel
>>> iris = load_iris()
>>> X, y = iris.data, iris.target
>>> X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)
>>> train = lgb.Dataset(X_train, label=y_train)
>>> param = {
... 'objective': 'multiclass', 'num_class': 3,
... }
>>> lightgbm_estimator = lgb.train(param, train)
>>> lightgbm_model = LightGBMModel(estimator=lightgbm_estimator, artifact_dir=tempfile.mkdtemp())
>>> lightgbm_model.prepare(inference_conda_env="generalml_p37_cpu_v1")
>>> lightgbm_model.verify(X_test)
>>> lightgbm_model.save()
>>> model_deployment = lightgbm_model.deploy()
>>> lightgbm_model.predict(X_test)
>>> lightgbm_model.delete_deployment()
model_save_serializer_type#

alias of LightGBMModelSerializerType

serialize_model(as_onnx: bool = False, initial_types: List[Tuple] | None = None, force_overwrite: bool = False, X_sample: Dict | str | List | Tuple | ndarray | Series | DataFrame | None = None, **kwargs: Dict)[source]#

Serialize and save Lightgbm model.

Parameters:
  • as_onnx ((boolean, optional). Defaults to False.) – If set as True, provide initial_types or X_sample to convert into ONNX.

  • initial_types ((List[Tuple], optional). Defaults to None.) – Each element is a tuple of a variable name and a type.

  • force_overwrite ((boolean, optional). Defaults to False.) – If set as True, overwrite serialized model if exists.

  • X_sample (Union[Dict, str, List, np.ndarray, pd.core.series.Series, pd.core.frame.DataFrame]. Defaults to None.) – Contains model inputs such that model(X_sample) is a valid invocation of the model. Used to generate initial_types.

Returns:

Nothing.

Return type:

None
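
As an illustrative sketch reusing the LightGBM example above, the trained booster could instead be written as an ONNX artifact by passing as_onnx=True and letting X_sample drive the generation of initial_types:

>>> # Write an ONNX artifact; initial_types is inferred from the sample input.
>>> lightgbm_model.serialize_model(
...     as_onnx=True,
...     X_sample=X_test[:5],
...     force_overwrite=True,
... )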

class ads.model.ModelDeployer(config: dict | None = None, ds_client: DataScienceClient | None = None)[source]#

Bases: object

ModelDeployer is the class responsible for deploying the ModelDeployment

config#

ADS auth dictionary for OCI authentication.

Type:

dict

ds_client#

data science client

Type:

DataScienceClient

ds_composite_client#

composite data science client

Type:

DataScienceCompositeClient

deploy(model_deployment_details, \*\*kwargs)[source]#

Deploy the model specified by model_deployment_details.

get_model_deployment(model_deployment_id: str)[source]#

Get the ModelDeployment specified by model_deployment_id.

get_model_deployment_state(model_deployment_id)[source]#

Get the state of the current deployment specified by id.

delete(model_deployment_id, \*\*kwargs)[source]#

Remove the model deployment specified by the id or Model Deployment Object

list_deployments(status)[source]#

lists the model deployments associated with current compartment and data science client

show_deployments(status)[source]#

shows the deployments filtered by status in a Dataframe

Initializes model deployer.

Parameters:

config (dict, optional) –

ADS auth dictionary for OCI authentication.

This can be generated by calling ads.common.auth.api_keys() or ads.common.auth.resource_principal(). If this is None, ads.common.default_signer(client_kwargs) will be used.

ds_client: oci.data_science.data_science_client.DataScienceClient

The Oracle DataScience client.

delete(model_deployment_id, wait_for_completion: bool = True, max_wait_time: int = 1200, poll_interval: int = 10) ModelDeployment[source]#

Deletes the model deployment specified by OCID.

Parameters:
  • model_deployment_id (str) – Model deployment OCID.

  • wait_for_completion (bool) – Wait for deletion to complete. Defaults to True.

  • max_wait_time (int) – Maximum amount of time to wait in seconds (Defaults to 1200). Negative implies infinite wait time.

  • poll_interval (int) – Poll interval in seconds (Defaults to 10).

Return type:

A ModelDeployment instance that was deleted

deploy(properties: ModelDeploymentProperties | Dict | None = None, wait_for_completion: bool = True, max_wait_time: int = 1200, poll_interval: int = 10, **kwargs) ModelDeployment[source]#

Deploys a model.

Parameters:
  • properties (ModelDeploymentProperties or dict) – Properties to deploy the model. Properties can be None when kwargs are used for specifying properties.

  • wait_for_completion (bool) – Flag set for whether to wait for deployment to complete before proceeding. Optional, defaults to True.

  • max_wait_time (int) – Maximum amount of time to wait in seconds. Optional, defaults to 1200. Negative value implies infinite wait time.

  • poll_interval (int) – Poll interval in seconds. Optional, defaults to 10.

  • kwargs – Keyword arguments for initializing ModelDeploymentProperties. See ModelDeploymentProperties() for details.

Returns:

A ModelDeployment instance.

Return type:

ModelDeployment
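
A minimal sketch of deploying a catalog model through ModelDeployer. The keyword arguments are forwarded to ModelDeploymentProperties; the OCIDs and shape below are placeholders, and the exact set of accepted keywords is assumed from ModelDeploymentProperties rather than confirmed here.

>>> from ads.model import ModelDeployer
>>> deployer = ModelDeployer()
>>> deployment = deployer.deploy(
...     model_id="<model_ocid>",
...     display_name="demo deployment",
...     instance_shape="VM.Standard2.1",
...     instance_count=1,
...     project_id="<project_ocid>",
...     compartment_id="<compartment_ocid>",
...     wait_for_completion=False,
... )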

deploy_from_model_uri(model_uri: str, properties: ModelDeploymentProperties | Dict | None = None, wait_for_completion: bool = True, max_wait_time: int = 1200, poll_interval: int = 10, **kwargs) ModelDeployment[source]#

Deploys a model.

Parameters:
  • model_uri (str) – uri to model files, can be local or in cloud storage

  • properties (ModelDeploymentProperties or dict) – Properties to deploy the model. Properties can be None when kwargs are used for specifying properties.

  • wait_for_completion (bool) – Flag set for whether to wait for deployment to complete before proceeding. Defaults to True

  • max_wait_time (int) – Maximum amount of time to wait in seconds (Defaults to 1200). Negative implies infinite wait time.

  • poll_interval (int) – Poll interval in seconds (Defaults to 10).

  • kwargs – Keyword arguments for initializing ModelDeploymentProperties

Returns:

A ModelDeployment instance

Return type:

ModelDeployment

get_model_deployment(model_deployment_id: str) ModelDeployment[source]#

Gets a ModelDeployment by OCID.

Parameters:

model_deployment_id (str) – Model deployment OCID

Returns:

A ModelDeployment instance

Return type:

ModelDeployment

get_model_deployment_state(model_deployment_id: str) State[source]#

Gets the state of a deployment specified by OCID

Parameters:

model_deployment_id (str) – Model deployment OCID

Returns:

The state of the deployment

Return type:

str

list_deployments(status=None, compartment_id=None, **kwargs) list[source]#

Lists the model deployments associated with current compartment and data science client

Parameters:
  • status (str) – Status of deployment. Defaults to None.

  • compartment_id (str) – Target compartment to list deployments from. Defaults to the compartment set in the environment variable “NB_SESSION_COMPARTMENT_OCID”. If “NB_SESSION_COMPARTMENT_OCID” is not set, the root compartment ID will be used. A ValueError will be raised if the root compartment ID cannot be determined.

  • kwargs – The values are passed to oci.data_science.DataScienceClient.list_model_deployments.

Returns:

A list of ModelDeployment objects.

Return type:

list

Raises:

ValueError – If compartment_id is not specified and cannot be determined from the environment.

show_deployments(status=None, compartment_id=None) DataFrame[source]#
Returns the model deployments associated with the current compartment and data science client as a DataFrame that can be easily visualized.

Parameters:
  • status (str) – Status of deployment. Defaults to None.

  • compartment_id (str) – Target compartment to list deployments from. Defaults to the compartment set in the environment variable “NB_SESSION_COMPARTMENT_OCID”. If “NB_SESSION_COMPARTMENT_OCID” is not set, the root compartment ID will be used. A ValueError will be raised if the root compartment ID cannot be determined.

Returns:

pandas Dataframe containing information about the ModelDeployments

Return type:

DataFrame

Raises:

ValueError – If compartment_id is not specified and cannot be determined from the environment.
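
For example, assuming deployer is a ModelDeployer instance, the active deployments in a specific compartment can be listed and inspected as a DataFrame (the compartment OCID is a placeholder):

>>> active = deployer.list_deployments(status="ACTIVE", compartment_id="<compartment_ocid>")
>>> deployer.show_deployments(status="ACTIVE", compartment_id="<compartment_ocid>")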

update(model_deployment_id: str, properties: ModelDeploymentProperties | None = None, wait_for_completion: bool = True, max_wait_time: int = 1200, poll_interval: int = 10, **kwargs) ModelDeployment[source]#

Updates an existing model deployment.

Parameters:
  • model_deployment_id (str) – Model deployment OCID.

  • properties (ModelDeploymentProperties) – An instance of ModelDeploymentProperties or dict to initialize the ModelDeploymentProperties. Defaults to None.

  • wait_for_completion (bool) – Flag set for whether to wait for deployment to complete before proceeding. Defaults to True.

  • max_wait_time (int) – Maximum amount of time to wait in seconds (Defaults to 1200).

  • poll_interval (int) – Poll interval in seconds (Defaults to 10).

  • kwargs – Keyword arguments for initializing ModelDeploymentProperties.

Returns:

A ModelDeployment instance

Return type:

ModelDeployment

class ads.model.ModelDeployment(properties: ModelDeploymentProperties | Dict | None = None, config: Dict | None = None, model_deployment_id: str | None = None, model_deployment_url: str = '', spec: Dict | None = None, **kwargs)[source]#

Bases: Builder

A class used to represent a Model Deployment.

config#

Deployment configuration parameters

Type:

(dict)

properties#

ModelDeploymentProperties object

Type:

(ModelDeploymentProperties)

workflow_state_progress#

Workflow request id

Type:

(str)

workflow_steps#

The number of steps in the workflow

Type:

(int)

dsc_model_deployment#

The OCIDataScienceModelDeployment instance.

Type:

(OCIDataScienceModelDeployment)

state#

Returns the deployment state of the current Model Deployment object

Type:

(State)

created_by#

The user that creates the model deployment

Type:

(str)

lifecycle_state#

Model deployment lifecycle state

Type:

(str)

lifecycle_details#

Model deployment lifecycle details

Type:

(str)

time_created#

The time when the model deployment is created

Type:

(datetime)

display_name#

Model deployment display name

Type:

(str)

description#

Model deployment description

Type:

(str)

freeform_tags#

Model deployment freeform tags

Type:

(dict)

defined_tags#

Model deployment defined tags

Type:

(dict)

runtime#

Model deployment runtime

Type:

(ModelDeploymentRuntime)

infrastructure#

Model deployment infrastructure

Type:

(ModelDeploymentInfrastructure)

deploy(wait_for_completion, \*\*kwargs)[source]#

Deploy the current Model Deployment object

delete(wait_for_completion, \*\*kwargs)[source]#

Deletes the current Model Deployment object

update(wait_for_completion, \*\*kwargs)[source]#

Updates a model deployment

activate(wait_for_completion, max_wait_time, poll_interval)[source]#

Activates a model deployment

deactivate(wait_for_completion, max_wait_time, poll_interval)[source]#

Deactivates a model deployment

list(status, compartment_id, project_id, \*\*kwargs)[source]#

List model deployment within given compartment and project.

with_display_name(display_name)[source]#

Sets model deployment display name

with_description(description)[source]#

Sets model deployment description

with_freeform_tags(freeform_tags)[source]#

Sets model deployment freeform tags

with_defined_tags(defined_tags)[source]#

Sets model deployment defined tags

with_runtime(self, runtime)[source]#

Sets model deployment runtime

with_infrastructure(self, infrastructure)[source]#

Sets model deployment infrastructure

from_dict(obj_dict)[source]#

Deserializes model deployment instance from dict

from_id(id)[source]#

Loads model deployment instance from ocid

sync()[source]#

Updates the model deployment instance from backend

Examples

>>> # Build model deployment from builder apis:
>>> ds_model_deployment = (ModelDeployment()
...    .with_display_name("TestModelDeployment")
...    .with_description("Testing the test model deployment")
...    .with_freeform_tags(tag1="val1", tag2="val2")
...    .with_infrastructure(
...        (ModelDeploymentInfrastructure()
...        .with_project_id(<project_id>)
...        .with_compartment_id(<compartment_id>)
...        .with_shape_name("VM.Standard.E4.Flex")
...        .with_shape_config_details(
...            ocpus=1,
...            memory_in_gbs=16
...        )
...        .with_replica(1)
...        .with_bandwidth_mbps(10)
...        .with_web_concurrency(10)
...        .with_access_log(
...            log_group_id=<log_group_id>,
...            log_id=<log_id>
...        )
...        .with_predict_log(
...            log_group_id=<log_group_id>,
...            log_id=<log_id>
...        ))
...    )
...    .with_runtime(
...        (ModelDeploymentContainerRuntime()
...        .with_image(<image>)
...        .with_image_digest(<image_digest>)
...        .with_entrypoint(<entrypoint>)
...        .with_server_port(<server_port>)
...        .with_health_check_port(<health_check_port>)
...        .with_env({"key":"value"})
...        .with_deployment_mode("HTTPS_ONLY")
...        .with_model_uri(<model_uri>))
...    )
... )
>>> ds_model_deployment.deploy()
>>> ds_model_deployment.status
>>> ds_model_deployment.with_display_name("new name").update()
>>> ds_model_deployment.deactivate()
>>> ds_model_deployment.sync()
>>> ds_model_deployment.list(status="ACTIVE")
>>> ds_model_deployment.delete()
>>> # Build model deployment from yaml
>>> ds_model_deployment = ModelDeployment.from_yaml(uri=<path_to_yaml>)

Initializes a ModelDeployment object.

Parameters:
  • properties ((Union[ModelDeploymentProperties, Dict], optional). Defaults to None.) – Object containing deployment properties. The properties can be None when kwargs are used for specifying properties.

  • config ((Dict, optional). Defaults to None.) – ADS auth dictionary for OCI authentication. This can be generated by calling ads.common.auth.api_keys() or ads.common.auth.resource_principal(). If this is None then the ads.common.default_signer(client_kwargs) will be used.

  • model_deployment_id ((str, optional). Defaults to None.) – Model deployment OCID.

  • model_deployment_url ((str, optional). Defaults to empty string.) – Model deployment url.

  • spec ((dict, optional). Defaults to None.) – Model deployment spec.

  • kwargs – Keyword arguments for initializing ModelDeploymentProperties or ModelDeployment.

CONST_CREATED_BY = 'createdBy'#
CONST_DEFINED_TAG = 'definedTags'#
CONST_DESCRIPTION = 'description'#
CONST_DISPLAY_NAME = 'displayName'#
CONST_FREEFORM_TAG = 'freeformTags'#
CONST_ID = 'id'#
CONST_INFRASTRUCTURE = 'infrastructure'#
CONST_LIFECYCLE_DETAILS = 'lifecycleDetails'#
CONST_LIFECYCLE_STATE = 'lifecycleState'#
CONST_MODEL_DEPLOYMENT_URL = 'modelDeploymentUrl'#
CONST_RUNTIME = 'runtime'#
CONST_TIME_CREATED = 'timeCreated'#
property access_log: OCILog#

Gets the model deployment access logs object.

Returns:

The OCILog object containing the access logs.

Return type:

OCILog

activate(wait_for_completion: bool = True, max_wait_time: int = 1200, poll_interval: int = 10) ModelDeployment[source]#

Activates a model deployment

Parameters:
  • wait_for_completion (bool) – Flag set for whether to wait for deployment to be activated before proceeding. Defaults to True.

  • max_wait_time (int) – Maximum amount of time to wait in seconds (Defaults to 1200). Negative implies infinite wait time.

  • poll_interval (int) – Poll interval in seconds (Defaults to 10).

Returns:

The instance of ModelDeployment.

Return type:

ModelDeployment

attribute_map = {'createdBy': 'created_by', 'definedTags': 'defined_tags', 'description': 'description', 'displayName': 'display_name', 'freeformTags': 'freeform_tags', 'id': 'id', 'infrastructure': 'infrastructure', 'lifecycleDetails': 'lifecycle_details', 'lifecycleState': 'lifecycle_state', 'modelDeploymentUrl': 'model_deployment_url', 'runtime': 'runtime', 'timeCreated': 'time_created'}#
build() ModelDeployment[source]#

Load default values from the environment for the job infrastructure.

property created_by: str#

The user that creates the model deployment.

Returns:

The user that creates the model deployment.

Return type:

str

deactivate(wait_for_completion: bool = True, max_wait_time: int = 1200, poll_interval: int = 10) ModelDeployment[source]#

Deactivates a model deployment

Parameters:
  • wait_for_completion (bool) – Flag set for whether to wait for deployment to be deactivated before proceeding. Defaults to True.

  • max_wait_time (int) – Maximum amount of time to wait in seconds (Defaults to 1200). Negative implies infinite wait time.

  • poll_interval (int) – Poll interval in seconds (Defaults to 10).

Returns:

The instance of ModelDeployment.

Return type:

ModelDeployment

property defined_tags: Dict#

Model deployment defined tags.

Returns:

Model deployment defined tags.

Return type:

Dict

delete(wait_for_completion: bool = True, max_wait_time: int = 1200, poll_interval: int = 10)[source]#

Deletes the ModelDeployment

Parameters:
  • wait_for_completion (bool) – Flag set for whether to wait for deployment to be deleted before proceeding. Defaults to True.

  • max_wait_time (int) – Maximum amount of time to wait in seconds (Defaults to 1200). Negative implies infinite wait time.

  • poll_interval (int) – Poll interval in seconds (Defaults to 10).

Returns:

The instance of ModelDeployment.

Return type:

ModelDeployment

deploy(wait_for_completion: bool = True, max_wait_time: int = 1200, poll_interval: int = 10)[source]#

Deploys the current ModelDeployment object

Parameters:
  • wait_for_completion (bool) – Flag set for whether to wait for deployment to be deployed before proceeding. Defaults to True.

  • max_wait_time (int) – Maximum amount of time to wait in seconds (Defaults to 1200). Negative implies infinite wait time.

  • poll_interval (int) – Poll interval in seconds (Defaults to 10).

Returns:

The instance of ModelDeployment.

Return type:

ModelDeployment

Raises:

ModelDeploymentFailedError – If model deployment fails to deploy

property description: str#

Model deployment description.

Returns:

Model deployment description.

Return type:

str

property display_name: str#

Model deployment display name.

Returns:

Model deployment display name.

Return type:

str

property freeform_tags: Dict#

Model deployment freeform tags.

Returns:

Model deployment freeform tags.

Return type:

Dict

classmethod from_dict(obj_dict: Dict) ModelDeployment[source]#

Loads model deployment instance from a dictionary of configurations.

Parameters:

obj_dict (Dict) – A dictionary of configurations.

Returns:

The model deployment instance.

Return type:

ModelDeployment

classmethod from_id(id: str) ModelDeployment[source]#

Loads the model deployment instance from ocid.

Parameters:

id (str) – The ocid of model deployment.

Returns:

The ModelDeployment instance (self).

Return type:

ModelDeployment

property infrastructure: ModelDeploymentInfrastructure#

Model deployment infrastructure.

Returns:

Model deployment infrastructure.

Return type:

ModelDeploymentInfrastructure

initialize_spec_attributes = ['display_name', 'description', 'freeform_tags', 'defined_tags', 'infrastructure', 'runtime']#
property kind: str#

The kind of the object as showing in YAML.

Returns:

deployment

Return type:

str

property lifecycle_details: str#

Model deployment lifecycle details.

Returns:

Model deployment lifecycle details.

Return type:

str

property lifecycle_state: str#

Model deployment lifecycle state.

Returns:

Model deployment lifecycle state.

Return type:

str

classmethod list(status: str | None = None, compartment_id: str | None = None, project_id: str | None = None, **kwargs) List[ModelDeployment][source]#

Lists the model deployments associated with current compartment id and status

Parameters:
  • status (str) – Status of deployment. Defaults to None. Allowed values: ACTIVE, CREATING, DELETED, DELETING, FAILED, INACTIVE and UPDATING.

  • compartment_id (str) – Target compartment to list deployments from. Defaults to the compartment set in the environment variable “NB_SESSION_COMPARTMENT_OCID”. If “NB_SESSION_COMPARTMENT_OCID” is not set, the root compartment ID will be used. A ValueError will be raised if the root compartment ID cannot be determined.

  • project_id (str) – Target project to list deployments from. Defaults to the project id in the environment variable “PROJECT_OCID”.

  • kwargs – The values are passed to oci.data_science.DataScienceClient.list_model_deployments.

Returns:

A list of ModelDeployment objects.

Return type:

list
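
For example, to list the active deployments in a given compartment and project (the OCIDs are placeholders):

>>> from ads.model import ModelDeployment
>>> deployments = ModelDeployment.list(
...     status="ACTIVE",
...     compartment_id="<compartment_ocid>",
...     project_id="<project_ocid>",
... )
>>> [deployment.model_deployment_id for deployment in deployments]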

classmethod list_df(status: str | None = None, compartment_id: str | None = None, project_id: str | None = None) DataFrame[source]#
Returns the model deployments associated with the current compartment and status as a DataFrame that can be easily visualized.

Parameters:
  • status (str) – Status of deployment. Defaults to None. Allowed values: ACTIVE, CREATING, DELETED, DELETING, FAILED, INACTIVE and UPDATING.

  • compartment_id (str) – Target compartment to list deployments from. Defaults to the compartment set in the environment variable “NB_SESSION_COMPARTMENT_OCID”. If “NB_SESSION_COMPARTMENT_OCID” is not set, the root compartment ID will be used. A ValueError will be raised if the root compartment ID cannot be determined.

  • project_id (str) – Target project to list deployments from. Defaults to the project id in the environment variable “PROJECT_OCID”.

Returns:

pandas Dataframe containing information about the ModelDeployments

Return type:

DataFrame

logs(log_type: str | None = None) ConsolidatedLog[source]#

Gets the access or predict logs.

Parameters:

log_type ((str, optional). Defaults to None.) – The log type. Can be “access”, “predict” or None.

Returns:

The ConsolidatedLog object containing the logs.

Return type:

ConsolidatedLog

property model_deployment_id: str#

The model deployment ocid.

Returns:

The model deployment ocid.

Return type:

str

model_input_serializer = <ads.model.serde.model_input.JsonModelInputSERDE object>#
predict(json_input=None, data: ~typing.Any = None, serializer: ads.model.ModelInputSerializer = <ads.model.serde.model_input.JsonModelInputSERDE object>, auto_serialize_data: bool = False, model_name: str = None, model_version: str = None, **kwargs) dict[source]#

Returns prediction of input data run against the model deployment endpoint.

Examples

>>> import numpy as np
>>> from ads.model import ModelInputSerializer
>>> class MySerializer(ModelInputSerializer):
...     def serialize(self, data):
...         serialized_data = 1
...         return serialized_data
>>> model_deployment = ModelDeployment.from_id(<model_deployment_id>)
>>> prediction = model_deployment.predict(
...        data=np.array([1, 2, 3]),
...        serializer=MySerializer(),
...        auto_serialize_data=True,
... )['prediction']
Parameters:
  • json_input (Json serializable) – JSON payload for the prediction.

  • data (Any) – Data for the prediction.

  • serializer (ads.model.ModelInputSerializer) – Defaults to ads.model.JsonModelInputSerializer.

  • auto_serialize_data (bool) – Defaults to False. Indicates whether to auto serialize input data using serializer. If auto_serialize_data=False, data is required to be bytes or JSON serializable and json_input is required to be JSON serializable. If auto_serialize_data is set to True, data will be serialized before being sent to the model deployment endpoint.

  • model_name (str) – Defaults to None. When the inference_server=”triton”, the name of the model to invoke.

  • model_version (str) – Defaults to None. When the inference_server=”triton”, the version of the model to invoke.

  • kwargs

    content_type: str

    Used to indicate the media type of the resource. By default, it will be application/octet-stream for bytes input and application/json otherwise. The content-type header will be set to this value when calling the model deployment endpoint.

Returns:

Prediction results.

Return type:

dict

property predict_log: OCILog#

Gets the model deployment predict logs object.

Returns:

The OCILog object containing the predict logs.

Return type:

OCILog

property runtime: ModelDeploymentRuntime#

Model deployment runtime.

Returns:

Model deployment runtime.

Return type:

ModelDeploymentRuntime

show_logs(time_start: datetime | None = None, time_end: datetime | None = None, limit: int = 100, log_type: str | None = None)[source]#

Shows deployment logs as a pandas dataframe.

Parameters:
  • time_start ((datetime.datetime, optional). Defaults to None.) – Starting date and time in RFC3339 format for retrieving logs. If not set, logs from the last 14 days will be retrieved.

  • time_end ((datetime.datetime, optional). Defaults to None.) – Ending date and time in RFC3339 format for retrieving logs. Defaults to None. Logs will be retrieved until now.

  • limit ((int, optional). Defaults to 100.) – The maximum number of items to return.

  • log_type ((str, optional). Defaults to None.) – The log type. Can be “access”, “predict” or None.

Return type:

A pandas DataFrame containing logs.
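
For illustration, a sketch of pulling the last day of access logs into a DataFrame; model_deployment is an assumed existing instance:

>>> from datetime import datetime, timedelta
>>> logs_df = model_deployment.show_logs(
...     time_start=datetime.utcnow() - timedelta(days=1),
...     log_type="access",
...     limit=50,
... )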

property state: State#

Returns the deployment state of the current Model Deployment object

property status: State#

Returns the deployment state of the current Model Deployment object

sync() ModelDeployment[source]#

Updates the model deployment instance from backend.

Returns:

The ModelDeployment instance (self).

Return type:

ModelDeployment

property time_created: datetime#

The time when the model deployment is created.

Returns:

The time when the model deployment is created.

Return type:

datetime

to_dict(**kwargs) Dict[source]#

Serializes model deployment to a dictionary.

Returns:

The model deployment serialized as a dictionary.

Return type:

dict

property type: str#

The type of the object as showing in YAML.

Returns:

deployment

Return type:

str

update(properties: ModelDeploymentProperties | dict | None = None, wait_for_completion: bool = True, max_wait_time: int = 1200, poll_interval: int = 10, **kwargs)[source]#

Updates a model deployment

You can update model_deployment_configuration_details and change instance_shape and model_id when the model deployment is in the ACTIVE lifecycle state. The bandwidth_mbps or instance_count can only be updated while the model deployment is in the INACTIVE state. Changes to the bandwidth_mbps or instance_count will take effect the next time the ActivateModelDeployment action is invoked on the model deployment resource.

Parameters:
  • properties (ModelDeploymentProperties or dict) – The properties for updating the deployment.

  • wait_for_completion (bool) – Flag set for whether to wait for deployment to be updated before proceeding. Defaults to True.

  • max_wait_time (int) – Maximum amount of time to wait in seconds (Defaults to 1200). Negative implies infinite wait time.

  • poll_interval (int) – Poll interval in seconds (Defaults to 10).

  • kwargs – dict

Returns:

The instance of ModelDeployment.

Return type:

ModelDeployment
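
A hedged sketch of renaming an existing deployment; model_deployment is an assumed existing instance and the display name is only an example:

>>> from ads.model import ModelDeploymentProperties
>>> model_deployment = model_deployment.update(
...     properties=ModelDeploymentProperties(display_name="renamed-deployment"),
...     wait_for_completion=True,
... )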

property url: str#

Model deployment url.

Returns:

Model deployment url.

Return type:

str

watch(log_type: str = 'access', time_start: datetime.datetime = None, interval: int = 3, log_filter: str = None) ModelDeployment[source]#

Streams the access and/or predict logs of model deployment.

Parameters:
  • log_type (str, optional) – The log type. Can be access, predict or None. Defaults to access.

  • time_start (datetime.datetime, optional) – Starting time for the log query. Defaults to None.

  • interval (int, optional) – The time interval between sending each request to pull logs from OCI logging service. Defaults to 3.

  • log_filter (str, optional) – Expression for filtering the logs. This will be the WHERE clause of the query. Defaults to None.

Returns:

The instance of ModelDeployment.

Return type:

ModelDeployment
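
For illustration, a minimal sketch of streaming predict logs from an assumed existing deployment, polling every five seconds:

>>> model_deployment.watch(log_type="predict", interval=5)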

with_defined_tags(**kwargs) ModelDeployment[source]#

Sets the defined tags of model deployment.

Parameters:

kwargs – The defined tags of model deployment.

Returns:

The ModelDeployment instance (self).

Return type:

ModelDeployment

with_description(description: str) ModelDeployment[source]#

Sets the description of model deployment.

Parameters:

description (str) – The description of model deployment.

Returns:

The ModelDeployment instance (self).

Return type:

ModelDeployment

with_display_name(display_name: str) ModelDeployment[source]#

Sets the name of model deployment.

Parameters:

display_name (str) – The name of model deployment.

Returns:

The ModelDeployment instance (self).

Return type:

ModelDeployment

with_freeform_tags(**kwargs) ModelDeployment[source]#

Sets the freeform tags of model deployment.

Parameters:

kwargs – The freeform tags of model deployment.

Returns:

The ModelDeployment instance (self).

Return type:

ModelDeployment

with_infrastructure(infrastructure: ModelDeploymentInfrastructure) ModelDeployment[source]#

Sets the infrastructure of model deployment.

Parameters:

infrastructure (ModelDeploymentInfrastructure) – The infrastructure of model deployment.

Returns:

The ModelDeployment instance (self).

Return type:

ModelDeployment

with_runtime(runtime: ModelDeploymentRuntime) ModelDeployment[source]#

Sets the runtime of model deployment.

Parameters:

runtime (ModelDeploymentRuntime) – The runtime of model deployment.

Returns:

The ModelDeployment instance (self).

Return type:

ModelDeployment
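
The with_* methods above return self and can be chained in the builder style. A minimal sketch, assuming a ModelDeployment builder can be created without arguments and that infrastructure and runtime are pre-built ModelDeploymentInfrastructure and ModelDeploymentRuntime instances:

>>> deployment = (
...     ModelDeployment()
...     .with_display_name("clf-deployment")
...     .with_description("Scikit-learn classifier deployment")
...     .with_freeform_tags(team="data-science")
...     .with_infrastructure(infrastructure)  # assumed ModelDeploymentInfrastructure instance
...     .with_runtime(runtime)                # assumed ModelDeploymentRuntime instance
... )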

class ads.model.ModelDeploymentProperties(model_id: str | None = None, model_uri: str | None = None, oci_model_deployment: ModelDeployment | CreateModelDeploymentDetails | UpdateModelDeploymentDetails | Dict | None = None, config: dict | None = None, **kwargs)[source]#

Bases: OCIDataScienceMixin, ModelDeployment

Represents the details for a model deployment

swagger_types#

The property names and the corresponding types of OCI ModelDeployment model.

Type:

dict

model_id#

The model artifact OCID in model catalog.

Type:

str

model_uri#

URI to the model files; can be local or in cloud storage.

Type:

str

with_prop(property_name, value)[source]#

Sets the property_name attribute of the model deployment details to value.

with_instance_configuration(config)[source]#

Set the configuration of VM instance.

with_access_log(log_group_id, log_id)[source]#

Configures the access log with the OCI Logging service.

with_predict_log(log_group_id, log_id)[source]#

Configures the predict log with the OCI Logging service.

build()[source]#

Return an instance of CreateModelDeploymentDetails for creating the deployment.

Initializes a ModelDeploymentProperties object by specifying one of the following:

Parameters:
  • model_id ((str, optional). Defaults to None.) – Model Artifact OCID. The model_id must be specified either explicitly or as an attribute of the OCI object.

  • model_uri ((str, optional). Defaults to None.) – URI to model files, can be local or in cloud storage.

  • oci_model_deployment ((Union[ModelDeployment, CreateModelDeploymentDetails, UpdateModelDeploymentDetails, Dict], optional). Defaults to None.) – An OCI model or Dict containing model deployment details. The OCI model can be an instance of either ModelDeployment, CreateModelDeploymentDetails or UpdateModelDeploymentDetails.

  • config ((Dict, optional). Defaults to None.) – ADS auth dictionary for OCI authentication. This can be generated by calling ads.common.auth.api_keys() or ads.common.auth.resource_principal(). If this is None, ads.common.default_signer(client_kwargs) will be used.

  • kwargs

    Users can also initialize the object by using keyword arguments. The following keyword arguments are supported by oci.data_science.models.data_science_models.ModelDeployment:

    • display_name,

    • description,

    • project_id,

    • compartment_id,

    • model_deployment_configuration_details,

    • category_log_details,

    • freeform_tags,

    • defined_tags.

    If display_name is not specified, a randomly generated, easy-to-remember name will be used, for example ‘strange-spider-2022-08-17-23:55.02’.

    ModelDeploymentProperties also supports the following additional keyword arguments:

    • instance_shape,

    • instance_count,

    • bandwidth_mbps,

    • access_log_group_id,

    • access_log_id,

    • predict_log_group_id,

    • predict_log_id,

    • memory_in_gbs,

    • ocpus.

    These additional arguments will be saved into appropriate properties in the OCI model.

Raises:

ValueError – model_id is None AND not specified in oci_model_deployment.model_deployment_configuration_details.model_configuration_details.

build() CreateModelDeploymentDetails[source]#

Converts the deployment properties to an OCI CreateModelDeploymentDetails object. Converts a model URI into a model OCID if the user passed in a URI.

Returns:

A CreateModelDeploymentDetails instance ready for OCI API.

Return type:

CreateModelDeploymentDetails

sub_properties = ['instance_shape', 'instance_count', 'bandwidth_mbps', 'access_log_group_id', 'access_log_id', 'predict_log_group_id', 'predict_log_id', 'memory_in_gbs', 'ocpus']#
to_oci_model(oci_model)[source]#

Convert properties into an OCI data model

Parameters:

oci_model (class) – The class of OCI data model, e.g., oci.data_science.models.CreateModelDeploymentDetails

to_update_deployment() UpdateModelDeploymentDetails[source]#

Converts the deployment properties to OCI UpdateModelDeploymentDetails object.

Returns:

An UpdateModelDeploymentDetails instance ready for OCI API.

Return type:

UpdateModelDeploymentDetails

with_access_log(log_group_id: str, log_id: str)[source]#

Adds access log config

Parameters:
  • log_group_id (str) – Log group ID of OCI logging service

  • log_id (str) – Log ID of OCI logging service

Returns:

self

Return type:

ModelDeploymentProperties

with_category_log(log_type: str, group_id: str, log_id: str)[source]#

Adds category log configuration

Parameters:
  • log_type (str) – The type of logging to be configured. Must be “access” or “predict”

  • group_id (str) – Log group ID of OCI logging service

  • log_id (str) – Log ID of OCI logging service

Returns:

self

Return type:

ModelDeploymentProperties

Raises:

ValueError – When log_type is invalid

with_instance_configuration(config)[source]#

Sets the configuration of the VM instance with a specific config.

Parameters:

config (dict) –

dictionary containing instance configuration about the deployment. The following keys are supported:

  • instance_shape: str,

  • instance_count: int,

  • bandwidth_mbps: int,

  • memory_in_gbs: float,

  • ocpus: float

The instance_shape and instance_count are required when creating a new deployment. They are optional when updating an existing deployment.

Returns:

self

Return type:

ModelDeploymentProperties

with_logging_configuration(access_log_group_id: str, access_log_id: str, predict_log_group_id: str | None = None, predict_log_id: str | None = None)[source]#

Adds OCI logging configurations for OCI logging service

Parameters:
  • access_log_group_id (str) – Log group ID of OCI logging service for access log

  • access_log_id (str) – Log ID of OCI logging service for access log

  • predict_log_group_id (str) – Log group ID of OCI logging service for predict log

  • predict_log_id (str) – Log ID of OCI logging service for predict log

Returns:

self

Return type:

ModelDeploymentProperties

with_predict_log(log_group_id: str, log_id: str)[source]#

Adds predict log config

Parameters:
  • log_group_id (str) – Log group ID of OCI logging service

  • log_id (str) – Log ID of OCI logging service

Returns:

self

Return type:

ModelDeploymentProperties

with_prop(property_name: str, value: Any)[source]#

Sets model deployment’s property_name attribute to value

Parameters:
  • property_name (str) – Name of a model deployment property.

  • value – New value for property attribute.

Returns:

self

Return type:

ModelDeploymentProperties
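
A sketch of assembling deployment properties with the helpers above; the OCIDs are placeholders and the shape name is only an example:

>>> from ads.model import ModelDeploymentProperties
>>> props = (
...     ModelDeploymentProperties(model_id="<model_ocid>")
...     .with_prop("display_name", "clf-deployment")
...     .with_instance_configuration(
...         {"instance_shape": "VM.Standard2.1", "instance_count": 1}
...     )
...     .with_logging_configuration("<access_log_group_ocid>", "<access_log_ocid>")
... )
>>> create_details = props.build()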

class ads.model.ModelInputSerializer[source]#

Bases: Serializer

Abstract base class for creation of new data serializers.

serialize(data)[source]#

Serializes data/model into a specific type.

Returns:

Serialized data/model.

Return type:

object

class ads.model.ModelProperties(inference_conda_env: str | None = None, inference_python_version: str | None = None, training_conda_env: str | None = None, training_python_version: str | None = None, training_resource_id: str | None = None, training_script_path: str | None = None, training_id: str | None = None, compartment_id: str | None = None, project_id: str | None = None, bucket_uri: str | None = None, remove_existing_artifact: bool | None = None, overwrite_existing_artifact: bool | None = None, deployment_instance_shape: str | None = None, deployment_instance_subnet_id: str | None = None, deployment_instance_count: int | None = None, deployment_bandwidth_mbps: int | None = None, deployment_log_group_id: str | None = None, deployment_access_log_id: str | None = None, deployment_predict_log_id: str | None = None, deployment_memory_in_gbs: float | int | None = None, deployment_ocpus: float | int | None = None, deployment_image: str | None = None)[source]#

Bases: BaseProperties

Represents properties required to save and deploy model.

bucket_uri: str = None#
compartment_id: str = None#
deployment_access_log_id: str = None#
deployment_bandwidth_mbps: int = None#
deployment_image: str = None#
deployment_instance_count: int = None#
deployment_instance_shape: str = None#
deployment_instance_subnet_id: str = None#
deployment_log_group_id: str = None#
deployment_memory_in_gbs: float | int = None#
deployment_ocpus: float | int = None#
deployment_predict_log_id: str = None#
inference_conda_env: str = None#
inference_python_version: str = None#
overwrite_existing_artifact: bool = None#
project_id: str = None#
remove_existing_artifact: bool = None#
training_conda_env: str = None#
training_id: str = None#
training_python_version: str = None#
training_resource_id: str = None#
training_script_path: str = None#
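
A hedged sketch of populating ModelProperties before calling prepare()/save()/deploy(); the conda pack URI and shape are placeholders:

>>> from ads.model import ModelProperties
>>> model_properties = ModelProperties(
...     inference_conda_env="oci://<bucket>@<namespace>/<path_to_conda_pack>",
...     inference_python_version="3.8",
...     deployment_instance_shape="VM.Standard2.1",
...     deployment_instance_count=1,
...     deployment_bandwidth_mbps=10,
... )
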
class ads.model.ModelState(value)[source]#

Bases: Enum

An enumeration.

AVAILABLE = 'Available'#
DONE = 'Done'#
NEEDSACTION = 'Needs Action'#
NOTAVAILABLE = 'Not Available'#
class ads.model.ModelVersionSet(spec: Dict | None = None, **kwargs)[source]#

Bases: Builder

Represents Model Version Set.

id#

Model version set OCID.

Type:

str

project_id#

Project OCID.

Type:

str

compartment_id#

Compartment OCID.

Type:

str

name#

Model version set name.

Type:

str

description#

Model version set description.

Type:

str

freeform_tags#

Model version set freeform tags.

Type:

Dict[str, str]

defined_tags#

Model version set defined tags.

Type:

Dict[str, Dict[str, object]]

Link to details page in OCI console.

Type:

str

create(self, **kwargs) 'ModelVersionSet'[source]#

Creates a model version set.

update(self, **kwargs) 'ModelVersionSet'[source]#

Updates a model version set.

delete(self, delete_model: bool | None = False) 'ModelVersionSet'[source]#

Removes a model version set.

to_dict(self) dict[source]#

Serializes model version set to a dictionary.

from_id(cls, id: str) 'ModelVersionSet'[source]#

Gets an existing model version set by OCID.

from_ocid(cls, ocid: str) 'ModelVersionSet'[source]#

Gets an existing model version set by OCID.

from_name(cls, name: str) 'ModelVersionSet'[source]#

Gets an existing model version set by name.

from_dict(cls, config: dict) 'ModelVersionSet'[source]#

Load a model version set instance from a dictionary of configurations.

Examples

>>> mvs = (ModelVersionSet()
...    .with_compartment_id(os.environ["PROJECT_COMPARTMENT_OCID"])
...    .with_project_id(os.environ["PROJECT_OCID"])
...    .with_name("test_experiment")
...    .with_description("Experiment number one"))
>>> mvs.create()
>>> mvs.model_add(model_ocid, version_label="Version label 1")
>>> mvs.model_list()
>>> mvs.details_link
https://console.<region>.oraclecloud.com/data-science/model-version-sets/<ocid>
>>> mvs.delete()

Initializes a model version set.

Parameters:
  • spec ((Dict, optional). Defaults to None.) – Object specification.

  • kwargs (Dict) –

    Specification as keyword arguments. If ‘spec’ contains the same key as the one in kwargs, the value from kwargs will be used.

    • project_id: str

    • compartment_id: str

    • name: str

    • description: str

    • defined_tags: Dict[str, Dict[str, object]]

    • freeform_tags: Dict[str, str]

CONST_COMPARTMENT_ID = 'compartmentId'#
CONST_DEFINED_TAG = 'definedTags'#
CONST_DESCRIPTION = 'description'#
CONST_FREEFORM_TAG = 'freeformTags'#
CONST_ID = 'id'#
CONST_NAME = 'name'#
CONST_PROJECT_ID = 'projectId'#
LIFECYCLE_STATE_ACTIVE = 'ACTIVE'#
LIFECYCLE_STATE_DELETED = 'DELETED'#
LIFECYCLE_STATE_DELETING = 'DELETING'#
LIFECYCLE_STATE_FAILED = 'FAILED'#
attribute_map = {'compartmentId': 'compartment_id', 'definedTags': 'defined_tags', 'description': 'description', 'freeformTags': 'freeform_tags', 'id': 'id', 'name': 'name', 'projectId': 'project_id'}#
property compartment_id: str#
create(**kwargs) ModelVersionSet[source]#

Creates a model version set.

Parameters:

kwargs – Additional keyword arguments.

Returns:

The ModelVersionSet instance (self)

Return type:

ModelVersionSet

property defined_tags: Dict[str, Dict[str, object]]#
delete(delete_model: bool | None = False) ModelVersionSet[source]#

Removes a model version set.

Parameters:

delete_model ((bool, optional). Defaults to False.) – By default, this parameter is false. A model version set can only be deleted if all the models associated with it are already in the DELETED state. You can optionally set the deleteRelatedModels boolean query parameter to true, which deletes all associated models for you.

Returns:

The ModelVersionSet instance (self).

Return type:

ModelVersionSet

property description: str#
property details_link: str#

Link to details page in OCI console.

Returns:

Link to details page in OCI console.

Return type:

str

property freeform_tags: Dict[str, str]#
classmethod from_dict(config: dict) ModelVersionSet[source]#

Load a model version set instance from a dictionary of configurations.

Parameters:

config (dict) – A dictionary of configurations.

Returns:

The model version set instance.

Return type:

ModelVersionSet

classmethod from_dsc_model_version_set(dsc_model_version_set: DataScienceModelVersionSet) ModelVersionSet[source]#

Initialize a ModelVersionSet instance from a DataScienceModelVersionSet.

Parameters:

dsc_model_version_set (DataScienceModelVersionSet) – An instance of DataScienceModelVersionSet.

Returns:

An instance of ModelVersionSet.

Return type:

ModelVersionSet

classmethod from_id(id: str) ModelVersionSet[source]#

Gets an existing model version set by OCID.

Parameters:

id (str) – The model version set OCID.

Returns:

An instance of ModelVersionSet.

Return type:

ModelVersionSet

classmethod from_name(name: str, compartment_id: str | None = None) ModelVersionSet[source]#

Gets an existing model version set by name.

Parameters:
  • name (str) – The model version set name.

  • compartment_id ((str, optional). Defaults to None.) – Compartment OCID of the OCI resources. If compartment_id is not specified, the value will be taken from environment variables.

Returns:

An instance of ModelVersionSet.

Return type:

ModelVersionSet

classmethod from_ocid(ocid: str) ModelVersionSet[source]#

Gets an existing model version set by OCID.

Parameters:

ocid (str) – The model version set OCID.

Returns:

An instance of ModelVersionSet.

Return type:

ModelVersionSet

property id: str | None#

The OCID of the model version set.

property kind: str#

The kind of the object as showing in YAML.

Returns:

“modelVersionSet”

Return type:

str

classmethod list(compartment_id: str | None = None, **kwargs) List[ModelVersionSet][source]#

List model version sets in a given compartment.

Parameters:
  • compartment_id (str) – The OCID of compartment.

  • kwargs – Additional keyword arguments for filtering model version sets.

Returns:

The list of model version sets.

Return type:

List[ModelVersionSet]

model_add(model_id: str, version_label: str | None = None, **kwargs) None[source]#

Adds new model to model version set.

Parameters:
  • model_id (str) – The OCID of the model which needs to be associated with the model version set.

  • version_label (str) – The model version label.

  • kwargs – Additional keyword arguments.

Returns:

Nothing.

Return type:

None

Raises:

ModelVersionSetNotSaved – If the model version set has not been saved yet.

models(**kwargs) List[DataScienceModel][source]#

Gets list of models associated with a model version set.

Parameters:

kwargs

project_id: str

Project OCID.

lifecycle_state: str

Filter results by the specified lifecycle state. Must be a valid state for the resource type. Allowed values are: “ACTIVE”, “DELETED”, “FAILED”, “INACTIVE”

Can be any attribute that oci.data_science.data_science_client.DataScienceClient.list_models accepts.

Returns:

List of models associated with the model version set.

Return type:

List[DataScienceModel]

Raises:

ModelVersionSetNotSaved – If the model version set has not been saved yet.
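
For illustration, a sketch of associating a model with a saved model version set and listing the active models; the OCID is a placeholder:

>>> mvs = ModelVersionSet.from_name(name="test_experiment")
>>> mvs.model_add("<model_ocid>", version_label="Version 1")
>>> active_models = mvs.models(lifecycle_state="ACTIVE")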

property name: str#
property project_id: str#
property status: str | None#

Status of the model version set.

Returns:

Status of the model version set.

Return type:

str

to_dict() dict[source]#

Serializes model version set to a dictionary.

Returns:

The model version set serialized as a dictionary.

Return type:

dict

update() ModelVersionSet[source]#

Updates a model version set.

Returns:

The ModelVersionSet instance (self).

Return type:

ModelVersionSet

with_compartment_id(compartment_id: str) ModelVersionSet[source]#

Sets the compartment OCID.

Parameters:

compartment_id (str) – The compartment OCID.

Returns:

The ModelVersionSet instance (self)

Return type:

ModelVersionSet

with_defined_tags(**kwargs: Dict[str, Dict[str, object]]) ModelVersionSet[source]#

Sets defined tags.

Returns:

The ModelVersionSet instance (self)

Return type:

ModelVersionSet

with_description(description: str) ModelVersionSet[source]#

Sets the description.

Parameters:

description (str) – The description of the model version set.

Returns:

The ModelVersionSet instance (self)

Return type:

ModelVersionSet

with_freeform_tags(**kwargs: Dict[str, str]) ModelVersionSet[source]#

Sets freeform tags.

Returns:

The ModelVersionSet instance (self)

Return type:

ModelVersionSet

with_name(name: str) ModelVersionSet[source]#

Sets the name of the model version set.

Parameters:

name (str) – The name of the model version set.

Returns:

The ModelVersionSet instance (self)

Return type:

ModelVersionSet

with_project_id(project_id: str) ModelVersionSet[source]#

Sets the project OCID.

Parameters:

project_id (str) – The project OCID.

Returns:

The ModelVersionSet instance (self)

Return type:

ModelVersionSet

exception ads.model.ModelVersionSetNotExists[source]#

Bases: Exception

exception ads.model.ModelVersionSetNotSaved[source]#

Bases: Exception

class ads.model.PyTorchModel(estimator: callable, artifact_dir: str | None = None, properties: ModelProperties | None = None, auth: Dict = None, model_save_serializer: SERDE | None = 'torch', model_input_serializer: SERDE | None = None, **kwargs)[source]#

Bases: FrameworkSpecificModel

PyTorchModel class for estimators from Pytorch framework.

algorithm#

The algorithm of the model.

Type:

str

artifact_dir#

Artifact directory to store the files needed for deployment.

Type:

str

auth#

Default authentication is set using the ads.set_auth API. To override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create an authentication signer to instantiate an IdentityClient object.

Type:

Dict

estimator#

A trained pytorch estimator/model using Pytorch.

Type:

Callable

framework#

“pytorch”, the framework name of the model.

Type:

str

hyperparameter#

The hyperparameters of the estimator.

Type:

dict

metadata_custom#

The model custom metadata.

Type:

ModelCustomMetadata

metadata_provenance#

The model provenance metadata.

Type:

ModelProvenanceMetadata

metadata_taxonomy#

The model taxonomy metadata.

Type:

ModelTaxonomyMetadata

model_artifact#

This is built by calling prepare.

Type:

ModelArtifact

model_deployment#

A ModelDeployment instance.

Type:

ModelDeployment

model_file_name#

Name of the serialized model.

Type:

str

model_id#

The model ID.

Type:

str

properties#

ModelProperties object required to save and deploy model. For more details, check https://accelerated-data-science.readthedocs.io/en/latest/ads.model.html#module-ads.model.model_properties.

Type:

ModelProperties

runtime_info#

A RuntimeInfo instance.

Type:

RuntimeInfo

schema_input#

Schema describes the structure of the input data.

Type:

Schema

schema_output#

Schema describes the structure of the output data.

Type:

Schema

serialize#

Whether to serialize the model to pkl file by default. If False, you need to serialize the model manually, save it under artifact_dir and update the score.py manually.

Type:

bool

version#

The framework version of the model.

Type:

str

delete_deployment(...)#

Deletes the current model deployment.

deploy(..., **kwargs)#

Deploys a model.

from_model_artifact(uri, model_file_name, artifact_dir, ..., **kwargs)#

Loads model from the specified folder, or zip/tar archive.

from_model_catalog(model_id, model_file_name, artifact_dir, ..., **kwargs)#

Loads model from model catalog.

introspect(...)#

Runs model introspection.

predict(data, ...)#

Returns prediction of input data run against the model deployment endpoint.

prepare(..., **kwargs)#

Prepare and save the score.py, serialized model and runtime.yaml file.

reload(...)#

Reloads the model artifact files: score.py and the runtime.yaml.

save(..., **kwargs)#

Saves model artifacts to the model catalog.

summary_status(...)#

Gets a summary table of the current status.

verify(data, ...)#

Tests if deployment works in local environment.

Examples

>>> import tempfile
>>> torch_model = PyTorchModel(estimator=torch_estimator,
... artifact_dir=tempfile.mkdtemp())
>>> inference_conda_env = "generalml_p37_cpu_v1"
>>> torch_model.prepare(inference_conda_env=inference_conda_env, force_overwrite=True)
>>> torch_model.reload()
>>> torch_model.verify(...)
>>> torch_model.save()
>>> model_deployment = torch_model.deploy(wait_for_completion=False)
>>> torch_model.predict(...)

Initiates a PyTorchModel instance.

Parameters:
  • estimator (callable) – Any model object generated by pytorch framework

  • artifact_dir (str) – artifact directory to store the files needed for deployment.

  • properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.

  • model_save_serializer ((SERDE or str, optional). Defaults to "torch".) – Instance of ads.model.SERDE. Used for serialize/deserialize model.

  • model_input_serializer ((SERDE, optional). Defaults to None.) – Instance of ads.model.SERDE. Used for serialize/deserialize data.

Returns:

PyTorchModel instance.

Return type:

PyTorchModel

model_save_serializer_type#

alias of PyTorchModelSerializerType

serialize_model(as_onnx: bool = False, force_overwrite: bool = False, X_sample: Dict | str | List | Tuple | ndarray | Series | DataFrame | None = None, use_torch_script: bool | None = None, **kwargs) None[source]#

Serialize and save Pytorch model using ONNX or model specific method.

Parameters:
  • as_onnx ((bool, optional). Defaults to False.) – If set as True, convert into ONNX model.

  • force_overwrite ((bool, optional). Defaults to False.) – If set as True, overwrite serialized model if exists.

  • X_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of input data that will be used to generate input schema and detect onnx_args.

  • use_torch_script ((bool, optional). Defaults to None (If the default value has not been changed, it will be set as False).) – If set as True, the model will be serialized as a TorchScript program. Check https://pytorch.org/tutorials/beginner/saving_loading_models.html#export-load-model-in-torchscript-format for more details. If set as False, it will only save the trained model’s learned parameters, and the score.py need to be modified to construct the model class instance first. Check https://pytorch.org/tutorials/beginner/saving_loading_models.html#save-load-state-dict-recommended for more details.

  • **kwargs – Optional parameters used to serialize the pytorch model to ONNX, including the following:

    onnx_args: (tuple or torch.Tensor), defaults to None. Contains model inputs such that model(onnx_args) is a valid invocation of the model. Can be structured either as: 1) only a tuple of arguments; 2) a tensor; 3) a tuple of arguments ending with a dictionary of named arguments.

    input_names: (List[str], optional). Names to assign to the input nodes of the graph, in order.

    output_names: (List[str], optional). Names to assign to the output nodes of the graph, in order.

    dynamic_axes: (dict, optional), defaults to None. Specify axes of tensors as dynamic (i.e. known only at run-time).

Returns:

Nothing.

Return type:

None
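
For illustration, a sketch of exporting the estimator as a TorchScript program; torch_model continues from the class example above, and prepare() normally performs this serialization for you:

>>> torch_model.serialize_model(use_torch_script=True, force_overwrite=True)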

class ads.model.SERDE[source]#

Bases: Serializer, Deserializer

A layer that combines a serializer and a deserializer, which interact with each other to serialize and deserialize supported data structures using supported data formats.

name = ''#
class ads.model.SklearnModel(estimator: Callable, artifact_dir: str | None = None, properties: ModelProperties | None = None, auth: Dict | None = None, model_save_serializer: SERDE | None = 'joblib', model_input_serializer: SERDE | None = None, **kwargs)[source]#

Bases: FrameworkSpecificModel

SklearnModel class for estimators from sklearn framework.

algorithm#

The algorithm of the model.

Type:

str

artifact_dir#

Artifact directory to store the files needed for deployment.

Type:

str

auth#

Default authentication is set using the ads.set_auth API. To override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create an authentication signer to instantiate an IdentityClient object.

Type:

Dict

estimator#

A trained sklearn estimator/model using scikit-learn.

Type:

Callable

framework#

“scikit-learn”, the framework name of the model.

Type:

str

hyperparameter#

The hyperparameters of the estimator.

Type:

dict

metadata_custom#

The model custom metadata.

Type:

ModelCustomMetadata

metadata_provenance#

The model provenance metadata.

Type:

ModelProvenanceMetadata

metadata_taxonomy#

The model taxonomy metadata.

Type:

ModelTaxonomyMetadata

model_artifact#

This is built by calling prepare.

Type:

ModelArtifact

model_deployment#

A ModelDeployment instance.

Type:

ModelDeployment

model_file_name#

Name of the serialized model.

Type:

str

model_id#

The model ID.

Type:

str

properties#

ModelProperties object required to save and deploy model. For more details, check https://accelerated-data-science.readthedocs.io/en/latest/ads.model.html#module-ads.model.model_properties.

Type:

ModelProperties

runtime_info#

A RuntimeInfo instance.

Type:

RuntimeInfo

schema_input#

Schema describes the structure of the input data.

Type:

Schema

schema_output#

Schema describes the structure of the output data.

Type:

Schema

serialize#

Whether to serialize the model to pkl file by default. If False, you need to serialize the model manually, save it under artifact_dir and update the score.py manually.

Type:

bool

version#

The framework version of the model.

Type:

str

delete_deployment(...)#

Deletes the current model deployment.

deploy(..., **kwargs)#

Deploys a model.

from_model_artifact(uri, model_file_name, artifact_dir, ..., **kwargs)#

Loads model from the specified folder, or zip/tar archive.

from_model_catalog(model_id, model_file_name, artifact_dir, ..., **kwargs)#

Loads model from model catalog.

introspect(...)#

Runs model introspection.

predict(data, ...)#

Returns prediction of input data run against the model deployment endpoint.

prepare(..., **kwargs)#

Prepare and save the score.py, serialized model and runtime.yaml file.

reload(...)#

Reloads the model artifact files: score.py and the runtime.yaml.

save(..., **kwargs)#

Saves model artifacts to the model catalog.

summary_status(...)#

Gets a summary table of the current status.

verify(data, ...)#

Tests if deployment works in local environment.

Examples

>>> import tempfile
>>> from sklearn.model_selection import train_test_split
>>> from ads.model.framework.sklearn_model import SklearnModel
>>> from sklearn.linear_model import LogisticRegression
>>> from sklearn.datasets import load_iris
>>> iris = load_iris()
>>> X, y = iris.data, iris.target
>>> X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)
>>> sklearn_estimator = LogisticRegression()
>>> sklearn_estimator.fit(X_train, y_train)
>>> sklearn_model = SklearnModel(estimator=sklearn_estimator,
... artifact_dir=tmp_model_dir)
>>> sklearn_model.prepare(inference_conda_env="generalml_p37_cpu_v1", force_overwrite=True)
>>> sklearn_model.reload()
>>> sklearn_model.verify(X_test)
>>> sklearn_model.save()
>>> model_deployment = sklearn_model.deploy(wait_for_completion=False)
>>> sklearn_model.predict(X_test)

Initiates a SklearnModel instance.

Parameters:
  • estimator (Callable) – Sklearn Model

  • artifact_dir (str) – Directory for generate artifact.

  • properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.

  • model_save_serializer ((SERDE or str, optional). Defaults to "joblib".) – Instance of ads.model.SERDE. Used for serialize/deserialize model.

  • model_input_serializer ((SERDE, optional). Defaults to None.) – Instance of ads.model.SERDE. Used for serialize/deserialize data.

Returns:

SklearnModel instance.

Return type:

SklearnModel

Examples

>>> import tempfile
>>> from sklearn.model_selection import train_test_split
>>> from ads.model.framework.sklearn_model import SklearnModel
>>> from sklearn.linear_model import LogisticRegression
>>> from sklearn.datasets import load_iris
>>> iris = load_iris()
>>> X, y = iris.data, iris.target
>>> X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)
>>> sklearn_estimator = LogisticRegression()
>>> sklearn_estimator.fit(X_train, y_train)
>>> sklearn_model = SklearnModel(estimator=sklearn_estimator, artifact_dir=tempfile.mkdtemp())
>>> sklearn_model.prepare(inference_conda_env="dataexpl_p37_cpu_v3")
>>> sklearn_model.verify(X_test)
>>> sklearn_model.save()
>>> model_deployment = sklearn_model.deploy()
>>> sklearn_model.predict(X_test)
>>> sklearn_model.delete_deployment()
model_save_serializer_type#

alias of SklearnModelSerializerType

serialize_model(as_onnx: bool | None = False, initial_types: List[Tuple] | None = None, force_overwrite: bool | None = False, X_sample: Dict | str | List | Tuple | ndarray | Series | DataFrame | None = None, **kwargs: Dict)[source]#

Serialize and save scikit-learn model using ONNX or model specific method.

Parameters:
  • as_onnx ((bool, optional). Defaults to False.) – If set as True, provide initial_types or X_sample to convert into ONNX.

  • initial_types ((List[Tuple], optional). Defaults to None.) – Each element is a tuple of a variable name and a type.

  • force_overwrite ((bool, optional). Defaults to False.) – If set as True, overwrite serialized model if exists.

  • X_sample (Union[Dict, str, List, np.ndarray, pd.core.series.Series, pd.core.frame.DataFrame,]. Defaults to None.) – Contains model inputs such that model(X_sample) is a valid invocation of the model. Used to generate initial_types.

Returns:

Nothing.

Return type:

None
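
A minimal sketch of an ONNX export, continuing from the class example above; X_sample is used to derive initial_types automatically:

>>> sklearn_model.serialize_model(as_onnx=True, X_sample=X_test[:2], force_overwrite=True)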

class ads.model.SparkPipelineModel(estimator: Callable, artifact_dir: str | None = None, properties: ModelProperties | None = None, auth: Dict = None, model_save_serializer: SERDE | None = 'spark', model_input_serializer: SERDE | None = 'spark', **kwargs)[source]#

Bases: FrameworkSpecificModel

SparkPipelineModel class for estimators from the pyspark framework.

algorithm#

The algorithm of the model.

Type:

str

artifact_dir#

Artifact directory to store the files needed for deployment.

Type:

str

auth#

Default authentication is set using the ads.set_auth API. To override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create an authentication signer to instantiate an IdentityClient object.

Type:

Dict

estimator#

A trained pyspark estimator/model using pyspark.

Type:

Callable

framework#

“spark”, the framework name of the model.

Type:

str

hyperparameter#

The hyperparameters of the estimator.

Type:

dict

metadata_custom#

The model custom metadata.

Type:

ModelCustomMetadata

metadata_provenance#

The model provenance metadata.

Type:

ModelProvenanceMetadata

metadata_taxonomy#

The model taxonomy metadata.

Type:

ModelTaxonomyMetadata

model_artifact#

This is built by calling prepare.

Type:

ModelArtifact

model_file_name#

Name of the serialized model.

Type:

str

model_id#

The model ID.

Type:

str

properties#

ModelProperties object required to save and deploy model. For more details, check https://accelerated-data-science.readthedocs.io/en/latest/ads.model.html#module-ads.model.model_properties.

Type:

ModelProperties

runtime_info#

A RuntimeInfo instance.

Type:

RuntimeInfo

schema_input#

Schema describes the structure of the input data.

Type:

Schema

schema_output#

Schema describes the structure of the output data.

Type:

Schema

serialize#

Whether to serialize the model to pkl file by default. If False, you need to serialize the model manually, save it under artifact_dir and update the score.py manually.

Type:

bool

version#

The framework version of the model.

Type:

str

delete_deployment(...)#

Deletes the current model deployment.

deploy(..., **kwargs)#

Deploys a model.

from_model_artifact(uri, model_file_name, artifact_dir, ..., **kwargs)#

Loads model from the specified folder, or zip/tar archive.

from_model_catalog(model_id, model_file_name, artifact_dir, ..., **kwargs)#

Loads model from model catalog.

introspect(...)#

Runs model introspection.

predict(data, ...)#

Returns prediction of input data run against the model deployment endpoint.

prepare(..., **kwargs)#

Prepare and save the score.py, serialized model and runtime.yaml file.

reload(...)#

Reloads the model artifact files: score.py and the runtime.yaml.

save(..., **kwargs)#

Saves model artifacts to the model catalog.

summary_status(...)#

Gets a summary table of the current status.

verify(data, ...)#

Tests if deployment works in local environment.

Examples

>>> import tempfile
>>> from ads.model.framework.spark_model import SparkPipelineModel
>>> from pyspark.ml.linalg import Vectors
>>> from pyspark.ml.classification import LogisticRegression
>>> from pyspark.ml import Pipeline
>>> training = spark.createDataFrame([
>>>     (1.0, Vectors.dense([0.0, 1.1, 0.1])),
>>>     (0.0, Vectors.dense([2.0, 1.0, -1.0])),
>>>     (0.0, Vectors.dense([2.0, 1.3, 1.0])),
>>>     (1.0, Vectors.dense([0.0, 1.2, -0.5]))], ["label", "features"])
>>> lr_estimator = LogisticRegression(maxIter=10, regParam=0.001)
>>> pipeline = Pipeline(stages=[lr_estimator])
>>> pipeline_model = pipeline.fit(training)
>>> spark_model = SparkPipelineModel(estimator=pipeline_model, artifact_dir=tempfile.mkdtemp())
>>> spark_model.prepare(inference_conda_env="dataexpl_p37_cpu_v3")
>>> spark_model.verify(training)
>>> spark_model.save()
>>> model_deployment = spark_model.deploy()
>>> spark_model.predict(training)
>>> spark_model.delete_deployment()

Initiates a SparkPipelineModel instance.

Parameters:
  • estimator (Callable) – SparkPipelineModel

  • artifact_dir (str) – The URI for the generated artifact, which can be local path or OCI object storage URI.

  • properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.

  • model_save_serializer ((SERDE or str, optional). Defaults to "spark".) – Instance of ads.model.SERDE. Used for serialize/deserialize model.

  • model_input_serializer ((SERDE, optional). Defaults to ads.model.serde.model_input.SparkModelInputSERDE.) – Instance of ads.model.SERDE. Used for serialize/deserialize data.

Returns:

SparkPipelineModel instance.

Return type:

SparkPipelineModel

Examples

>>> import tempfile
>>> from ads.model.framework.spark_model import SparkPipelineModel
>>> from pyspark.ml.linalg import Vectors
>>> from pyspark.ml.classification import LogisticRegression
>>> from pyspark.ml import Pipeline
>>> training = spark.createDataFrame([
>>>     (1.0, Vectors.dense([0.0, 1.1, 0.1])),
>>>     (0.0, Vectors.dense([2.0, 1.0, -1.0])),
>>>     (0.0, Vectors.dense([2.0, 1.3, 1.0])),
>>>     (1.0, Vectors.dense([0.0, 1.2, -0.5]))], ["label", "features"])
>>> lr_estimator = LogisticRegression(maxIter=10, regParam=0.001)
>>> pipeline = Pipeline(stages=[lr_estimator])
>>> pipeline_model = pipeline.fit(training)
>>> spark_model = SparkPipelineModel(estimator=pipeline_model, artifact_dir=tempfile.mkdtemp())
>>> spark_model.prepare(inference_conda_env="pyspark30_p37_cpu_v5")
>>> spark_model.verify(training)
>>> spark_model.save()
>>> model_deployment = spark_model.deploy()
>>> spark_model.predict(training)
>>> spark_model.delete_deployment()
model_input_serializer_type#

alias of SparkModelInputSerializerType

model_save_serializer_type#

alias of SparkModelSerializerType

serialize_model(as_onnx: bool = False, X_sample: Dict | str | List | ndarray | Series | DataFrame | pyspark.sql.DataFrame | pyspark.pandas.DataFrame | None = None, force_overwrite: bool = False, **kwargs) None[source]#

Serialize and save pyspark model using spark serialization.

Parameters:

force_overwrite ((bool, optional). Defaults to False.) – If set as True, overwrite serialized model if exists.

Return type:

None

class ads.model.TensorFlowModel(estimator: callable, artifact_dir: str | None = None, properties: ModelProperties | None = None, auth: Dict = None, model_save_serializer: SERDE | None = 'tf', model_input_serializer: SERDE | None = None, **kwargs)[source]#

Bases: FrameworkSpecificModel

TensorFlowModel class for estimators from Tensorflow framework.

algorithm#

The algorithm of the model.

Type:

str

artifact_dir#

Directory for generate artifact.

Type:

str

auth#

Default authentication is set using the ads.set_auth API. To override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create an authentication signer to instantiate an IdentityClient object.

Type:

Dict

estimator#

A trained tensorflow estimator/model using Tensorflow.

Type:

Callable

framework#

“tensorflow”, the framework name of the model.

Type:

str

hyperparameter#

The hyperparameters of the estimator.

Type:

dict

metadata_custom#

The model custom metadata.

Type:

ModelCustomMetadata

metadata_provenance#

The model provenance metadata.

Type:

ModelProvenanceMetadata

metadata_taxonomy#

The model taxonomy metadata.

Type:

ModelTaxonomyMetadata

model_artifact#

This is built by calling prepare.

Type:

ModelArtifact

model_deployment#

A ModelDeployment instance.

Type:

ModelDeployment

model_file_name#

Name of the serialized model.

Type:

str

model_id#

The model ID.

Type:

str

properties#

ModelProperties object required to save and deploy model. For more details, check https://accelerated-data-science.readthedocs.io/en/latest/ads.model.html#module-ads.model.model_properties.

Type:

ModelProperties

runtime_info#

A RuntimeInfo instance.

Type:

RuntimeInfo

schema_input#

Schema describes the structure of the input data.

Type:

Schema

schema_output#

Schema describes the structure of the output data.

Type:

Schema

serialize#

Whether to serialize the model to pkl file by default. If False, you need to serialize the model manually, save it under artifact_dir and update the score.py manually.

Type:

bool

version#

The framework version of the model.

Type:

str

delete_deployment(...)#

Deletes the current model deployment.

deploy(..., **kwargs)#

Deploys a model.

from_model_artifact(uri, model_file_name, artifact_dir, ..., **kwargs)#

Loads model from the specified folder, or zip/tar archive.

from_model_catalog(model_id, model_file_name, artifact_dir, ..., **kwargs)#

Loads model from model catalog.

introspect(...)#

Runs model introspection.

predict(data, ...)#

Returns prediction of input data run against the model deployment endpoint.

prepare(..., **kwargs)#

Prepare and save the score.py, serialized model and runtime.yaml file.

reload(...)#

Reloads the model artifact files: score.py and the runtime.yaml.

save(..., **kwargs)#

Saves model artifacts to the model catalog.

summary_status(...)#

Gets a summary table of the current status.

verify(data, ...)#

Tests if deployment works in local environment.

Examples

>>> from ads.model.framework.tensorflow_model import TensorFlowModel
>>> import tempfile
>>> import tensorflow as tf
>>> mnist = tf.keras.datasets.mnist
>>> (x_train, y_train), (x_test, y_test) = mnist.load_data()
>>> x_train, x_test = x_train / 255.0, x_test / 255.0
>>> tf_estimator = tf.keras.models.Sequential(
...                [
...                    tf.keras.layers.Flatten(input_shape=(28, 28)),
...                    tf.keras.layers.Dense(128, activation="relu"),
...                    tf.keras.layers.Dropout(0.2),
...                    tf.keras.layers.Dense(10),
...                ]
...            )
>>> loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
>>> tf_estimator.compile(optimizer="adam", loss=loss_fn, metrics=["accuracy"])
>>> tf_estimator.fit(x_train, y_train, epochs=1)
>>> tf_model = TensorFlowModel(estimator=tf_estimator,
... artifact_dir=tempfile.mkdtemp())
>>> inference_conda_env = "generalml_p37_cpu_v1"
>>> tf_model.prepare(inference_conda_env=inference_conda_env, force_overwrite=True)
>>> tf_model.verify(x_test[:1])
>>> tf_model.save()
>>> model_deployment = tf_model.deploy(wait_for_completion=False)
>>> tf_model.predict(x_test[:1])

Initiates a TensorFlowModel instance.

Parameters:
  • estimator (callable) – Any model object generated by tensorflow framework

  • artifact_dir (str) – Directory for generate artifact.

  • properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.

  • model_save_serializer ((SERDE or str, optional). Defaults to "tf".) – Instance of ads.model.SERDE. Used for serialize/deserialize model.

  • model_input_serializer ((SERDE, optional). Defaults to None.) – Instance of ads.model.SERDE. Used for serialize/deserialize data.

Returns:

TensorFlowModel instance.

Return type:

TensorFlowModel

model_save_serializer_type#

alias of TensorflowModelSerializerType

serialize_model(as_onnx: bool = False, X_sample: Dict | str | List | Tuple | ndarray | Series | DataFrame | None = None, force_overwrite: bool = False, **kwargs) None[source]#

Serialize and save Tensorflow model using ONNX or model specific method.

Parameters:
  • as_onnx ((bool, optional). Defaults to False.) – If set as True, convert into ONNX model.

  • X_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of input data that will be used to generate input schema and detect input_signature.

  • force_overwrite ((bool, optional). Defaults to False.) – If set as True, overwrite serialized model if exists.

  • **kwargs – Optional parameters used to serialize the tensorflow model to ONNX, including the following:

    input_signature: a tuple or a list of tf.TensorSpec objects. Defaults to None. Defines the shape/dtype of the input so that model(input_signature) is a valid invocation of the model.

    opset_version: int. Defaults to None. Used for the ONNX model.

Returns:

Nothing.

Return type:

None
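
For illustration, a sketch continuing from the class example above; with the defaults the model is saved in the framework-specific format, and as_onnx=True together with X_sample or input_signature can be passed to export ONNX instead:

>>> tf_model.serialize_model(force_overwrite=True)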

class ads.model.XGBoostModel(estimator: callable, artifact_dir: str | None = None, properties: ModelProperties | None = None, auth: Dict = None, model_save_serializer: SERDE | None = 'xgboost', model_input_serializer: SERDE | None = None, **kwargs)[source]#

Bases: FrameworkSpecificModel

XGBoostModel class for estimators from xgboost framework.

algorithm#

The algorithm of the model.

Type:

str

artifact_dir#

Artifact directory to store the files needed for deployment.

Type:

str

auth#

Default authentication is set using the ads.set_auth API. To override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create an authentication signer to instantiate an IdentityClient object.

Type:

Dict

estimator#

A trained xgboost estimator/model using Xgboost.

Type:

Callable

framework#

“xgboost”, the framework name of the model.

Type:

str

hyperparameter#

The hyperparameters of the estimator.

Type:

dict

metadata_custom#

The model custom metadata.

Type:

ModelCustomMetadata

metadata_provenance#

The model provenance metadata.

Type:

ModelProvenanceMetadata

metadata_taxonomy#

The model taxonomy metadata.

Type:

ModelTaxonomyMetadata

model_artifact#

This is built by calling prepare.

Type:

ModelArtifact

model_deployment#

A ModelDeployment instance.

Type:

ModelDeployment

model_file_name#

Name of the serialized model.

Type:

str

model_id#

The model ID.

Type:

str

properties#

ModelProperties object required to save and deploy model. For more details, check https://accelerated-data-science.readthedocs.io/en/latest/ads.model.html#module-ads.model.model_properties.

Type:

ModelProperties

runtime_info#

A RuntimeInfo instance.

Type:

RuntimeInfo

schema_input#

Schema describes the structure of the input data.

Type:

Schema

schema_output#

Schema describes the structure of the output data.

Type:

Schema

serialize#

Whether to serialize the model to pkl file by default. If False, you need to serialize the model manually, save it under artifact_dir and update the score.py manually.

Type:

bool

version#

The framework version of the model.

Type:

str

delete_deployment(...)#

Deletes the current model deployment.

deploy(..., **kwargs)#

Deploys a model.

from_model_artifact(uri, model_file_name, artifact_dir, ..., **kwargs)#

Loads model from the specified folder, or zip/tar archive.

from_model_catalog(model_id, model_file_name, artifact_dir, ..., **kwargs)#

Loads model from model catalog.

introspect(...)#

Runs model introspection.

predict(data, ...)#

Returns prediction of input data run against the model deployment endpoint.

prepare(..., **kwargs)#

Prepare and save the score.py, serialized model and runtime.yaml file.

reload(...)#

Reloads the model artifact files: score.py and the runtime.yaml.

save(..., **kwargs)#

Saves model artifacts to the model catalog.

summary_status(...)#

Gets a summary table of the current status.

verify(data, ...)#

Tests if deployment works in local environment.

Examples

>>> import xgboost as xgb
>>> import tempfile
>>> from sklearn.datasets import make_classification
>>> from sklearn.model_selection import train_test_split
>>> from sklearn.datasets import load_iris
>>> from ads.model.framework.xgboost_model import XGBoostModel
>>> iris = load_iris()
>>> X, y = iris.data, iris.target
>>> X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)
>>> xgboost_estimator = xgb.XGBClassifier()
>>> xgboost_estimator.fit(X_train, y_train)
>>> xgboost_model = XGBoostModel(estimator=xgboost_estimator, artifact_dir=tempfile.mkdtemp())
>>> xgboost_model.prepare(inference_conda_env="generalml_p37_cpu_v1", force_overwrite=True)
>>> xgboost_model.reload()
>>> xgboost_model.verify(X_test)
>>> xgboost_model.save()
>>> model_deployment = xgboost_model.deploy(wait_for_completion=False)
>>> xgboost_model.predict(X_test)

Initiates an XGBoostModel instance. This class wraps the XGBoost model as an estimator. Its primary purpose is to hold the trained model and perform serialization.

Parameters:
  • estimator – XGBoostModel

  • artifact_dir (str) – artifact directory to store the files needed for deployment.

  • properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.

  • model_save_serializer ((SERDE or str, optional). Defaults to "xgboost".) – Instance of ads.model.SERDE. Used for serialize/deserialize model.

  • model_input_serializer ((SERDE, optional). Defaults to None.) – Instance of ads.model.SERDE. Used for serialize/deserialize data.

Returns:

XGBoostModel instance.

Return type:

XGBoostModel

Examples

>>> import xgboost as xgb
>>> import tempfile
>>> from sklearn.datasets import make_classification
>>> from sklearn.model_selection import train_test_split
>>> from sklearn.datasets import load_iris
>>> from ads.model.framework.xgboost_model import XGBoostModel
>>> iris = load_iris()
>>> X, y = iris.data, iris.target
>>> X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)
>>> train = xgb.DMatrix(X_train, y_train)
>>> test = xgb.DMatrix(X_test, y_test)
>>> xgboost_estimator = xgb.XGBClassifier()
>>> xgboost_estimator.fit(X_train, y_train)
>>> xgboost_model = XGBoostModel(estimator=xgboost_estimator, artifact_dir=tempfile.mkdtemp())
>>> xgboost_model.prepare(inference_conda_env="generalml_p37_cpu_v1")
>>> xgboost_model.verify(X_test)
>>> xgboost_model.save()
>>> model_deployment = xgboost_model.deploy()
>>> xgboost_model.predict(X_test)
>>> xgboost_model.delete_deployment()
model_save_serializer_type#

alias of XgboostModelSerializerType

serialize_model(as_onnx: bool = False, initial_types: List[Tuple] = None, force_overwrite: bool = False, X_sample: Dict | str | List | Tuple | ndarray | Series | DataFrame | None = None, **kwargs)[source]#

Serialize and save Xgboost model using ONNX or model specific method.

Parameters:
  • artifact_dir (str) – Directory for generate artifact.

  • as_onnx ((boolean, optional). Defaults to False.) – If set as True, provide initial_types or X_sample to convert into ONNX.

  • initial_types ((List[Tuple], optional). Defaults to None.) – Each element is a tuple of a variable name and a type.

  • force_overwrite ((boolean, optional). Defaults to False.) – If set as True, overwrite serialized model if exists.

  • X_sample (Union[Dict, str, List, np.ndarray, pd.core.series.Series, pd.core.frame.DataFrame,]. Defaults to None.) – Contains model inputs such that model(X_sample) is a valid invocation of the model. Used to generate initial_types.

Returns:

Nothing.

Return type:

None
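
A hedged sketch, continuing from the class example above, of exporting the trained booster to ONNX; X_sample stands in for an explicit initial_types list:

>>> xgboost_model.serialize_model(as_onnx=True, X_sample=X_test[:2], force_overwrite=True)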

ads.model.experiment(name: str, create_if_not_exists: bool | None = True, **kwargs: Dict)[source]#

Context manager helping to operate with model version set.

Parameters:
  • name (str) – The name of the model version set.

  • create_if_not_exists ((bool, optional). Defaults to True.) – Creates model version set if not exists.

  • kwargs ((Dict, optional).) –

    compartment_id: (str, optional). Defaults to value from the environment variables.

    The compartment OCID.

    project_id: (str, optional). Defaults to value from the environment variables.

    The project OCID.

    description: (str, optional). Defaults to None.

    The description of the model version set.

Yields:

ModelVersionSet – The model version set object.
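
A hedged sketch of the context manager; models saved inside the block are associated with the named model version set, and sklearn_model refers to a prepared model such as the one in the SklearnModel example above (display_name is an assumed keyword argument of save()):

>>> from ads.model import experiment
>>> with experiment(name="test_experiment", create_if_not_exists=True):
...     sklearn_model.save(display_name="experiment run 1")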