ads.model.framework package#

Submodules#

ads.model.framework.huggingface_model module#

class ads.model.framework.huggingface_model.HuggingFacePipelineModel(estimator: Callable, artifact_dir: str | None = None, properties: ModelProperties | None = None, auth: Dict = None, model_save_serializer: SERDE | None = 'huggingface', model_input_serializer: SERDE | None = 'cloudpickle', **kwargs)[source]#

Bases: FrameworkSpecificModel

HuggingFacePipelineModel class for estimators from HuggingFace framework.

algorithm#

The algorithm of the model.

Type:

str

artifact_dir#

Artifact directory to store the files needed for deployment.

Type:

str

auth#

Default authentication is set using the ads.set_auth API. To override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create an authentication signer to instantiate an IdentityClient object.

Type:

Dict

estimator#

A trained HuggingFace Pipeline using transformers.

Type:

Callable

framework#

“transformers”, the framework name of the model.

Type:

str

hyperparameter#

The hyperparameters of the estimator.

Type:

dict

metadata_custom#

The model custom metadata.

Type:

ModelCustomMetadata

metadata_provenance#

The model provenance metadata.

Type:

ModelProvenanceMetadata

metadata_taxonomy#

The model taxonomy metadata.

Type:

ModelTaxonomyMetadata

model_artifact#

This is built by calling prepare.

Type:

ModelArtifact

model_deployment#

A ModelDeployment instance.

Type:

ModelDeployment

model_file_name#

Name of the serialized model.

Type:

str

model_id#

The model ID.

Type:

str

properties#

ModelProperties object required to save and deploy model.

Type:

ModelProperties

runtime_info#

A RuntimeInfo instance.

Type:

RuntimeInfo

schema_input#

Schema describes the structure of the input data.

Type:

Schema

schema_output#

Schema describes the structure of the output data.

Type:

Schema

serialize#

Whether to serialize the model to a pkl file by default. If False, you need to serialize the model manually, save it under artifact_dir, and update score.py manually.

Type:

bool

version#

The framework version of the model.

Type:

str

delete_deployment(...)#

Deletes the current model deployment.

deploy(..., **kwargs)#

Deploys a model.

from_model_artifact(uri, model_file_name, artifact_dir, ..., **kwargs)#

Loads model from the specified folder, or zip/tar archive.

from_model_catalog(model_id, model_file_name, artifact_dir, ..., **kwargs)#

Loads model from model catalog.

introspect(...)#

Runs model introspection.

predict(data, ...)#

Returns prediction of input data run against the model deployment endpoint.

prepare(..., **kwargs)#

Prepare and save the score.py, serialized model and runtime.yaml file.

reload(...)#

Reloads the model artifact files: score.py and the runtime.yaml.

save(..., **kwargs)#

Saves model artifacts to the model catalog.

summary_status(...)#

Gets a summary table of the current status.

verify(data, ...)#

Tests if deployment works in local environment.

Examples

>>> # Image Classification
>>> from transformers import pipeline
>>> import tempfile
>>> import PIL.Image
>>> import ads
>>> import requests
>>> import cloudpickle
>>> ## Download image data
>>> image_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
>>> image = PIL.Image.open(requests.get(image_url, stream=True).raw)
>>> image_bytes = cloudpickle.dumps(image) # convert image to bytes
>>> ## Download a pretrained model
>>> vision_classifier = pipeline(model="google/vit-base-patch16-224")
>>> preds = vision_classifier(images=image)
>>> ## Initiate a HuggingFacePipelineModel instance
>>> vision_model = HuggingFacePipelineModel(vision_classifier, artifact_dir=tempfile.mkdtemp())
>>> ## Prepare
>>> vision_model.prepare(inference_conda_env="pytorch110_p38_cpu_v1", force_overwrite=True)
>>> ## Verify
>>> vision_model.verify(image)
>>> vision_model.verify(image_bytes)
>>> ## Save
>>> vision_model.save()
>>> ## Deploy
>>> log_group_id = "<log_group_id>"
>>> log_id = "<log_id>"
>>> vision_model.deploy(deployment_bandwidth_mbps=1000,
...                wait_for_completion=False,
...                deployment_log_group_id = log_group_id,
...                deployment_access_log_id = log_id,
...                deployment_predict_log_id = log_id)
>>> ## Predict from endpoint
>>> vision_model.predict(image)
>>> vision_model.predict(image_bytes)
>>> ### Invoke the model
>>> auth = ads.common.auth.default_signer()['signer']
>>> endpoint = vision_model.model_deployment.url + "/predict"
>>> headers = {"Content-Type": "application/octet-stream"}
>>> requests.post(endpoint, data=image_bytes, auth=auth, headers=headers).json()

Examples

>>> # Image Segmentation
>>> from transformers import pipeline
>>> import tempfile
>>> import PIL.Image
>>> import ads
>>> import requests
>>> import cloudpickle
>>> ## Download image data
>>> image_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
>>> image = PIL.Image.open(requests.get(image_url, stream=True).raw)
>>> image_bytes = cloudpickle.dumps(image) # convert image to bytes
>>> ## Download pretrained model
>>> segmenter = pipeline(task="image-segmentation")
>>> preds = segmenter(image)
>>> ## Initiate a HuggingFacePipelineModel instance
>>> segmentation_model = HuggingFacePipelineModel(segmenter, artifact_dir=tempfile.mkdtemp())
>>> ## Prepare
>>> conda = "oci://bucket@namespace/path/to/conda/pack"
>>> python_version = "3.8"
>>> segmentation_model.prepare(inference_conda_env=conda, inference_python_version = python_version, force_overwrite=True)
>>> ## Verify
>>> segmentation_model.verify(data=image)
>>> segmentation_model.verify(data=image_bytes)
>>> ## Save
>>> segmentation_model.save()
>>> log_group_id = "<log_group_id>"
>>> log_id = "<log_id>"
>>> ## Deploy
>>> segmentation_model.deploy(deployment_bandwidth_mbps=1000,
...                 wait_for_completion=False,
...                 deployment_log_group_id = log_group_id,
...                 deployment_access_log_id = log_id,
...                 deployment_predict_log_id = log_id)
>>> ## Predict from endpoint
>>> segmentation_model.predict(image)
>>> segmentation_model.predict(image_bytes)
>>> ## Invoke the model
>>> auth = ads.common.auth.default_signer()['signer']
>>> endpoint = segmentation_model.model_deployment.url + "/predict"
>>> headers = {"Content-Type": "application/octet-stream"}
>>> requests.post(endpoint, data=image_bytes, auth=auth, headers=headers).json()

Examples

>>> # Zero Shot Image Classification
>>> from transformers import pipeline
>>> import tempfile
>>> import PIL.Image
>>> import ads
>>> import requests
>>> import cloudpickle
>>> ## Download the image data
>>> image_url = "https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png"
>>> image = PIL.Image.open(requests.get(image_url, stream=True).raw)
>>> image_bytes = cloudpickle.dumps(image)
>>> ## Download a pretrained model
>>> classifier = pipeline(model="openai/clip-vit-large-patch14")
>>> classifier(
...         images=image,
...         candidate_labels=["animals", "humans", "landscape"],
...     )
>>> ## Initiate a HuggingFacePipelineModel instance
>>> zero_shot_image_classification_model = HuggingFacePipelineModel(classifier, artifact_dir=tempfile.mkdtemp())
>>> conda = "oci://bucket@namespace/path/to/conda/pack"
>>> python_version = "3.8"
>>> ## Prepare
>>> zero_shot_image_classification_model.prepare(inference_conda_env=conda, inference_python_version = python_version, force_overwrite=True)
>>> data = {"images": image, "candidate_labels": ["animals", "humans", "landscape"]}
>>> body = cloudpickle.dumps(data) # convert image to bytes
>>> ## Verify
>>> zero_shot_image_classification_model.verify(data=data)
>>> zero_shot_image_classification_model.verify(data=body)
>>> ## Save
>>> zero_shot_image_classification_model.save()
>>> ## Deploy
>>> log_group_id = "<log_group_id>"
>>> log_id = "<log_id>"
>>> zero_shot_image_classification_model.deploy(deployment_bandwidth_mbps=1000,
...                 wait_for_completion=False,
...                 deployment_log_group_id = log_group_id,
...                 deployment_access_log_id = log_id,
...                 deployment_predict_log_id = log_id)
>>> ## Predict from endpoint
>>> zero_shot_image_classification_model.predict(image)
>>> zero_shot_image_classification_model.predict(body)
>>> ### Invoke the model
>>> auth = ads.common.auth.default_signer()['signer']
>>> endpoint = zero_shot_image_classification_model.model_deployment.url + "/predict"
>>> headers = {"Content-Type": "application/octet-stream"}
>>> requests.post(endpoint, data=body, auth=auth, headers=headers).json()

Initiates a HuggingFacePipelineModel instance.

Parameters:
  • estimator (Callable) – A trained HuggingFace pipeline model.

  • artifact_dir (str) – Directory to store the generated artifacts.

  • properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model. For more details, check https://accelerated-data-science.readthedocs.io/en/latest/ads.model.html#module-ads.model.model_properties.

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using the ads.set_auth API. If you need to override the default, use ads.common.auth.api_keys or ads.common.auth.resource_principal to create an appropriate authentication signer and the kwargs required to instantiate an IdentityClient object.

  • model_save_serializer ((SERDE or str, optional). Defaults to "huggingface".) – Instance of ads.model.SERDE. Used to serialize/deserialize the model.

  • model_input_serializer ((SERDE, optional). Defaults to "cloudpickle".) – Instance of ads.model.SERDE. Used to serialize/deserialize the input data.

Returns:

HuggingFacePipelineModel instance.

Return type:

HuggingFacePipelineModel

Examples

>>> from transformers import pipeline
>>> import tempfile
>>> import PIL.Image
>>> import ads
>>> import requests
>>> import cloudpickle
>>> ## download the image
>>> image_url = "https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png"
>>> image = PIL.Image.open(requests.get(image_url, stream=True).raw)
>>> image_bytes = cloudpickle.dumps(image)
>>> ## download the pretrained model
>>> classifier = pipeline(model="openai/clip-vit-large-patch14")
>>> classifier(
...         images=image,
...         candidate_labels=["animals", "humans", "landscape"],
...     )
>>> ## Initiate a HuggingFacePipelineModel instance
>>> zero_shot_image_classification_model = HuggingFacePipelineModel(classifier, artifact_dir=tempfile.mkdtemp())
>>> ## Prepare a model artifact
>>> conda = "oci://bucket@namespace/path/to/conda/pack"
>>> python_version = "3.8"
>>> zero_shot_image_classification_model.prepare(inference_conda_env=conda, inference_python_version = python_version, force_overwrite=True)
>>> ## Test data
>>> data = {"images": image, "candidate_labels": ["animals", "humans", "landscape"]}
>>> body = cloudpickle.dumps(data) # convert image to bytes
>>> ## Verify
>>> zero_shot_image_classification_model.verify(data=data)
>>> zero_shot_image_classification_model.verify(data=body)
>>> ## Save
>>> zero_shot_image_classification_model.save()
>>> ## Deploy
>>> log_group_id = "<log_group_id>"
>>> log_id = "<log_id>"
>>> zero_shot_image_classification_model.deploy(deployment_bandwidth_mbps=100,
...                 wait_for_completion=False,
...                 deployment_log_group_id = log_group_id,
...                 deployment_access_log_id = log_id,
...                 deployment_predict_log_id = log_id)
>>> zero_shot_image_classification_model.predict(image)
>>> zero_shot_image_classification_model.predict(body)
>>> ### Invoke the model by sending bytes
>>> auth = ads.common.auth.default_signer()['signer']
>>> endpoint = zero_shot_image_classification_model.model_deployment.url + "/predict"
>>> headers = {"Content-Type": "application/octet-stream"}
>>> requests.post(endpoint, data=body, auth=auth, headers=headers).json()

classmethod delete(model_id: str | None = None, delete_associated_model_deployment: bool | None = False, delete_model_artifact: bool | None = False, artifact_dir: str | None = None, **kwargs: Dict) None#

Deletes a model from Model Catalog.

Parameters:
  • model_id ((str, optional). Defaults to None.) – The model OCID to be deleted. If the method is called at the instance level, self.model_id will be used.

  • delete_associated_model_deployment ((bool, optional). Defaults to False.) – Whether associated model deployments need to be deleted or not.

  • delete_model_artifact ((bool, optional). Defaults to False.) – Whether associated model artifacts need to be deleted or not.

  • artifact_dir ((str, optional). Defaults to None) – The local path to the model artifacts folder. If the method is called at the instance level, self.artifact_dir will be used by default.

Return type:

None

Raises:

ValueError – If model_id is not provided.
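
For example, a minimal sketch where "<model_ocid>" and "<artifact_dir>" are placeholders for an existing model OCID and its local artifact folder:

>>> HuggingFacePipelineModel.delete(
...     model_id="<model_ocid>",
...     delete_associated_model_deployment=True,
...     delete_model_artifact=True,
...     artifact_dir="<artifact_dir>",
... )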

delete_deployment(wait_for_completion: bool = True) None#

Deletes the current deployment.

Parameters:

wait_for_completion ((bool, optional). Defaults to True.) – Whether to wait till completion.

Return type:

None

Raises:

ValueError – If there is no deployment attached yet.
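
For example, a minimal sketch on an instance that already has a deployment attached:

>>> vision_model.delete_deployment(wait_for_completion=True)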

deploy(wait_for_completion: bool | None = True, display_name: str | None = None, description: str | None = None, deployment_instance_shape: str | None = None, deployment_instance_subnet_id: str | None = None, deployment_instance_count: int | None = None, deployment_bandwidth_mbps: int | None = None, deployment_log_group_id: str | None = None, deployment_access_log_id: str | None = None, deployment_predict_log_id: str | None = None, deployment_memory_in_gbs: float | None = None, deployment_ocpus: float | None = None, deployment_image: str | None = None, **kwargs: Dict) ModelDeployment#

Deploys a model. The model needs to be saved to the model catalog at first. You can deploy the model on either conda or container runtime. The customized runtime allows you to bring your own service container. To deploy model on container runtime, make sure to build the container and push it to OCIR. For more information, see https://docs.oracle.com/en-us/iaas/data-science/using/mod-dep-byoc.htm.

Example

>>> # This is an example to deploy model on container runtime
>>> model = GenericModel(estimator=estimator, artifact_dir=tempfile.mkdtemp())
>>> model.summary_status()
>>> model.prepare(
...     model_file_name="toy_model.pkl",
...     ignore_conda_error=True, # set ignore_conda_error=True for container runtime
...     force_overwrite=True
... )
>>> model.verify()
>>> model.save()
>>> model.deploy(
...     deployment_image="iad.ocir.io/<namespace>/<image>:<tag>",
...     entrypoint=["python", "/opt/ds/model/deployed_model/api.py"],
...     server_port=5000,
...     health_check_port=5000,
...     environment_variables={"key":"value"}
... )
Parameters:
  • wait_for_completion ((bool, optional). Defaults to True.) – Flag set for whether to wait for deployment to complete before proceeding.

  • display_name ((str, optional). Defaults to None.) – The name of the model deployment. If a display_name is not provided in kwargs, an easy-to-remember name with a timestamp will be randomly generated, for example ‘strange-spider-2022-08-17-23:55.02’.

  • description ((str, optional). Defaults to None.) – The description of the model deployment.

  • deployment_instance_shape ((str, optional). Defaults to VM.Standard2.1.) – The shape of the instance used for deployment.

  • deployment_instance_subnet_id ((str, optional). Defaults to None.) – The subnet id of the instance used for deployment.

  • deployment_instance_count ((int, optional). Defaults to 1.) – The number of instances used for deployment.

  • deployment_bandwidth_mbps ((int, optional). Defaults to 10.) – The bandwidth limit on the load balancer in Mbps.

  • deployment_memory_in_gbs ((float, optional). Defaults to None.) – Specifies the size of the memory of the model deployment instance in GBs.

  • deployment_ocpus ((float, optional). Defaults to None.) – Specifies the ocpus count of the model deployment instance.

  • deployment_log_group_id ((str, optional). Defaults to None.) – The oci logging group id. The access log and predict log share the same log group.

  • deployment_access_log_id ((str, optional). Defaults to None.) – The access log OCID for the access logs. https://docs.oracle.com/en-us/iaas/data-science/using/model_dep_using_logging.htm

  • deployment_predict_log_id ((str, optional). Defaults to None.) – The predict log OCID for the predict logs. https://docs.oracle.com/en-us/iaas/data-science/using/model_dep_using_logging.htm

  • deployment_image ((str, optional). Defaults to None.) – The OCIR path of docker container image. Required for deploying model on container runtime.

  • kwargs

    project_id: (str, optional).

    Project OCID. If not specified, the value will be taken from the environment variables.

    compartment_id(str, optional).

    Compartment OCID. If not specified, the value will be taken from the environment variables.

    max_wait_time(int, optional). Defaults to 1200 seconds.

    Maximum amount of time to wait in seconds. Negative implies infinite wait time.

    poll_interval(int, optional). Defaults to 10 seconds.

    Poll interval in seconds.

    freeform_tags: (Dict[str, str], optional). Defaults to None.

    Freeform tags of the model deployment.

    defined_tags: (Dict[str, dict[str, object]], optional). Defaults to None.

    Defined tags of the model deployment.

    image_digest: (str, optional). Defaults to None.

    The digest of docker container image.

    cmd: (List, optional). Defaults to empty.

    The command line arguments for running docker container image.

    entrypoint: (List, optional). Defaults to empty.

    The entrypoint for running docker container image.

    server_port: (int, optional). Defaults to 8080.

    The server port for docker container image.

    health_check_port: (int, optional). Defaults to 8080.

    The health check port for docker container image.

    deployment_mode: (str, optional). Defaults to HTTPS_ONLY.

    The deployment mode. Allowed values are: HTTPS_ONLY and STREAM_ONLY.

    input_stream_ids: (List, optional). Defaults to empty.

    The input stream ids. Required for STREAM_ONLY mode.

    output_stream_ids: (List, optional). Defaults to empty.

    The output stream ids. Required for STREAM_ONLY mode.

    environment_variables: (Dict, optional). Defaults to empty.

    The environment variables for model deployment.

    Also can be any keyword argument for initializing the ads.model.deployment.ModelDeploymentProperties. See ads.model.deployment.ModelDeploymentProperties() for details.

Returns:

The ModelDeployment instance.

Return type:

ModelDeployment

Raises:

ValueError – If model_id is not specified.

download_artifact(artifact_dir: str | None = None, auth: Dict | None = None, force_overwrite: bool | None = False, bucket_uri: str | None = None, remove_existing_artifact: bool | None = True, **kwargs) GenericModel#

Downloads model artifacts from the model catalog.

Parameters:
  • artifact_dir ((str, optional). Defaults to None.) – The artifact directory to store the files needed for deployment. Will be created if not exists.

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.

  • force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files or not.

  • bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for downloading large artifacts with a size greater than 2GB. Example: oci://<bucket_name>@<namespace>/prefix/.

  • remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to object storage bucket need to be removed or not.

Returns:

An instance of GenericModel class.

Return type:

Self

Raises:

ValueError – If model_id is not available in the GenericModel object.
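
A minimal sketch, assuming the instance already has a model_id set (for example after save() or from_id()) and that "<artifact_dir>" is a placeholder path:

>>> vision_model.download_artifact(
...     artifact_dir="<artifact_dir>",
...     force_overwrite=True,
... )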

evaluate(X: _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | str | bytes | _NestedSequence[bool | int | float | complex | str | bytes], y: _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | str | bytes | _NestedSequence[bool | int | float | complex | str | bytes], y_pred: _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | str | bytes | _NestedSequence[bool | int | float | complex | str | bytes] | None = None, y_score: _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | str | bytes | _NestedSequence[bool | int | float | complex | str | bytes] | None = None, X_train: _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | str | bytes | _NestedSequence[bool | int | float | complex | str | bytes] | None = None, y_train: _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | str | bytes | _NestedSequence[bool | int | float | complex | str | bytes] | None = None, classes: List | None = None, positive_class: str | None = None, legend_labels: dict | None = None, perfect: bool = True, filename: str | None = None, use_case_type: str | None = None)#

Creates an ads evaluation report.

Parameters:
  • X (DataFrame-like) – The data used to make a prediction. Can be set to None if y_pred is given (and y_score for more thorough analysis).

  • y (array-like) – The true values corresponding to the input data

  • y_pred (array-like, optional) – The predictions from each model in the same order as the models

  • y_score (array-like, optional) – The predict_probas from each model in the same order as the models

  • X_train (DataFrame-like, optional) – The data used to train the model

  • y_train (array-like, optional) – The true values corresponding to the input training data

  • classes (List or None, optional) – A List of the possible labels for y, when evaluating a classification use case

  • positive_class (str or int, optional) – The class to report metrics for in a binary dataset. If the target classes are True and False, positive_class is set to True by default. If the dataset is multiclass or multilabel, this is ignored.

  • legend_labels (dict, optional) – List of legend labels. Defaults to None. If legend_labels is not specified, class names will be used for plots.

  • use_case_type (str, optional) – The type of problem this model is solving. This can be set during prepare(). Examples: “binary_classification”, “regression”, “multinomial_classification”. The full list of supported types can be found in ads.common.model_metadata.UseCaseType.

  • filename (str, optional) – If filename is given, the html report will be saved to the location specified.

Examples

>>> import tempfile
>>> from ads.evaluations.evaluator import Evaluator
>>> from sklearn.tree import DecisionTreeClassifier
>>> from sklearn.datasets import make_classification
>>> from sklearn.model_selection import train_test_split
>>> from ads.model.framework.sklearn_model import SklearnModel
>>> from ads.common.model_metadata import UseCaseType
>>>
>>> X, y = make_classification(n_samples=1000)
>>> X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
>>> est = DecisionTreeClassifier().fit(X_train, y_train)
>>> model = SklearnModel(estimator=est, artifact_dir=tempfile.mkdtemp())
>>> model.prepare(
...         inference_conda_env="generalml_p38_cpu_v1",
...         training_conda_env="generalml_p38_cpu_v1",
...         X_sample=X_test,
...         y_sample=y_test,
...         use_case_type=UseCaseType.BINARY_CLASSIFICATION,
...     )
>>> model.evaluate(X_test, y_test, filename="report.html")
classmethod from_id(ocid: str, model_file_name: str | None = None, artifact_dir: str | None = None, auth: Dict | None = None, force_overwrite: bool | None = False, properties: ModelProperties | Dict | None = None, bucket_uri: str | None = None, remove_existing_artifact: bool | None = True, ignore_conda_error: bool | None = False, download_artifact: bool | None = True, **kwargs) Self#

Loads model from model OCID or model deployment OCID.

Parameters:
  • ocid (str) – The model OCID or model deployment OCID.

  • model_file_name ((str, optional). Defaults to None.) – The name of the serialized model.

  • artifact_dir ((str, optional). Defaults to None.) – The artifact directory to store the files needed for deployment. Will be created if not exists.

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using the ads.set_auth API. If you need to override the default, use ads.common.auth.api_keys or ads.common.auth.resource_principal to create an appropriate authentication signer and the kwargs required to instantiate an IdentityClient object.

  • force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files or not.

  • properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.

  • bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for downloading large artifacts with a size greater than 2GB. Example: oci://<bucket_name>@<namespace>/prefix/.

  • remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to the object storage bucket need to be removed or not.

  • ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore error when collecting conda information.

  • download_artifact ((bool, optional). Defaults to True.) – Whether to download the model pickle or checkpoints

  • kwargs

    compartment_id(str, optional)

    Compartment OCID. If not specified, the value will be taken from the environment variables.

    timeout(int, optional). Defaults to 10 seconds.

    The connection timeout in seconds for the client.

Returns:

An instance of GenericModel class.

Return type:

Self
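
A minimal sketch, where "<model_ocid>" is a placeholder for a model or model deployment OCID:

>>> import tempfile
>>> model = HuggingFacePipelineModel.from_id(
...     "<model_ocid>",
...     artifact_dir=tempfile.mkdtemp(),
...     force_overwrite=True,
... )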

classmethod from_model_artifact(uri: str, model_file_name: str | None = None, artifact_dir: str | None = None, auth: Dict | None = None, force_overwrite: bool | None = False, properties: ModelProperties | None = None, ignore_conda_error: bool | None = False, **kwargs: dict) Self#

Loads model from a folder, or zip/tar archive.

Parameters:
  • uri (str) – The folder path, ZIP file path, or TAR file path. It must contain a serialized model (required) and can contain any other files needed for deployment, such as runtime.yaml and score.py. The content of the folder will be copied to the artifact_dir folder.

  • model_file_name ((str, optional). Defaults to None.) – The serialized model file name. Will be extracted from artifacts if not provided.

  • artifact_dir ((str, optional). Defaults to None.) – The artifact directory to store the files needed for deployment. Will be created if not exists.

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using the ads.set_auth API. If you need to override the default, use ads.common.auth.api_keys or ads.common.auth.resource_principal to create an appropriate authentication signer and the kwargs required to instantiate an IdentityClient object.

  • force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files or not.

  • properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.

  • ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore error when collecting conda information.

Returns:

An instance of GenericModel class.

Return type:

Self

Raises:

ValueError – If model_file_name is not provided.
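
A minimal sketch, where "<uri_of_artifact>" is a placeholder for a local folder or zip/tar archive containing the serialized model:

>>> import tempfile
>>> model = HuggingFacePipelineModel.from_model_artifact(
...     uri="<uri_of_artifact>",
...     artifact_dir=tempfile.mkdtemp(),
...     force_overwrite=True,
... )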

classmethod from_model_catalog(model_id: str, model_file_name: str | None = None, artifact_dir: str | None = None, auth: Dict | None = None, force_overwrite: bool | None = False, properties: ModelProperties | Dict | None = None, bucket_uri: str | None = None, remove_existing_artifact: bool | None = True, ignore_conda_error: bool | None = False, download_artifact: bool | None = True, **kwargs) Self#

Loads model from model catalog.

Parameters:
  • model_id (str) – The model OCID.

  • model_file_name ((str, optional). Defaults to None.) – The name of the serialized model.

  • artifact_dir ((str, optional). Defaults to None.) – The artifact directory to store the files needed for deployment. Will be created if not exists.

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using the ads.set_auth API. If you need to override the default, use ads.common.auth.api_keys or ads.common.auth.resource_principal to create an appropriate authentication signer and the kwargs required to instantiate an IdentityClient object.

  • force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files or not.

  • properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.

  • bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for downloading large artifacts with a size greater than 2GB. Example: oci://<bucket_name>@<namespace>/prefix/.

  • remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to the object storage bucket need to be removed or not.

  • ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore error when collecting conda information.

  • download_artifact ((bool, optional). Defaults to True.) – Whether to download the model pickle or checkpoints

  • kwargs

    compartment_id(str, optional)

    Compartment OCID. If not specified, the value will be taken from the environment variables.

    timeout(int, optional). Defaults to 10 seconds.

    The connection timeout in seconds for the client.

    region: (str, optional). Defaults to None.

    The destination Object Storage bucket region. By default the value will be extracted from the OCI_REGION_METADATA environment variables.

Returns:

An instance of GenericModel class.

Return type:

Self
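
A minimal sketch, where "<model_ocid>" is a placeholder model OCID from the model catalog:

>>> import tempfile
>>> model = HuggingFacePipelineModel.from_model_catalog(
...     model_id="<model_ocid>",
...     artifact_dir=tempfile.mkdtemp(),
...     force_overwrite=True,
... )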

classmethod from_model_deployment(model_deployment_id: str, model_file_name: str | None = None, artifact_dir: str | None = None, auth: Dict | None = None, force_overwrite: bool | None = False, properties: ModelProperties | Dict | None = None, bucket_uri: str | None = None, remove_existing_artifact: bool | None = True, ignore_conda_error: bool | None = False, download_artifact: bool | None = True, **kwargs) Self#

Loads model from model deployment.

Parameters:
  • model_deployment_id (str) – The model deployment OCID.

  • model_file_name ((str, optional). Defaults to None.) – The name of the serialized model.

  • artifact_dir ((str, optional). Defaults to None.) – The artifact directory to store the files needed for deployment. Will be created if not exists.

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using the ads.set_auth API. If you need to override the default, use ads.common.auth.api_keys or ads.common.auth.resource_principal to create an appropriate authentication signer and the kwargs required to instantiate an IdentityClient object.

  • force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files or not.

  • properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.

  • bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for downloading large artifacts with a size greater than 2GB. Example: oci://<bucket_name>@<namespace>/prefix/.

  • remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to the object storage bucket need to be removed or not.

  • ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore error when collecting conda information.

  • download_artifact ((bool, optional). Defaults to True.) – Whether to download the model pickle or checkpoints

  • kwargs

    compartment_id(str, optional)

    Compartment OCID. If not specified, the value will be taken from the environment variables.

    timeout(int, optional). Defaults to 10 seconds.

    The connection timeout in seconds for the client.

    region: (str, optional). Defaults to None.

    The destination Object Storage bucket region. By default the value will be extracted from the OCI_REGION_METADATA environment variables.

Returns:

An instance of GenericModel class.

Return type:

Self
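
A minimal sketch, where "<model_deployment_ocid>" is a placeholder model deployment OCID:

>>> import tempfile
>>> model = HuggingFacePipelineModel.from_model_deployment(
...     model_deployment_id="<model_deployment_ocid>",
...     artifact_dir=tempfile.mkdtemp(),
...     force_overwrite=True,
... )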

get_data_serializer()#

Gets data serializer.

Returns:

object

Return type:

ads.model.Serializer object.

get_model_serializer()#

Gets model serializer.

introspect() DataFrame#

Conducts introspection.

Returns:

A pandas DataFrame that contains the introspection results.

Return type:

pandas.DataFrame
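
For example, a sketch run after prepare(); the exact rows returned depend on the prepared artifact:

>>> result_df = vision_model.introspect()
>>> result_df.head()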

property metadata_custom#
property metadata_provenance#
property metadata_taxonomy#
property model_deployment_id#
property model_id#
model_input_serializer_type#

alias of ModelInputSerializerType

model_save_serializer_type#

alias of HuggingFaceSerializerType

populate_metadata(use_case_type: str | None = None, data_sample: ADSData | None = None, X_sample: list | tuple | DataFrame | Series | ndarray | None = None, y_sample: list | tuple | DataFrame | Series | ndarray | None = None, training_script_path: str | None = None, training_id: str | None = None, ignore_pending_changes: bool = True, max_col_num: int = 2000, ignore_conda_error: bool = False, **kwargs)#

Populates the input schema and output schema. If a schema exceeds the 32 KB limit, it is saved as a JSON file in the artifact directory.

Parameters:
  • use_case_type ((str, optional). Defaults to None.) – The use case type of the model.

  • data_sample ((ADSData, optional). Defaults to None.) – A sample of the data that will be used to generate input_schema and output_schema.

  • X_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of input data that will be used to generate input schema.

  • y_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of output data that will be used to generate output schema.

  • training_script_path (str. Defaults to None.) – Training script path.

  • training_id ((str, optional). Defaults to None.) – The training model OCID.

  • ignore_pending_changes (bool. Defaults to True.) – Ignore the pending changes in git.

  • max_col_num ((int, optional). Defaults to utils.DATA_SCHEMA_MAX_COL_NUM.) – The maximum number of columns allowed in auto generated schema.

Returns:

Nothing.

Return type:

None

populate_schema(data_sample: ADSData | None = None, X_sample: List | Tuple | DataFrame | Series | ndarray | None = None, y_sample: List | Tuple | DataFrame | Series | ndarray | None = None, max_col_num: int = 2000, **kwargs)#

Populates the input and output schemas. If a schema exceeds the 32 KB limit, it is saved as a JSON file in the artifact directory.

Parameters:
  • data_sample (ADSData) – A sample of the data that will be used to generate input_schema and output_schema.

  • X_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]) – A sample of input data that will be used to generate the input schema.

  • y_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]) – A sample of output data that will be used to generate the output schema.

  • max_col_num ((int, optional). Defaults to utils.DATA_SCHEMA_MAX_COL_NUM.) – The maximum number of columns allowed in auto generated schema.

predict(data: Any | None = None, auto_serialize_data: bool = True, **kwargs) Dict[str, Any]#

Returns prediction of input data run against the model deployment endpoint.

Examples

>>> uri = "https://github.com/pytorch/hub/raw/master/images/dog.jpg"
>>> prediction = model.predict(image=uri)['prediction']
>>> # examples on storage options
>>> prediction = model.predict(
...        image="oci://<bucket>@<tenancy>/myimage.png",
...        storage_options=ads.auth.default_signer()
... )['prediction']
Parameters:
  • data (Any) – Data for the prediction. For ONNX models and the local serialization method, data can be any of the data types that each framework supports.

  • auto_serialize_data (bool.) – Whether to auto serialize input data. Defaults to False for GenericModel, and True for other frameworks. data is required to be JSON serializable if auto_serialize_data=False. If auto_serialize_data is set to True, data will be serialized before being sent to the model deployment endpoint.

  • kwargs

    content_type: (str, optional).

    Used to indicate the media type of the resource.

    image: PIL.Image object or URI for the image.

    A valid string path for an image file can be a local path, http(s), oci, s3, or gs.

    storage_options: dict

    Passed to fsspec.open for a particular storage connection. Please see fsspec (https://filesystem-spec.readthedocs.io/en/latest/api.html#fsspec.open) for more details.

Returns:

Dictionary with the predicted values.

Return type:

Dict[str, Any]

Raises:

prepare(inference_conda_env: str | None = None, inference_python_version: str | None = None, training_conda_env: str | None = None, training_python_version: str | None = None, model_file_name: str | None = None, as_onnx: bool = False, initial_types: List[Tuple] | None = None, force_overwrite: bool = False, namespace: str = 'id19sfcrra6z', use_case_type: str | None = None, X_sample: list | tuple | DataFrame | Series | ndarray | None = None, y_sample: list | tuple | DataFrame | Series | ndarray | None = None, training_script_path: str | None = None, training_id: str | None = None, ignore_pending_changes: bool = True, max_col_num: int = 2000, ignore_conda_error: bool = False, score_py_uri: str | None = None, **kwargs: Dict) GenericModel#

Prepare and save the score.py, serialized model and runtime.yaml file.

Parameters:
  • inference_conda_env ((str, optional). Defaults to None.) – Can be either slug or object storage path of the conda pack. You can only pass in slugs if the conda pack is a service pack.

  • inference_python_version ((str, optional). Defaults to None.) – Python version which will be used in deployment.

  • training_conda_env ((str, optional). Defaults to None.) – Can be either slug or object storage path of the conda pack. You can only pass in slugs if the conda pack is a service pack. If training_conda_env is not provided, it defaults to the value of inference_conda_env.

  • training_python_version ((str, optional). Defaults to None.) – Python version used during training.

  • model_file_name ((str, optional). Defaults to None.) – Name of the serialized model. Will be auto generated if not provided.

  • as_onnx ((bool, optional). Defaults to False.) – Whether to serialize as onnx model.

  • initial_types ((list[Tuple], optional).) – Defaults to None. Only used for SklearnModel, LightGBMModel and XGBoostModel. Each element is a tuple of a variable name and a type. Check this link http://onnx.ai/sklearn-onnx/api_summary.html#id2 for more explanation and examples for initial_types.

  • force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files.

  • namespace ((str, optional).) – Namespace of region. This is used for identifying which region the service pack is from when you pass a slug to inference_conda_env and training_conda_env.

  • use_case_type (str) – The use case type of the model. Use it through the UseCaseType class or a string provided in UseCaseType. For example, use_case_type=UseCaseType.BINARY_CLASSIFICATION or use_case_type=”binary_classification”. Check the UseCaseType class to see all supported types.

  • X_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of input data that will be used to generate input schema.

  • y_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of output data that will be used to generate output schema.

  • training_script_path (str. Defaults to None.) – Training script path.

  • training_id ((str, optional). Defaults to value from environment variables.) – The training OCID for model. Can be notebook session or job OCID.

  • ignore_pending_changes (bool. Defaults to True.) – Whether to ignore the pending changes in git.

  • max_col_num ((int, optional). Defaults to utils.DATA_SCHEMA_MAX_COL_NUM.) – Do not generate the input schema if the input has more than this number of features (columns).

  • ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore error when collecting conda information.

  • score_py_uri ((str, optional). Defaults to None.) – The URI of the customized score.py, which can be a local path or an OCI Object Storage URI. When this attribute is provided, the score.py will not be auto-generated, and the provided score.py will be added to artifact_dir.

  • kwargs

    impute_values: (dict, optional).

    The dictionary where the key is the column index (or column name for a pandas DataFrame) and the value is the impute value for the corresponding column.

Raises:
  • FileExistsError – If files already exist but force_overwrite is False.

  • ValueError – If inference_python_version is not provided, but also cannot be found through manifest file.

Returns:

An instance of GenericModel class.

Return type:

GenericModel

prepare_save_deploy(inference_conda_env: str | None = None, inference_python_version: str | None = None, training_conda_env: str | None = None, training_python_version: str | None = None, model_file_name: str | None = None, as_onnx: bool = False, initial_types: List[Tuple] | None = None, force_overwrite: bool = False, namespace: str = 'id19sfcrra6z', use_case_type: str | None = None, X_sample: list | tuple | DataFrame | Series | ndarray | None = None, y_sample: list | tuple | DataFrame | Series | ndarray | None = None, training_script_path: str | None = None, training_id: str | None = None, ignore_pending_changes: bool = True, max_col_num: int = 2000, ignore_conda_error: bool = False, model_display_name: str | None = None, model_description: str | None = None, model_freeform_tags: dict | None = None, model_defined_tags: dict | None = None, ignore_introspection: bool | None = False, wait_for_completion: bool | None = True, deployment_display_name: str | None = None, deployment_description: str | None = None, deployment_instance_shape: str | None = None, deployment_instance_subnet_id: str | None = None, deployment_instance_count: int | None = None, deployment_bandwidth_mbps: int | None = None, deployment_log_group_id: str | None = None, deployment_access_log_id: str | None = None, deployment_predict_log_id: str | None = None, deployment_memory_in_gbs: float | None = None, deployment_ocpus: float | None = None, deployment_image: str | None = None, bucket_uri: str | None = None, overwrite_existing_artifact: bool | None = True, remove_existing_artifact: bool | None = True, model_version_set: str | ModelVersionSet | None = None, version_label: str | None = None, model_by_reference: bool | None = False, **kwargs: Dict) ModelDeployment#

Shortcut for prepare, save and deploy steps.

Parameters:
  • inference_conda_env ((str, optional). Defaults to None.) – Can be either slug or object storage path of the conda pack. You can only pass in slugs if the conda pack is a service pack.

  • inference_python_version ((str, optional). Defaults to None.) – Python version which will be used in deployment.

  • training_conda_env ((str, optional). Defaults to None.) – Can be either slug or object storage path of the conda pack. You can only pass in slugs if the conda pack is a service pack. If training_conda_env is not provided, it defaults to the value of inference_conda_env.

  • training_python_version ((str, optional). Defaults to None.) – Python version used during training.

  • model_file_name ((str, optional). Defaults to None.) – Name of the serialized model.

  • as_onnx ((bool, optional). Defaults to False.) – Whether to serialize as onnx model.

  • initial_types ((list[Tuple], optional).) – Defaults to None. Only used for SklearnModel, LightGBMModel and XGBoostModel. Each element is a tuple of a variable name and a type. Check this link http://onnx.ai/sklearn-onnx/api_summary.html#id2 for more explanation and examples for initial_types.

  • force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files.

  • namespace ((str, optional).) – Namespace of region. This is used for identifying which region the service pack is from when you pass a slug to inference_conda_env and training_conda_env.

  • use_case_type (str) – The use case type of the model. Use it through the UseCaseType class or a string provided in UseCaseType. For example, use_case_type=UseCaseType.BINARY_CLASSIFICATION or use_case_type=”binary_classification”. Check the UseCaseType class to see all supported types.

  • X_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of input data that will be used to generate input schema.

  • y_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of output data that will be used to generate output schema.

  • training_script_path (str. Defaults to None.) – Training script path.

  • training_id ((str, optional). Defaults to value from environment variables.) – The training OCID for model. Can be notebook session or job OCID.

  • ignore_pending_changes (bool. Defaults to True.) – Whether to ignore the pending changes in git.

  • max_col_num ((int, optional). Defaults to utils.DATA_SCHEMA_MAX_COL_NUM.) – Do not generate the input schema if the input has more than this number of features (columns).

  • ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore error when collecting conda information.

  • model_display_name ((str, optional). Defaults to None.) – The name of the model. If a model_display_name is not provided in kwargs, an easy-to-remember name with a timestamp is randomly generated, for example ‘strange-spider-2022-08-17-23:55.02’.

  • model_description ((str, optional). Defaults to None.) – The description of the model.

  • model_freeform_tags (Dict(str, str), Defaults to None.) – Freeform tags for the model.

  • model_defined_tags ((Dict(str, dict(str, object)), optional). Defaults to None.) – Defined tags for the model.

  • ignore_introspection ((bool, optional). Defaults to False.) – Determines whether to ignore the result of model introspection or not. If set to True, the save will ignore all model introspection errors.

  • wait_for_completion ((bool, optional). Defaults to True.) – Flag set for whether to wait for deployment to complete before proceeding.

  • deployment_display_name ((str, optional). Defaults to None.) – The name of the model deployment. If a deployment_display_name is not provided in kwargs, an easy-to-remember name with a timestamp is randomly generated, for example ‘strange-spider-2022-08-17-23:55.02’.

  • deployment_description ((str, optional). Defaults to None.) – The description of the model deployment.

  • deployment_instance_shape ((str, optional). Defaults to VM.Standard2.1.) – The shape of the instance used for deployment.

  • deployment_instance_subnet_id ((str, optional). Defaults to None.) – The subnet id of the instance used for deployment.

  • deployment_instance_count ((int, optional). Defaults to 1.) – The number of instances used for deployment.

  • deployment_bandwidth_mbps ((int, optional). Defaults to 10.) – The bandwidth limit on the load balancer in Mbps.

  • deployment_log_group_id ((str, optional). Defaults to None.) – The oci logging group id. The access log and predict log share the same log group.

  • deployment_access_log_id ((str, optional). Defaults to None.) – The access log OCID for the access logs. https://docs.oracle.com/en-us/iaas/data-science/using/model_dep_using_logging.htm

  • deployment_predict_log_id ((str, optional). Defaults to None.) – The predict log OCID for the predict logs. https://docs.oracle.com/en-us/iaas/data-science/using/model_dep_using_logging.htm

  • deployment_memory_in_gbs ((float, optional). Defaults to None.) – Specifies the size of the memory of the model deployment instance in GBs.

  • deployment_ocpus ((float, optional). Defaults to None.) – Specifies the ocpus count of the model deployment instance.

  • deployment_image ((str, optional). Defaults to None.) – The OCIR path of docker container image. Required for deploying model on container runtime.

  • bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for downloading large artifacts with a size greater than 2GB. Example: oci://<bucket_name>@<namespace>/prefix/.

  • overwrite_existing_artifact ((bool, optional). Defaults to True.) – Overwrite target bucket artifact if exists.

  • remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to the object storage bucket need to be removed or not.

  • model_version_set ((Union[str, ModelVersionSet], optional). Defaults to None.) – The Model version set OCID, or name, or ModelVersionSet instance.

  • version_label ((str, optional). Defaults to None.) – The model version label.

  • model_by_reference ((bool, optional)) – Whether model artifact is made available to Model Store by reference.

  • kwargs

    impute_values: (dict, optional).

    The dictionary where the key is the column index (or column name for a pandas DataFrame) and the value is the impute value for the corresponding column.

    project_id: (str, optional).

    Project OCID. If not specified, the value will be taken either from the environment variables or model properties.

    compartment_id(str, optional).

    Compartment OCID. If not specified, the value will be taken either from the environment variables or model properties.

    image_digest: (str, optional). Defaults to None.

    The digest of docker container image.

    cmd: (List, optional). Defaults to empty.

    The command line arguments for running docker container image.

    entrypoint: (List, optional). Defaults to empty.

    The entrypoint for running docker container image.

    server_port: (int, optional). Defaults to 8080.

    The server port for docker container image.

    health_check_port: (int, optional). Defaults to 8080.

    The health check port for docker container image.

    deployment_mode: (str, optional). Defaults to HTTPS_ONLY.

    The deployment mode. Allowed values are: HTTPS_ONLY and STREAM_ONLY.

    input_stream_ids: (List, optional). Defaults to empty.

    The input stream ids. Required for STREAM_ONLY mode.

    output_stream_ids: (List, optional). Defaults to empty.

    The output stream ids. Required for STREAM_ONLY mode.

    environment_variables: (Dict, optional). Defaults to empty.

    The environment variables for model deployment.

    timeout: (int, optional). Defaults to 10 seconds.

    The connection timeout in seconds for the client.

    max_wait_time(int, optional). Defaults to 1200 seconds.

    Maximum amount of time to wait in seconds. Negative implies infinite wait time.

    poll_interval(int, optional). Defaults to 10 seconds.

    Poll interval in seconds.

    freeform_tags: (Dict[str, str], optional). Defaults to None.

    Freeform tags of the model deployment.

    defined_tags: (Dict[str, dict[str, object]], optional). Defaults to None.

    Defined tags of the model deployment.

    region: (str, optional). Defaults to None.

    The destination Object Storage bucket region. By default the value will be extracted from the OCI_REGION_METADATA environment variables.

    Also can be any keyword argument for initializing the ads.model.deployment.ModelDeploymentProperties. See ads.model.deployment.ModelDeploymentProperties() for details.

Returns:

The ModelDeployment instance.

Return type:

ModelDeployment

Raises:
  • FileExistsError – If files already exist but force_overwrite is False.

  • ValueError – If inference_python_version is not provided, but also cannot be found through manifest file.
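
A minimal sketch combining the three steps, reusing the image-classification pipeline from the class-level examples; the conda slug, shape, and log OCIDs are illustrative placeholders:

>>> deployment = vision_model.prepare_save_deploy(
...     inference_conda_env="pytorch110_p38_cpu_v1",
...     deployment_instance_shape="VM.Standard2.1",
...     deployment_instance_count=1,
...     deployment_log_group_id="<log_group_id>",
...     deployment_access_log_id="<log_id>",
...     deployment_predict_log_id="<log_id>",
...     wait_for_completion=False,
... )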

reload() GenericModel#

Reloads the model artifact files: score.py and the runtime.yaml.

Returns:

An instance of GenericModel class.

Return type:

GenericModel

reload_runtime_info() None#

Reloads the model artifact file: runtime.yaml.

Returns:

Nothing.

Return type:

None

restart_deployment(max_wait_time: int = 1200, poll_interval: int = 10) ModelDeployment#

Restarts the current deployment.

Parameters:
  • max_wait_time ((int, optional). Defaults to 1200 seconds.) – Maximum amount of time in seconds to wait for the deployment to activate or deactivate. The total time to wait for a restart is twice this value. Negative implies infinite wait time.

  • poll_interval ((int, optional). Defaults to 10 seconds.) – Poll interval in seconds.

Returns:

The ModelDeployment instance.

Return type:

ModelDeployment
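
For example, a sketch that restarts the active deployment and waits up to 20 minutes for each of the deactivate and activate steps:

>>> vision_model.restart_deployment(max_wait_time=1200, poll_interval=10)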

save(bucket_uri: str | None = None, defined_tags: dict | None = None, description: str | None = None, display_name: str | None = None, featurestore_dataset=None, freeform_tags: dict | None = None, ignore_introspection: bool | None = False, model_version_set: str | ModelVersionSet | None = None, overwrite_existing_artifact: bool | None = True, parallel_process_count: int = 9, remove_existing_artifact: bool | None = True, reload: bool | None = True, version_label: str | None = None, model_by_reference: bool | None = False, **kwargs) str#

Saves model artifacts to the model catalog.

Parameters:
  • display_name ((str, optional). Defaults to None.) – The name of the model. If a display_name is not provided in kwargs, an easy-to-remember name with a timestamp is randomly generated, for example ‘strange-spider-2022-08-17-23:55.02’.

  • description ((str, optional). Defaults to None.) – The description of the model.

  • freeform_tags (Dict(str, str), Defaults to None.) – Freeform tags for the model.

  • defined_tags ((Dict(str, dict(str, object)), optional). Defaults to None.) – Defined tags for the model.

  • ignore_introspection ((bool, optional). Defaults to False.) – Determines whether to ignore the result of model introspection or not. If set to True, the save will ignore all model introspection errors.

  • bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for uploading large artifacts whose size is greater than 2GB. Example: oci://<bucket_name>@<namespace>/prefix/.

  • overwrite_existing_artifact ((bool, optional). Defaults to True.) – Overwrite target bucket artifact if exists.

  • remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to object storage bucket need to be removed or not.

  • model_version_set ((Union[str, ModelVersionSet], optional). Defaults to None.) – The model version set OCID, or model version set name, or ModelVersionSet instance.

  • version_label ((str, optional). Defaults to None.) – The model version label.

  • featurestore_dataset ((Dataset, optional).) – The feature store dataset

  • parallel_process_count ((int, optional)) – The number of worker processes to use in parallel for uploading individual parts of a multipart upload.

  • reload ((bool, optional)) – Whether to reload to check if load_model() works in score.py. Defaults to True.

  • model_by_reference ((bool, optional)) – Whether model artifact is made available to Model Store by reference.

  • kwargs

    project_id: (str, optional).

    Project OCID. If not specified, the value will be taken either from the environment variables or model properties.

    compartment_id(str, optional).

    Compartment OCID. If not specified, the value will be taken either from the environment variables or model properties.

    region: (str, optional). Defaults to None.

    The destination Object Storage bucket region. By default the value will be extracted from the OCI_REGION_METADATA environment variables.

    timeout: (int, optional). Defaults to 10 seconds.

    The connection timeout in seconds for the client.

    Also can be any attribute that oci.data_science.models.Model accepts.

Raises:

RuntimeInfoInconsistencyError – When .runtime_info is not synced with the runtime.yaml file.

Returns:

The model id.

Return type:

str

Examples

Example for saving large model artifacts (>2GB):

>>> model.save(
...     bucket_uri="oci://my-bucket@my-tenancy/",
...     overwrite_existing_artifact=True,
...     remove_existing_artifact=True,
...     parallel_process_count=9,
... )

property schema_input#
property schema_output#
serialize_model(as_onnx: bool = False, force_overwrite: bool = False, X_sample: Dict | str | List | Image | None = None, **kwargs) None[source]#

Serialize and save HuggingFace model using model specific method.

Parameters:
  • as_onnx ((bool, optional). Defaults to False.) – If set as True, convert into ONNX model.

  • force_overwrite ((bool, optional). Defaults to False.) – If set as True, overwrite serialized model if exists.

  • X_sample (Union[Dict, str, List, PIL.Image.Image]. Defaults to None.) – A sample of input data that will be used to generate input schema and detect onnx_args.

Returns:

Nothing.

Return type:

None
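
Example (an illustrative sketch; serialize_model is normally invoked for you by prepare(), and huggingface_pipeline_model is assumed to be an already created HuggingFacePipelineModel instance):

>>> # Re-serialize the pipeline explicitly, overwriting any existing artifact
>>> huggingface_pipeline_model.serialize_model(force_overwrite=True)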

set_model_input_serializer(model_input_serializer: str | SERDE)#

Registers serializer used for serializing data passed in verify/predict.

Examples

>>> generic_model.set_model_input_serializer(GenericModel.model_input_serializer_type.CLOUDPICKLE)
>>> # Register serializer by passing the name of it.
>>> generic_model.set_model_input_serializer("cloudpickle")
>>> # Example of creating customized model input serializer and registering it.
>>> from ads.model import SERDE
>>> from ads.model.generic_model import GenericModel
>>> class MySERDE(SERDE):
...     def __init__(self):
...         super().__init__()
...     def serialize(self, data):
...         serialized_data = 1
...         return serialized_data
...     def deserialize(self, data):
...         deserialized_data = 2
...         return deserialized_data
>>> class Toy:
...     def predict(self, x):
...         return x ** 2
>>> generic_model = GenericModel(
...    estimator=Toy(),
...    artifact_dir=tempfile.mkdtemp(),
...    model_input_serializer=MySERDE()
... )
>>> # Or register the serializer after creating model instance.
>>> generic_model.set_model_input_serializer(MySERDE())
Parameters:

model_input_serializer ((str, or ads.model.SERDE)) – name of the serializer, or instance of SERDE.

set_model_save_serializer(model_save_serializer: str | SERDE)#

Registers serializer used for saving model.

Examples

>>> generic_model.set_model_save_serializer(GenericModel.model_save_serializer_type.CLOUDPICKLE)
>>> # Register serializer by passing the name of it.
>>> generic_model.set_model_save_serializer("cloudpickle")
>>> # Example of creating customized model save serializer and registering it.
>>> from ads.model import SERDE
>>> from ads.model.generic_model import GenericModel
>>> class MySERDE(SERDE):
...     def __init__(self):
...         super().__init__()
...     def serialize(self, data):
...         serialized_data = 1
...         return serialized_data
...     def deserialize(self, data):
...         deserialized_data = 2
...         return deserialized_data
>>> class Toy:
...     def predict(self, x):
...         return x ** 2
>>> generic_model = GenericModel(
...    estimator=Toy(),
...    artifact_dir=tempfile.mkdtemp(),
...    model_save_serializer=MySERDE()
... )
>>> # Or register the serializer after creating model instance.
>>> generic_model.set_model_save_serializer(MySERDE())
Parameters:

model_save_serializer ((ads.model.SERDE or str)) – name of the serializer or instance of SERDE.

summary_status() DataFrame#

A summary table of the current status.

Returns:

The summary table of the current status.

Return type:

pd.DataFrame
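
Example (illustrative; model is assumed to be any prepared framework model instance):

>>> status_df = model.summary_status()
>>> print(status_df)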

update(**kwargs) GenericModel#

Updates model metadata in the Model Catalog. Updates only metadata information. The model artifacts are immutable and cannot be updated.

Parameters:

kwargs

display_name: (str, optional). Defaults to None.

The name of the model.

description: (str, optional). Defaults to None.

The description of the model.

freeform_tags: (Dict(str, str), optional). Defaults to None.

Freeform tags for the model.

defined_tags: (Dict(str, dict(str, object)), optional). Defaults to None.

Defined tags for the model.

version_label: (str, optional). Defaults to None.

The model version label.

Additional kwargs arguments. Can be any attribute that oci.data_science.models.Model accepts.

Returns:

An instance of GenericModel (self).

Return type:

GenericModel

Raises:

ValueError – If the model is not saved to the Model Catalog.
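
Example (an illustrative sketch, assuming the model has already been saved to the Model Catalog; tag values are placeholders):

>>> model.update(
...     display_name="New display name",
...     description="Updated description",
...     freeform_tags={"team": "data-science"},
... )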

classmethod update_deployment(model_deployment_id: str | None = None, properties: ModelDeploymentProperties | dict | None = None, wait_for_completion: bool = True, max_wait_time: int = 1200, poll_interval: int = 10, **kwargs) ModelDeployment#

Updates a model deployment.

You can update model_deployment_configuration_details and change instance_shape and model_id when the model deployment is in the ACTIVE lifecycle state. The bandwidth_mbps or instance_count can only be updated while the model deployment is in the INACTIVE state. Changes to the bandwidth_mbps or instance_count will take effect the next time the ActivateModelDeployment action is invoked on the model deployment resource.

Examples

>>> # Update access log id, freeform tags and description for the model deployment
>>> model.update_deployment(
...     access_log={
...         "log_id": "<log_ocid>"
...     },
...     description="Description for Custom Model",
...     freeform_tags={"key": "value"},
... )
Parameters:
  • model_deployment_id (str.) – The model deployment OCID. Defaults to None. If the method is called at the instance level, then self.model_deployment.model_deployment_id will be used.

  • properties (ModelDeploymentProperties or dict) – The properties for updating the deployment.

  • wait_for_completion (bool) – Flag set for whether to wait for deployment to complete before proceeding. Defaults to True.

  • max_wait_time (int) – Maximum amount of time to wait in seconds (Defaults to 1200). Negative implies infinite wait time.

  • poll_interval (int) – Poll interval in seconds (Defaults to 10).

  • kwargs

    auth: (Dict, optional). Defaults to None.

The default authentication is set using the ads.set_auth API. If you need to override the default, use ads.common.auth.api_keys or ads.common.auth.resource_principal to create an appropriate authentication signer and the kwargs required to instantiate an IdentityClient object.

    display_name: (str)

    Model deployment display name

    description: (str)

    Model deployment description

    freeform_tags: (dict)

    Model deployment freeform tags

    defined_tags: (dict)

    Model deployment defined tags

Additional kwargs arguments. Can be any attribute that ads.model.deployment.ModelDeploymentCondaRuntime, ads.model.deployment.ModelDeploymentContainerRuntime, or ads.model.deployment.ModelDeploymentInfrastructure accepts.

Returns:

An instance of ModelDeployment class.

Return type:

ModelDeployment

update_summary_action(detail: str, action: str)#

Update the actions needed from the user in the summary table.

Parameters:
  • detail ((str)) – value of the detail in the details column of the summary status table. Used to locate which row to update.

  • action ((str)) – new action to be updated for the row specified by detail.

Return type:

None

update_summary_status(detail: str, status: str)#

Update the status in the summary table.

Parameters:
  • detail ((str)) – value of the detail in the details column of the summary status table. Used to locate which row to update.

  • status ((str)) – new status to be updated for the row specified by detail.

Return type:

None
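
Example (an illustrative sketch; the detail string must match an existing row of the summary status table, and the values below are placeholders):

>>> model.update_summary_status(detail="<detail-row-text>", status="Done")
>>> model.update_summary_action(detail="<detail-row-text>", action="No further action needed")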

upload_artifact(uri: str, auth: Dict | None = None, force_overwrite: bool | None = False, parallel_process_count: int = 9) None#

Uploads model artifacts to the provided uri. The artifacts will be zipped before uploading.

Parameters:
  • uri (str) –

    The destination location for the model artifacts, which can be a local path or OCI object storage URI. Examples:

    >>> upload_artifact(uri="/some/local/folder/")
    >>> upload_artifact(uri="oci://bucket@namespace/prefix/")
    

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using the ads.set_auth API. If you need to override the default, use ads.common.auth.api_keys or ads.common.auth.resource_principal to create an appropriate authentication signer and the kwargs required to instantiate an IdentityClient object.

  • force_overwrite (bool) – Overwrite the target directory if it exists.

  • parallel_process_count ((int, optional)) – The number of worker processes to use in parallel for uploading individual parts of a multipart upload.

verify(data: Any | None = None, reload_artifacts: bool = True, auto_serialize_data: bool = True, **kwargs) Dict[str, Any]#

Test if deployment works in local environment.

Examples

>>> uri = "https://github.com/pytorch/hub/raw/master/images/dog.jpg"
>>> prediction = model.verify(image=uri)['prediction']
>>> # examples on storage options
>>> prediction = model.verify(
...        image="oci://<bucket>@<tenancy>/myimage.png",
...        storage_options=ads.auth.default_signer()
... )['prediction']
Parameters:
  • data (Any) – Data used to test if deployment works in local environment.

  • reload_artifacts (bool. Defaults to True.) – Whether to reload artifacts or not.

  • auto_serialize_data (bool.) – Whether to auto serialize input data. Defaults to False for GenericModel, and True for other frameworks. data is required to be JSON serializable if auto_serialize_data=False. If auto_serialize_data is set to True, data will be serialized before sending to the model deployment endpoint.

  • kwargs

content_type: str, used to indicate the media type of the resource.

image: PIL.Image Object or uri for the image.

A valid string path for an image file can be a local path, http(s), oci, s3, or gs.

    storage_options: dict

    Passed to fsspec.open for a particular storage connection. Please see fsspec (https://filesystem-spec.readthedocs.io/en/latest/api.html#fsspec.open) for more details.

Returns:

A dictionary which contains prediction results.

Return type:

Dict

ads.model.framework.lightgbm_model module#

class ads.model.framework.lightgbm_model.LightGBMModel(estimator: Callable, artifact_dir: str | None = None, properties: ModelProperties | None = None, auth: Dict | None = None, model_save_serializer: SERDE | None = None, model_input_serializer: SERDE | None = None, **kwargs)[source]#

Bases: FrameworkSpecificModel

LightGBMModel class for estimators from the LightGBM framework.

algorithm#

The algorithm of the model.

Type:

str

artifact_dir#

Artifact directory to store the files needed for deployment.

Type:

str

auth#

Default authentication is set using the ads.set_auth API. To override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create an authentication signer to instantiate an IdentityClient object.

Type:

Dict

estimator#

A trained LightGBM estimator/model built with LightGBM.

Type:

Callable

framework#

“lightgbm”, the framework name of the model.

Type:

str

hyperparameter#

The hyperparameters of the estimator.

Type:

dict

metadata_custom#

The model custom metadata.

Type:

ModelCustomMetadata

metadata_provenance#

The model provenance metadata.

Type:

ModelProvenanceMetadata

metadata_taxonomy#

The model taxonomy metadata.

Type:

ModelTaxonomyMetadata

model_artifact#

This is built by calling prepare.

Type:

ModelArtifact

model_deployment#

A ModelDeployment instance.

Type:

ModelDeployment

model_file_name#

Name of the serialized model.

Type:

str

model_id#

The model ID.

Type:

str

properties#

ModelProperties object required to save and deploy model. For more details, check https://accelerated-data-science.readthedocs.io/en/latest/ads.model.html#module-ads.model.model_properties.

Type:

ModelProperties

runtime_info#

A RuntimeInfo instance.

Type:

RuntimeInfo

schema_input#

Schema describes the structure of the input data.

Type:

Schema

schema_output#

Schema describes the structure of the output data.

Type:

Schema

serialize#

Whether to serialize the model to pkl file by default. If False, you need to serialize the model manually, save it under artifact_dir and update the score.py manually.

Type:

bool

version#

The framework version of the model.

Type:

str

delete_deployment(...)#

Deletes the current model deployment.

deploy(..., \*\*kwargs)#

Deploys a model.

from_model_artifact(uri, model_file_name, artifact_dir, ..., \*\*kwargs)#

Loads model from the specified folder, or zip/tar archive.

from_model_catalog(model_id, model_file_name, artifact_dir, ..., \*\*kwargs)#

Loads model from model catalog.

introspect(...)#

Runs model introspection.

predict(data, ...)#

Returns prediction of input data run against the model deployment endpoint.

prepare(..., \*\*kwargs)#

Prepare and save the score.py, serialized model and runtime.yaml file.

reload(...)#

Reloads the model artifact files: score.py and the runtime.yaml.

save(..., \*\*kwargs)#

Saves model artifacts to the model catalog.

summary_status(...)#

Gets a summary table of the current status.

verify(data, ...)#

Tests if deployment works in local environment.

Examples

>>> import lightgbm as lgb
>>> import tempfile
>>> from sklearn.model_selection import train_test_split
>>> from sklearn.datasets import load_iris
>>> from ads.model.framework.lightgbm_model import LightGBMModel
>>> iris = load_iris()
>>> X, y = iris.data, iris.target
>>> X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)
>>> train = lgb.Dataset(X_train, label=y_train)
>>> param = {
...        'objective': 'multiclass', 'num_class': 3,
...        }
>>> lightgbm_estimator = lgb.train(param, train)
>>> lightgbm_model = LightGBMModel(estimator=lightgbm_estimator,
... artifact_dir=tempfile.mkdtemp())
>>> lightgbm_model.prepare(inference_conda_env="generalml_p37_cpu_v1", force_overwrite=True)
>>> lightgbm_model.reload()
>>> lightgbm_model.verify(X_test)
>>> lightgbm_model.save()
>>> model_deployment = lightgbm_model.deploy(wait_for_completion=False)
>>> lightgbm_model.predict(X_test)

Initiates a LightGBMModel instance. This class wraps the LightGBM model as an estimator. Its primary purpose is to hold the trained model and perform serialization.

Parameters:
  • estimator – Any model object generated by the LightGBM framework.

  • artifact_dir (str) – Directory for the generated artifact.

  • properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using the ads.set_auth API. If you need to override the default, use ads.common.auth.api_keys or ads.common.auth.resource_principal to create an appropriate authentication signer and the kwargs required to instantiate an IdentityClient object.

  • model_save_serializer ((SERDE or str, optional). Defaults to None.) – Instance of ads.model.SERDE. Used for serialize/deserialize model.

  • model_input_serializer ((SERDE, optional). Defaults to None.) – Instance of ads.model.SERDE. Used for serialize/deserialize data.

Returns:

LightGBMModel instance.

Return type:

LightGBMModel

Raises:

TypeError – If the input model is not a LightGBM model or not supported for serialization.

Examples

>>> import lightgbm as lgb
>>> import tempfile
>>> from sklearn.model_selection import train_test_split
>>> from sklearn.datasets import load_iris
>>> from ads.model.framework.lightgbm_model import LightGBMModel
>>> iris = load_iris()
>>> X, y = iris.data, iris.target
>>> X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)
>>> train = lgb.Dataset(X_train, label=y_train)
>>> param = {
... 'objective': 'multiclass', 'num_class': 3,
... }
>>> lightgbm_estimator = lgb.train(param, train)
>>> lightgbm_model = LightGBMModel(estimator=lightgbm_estimator, artifact_dir=tempfile.mkdtemp())
>>> lightgbm_model.prepare(inference_conda_env="generalml_p37_cpu_v1")
>>> lightgbm_model.verify(X_test)
>>> lightgbm_model.save()
>>> model_deployment = lightgbm_model.deploy()
>>> lightgbm_model.predict(X_test)
>>> lightgbm_model.delete_deployment()
classmethod delete(model_id: str | None = None, delete_associated_model_deployment: bool | None = False, delete_model_artifact: bool | None = False, artifact_dir: str | None = None, **kwargs: Dict) None#

Deletes a model from Model Catalog.

Parameters:
  • model_id ((str, optional). Defaults to None.) – The model OCID to be deleted. If the method is called at the instance level, then self.model_id will be used.

  • delete_associated_model_deployment ((bool, optional). Defaults to False.) – Whether associated model deployments need to be deleted or not.

  • delete_model_artifact ((bool, optional). Defaults to False.) – Whether associated model artifacts need to be deleted or not.

  • artifact_dir ((str, optional). Defaults to None.) – The local path to the model artifacts folder. If the method is called at the instance level, self.artifact_dir will be used by default.

Return type:

None

Raises:

ValueError – If model_id not provided.
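
Example (an illustrative sketch with a placeholder model OCID):

>>> LightGBMModel.delete(
...     model_id="ocid1.datasciencemodel.oc1..<unique_id>",
...     delete_associated_model_deployment=True,
... )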

delete_deployment(wait_for_completion: bool = True) None#

Deletes the current deployment.

Parameters:

wait_for_completion ((bool, optional). Defaults to True.) – Whether to wait till completion.

Return type:

None

Raises:

ValueError – If there is no deployment attached yet.

deploy(wait_for_completion: bool | None = True, display_name: str | None = None, description: str | None = None, deployment_instance_shape: str | None = None, deployment_instance_subnet_id: str | None = None, deployment_instance_count: int | None = None, deployment_bandwidth_mbps: int | None = None, deployment_log_group_id: str | None = None, deployment_access_log_id: str | None = None, deployment_predict_log_id: str | None = None, deployment_memory_in_gbs: float | None = None, deployment_ocpus: float | None = None, deployment_image: str | None = None, **kwargs: Dict) ModelDeployment#

Deploys a model. The model needs to be saved to the model catalog first. You can deploy the model on either conda or container runtime. The customized runtime allows you to bring your own service container. To deploy a model on container runtime, make sure to build the container and push it to OCIR. For more information, see https://docs.oracle.com/en-us/iaas/data-science/using/mod-dep-byoc.htm.

Example

>>> # This is an example to deploy model on container runtime
>>> model = GenericModel(estimator=estimator, artifact_dir=tempfile.mkdtemp())
>>> model.summary_status()
>>> model.prepare(
...     model_file_name="toy_model.pkl",
...     ignore_conda_error=True, # set ignore_conda_error=True for container runtime
...     force_overwrite=True
... )
>>> model.verify()
>>> model.save()
>>> model.deploy(
...     deployment_image="iad.ocir.io/<namespace>/<image>:<tag>",
...     entrypoint=["python", "/opt/ds/model/deployed_model/api.py"],
...     server_port=5000,
...     health_check_port=5000,
...     environment_variables={"key":"value"}
... )
Parameters:
  • wait_for_completion ((bool, optional). Defaults to True.) – Flag set for whether to wait for deployment to complete before proceeding.

  • display_name ((str, optional). Defaults to None.) – The name of the model. If a display_name is not provided in kwargs, a randomly generated easy to remember name with timestamp will be generated, like ‘strange-spider-2022-08-17-23:55.02’.

  • description ((str, optional). Defaults to None.) – The description of the model.

  • deployment_instance_shape ((str, optional). Defaults to VM.Standard2.1.) – The shape of the instance used for deployment.

  • deployment_instance_subnet_id ((str, optional). Defaults to None.) – The subnet id of the instance used for deployment.

  • deployment_instance_count ((int, optional). Defaults to 1.) – The number of instances used for deployment.

  • deployment_bandwidth_mbps ((int, optional). Defaults to 10.) – The bandwidth limit on the load balancer in Mbps.

  • deployment_memory_in_gbs ((float, optional). Defaults to None.) – Specifies the size of the memory of the model deployment instance in GBs.

  • deployment_ocpus ((float, optional). Defaults to None.) – Specifies the ocpus count of the model deployment instance.

  • deployment_log_group_id ((str, optional). Defaults to None.) – The oci logging group id. The access log and predict log share the same log group.

  • deployment_access_log_id ((str, optional). Defaults to None.) – The access log OCID for the access logs. https://docs.oracle.com/en-us/iaas/data-science/using/model_dep_using_logging.htm

  • deployment_predict_log_id ((str, optional). Defaults to None.) – The predict log OCID for the predict logs. https://docs.oracle.com/en-us/iaas/data-science/using/model_dep_using_logging.htm

  • deployment_image ((str, optional). Defaults to None.) – The OCIR path of docker container image. Required for deploying model on container runtime.

  • kwargs

    project_id: (str, optional).

    Project OCID. If not specified, the value will be taken from the environment variables.

    compartment_id(str, optional).

    Compartment OCID. If not specified, the value will be taken from the environment variables.

    max_wait_time(int, optional). Defaults to 1200 seconds.

    Maximum amount of time to wait in seconds. Negative implies infinite wait time.

    poll_interval(int, optional). Defaults to 10 seconds.

    Poll interval in seconds.

    freeform_tags: (Dict[str, str], optional). Defaults to None.

    Freeform tags of the model deployment.

    defined_tags: (Dict[str, dict[str, object]], optional). Defaults to None.

    Defined tags of the model deployment.

    image_digest: (str, optional). Defaults to None.

    The digest of docker container image.

    cmd: (List, optional). Defaults to empty.

    The command line arguments for running docker container image.

    entrypoint: (List, optional). Defaults to empty.

    The entrypoint for running docker container image.

    server_port: (int, optional). Defaults to 8080.

    The server port for docker container image.

    health_check_port: (int, optional). Defaults to 8080.

    The health check port for docker container image.

    deployment_mode: (str, optional). Defaults to HTTPS_ONLY.

    The deployment mode. Allowed values are: HTTPS_ONLY and STREAM_ONLY.

    input_stream_ids: (List, optional). Defaults to empty.

    The input stream ids. Required for STREAM_ONLY mode.

    output_stream_ids: (List, optional). Defaults to empty.

    The output stream ids. Required for STREAM_ONLY mode.

    environment_variables: (Dict, optional). Defaults to empty.

    The environment variables for model deployment.

    Also can be any keyword argument for initializing the ads.model.deployment.ModelDeploymentProperties. See ads.model.deployment.ModelDeploymentProperties() for details.

Returns:

The ModelDeployment instance.

Return type:

ModelDeployment

Raises:

ValueError – If model_id is not specified.

download_artifact(artifact_dir: str | None = None, auth: Dict | None = None, force_overwrite: bool | None = False, bucket_uri: str | None = None, remove_existing_artifact: bool | None = True, **kwargs) GenericModel#

Downloads model artifacts from the model catalog.

Parameters:
  • artifact_dir ((str, optional). Defaults to None.) – The artifact directory to store the files needed for deployment. Will be created if not exists.

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.

  • force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files or not.

  • bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for downloading large artifacts whose size is greater than 2GB. Example: oci://<bucket_name>@<namespace>/prefix/.

  • remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to object storage bucket need to be removed or not.

Returns:

An instance of GenericModel class.

Return type:

Self

Raises:

ValueError – If model_id is not available in the GenericModel object.
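
Example (an illustrative sketch, assuming the model object already references a saved catalog model; the local path is a placeholder):

>>> model.download_artifact(
...     artifact_dir="/tmp/downloaded_artifacts",
...     force_overwrite=True,
... )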

evaluate(X: _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | str | bytes | _NestedSequence[bool | int | float | complex | str | bytes], y: _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | str | bytes | _NestedSequence[bool | int | float | complex | str | bytes], y_pred: _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | str | bytes | _NestedSequence[bool | int | float | complex | str | bytes] | None = None, y_score: _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | str | bytes | _NestedSequence[bool | int | float | complex | str | bytes] | None = None, X_train: _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | str | bytes | _NestedSequence[bool | int | float | complex | str | bytes] | None = None, y_train: _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | str | bytes | _NestedSequence[bool | int | float | complex | str | bytes] | None = None, classes: List | None = None, positive_class: str | None = None, legend_labels: dict | None = None, perfect: bool = True, filename: str | None = None, use_case_type: str | None = None)#

Creates an ads evaluation report.

Parameters:
  • X (DataFrame-like) – The data used to make a prediction. Can be set to None if y_pred is given (and y_score for more thorough analysis).

  • y (array-like) – The true values corresponding to the input data

  • y_pred (array-like, optional) – The predictions from each model in the same order as the models

  • y_score (array-like, optional) – The predict_probas from each model in the same order as the models

  • X_train (DataFrame-like, optional) – The data used to train the model

  • y_train (array-like, optional) – The true values corresponding to the input training data

  • classes (List or None, optional) – A List of the possible labels for y, when evaluating a classification use case

  • positive_class (str or int, optional) – The class to report metrics for in a binary dataset. If the target classes are True or False, positive_class will be set to True by default. If the dataset is multiclass or multilabel, this will be ignored.

  • legend_labels (dict, optional) – List of legend labels. Defaults to None. If legend_labels not specified class names will be used for plots.

  • use_case_type (str, optional) – The type of problem this model is solving. This can be set during prepare(). Examples: “binary_classification”, “regression”, “multinomial_classification” Full list of supported types can be found here: ads.common.model_metadata.UseCaseType

  • filename (str, optional) – If filename is given, the html report will be saved to the location specified.

Examples

>>> import tempfile
>>> from ads.evaluations.evaluator import Evaluator
>>> from sklearn.tree import DecisionTreeClassifier
>>> from sklearn.datasets import make_classification
>>> from sklearn.model_selection import train_test_split
>>> from ads.model.framework.sklearn_model import SklearnModel
>>> from ads.common.model_metadata import UseCaseType
>>>
>>> X, y = make_classification(n_samples=1000)
>>> X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
>>> est = DecisionTreeClassifier().fit(X_train, y_train)
>>> model = SklearnModel(estimator=est, artifact_dir=tempfile.mkdtemp())
>>> model.prepare(
...     inference_conda_env="generalml_p38_cpu_v1",
...     training_conda_env="generalml_p38_cpu_v1",
...     X_sample=X_test,
...     y_sample=y_test,
...     use_case_type=UseCaseType.BINARY_CLASSIFICATION,
... )
>>> model.evaluate(X_test, y_test, filename="report.html")
classmethod from_id(ocid: str, model_file_name: str | None = None, artifact_dir: str | None = None, auth: Dict | None = None, force_overwrite: bool | None = False, properties: ModelProperties | Dict | None = None, bucket_uri: str | None = None, remove_existing_artifact: bool | None = True, ignore_conda_error: bool | None = False, download_artifact: bool | None = True, **kwargs) Self#

Loads model from model OCID or model deployment OCID.

Parameters:
  • ocid (str) – The model OCID or model deployment OCID.

  • model_file_name ((str, optional). Defaults to None.) – The name of the serialized model.

  • artifact_dir ((str, optional). Defaults to None.) – The artifact directory to store the files needed for deployment. Will be created if not exists.

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using the ads.set_auth API. If you need to override the default, use ads.common.auth.api_keys or ads.common.auth.resource_principal to create an appropriate authentication signer and the kwargs required to instantiate an IdentityClient object.

  • force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files or not.

  • properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.

  • bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for downloading large artifacts whose size is greater than 2GB. Example: oci://<bucket_name>@<namespace>/prefix/.

  • remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to object storage bucket need to be removed or not.

  • ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore error when collecting conda information.

  • download_artifact ((bool, optional). Defaults to True.) – Whether to download the model pickle or checkpoints

  • kwargs

    compartment_id(str, optional)

    Compartment OCID. If not specified, the value will be taken from the environment variables.

    timeout(int, optional). Defaults to 10 seconds.

    The connection timeout in seconds for the client.

Returns:

An instance of GenericModel class.

Return type:

Self
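
Example (an illustrative sketch with a placeholder OCID; from_id accepts either a model OCID or a model deployment OCID):

>>> import tempfile
>>> lightgbm_model = LightGBMModel.from_id(
...     "ocid1.datasciencemodel.oc1..<unique_id>",
...     artifact_dir=tempfile.mkdtemp(),
...     force_overwrite=True,
... )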

classmethod from_model_artifact(uri: str, model_file_name: str | None = None, artifact_dir: str | None = None, auth: Dict | None = None, force_overwrite: bool | None = False, properties: ModelProperties | None = None, ignore_conda_error: bool | None = False, **kwargs: dict) Self#

Loads model from a folder, or zip/tar archive.

Parameters:
  • uri (str) – The folder path, ZIP file path, or TAR file path. It could contain a serialized model (required) as well as any files needed for deployment, including the serialized model, runtime.yaml, score.py, etc. The content of the folder will be copied to the artifact_dir folder.

  • model_file_name ((str, optional). Defaults to None.) – The serialized model file name. Will be extracted from artifacts if not provided.

  • artifact_dir ((str, optional). Defaults to None.) – The artifact directory to store the files needed for deployment. Will be created if not exists.

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using the ads.set_auth API. If you need to override the default, use ads.common.auth.api_keys or ads.common.auth.resource_principal to create an appropriate authentication signer and the kwargs required to instantiate an IdentityClient object.

  • force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files or not.

  • properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.

  • ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore error when collecting conda information.

Returns:

An instance of GenericModel class.

Return type:

Self

Raises:

ValueError – If model_file_name not provided.
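
Example (an illustrative sketch; the folder path and file name are placeholders for a previously prepared artifact):

>>> lightgbm_model = LightGBMModel.from_model_artifact(
...     uri="/path/to/existing/artifact_dir",
...     model_file_name="model.joblib",
...     artifact_dir=tempfile.mkdtemp(),
... )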

classmethod from_model_catalog(model_id: str, model_file_name: str | None = None, artifact_dir: str | None = None, auth: Dict | None = None, force_overwrite: bool | None = False, properties: ModelProperties | Dict | None = None, bucket_uri: str | None = None, remove_existing_artifact: bool | None = True, ignore_conda_error: bool | None = False, download_artifact: bool | None = True, **kwargs) Self#

Loads model from model catalog.

Parameters:
  • model_id (str) – The model OCID.

  • model_file_name ((str, optional). Defaults to None.) – The name of the serialized model.

  • artifact_dir ((str, optional). Defaults to None.) – The artifact directory to store the files needed for deployment. Will be created if not exists.

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using the ads.set_auth API. If you need to override the default, use ads.common.auth.api_keys or ads.common.auth.resource_principal to create an appropriate authentication signer and the kwargs required to instantiate an IdentityClient object.

  • force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files or not.

  • properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.

  • bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for downloading large artifacts whose size is greater than 2GB. Example: oci://<bucket_name>@<namespace>/prefix/.

  • remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to object storage bucket need to be removed or not.

  • ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore error when collecting conda information.

  • download_artifact ((bool, optional). Defaults to True.) – Whether to download the model pickle or checkpoints

  • kwargs

    compartment_id(str, optional)

    Compartment OCID. If not specified, the value will be taken from the environment variables.

    timeout(int, optional). Defaults to 10 seconds.

    The connection timeout in seconds for the client.

    region: (str, optional). Defaults to None.

    The destination Object Storage bucket region. By default the value will be extracted from the OCI_REGION_METADATA environment variables.

Returns:

An instance of GenericModel class.

Return type:

Self
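
Example (an illustrative sketch with placeholder values):

>>> lightgbm_model = LightGBMModel.from_model_catalog(
...     model_id="ocid1.datasciencemodel.oc1..<unique_id>",
...     artifact_dir=tempfile.mkdtemp(),
...     force_overwrite=True,
... )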

classmethod from_model_deployment(model_deployment_id: str, model_file_name: str | None = None, artifact_dir: str | None = None, auth: Dict | None = None, force_overwrite: bool | None = False, properties: ModelProperties | Dict | None = None, bucket_uri: str | None = None, remove_existing_artifact: bool | None = True, ignore_conda_error: bool | None = False, download_artifact: bool | None = True, **kwargs) Self#

Loads model from model deployment.

Parameters:
  • model_deployment_id (str) – The model deployment OCID.

  • model_file_name ((str, optional). Defaults to None.) – The name of the serialized model.

  • artifact_dir ((str, optional). Defaults to None.) – The artifact directory to store the files needed for deployment. Will be created if not exists.

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using the ads.set_auth API. If you need to override the default, use ads.common.auth.api_keys or ads.common.auth.resource_principal to create an appropriate authentication signer and the kwargs required to instantiate an IdentityClient object.

  • force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files or not.

  • properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.

  • bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for downloading large artifacts whose size is greater than 2GB. Example: oci://<bucket_name>@<namespace>/prefix/.

  • remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to object storage bucket need to be removed or not.

  • ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore error when collecting conda information.

  • download_artifact ((bool, optional). Defaults to True.) – Whether to download the model pickle or checkpoints

  • kwargs

    compartment_id(str, optional)

    Compartment OCID. If not specified, the value will be taken from the environment variables.

    timeout(int, optional). Defaults to 10 seconds.

    The connection timeout in seconds for the client.

    region: (str, optional). Defaults to None.

    The destination Object Storage bucket region. By default the value will be extracted from the OCI_REGION_METADATA environment variables.

Returns:

An instance of GenericModel class.

Return type:

Self
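
Example (an illustrative sketch with a placeholder deployment OCID):

>>> lightgbm_model = LightGBMModel.from_model_deployment(
...     model_deployment_id="ocid1.datasciencemodeldeployment.oc1..<unique_id>",
...     artifact_dir=tempfile.mkdtemp(),
... )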

get_data_serializer()#

Gets data serializer.

Returns:

object

Return type:

ads.model.Serializer object.

get_model_serializer()#

Gets model serializer.

introspect() DataFrame#

Conducts introspection.

Returns:

A pandas DataFrame which contains the introspection results.

Return type:

pandas.DataFrame
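
Example (illustrative; typically run after prepare() and before save()):

>>> results = model.introspect()
>>> print(results)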

property metadata_custom#
property metadata_provenance#
property metadata_taxonomy#
property model_deployment_id#
property model_id#
model_input_serializer_type#

alias of ModelInputSerializerType

model_save_serializer_type#

alias of LightGBMModelSerializerType

populate_metadata(use_case_type: str | None = None, data_sample: ADSData | None = None, X_sample: list | tuple | DataFrame | Series | ndarray | None = None, y_sample: list | tuple | DataFrame | Series | ndarray | None = None, training_script_path: str | None = None, training_id: str | None = None, ignore_pending_changes: bool = True, max_col_num: int = 2000, ignore_conda_error: bool = False, **kwargs)#

Populates input schema and output schema. If the schema exceeds the limit of 32kb, save as json files to the artifact directory.

Parameters:
  • use_case_type ((str, optional). Defaults to None.) – The use case type of the model.

  • data_sample ((ADSData, optional). Defaults to None.) – A sample of the data that will be used to generate input_schema and output_schema.

  • X_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of input data that will be used to generate input schema.

  • y_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of output data that will be used to generate output schema.

  • training_script_path (str. Defaults to None.) – Training script path.

  • training_id ((str, optional). Defaults to None.) – The training model OCID.

  • ignore_pending_changes (bool. Defaults to True.) – Whether to ignore the pending changes in git.

  • max_col_num ((int, optional). Defaults to utils.DATA_SCHEMA_MAX_COL_NUM.) – The maximum number of columns allowed in auto generated schema.

Returns:

Nothing.

Return type:

None

populate_schema(data_sample: ADSData | None = None, X_sample: List | Tuple | DataFrame | Series | ndarray | None = None, y_sample: List | Tuple | DataFrame | Series | ndarray | None = None, max_col_num: int = 2000, **kwargs)#

Populate input and output schemas. If the schema exceeds the limit of 32kb, save as json files to the artifact dir.

Parameters:
  • data_sample (ADSData) – A sample of the data that will be used to generate input_schema and output_schema.

  • X_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]) – A sample of input data that will be used to generate the input schema.

  • y_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]) – A sample of output data that will be used to generate the output schema.

  • max_col_num ((int, optional). Defaults to utils.DATA_SCHEMA_MAX_COL_NUM.) – The maximum number of columns allowed in auto generated schema.
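
Example (an illustrative sketch, reusing X_test and y_test from the class-level example above):

>>> lightgbm_model.populate_schema(X_sample=X_test, y_sample=y_test)
>>> lightgbm_model.schema_input  # auto-generated input schema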

predict(data: Any | None = None, auto_serialize_data: bool = True, **kwargs) Dict[str, Any]#

Returns prediction of input data run against the model deployment endpoint.

Examples

>>> uri = "https://github.com/pytorch/hub/raw/master/images/dog.jpg"
>>> prediction = model.predict(image=uri)['prediction']
>>> # examples on storage options
>>> prediction = model.predict(
...        image="oci://<bucket>@<tenancy>/myimage.png",
...        storage_options=ads.auth.default_signer()
... )['prediction']
Parameters:
  • data (Any) – Data for the prediction for onnx models, for local serialization method, data can be the data types that each framework support.

  • auto_serialize_data (bool.) – Whether to auto serialize input data. Defaults to False for GenericModel, and True for other frameworks. data is required to be JSON serializable if auto_serialize_data=False. If auto_serialize_data is set to True, data will be serialized before sending to the model deployment endpoint.

  • kwargs

content_type: str, used to indicate the media type of the resource.

image: PIL.Image Object or uri for the image.

A valid string path for an image file can be a local path, http(s), oci, s3, or gs.

    storage_options: dict

    Passed to fsspec.open for a particular storage connection. Please see fsspec (https://filesystem-spec.readthedocs.io/en/latest/api.html#fsspec.open) for more details.

Returns:

Dictionary with the predicted values.

Return type:

Dict[str, Any]

Raises:
prepare(inference_conda_env: str | None = None, inference_python_version: str | None = None, training_conda_env: str | None = None, training_python_version: str | None = None, model_file_name: str | None = None, as_onnx: bool = False, initial_types: List[Tuple] | None = None, force_overwrite: bool = False, namespace: str = 'id19sfcrra6z', use_case_type: str | None = None, X_sample: list | tuple | DataFrame | Series | ndarray | None = None, y_sample: list | tuple | DataFrame | Series | ndarray | None = None, training_script_path: str | None = None, training_id: str | None = None, ignore_pending_changes: bool = True, max_col_num: int = 2000, ignore_conda_error: bool = False, score_py_uri: str | None = None, **kwargs: Dict) GenericModel#

Prepare and save the score.py, serialized model and runtime.yaml file.

Parameters:
  • inference_conda_env ((str, optional). Defaults to None.) – Can be either slug or object storage path of the conda pack. You can only pass in slugs if the conda pack is a service pack.

  • inference_python_version ((str, optional). Defaults to None.) – Python version which will be used in deployment.

  • training_conda_env ((str, optional). Defaults to None.) – Can be either slug or object storage path of the conda pack. You can only pass in slugs if the conda pack is a service pack. If training_conda_env is not provided, it defaults to the value of inference_conda_env.

  • training_python_version ((str, optional). Defaults to None.) – Python version used during training.

  • model_file_name ((str, optional). Defaults to None.) – Name of the serialized model. Will be auto generated if not provided.

  • as_onnx ((bool, optional). Defaults to False.) – Whether to serialize as onnx model.

  • initial_types ((list[Tuple], optional).) – Defaults to None. Only used for SklearnModel, LightGBMModel and XGBoostModel. Each element is a tuple of a variable name and a type. Check this link http://onnx.ai/sklearn-onnx/api_summary.html#id2 for more explanation and examples for initial_types.

  • force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files.

  • namespace ((str, optional).) – Namespace of region. This is used for identifying which region the service pack is from when you pass a slug to inference_conda_env and training_conda_env.

  • use_case_type (str) – The use case type of the model. Use it through the UseCaseType class or a string provided in UseCaseType. For example, use_case_type=UseCaseType.BINARY_CLASSIFICATION or use_case_type="binary_classification". Check the UseCaseType class to see all supported types.

  • X_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of input data that will be used to generate input schema.

  • y_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of output data that will be used to generate output schema.

  • training_script_path (str. Defaults to None.) – Training script path.

  • training_id ((str, optional). Defaults to value from environment variables.) – The training OCID for model. Can be notebook session or job OCID.

  • ignore_pending_changes (bool. Defaults to True.) – Whether to ignore the pending changes in git.

  • max_col_num ((int, optional). Defaults to utils.DATA_SCHEMA_MAX_COL_NUM.) – Do not generate the input schema if the input has more than this number of features(columns).

  • ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore error when collecting conda information.

  • score_py_uri ((str, optional). Defaults to None.) – The URI of the customized score.py, which can be a local path or an OCI object storage URI. When this attribute is provided, the score.py will not be auto-generated, and the provided score.py will be added into artifact_dir.

  • kwargs

    impute_values: (dict, optional).

The dictionary where the key is the column index (or column name, for a pandas DataFrame) and the value is the impute value for the corresponding column.

Raises:
  • FileExistsError – If files already exist but force_overwrite is False.

  • ValueError – If inference_python_version is not provided, but also cannot be found through manifest file.

Returns:

An instance of GenericModel class.

Return type:

GenericModel
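
Example (an illustrative sketch of preparing the model for ONNX serialization, reusing X_test from the class-level example; the conda pack slug is the same service pack used elsewhere in this document):

>>> lightgbm_model.prepare(
...     inference_conda_env="generalml_p38_cpu_v1",
...     as_onnx=True,
...     X_sample=X_test,
...     force_overwrite=True,
... )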

prepare_save_deploy(inference_conda_env: str | None = None, inference_python_version: str | None = None, training_conda_env: str | None = None, training_python_version: str | None = None, model_file_name: str | None = None, as_onnx: bool = False, initial_types: List[Tuple] | None = None, force_overwrite: bool = False, namespace: str = 'id19sfcrra6z', use_case_type: str | None = None, X_sample: list | tuple | DataFrame | Series | ndarray | None = None, y_sample: list | tuple | DataFrame | Series | ndarray | None = None, training_script_path: str | None = None, training_id: str | None = None, ignore_pending_changes: bool = True, max_col_num: int = 2000, ignore_conda_error: bool = False, model_display_name: str | None = None, model_description: str | None = None, model_freeform_tags: dict | None = None, model_defined_tags: dict | None = None, ignore_introspection: bool | None = False, wait_for_completion: bool | None = True, deployment_display_name: str | None = None, deployment_description: str | None = None, deployment_instance_shape: str | None = None, deployment_instance_subnet_id: str | None = None, deployment_instance_count: int | None = None, deployment_bandwidth_mbps: int | None = None, deployment_log_group_id: str | None = None, deployment_access_log_id: str | None = None, deployment_predict_log_id: str | None = None, deployment_memory_in_gbs: float | None = None, deployment_ocpus: float | None = None, deployment_image: str | None = None, bucket_uri: str | None = None, overwrite_existing_artifact: bool | None = True, remove_existing_artifact: bool | None = True, model_version_set: str | ModelVersionSet | None = None, version_label: str | None = None, model_by_reference: bool | None = False, **kwargs: Dict) ModelDeployment#

Shortcut for prepare, save and deploy steps.

Parameters:
  • inference_conda_env ((str, optional). Defaults to None.) – Can be either slug or object storage path of the conda pack. You can only pass in slugs if the conda pack is a service pack.

  • inference_python_version ((str, optional). Defaults to None.) – Python version which will be used in deployment.

  • training_conda_env ((str, optional). Defaults to None.) – Can be either slug or object storage path of the conda pack. You can only pass in slugs if the conda pack is a service pack. If training_conda_env is not provided, it defaults to the value of inference_conda_env.

  • training_python_version ((str, optional). Defaults to None.) – Python version used during training.

  • model_file_name ((str, optional). Defaults to None.) – Name of the serialized model.

  • as_onnx ((bool, optional). Defaults to False.) – Whether to serialize as onnx model.

  • initial_types ((list[Tuple], optional).) – Defaults to None. Only used for SklearnModel, LightGBMModel and XGBoostModel. Each element is a tuple of a variable name and a type. Check this link http://onnx.ai/sklearn-onnx/api_summary.html#id2 for more explanation and examples for initial_types.

  • force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files.

  • namespace ((str, optional).) – Namespace of region. This is used for identifying which region the service pack is from when you pass a slug to inference_conda_env and training_conda_env.

  • use_case_type (str) – The use case type of the model. Use it through the UseCaseType class or a string provided in UseCaseType. For example, use_case_type=UseCaseType.BINARY_CLASSIFICATION or use_case_type="binary_classification". Check the UseCaseType class to see all supported types.

  • X_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of input data that will be used to generate input schema.

  • y_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of output data that will be used to generate output schema.

  • training_script_path (str. Defaults to None.) – Training script path.

  • training_id ((str, optional). Defaults to value from environment variables.) – The training OCID for model. Can be notebook session or job OCID.

  • ignore_pending_changes (bool. Defaults to True.) – Whether to ignore the pending changes in git.

  • max_col_num ((int, optional). Defaults to utils.DATA_SCHEMA_MAX_COL_NUM.) – Do not generate the input schema if the input has more than this number of features(columns).

  • ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore error when collecting conda information.

  • model_display_name ((str, optional). Defaults to None.) – The name of the model. If a model_display_name is not provided in kwargs, a randomly generated easy to remember name with timestamp will be generated, like ‘strange-spider-2022-08-17-23:55.02’.

  • model_description ((str, optional). Defaults to None.) – The description of the model.

  • model_freeform_tags (Dict(str, str), Defaults to None.) – Freeform tags for the model.

  • model_defined_tags ((Dict(str, dict(str, object)), optional). Defaults to None.) – Defined tags for the model.

  • ignore_introspection ((bool, optional). Defaults to None.) – Determine whether to ignore the result of model introspection or not. If set to True, the save will ignore all model introspection errors.

  • wait_for_completion ((bool, optional). Defaults to True.) – Flag set for whether to wait for deployment to complete before proceeding.

  • deployment_display_name ((str, optional). Defaults to None.) – The name of the model deployment. If a deployment_display_name is not provided in kwargs, a randomly generated easy to remember name with timestamp will be generated, like ‘strange-spider-2022-08-17-23:55.02’.

  • description ((str, optional). Defaults to None.) – The description of the model.

  • deployment_instance_shape ((str, optional). Defaults to VM.Standard2.1.) – The shape of the instance used for deployment.

  • deployment_instance_subnet_id ((str, optional). Defaults to None.) – The subnet id of the instance used for deployment.

  • deployment_instance_count ((int, optional). Defaults to 1.) – The number of instances used for deployment.

  • deployment_bandwidth_mbps ((int, optional). Defaults to 10.) – The bandwidth limit on the load balancer in Mbps.

  • deployment_log_group_id ((str, optional). Defaults to None.) – The oci logging group id. The access log and predict log share the same log group.

  • deployment_access_log_id ((str, optional). Defaults to None.) – The access log OCID for the access logs. https://docs.oracle.com/en-us/iaas/data-science/using/model_dep_using_logging.htm

  • deployment_predict_log_id ((str, optional). Defaults to None.) – The predict log OCID for the predict logs. https://docs.oracle.com/en-us/iaas/data-science/using/model_dep_using_logging.htm

  • deployment_memory_in_gbs ((float, optional). Defaults to None.) – Specifies the size of the memory of the model deployment instance in GBs.

  • deployment_ocpus ((float, optional). Defaults to None.) – Specifies the ocpus count of the model deployment instance.

  • deployment_image ((str, optional). Defaults to None.) – The OCIR path of docker container image. Required for deploying model on container runtime.

  • bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for downloading large artifacts with size is greater than 2GB. Example: oci://<bucket_name>@<namespace>/prefix/.

  • overwrite_existing_artifact ((bool, optional). Defaults to True.) – Overwrite target bucket artifact if exists.

  • remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to object storage bucket need to be removed or not.

  • model_version_set ((Union[str, ModelVersionSet], optional). Defaults to None.) – The Model version set OCID, or name, or ModelVersionSet instance.

  • version_label ((str, optional). Defaults to None.) – The model version label.

  • model_by_reference ((bool, optional)) – Whether model artifact is made available to Model Store by reference.

  • kwargs

    impute_values: (dict, optional).

The dictionary where the key is the column index (or column name, for a pandas DataFrame) and the value is the impute value for the corresponding column.

    project_id: (str, optional).

    Project OCID. If not specified, the value will be taken either from the environment variables or model properties.

    compartment_id(str, optional).

    Compartment OCID. If not specified, the value will be taken either from the environment variables or model properties.

    image_digest: (str, optional). Defaults to None.

    The digest of docker container image.

    cmd: (List, optional). Defaults to empty.

    The command line arguments for running docker container image.

    entrypoint: (List, optional). Defaults to empty.

    The entrypoint for running docker container image.

    server_port: (int, optional). Defaults to 8080.

    The server port for docker container image.

    health_check_port: (int, optional). Defaults to 8080.

    The health check port for docker container image.

    deployment_mode: (str, optional). Defaults to HTTPS_ONLY.

    The deployment mode. Allowed values are: HTTPS_ONLY and STREAM_ONLY.

    input_stream_ids: (List, optional). Defaults to empty.

    The input stream ids. Required for STREAM_ONLY mode.

    output_stream_ids: (List, optional). Defaults to empty.

    The output stream ids. Required for STREAM_ONLY mode.

    environment_variables: (Dict, optional). Defaults to empty.

    The environment variables for model deployment.

    timeout: (int, optional). Defaults to 10 seconds.

    The connection timeout in seconds for the client.

    max_wait_time(int, optional). Defaults to 1200 seconds.

    Maximum amount of time to wait in seconds. Negative implies infinite wait time.

    poll_interval(int, optional). Defaults to 10 seconds.

    Poll interval in seconds.

    freeform_tags: (Dict[str, str], optional). Defaults to None.

    Freeform tags of the model deployment.

    defined_tags: (Dict[str, dict[str, object]], optional). Defaults to None.

    Defined tags of the model deployment.

    region: (str, optional). Defaults to None.

    The destination Object Storage bucket region. By default the value will be extracted from the OCI_REGION_METADATA environment variables.

    Also can be any keyword argument for initializing the ads.model.deployment.ModelDeploymentProperties. See ads.model.deployment.ModelDeploymentProperties() for details.

Returns:

The ModelDeployment instance.

Return type:

ModelDeployment

Raises:
  • FileExistsError – If files already exist but force_overwrite is False.

  • ValueError – If inference_python_version is not provided, but also cannot be found through manifest file.
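
Example (an illustrative sketch that collapses the prepare, save, and deploy steps into one call; all values are placeholders):

>>> model_deployment = lightgbm_model.prepare_save_deploy(
...     inference_conda_env="generalml_p38_cpu_v1",
...     model_display_name="lightgbm-iris",
...     deployment_display_name="lightgbm-iris-deployment",
...     deployment_instance_shape="VM.Standard2.1",
... )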

reload() GenericModel#

Reloads the model artifact files: score.py and the runtime.yaml.

Returns:

An instance of GenericModel class.

Return type:

GenericModel

reload_runtime_info() None#

Reloads the model artifact file: runtime.yaml.

Returns:

Nothing.

Return type:

None

restart_deployment(max_wait_time: int = 1200, poll_interval: int = 10) ModelDeployment#

Restarts the current deployment.

Parameters:
  • max_wait_time ((int, optional). Defaults to 1200 seconds.) – Maximum amount of time to wait for activate or deactivate in seconds. The total amount of time to wait for restart deployment is twice this value. Negative implies infinite wait time.

  • poll_interval ((int, optional). Defaults to 10 seconds.) – Poll interval in seconds.

Returns:

The ModelDeployment instance.

Return type:

ModelDeployment
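
A minimal illustrative sketch, assuming the model already has an active deployment attached; the wait and poll values below are placeholders, not defaults:

>>> model.restart_deployment(
...     max_wait_time=600,
...     poll_interval=10,
... )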

save(bucket_uri: str | None = None, defined_tags: dict | None = None, description: str | None = None, display_name: str | None = None, featurestore_dataset=None, freeform_tags: dict | None = None, ignore_introspection: bool | None = False, model_version_set: str | ModelVersionSet | None = None, overwrite_existing_artifact: bool | None = True, parallel_process_count: int = 9, remove_existing_artifact: bool | None = True, reload: bool | None = True, version_label: str | None = None, model_by_reference: bool | None = False, **kwargs) str#

Saves model artifacts to the model catalog.

Parameters:
  • display_name ((str, optional). Defaults to None.) – The name of the model. If a display_name is not provided in kwargs, a randomly generated, easy to remember name with a timestamp will be generated, like ‘strange-spider-2022-08-17-23:55.02’.

  • description ((str, optional). Defaults to None.) – The description of the model.

  • freeform_tags (Dict(str, str), Defaults to None.) – Freeform tags for the model.

  • defined_tags ((Dict(str, dict(str, object)), optional). Defaults to None.) – Defined tags for the model.

  • ignore_introspection ((bool, optional). Defaults to False.) – Determines whether to ignore the result of model introspection or not. If set to True, the save will ignore all model introspection errors.

  • bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for uploading large artifacts whose size is greater than 2GB. Example: oci://<bucket_name>@<namespace>/prefix/.

  • overwrite_existing_artifact ((bool, optional). Defaults to True.) – Overwrite target bucket artifact if exists.

  • remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to object storage bucket need to be removed or not.

  • model_version_set ((Union[str, ModelVersionSet], optional). Defaults to None.) – The model version set OCID, or model version set name, or ModelVersionSet instance.

  • version_label ((str, optional). Defaults to None.) – The model version label.

  • featurestore_dataset ((Dataset, optional).) – The feature store dataset

  • parallel_process_count ((int, optional)) – The number of worker processes to use in parallel for uploading individual parts of a multipart upload.

  • reload ((bool, optional)) – Whether to reload to check if load_model() works in score.py. Defaults to True.

  • model_by_reference ((bool, optional)) – Whether model artifact is made available to Model Store by reference.

  • kwargs

    project_id: (str, optional).

    Project OCID. If not specified, the value will be taken either from the environment variables or model properties.

    compartment_id: (str, optional).

    Compartment OCID. If not specified, the value will be taken either from the environment variables or model properties.

    region: (str, optional). Defaults to None.

    The destination Object Storage bucket region. By default the value will be extracted from the OCI_REGION_METADATA environment variables.

    timeout: (int, optional). Defaults to 10 seconds.

    The connection timeout in seconds for the client.

    Also can be any attribute that oci.data_science.models.Model accepts.

Raises:

RuntimeInfoInconsistencyError – When .runtime_info is not in sync with the runtime.yaml file.

Returns:

The model id.

Return type:

str

Examples

Example for saving large model artifacts (>2GB):

>>> model.save(
...     bucket_uri="oci://my-bucket@my-tenancy/",
...     overwrite_existing_artifact=True,
...     remove_existing_artifact=True,
...     parallel_process_count=9,
... )

property schema_input#
property schema_output#
serialize_model(as_onnx: bool = False, initial_types: List[Tuple] | None = None, force_overwrite: bool = False, X_sample: Dict | str | List | Tuple | ndarray | Series | DataFrame | None = None, **kwargs: Dict)[source]#

Serialize and save the HuggingFace pipeline model.

Parameters:
  • as_onnx ((boolean, optional). Defaults to False.) – If set as True, provide initial_types or X_sample to convert into ONNX.

  • initial_types ((List[Tuple], optional). Defaults to None.) – Each element is a tuple of a variable name and a type.

  • force_overwrite ((boolean, optional). Defaults to False.) – If set as True, overwrite serialized model if exists.

  • X_sample (Union[Dict, str, List, np.ndarray, pd.core.series.Series, pd.core.frame.DataFrame,]. Defaults to None.) – Contains model inputs such that model(X_sample) is a valid invocation of the model. Used to generate initial_types.

Returns:

Nothing.

Return type:

None

set_model_input_serializer(model_input_serializer: str | SERDE)#

Registers serializer used for serializing data passed in verify/predict.

Examples

>>> generic_model.set_model_input_serializer(GenericModel.model_input_serializer_type.CLOUDPICKLE)
>>> # Register serializer by passing the name of it.
>>> generic_model.set_model_input_serializer("cloudpickle")
>>> # Example of creating customized model input serializer and registering it.
>>> from ads.model import SERDE
>>> from ads.model.generic_model import GenericModel
>>> class MySERDE(SERDE):
...     def __init__(self):
...         super().__init__()
...     def serialize(self, data):
...         serialized_data = 1
...         return serialized_data
...     def deserialize(self, data):
...         deserialized_data = 2
...         return deserialized_data
>>> class Toy:
...     def predict(self, x):
...         return x ** 2
>>> generic_model = GenericModel(
...    estimator=Toy(),
...    artifact_dir=tempfile.mkdtemp(),
...    model_input_serializer=MySERDE()
... )
>>> # Or register the serializer after creating model instance.
>>> generic_model.set_model_input_serializer(MySERDE())
Parameters:

model_input_serializer ((str, or ads.model.SERDE)) – name of the serializer, or instance of SERDE.

set_model_save_serializer(model_save_serializer: str | SERDE)#

Registers serializer used for saving model.

Examples

>>> generic_model.set_model_save_serializer(GenericModel.model_save_serializer_type.CLOUDPICKLE)
>>> # Register serializer by passing the name of it.
>>> generic_model.set_model_save_serializer("cloudpickle")
>>> # Example of creating customized model save serializer and registering it.
>>> from ads.model import SERDE
>>> from ads.model.generic_model import GenericModel
>>> class MySERDE(SERDE):
...     def __init__(self):
...         super().__init__()
...     def serialize(self, data):
...         serialized_data = 1
...         return serialized_data
...     def deserialize(self, data):
...         deserialized_data = 2
...         return deserialized_data
>>> class Toy:
...     def predict(self, x):
...         return x ** 2
>>> generic_model = GenericModel(
...    estimator=Toy(),
...    artifact_dir=tempfile.mkdtemp(),
...    model_save_serializer=MySERDE()
... )
>>> # Or register the serializer after creating model instance.
>>> generic_model.set_model_save_serializer(MySERDE())
Parameters:

model_save_serializer ((ads.model.SERDE or str)) – name of the serializer or instance of SERDE.

summary_status() DataFrame#

A summary table of the current status.

Returns:

The summary table of the current status.

Return type:

pd.DataFrame

update(**kwargs) GenericModel#

Updates model metadata in the Model Catalog. Updates only metadata information. The model artifacts are immutable and cannot be updated.

Parameters:

kwargs

display_name: (str, optional). Defaults to None.

The name of the model.

description: (str, optional). Defaults to None.

The description of the model.

freeform_tags: (Dict(str, str), optional). Defaults to None.

Freeform tags for the model.

defined_tags: (Dict(str, dict(str, object)), optional). Defaults to None.

Defined tags for the model.

version_label: (str, optional). Defaults to None.

The model version label.

Additional kwargs arguments. Can be any attribute that oci.data_science.models.Model accepts.

Returns:

An instance of GenericModel (self).

Return type:

GenericModel

Raises:

ValueError – If the model has not been saved to the Model Catalog.
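
For illustration, a hedged sketch of refreshing catalog metadata on a model that has already been saved; the display name, description and tags below are placeholders:

>>> model.update(
...     display_name="my-updated-model",
...     description="Updated description",
...     freeform_tags={"project": "demo"},
... )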

classmethod update_deployment(model_deployment_id: str | None = None, properties: ModelDeploymentProperties | dict | None = None, wait_for_completion: bool = True, max_wait_time: int = 1200, poll_interval: int = 10, **kwargs) ModelDeployment#

Updates a model deployment.

You can update model_deployment_configuration_details and change instance_shape and model_id when the model deployment is in the ACTIVE lifecycle state. The bandwidth_mbps or instance_count can only be updated while the model deployment is in the INACTIVE state. Changes to the bandwidth_mbps or instance_count will take effect the next time the ActivateModelDeployment action is invoked on the model deployment resource.

Examples

>>> # Update access log id, freeform tags and description for the model deployment
>>> model.update_deployment(
...     access_log={
...         "log_id": "<log_ocid>"
...     },
...     description="Description for Custom Model",
...     freeform_tags={"key": "value"},
... )
Parameters:
  • model_deployment_id (str.) – The model deployment OCID. Defaults to None. If the method is called on the instance level, then self.model_deployment.model_deployment_id will be used.

  • properties (ModelDeploymentProperties or dict) – The properties for updating the deployment.

  • wait_for_completion (bool) – Flag set for whether to wait for deployment to complete before proceeding. Defaults to True.

  • max_wait_time (int) – Maximum amount of time to wait in seconds (Defaults to 1200). Negative implies infinite wait time.

  • poll_interval (int) – Poll interval in seconds (Defaults to 10).

  • kwargs

    auth: (Dict, optional). Defaults to None.

    The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.

    display_name: (str)

    Model deployment display name

    description: (str)

    Model deployment description

    freeform_tags: (dict)

    Model deployment freeform tags

    defined_tags: (dict)

    Model deployment defined tags

    Additional kwargs arguments. Can be any attribute that ads.model.deployment.ModelDeploymentCondaRuntime, ads.model.deployment.ModelDeploymentContainerRuntime and ads.model.deployment.ModelDeploymentInfrastructure accept.

Returns:

An instance of ModelDeployment class.

Return type:

ModelDeployment

update_summary_action(detail: str, action: str)#

Update the actions needed from the user in the summary table.

Parameters:
  • detail ((str)) – value of the detail in the details column of the summary status table. Used to locate which row to update.

  • action ((str)) – new action to be updated for the row specified by detail.

Return type:

None

update_summary_status(detail: str, status: str)#

Update the status in the summary table.

Parameters:
  • detail ((str)) – value of the detail in the details column of the summary status table. Used to locate which row to update.

  • status ((str)) – new status to be updated for the row specified by detail.

Return type:

None
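
An illustrative sketch only; the detail string below is hypothetical and must match an existing row in the summary status table:

>>> model.update_summary_status(
...     detail="Generated score.py",
...     status="Done",
... )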

upload_artifact(uri: str, auth: Dict | None = None, force_overwrite: bool | None = False, parallel_process_count: int = 9) None#

Uploads model artifacts to the provided uri. The artifacts will be zipped before uploading.

Parameters:
  • uri (str) –

    The destination location for the model artifacts, which can be a local path or OCI object storage URI. Examples:

    >>> upload_artifact(uri="/some/local/folder/")
    >>> upload_artifact(uri="oci://bucket@namespace/prefix/")
    

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.

  • force_overwrite (bool) – Overwrite the target directory if it exists.

  • parallel_process_count ((int, optional)) – The number of worker processes to use in parallel for uploading individual parts of a multipart upload.
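
A hedged sketch of uploading the zipped artifacts to Object Storage; the bucket, namespace and prefix are placeholders, and the explicit signer override is optional:

>>> import ads
>>> model.upload_artifact(
...     uri="oci://<bucket_name>@<namespace>/prefix/",
...     auth=ads.auth.default_signer(),
...     force_overwrite=True,
...     parallel_process_count=9,
... )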

verify(data: Any | None = None, reload_artifacts: bool = True, auto_serialize_data: bool = True, **kwargs) Dict[str, Any]#

Test if deployment works in local environment.

Examples

>>> uri = "https://github.com/pytorch/hub/raw/master/images/dog.jpg"
>>> prediction = model.verify(image=uri)['prediction']
>>> # examples on storage options
>>> prediction = model.verify(
...        image="oci://<bucket>@<tenancy>/myimage.png",
...        storage_options=ads.auth.default_signer()
... )['prediction']
Parameters:
  • data (Any) – Data used to test if deployment works in local environment.

  • reload_artifacts (bool. Defaults to True.) – Whether to reload artifacts or not.

  • auto_serialize_data (bool.) – Whether to auto serialize input data. Defaults to False for GenericModel, and True for other frameworks. data is required to be JSON serializable if auto_serialize_data=False. If auto_serialize_data is set to True, data will be serialized before sending to the model deployment endpoint.

  • kwargs

    content_type: str, used to indicate the media type of the resource.

    image: PIL.Image Object or uri for the image.

    A valid string path for image file can be local path, http(s), oci, s3, gs.

    storage_options: dict

    Passed to fsspec.open for a particular storage connection. Please see fsspec (https://filesystem-spec.readthedocs.io/en/latest/api.html#fsspec.open) for more details.

Returns:

A dictionary which contains prediction results.

Return type:

Dict

ads.model.framework.pytorch_model module#

class ads.model.framework.pytorch_model.PyTorchModel(estimator: callable, artifact_dir: str | None = None, properties: ModelProperties | None = None, auth: Dict = None, model_save_serializer: SERDE | None = 'torch', model_input_serializer: SERDE | None = None, **kwargs)[source]#

Bases: FrameworkSpecificModel

PyTorchModel class for estimators from the PyTorch framework.

algorithm#

The algorithm of the model.

Type:

str

artifact_dir#

Artifact directory to store the files needed for deployment.

Type:

str

auth#

Default authentication is set using the ads.set_auth API. To override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create an authentication signer to instantiate an IdentityClient object.

Type:

Dict

estimator#

A trained PyTorch estimator/model.

Type:

Callable

framework#

“pytorch”, the framework name of the model.

Type:

str

hyperparameter#

The hyperparameters of the estimator.

Type:

dict

metadata_custom#

The model custom metadata.

Type:

ModelCustomMetadata

metadata_provenance#

The model provenance metadata.

Type:

ModelProvenanceMetadata

metadata_taxonomy#

The model taxonomy metadata.

Type:

ModelTaxonomyMetadata

model_artifact#

This is built by calling prepare.

Type:

ModelArtifact

model_deployment#

A ModelDeployment instance.

Type:

ModelDeployment

model_file_name#

Name of the serialized model.

Type:

str

model_id#

The model ID.

Type:

str

properties#

ModelProperties object required to save and deploy model. For more details, check https://accelerated-data-science.readthedocs.io/en/latest/ads.model.html#module-ads.model.model_properties.

Type:

ModelProperties

runtime_info#

A RuntimeInfo instance.

Type:

RuntimeInfo

schema_input#

Schema describes the structure of the input data.

Type:

Schema

schema_output#

Schema describes the structure of the output data.

Type:

Schema

serialize#

Whether to serialize the model to pkl file by default. If False, you need to serialize the model manually, save it under artifact_dir and update the score.py manually.

Type:

bool

version#

The framework version of the model.

Type:

str

delete_deployment(...)#

Deletes the current model deployment.

deploy(..., \*\*kwargs)#

Deploys a model.

from_model_artifact(uri, model_file_name, artifact_dir, ..., \*\*kwargs)#

Loads model from the specified folder, or zip/tar archive.

from_model_catalog(model_id, model_file_name, artifact_dir, ..., \*\*kwargs)#

Loads model from model catalog.

introspect(...)#

Runs model introspection.

predict(data, ...)#

Returns prediction of input data run against the model deployment endpoint.

prepare(..., \*\*kwargs)#

Prepare and save the score.py, serialized model and runtime.yaml file.

reload(...)#

Reloads the model artifact files: score.py and the runtime.yaml.

save(..., \*\*kwargs)#

Saves model artifacts to the model catalog.

summary_status(...)#

Gets a summary table of the current status.

verify(data, ...)#

Tests if deployment works in local environment.

Examples

>>> torch_model = PyTorchModel(estimator=torch_estimator,
... artifact_dir=tmp_model_dir)
>>> inference_conda_env = "generalml_p37_cpu_v1"
>>> torch_model.prepare(inference_conda_env=inference_conda_env, force_overwrite=True)
>>> torch_model.reload()
>>> torch_model.verify(...)
>>> torch_model.save()
>>> model_deployment = torch_model.deploy(wait_for_completion=False)
>>> torch_model.predict(...)

Initializes a PyTorchModel instance.

Parameters:
  • estimator (callable) – Any model object generated by the PyTorch framework.

  • artifact_dir (str) – artifact directory to store the files needed for deployment.

  • properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.

  • model_save_serializer ((SERDE or str, optional). Defaults to None.) – Instance of ads.model.SERDE. Used for serialize/deserialize model.

  • model_input_serializer ((SERDE, optional). Defaults to None.) – Instance of ads.model.SERDE. Used for serialize/deserialize data.

Returns:

PyTorchModel instance.

Return type:

PyTorchModel

classmethod delete(model_id: str | None = None, delete_associated_model_deployment: bool | None = False, delete_model_artifact: bool | None = False, artifact_dir: str | None = None, **kwargs: Dict) None#

Deletes a model from Model Catalog.

Parameters:
  • model_id ((str, optional). Defaults to None.) – The model OCID to be deleted. If the method is called on the instance level, then self.model_id will be used.

  • delete_associated_model_deployment ((bool, optional). Defaults to False.) – Whether associated model deployments need to be deleted or not.

  • delete_model_artifact ((bool, optional). Defaults to False.) – Whether associated model artifacts need to be deleted or not.

  • artifact_dir ((str, optional). Defaults to None) – The local path to the model artifacts folder. If the method is called on the instance level, self.artifact_dir will be used by default.

Return type:

None

Raises:

ValueError – If model_id is not provided.
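
An illustrative sketch only; the OCID is a placeholder, and the associated deployment is removed here by setting delete_associated_model_deployment:

>>> PyTorchModel.delete(
...     model_id="<model_ocid>",
...     delete_associated_model_deployment=True,
... )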

delete_deployment(wait_for_completion: bool = True) None#

Deletes the current deployment.

Parameters:

wait_for_completion ((bool, optional). Defaults to True.) – Whether to wait till completion.

Return type:

None

Raises:

ValueError – If there is no deployment attached yet.

deploy(wait_for_completion: bool | None = True, display_name: str | None = None, description: str | None = None, deployment_instance_shape: str | None = None, deployment_instance_subnet_id: str | None = None, deployment_instance_count: int | None = None, deployment_bandwidth_mbps: int | None = None, deployment_log_group_id: str | None = None, deployment_access_log_id: str | None = None, deployment_predict_log_id: str | None = None, deployment_memory_in_gbs: float | None = None, deployment_ocpus: float | None = None, deployment_image: str | None = None, **kwargs: Dict) ModelDeployment#

Deploys a model. The model needs to be saved to the model catalog first. You can deploy the model on either conda or container runtime. The customized runtime allows you to bring your own service container. To deploy the model on container runtime, make sure to build the container and push it to OCIR. For more information, see https://docs.oracle.com/en-us/iaas/data-science/using/mod-dep-byoc.htm.

Example

>>> # This is an example to deploy model on container runtime
>>> model = GenericModel(estimator=estimator, artifact_dir=tempfile.mkdtemp())
>>> model.summary_status()
>>> model.prepare(
...     model_file_name="toy_model.pkl",
...     ignore_conda_error=True, # set ignore_conda_error=True for container runtime
...     force_overwrite=True
... )
>>> model.verify()
>>> model.save()
>>> model.deploy(
...     deployment_image="iad.ocir.io/<namespace>/<image>:<tag>",
...     entrypoint=["python", "/opt/ds/model/deployed_model/api.py"],
...     server_port=5000,
...     health_check_port=5000,
...     environment_variables={"key":"value"}
... )
Parameters:
  • wait_for_completion ((bool, optional). Defaults to True.) – Flag set for whether to wait for deployment to complete before proceeding.

  • display_name ((str, optional). Defaults to None.) – The name of the model. If a display_name is not provided in kwargs, a randomly generated easy to remember name with timestamp will be generated, like ‘strange-spider-2022-08-17-23:55.02’.

  • description ((str, optional). Defaults to None.) – The description of the model.

  • deployment_instance_shape ((str, optional). Defaults to VM.Standard2.1.) – The shape of the instance used for deployment.

  • deployment_instance_subnet_id ((str, optional). Defaults to None.) – The subnet id of the instance used for deployment.

  • deployment_instance_count ((int, optional). Defaults to 1.) – The number of instances used for deployment.

  • deployment_bandwidth_mbps ((int, optional). Defaults to 10.) – The bandwidth limit on the load balancer in Mbps.

  • deployment_memory_in_gbs ((float, optional). Defaults to None.) – Specifies the size of the memory of the model deployment instance in GBs.

  • deployment_ocpus ((float, optional). Defaults to None.) – Specifies the ocpus count of the model deployment instance.

  • deployment_log_group_id ((str, optional). Defaults to None.) – The oci logging group id. The access log and predict log share the same log group.

  • deployment_access_log_id ((str, optional). Defaults to None.) – The access log OCID for the access logs. https://docs.oracle.com/en-us/iaas/data-science/using/model_dep_using_logging.htm

  • deployment_predict_log_id ((str, optional). Defaults to None.) – The predict log OCID for the predict logs. https://docs.oracle.com/en-us/iaas/data-science/using/model_dep_using_logging.htm

  • deployment_image ((str, optional). Defaults to None.) – The OCIR path of docker container image. Required for deploying model on container runtime.

  • kwargs

    project_id: (str, optional).

    Project OCID. If not specified, the value will be taken from the environment variables.

    compartment_id: (str, optional).

    Compartment OCID. If not specified, the value will be taken from the environment variables.

    max_wait_time: (int, optional). Defaults to 1200 seconds.

    Maximum amount of time to wait in seconds. Negative implies infinite wait time.

    poll_interval: (int, optional). Defaults to 10 seconds.

    Poll interval in seconds.

    freeform_tags: (Dict[str, str], optional). Defaults to None.

    Freeform tags of the model deployment.

    defined_tags: (Dict[str, dict[str, object]], optional). Defaults to None.

    Defined tags of the model deployment.

    image_digest: (str, optional). Defaults to None.

    The digest of docker container image.

    cmd: (List, optional). Defaults to empty.

    The command line arguments for running docker container image.

    entrypoint: (List, optional). Defaults to empty.

    The entrypoint for running docker container image.

    server_port: (int, optional). Defaults to 8080.

    The server port for docker container image.

    health_check_port: (int, optional). Defaults to 8080.

    The health check port for docker container image.

    deployment_mode: (str, optional). Defaults to HTTPS_ONLY.

    The deployment mode. Allowed values are: HTTPS_ONLY and STREAM_ONLY.

    input_stream_ids: (List, optional). Defaults to empty.

    The input stream ids. Required for STREAM_ONLY mode.

    output_stream_ids: (List, optional). Defaults to empty.

    The output stream ids. Required for STREAM_ONLY mode.

    environment_variables: (Dict, optional). Defaults to empty.

    The environment variables for model deployment.

    Also can be any keyword argument for initializing the ads.model.deployment.ModelDeploymentProperties. See ads.model.deployment.ModelDeploymentProperties() for details.

Returns:

The ModelDeployment instance.

Return type:

ModelDeployment

Raises:

ValueError – If model_id is not specified.

download_artifact(artifact_dir: str | None = None, auth: Dict | None = None, force_overwrite: bool | None = False, bucket_uri: str | None = None, remove_existing_artifact: bool | None = True, **kwargs) GenericModel#

Downloads model artifacts from the model catalog.

Parameters:
  • artifact_dir ((str, optional). Defaults to None.) – The artifact directory to store the files needed for deployment. Will be created if not exists.

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.

  • force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files or not.

  • bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for downloading large artifacts whose size is greater than 2GB. Example: oci://<bucket_name>@<namespace>/prefix/.

  • remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to object storage bucket need to be removed or not.

Returns:

An instance of GenericModel class.

Return type:

Self

Raises:

ValueError – If model_id is not available in the GenericModel object.
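
A minimal sketch, assuming the model already has a model_id from a previous save or load; the target directory is a placeholder:

>>> import tempfile
>>> model.download_artifact(
...     artifact_dir=tempfile.mkdtemp(),
...     force_overwrite=True,
... )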

evaluate(X: _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | str | bytes | _NestedSequence[bool | int | float | complex | str | bytes], y: _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | str | bytes | _NestedSequence[bool | int | float | complex | str | bytes], y_pred: _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | str | bytes | _NestedSequence[bool | int | float | complex | str | bytes] | None = None, y_score: _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | str | bytes | _NestedSequence[bool | int | float | complex | str | bytes] | None = None, X_train: _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | str | bytes | _NestedSequence[bool | int | float | complex | str | bytes] | None = None, y_train: _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | str | bytes | _NestedSequence[bool | int | float | complex | str | bytes] | None = None, classes: List | None = None, positive_class: str | None = None, legend_labels: dict | None = None, perfect: bool = True, filename: str | None = None, use_case_type: str | None = None)#

Creates an ads evaluation report.

Parameters:
  • X (DataFrame-like) – The data used to make a prediction. Can be set to None if y_pred is given (and y_score for more thorough analysis).

  • y (array-like) – The true values corresponding to the input data

  • y_pred (array-like, optional) – The predictions from each model in the same order as the models

  • y_score (array-like, optional) – The predict_probas from each model in the same order as the models

  • X_train (DataFrame-like, optional) – The data used to train the model

  • y_train (array-like, optional) – The true values corresponding to the input training data

  • classes (List or None, optional) – A List of the possible labels for y, when evaluating a classification use case

  • positive_class (str or int, optional) – The class to report metrics for in a binary dataset. If the target classes are True or False, positive_class will be set to True by default. If the dataset is multiclass or multilabel, this will be ignored.

  • legend_labels (dict, optional) – List of legend labels. Defaults to None. If legend_labels not specified class names will be used for plots.

  • use_case_type (str, optional) – The type of problem this model is solving. This can be set during prepare(). Examples: “binary_classification”, “regression”, “multinomial_classification” Full list of supported types can be found here: ads.common.model_metadata.UseCaseType

  • filename (str, optional) – If filename is given, the html report will be saved to the location specified.

Examples

>>> import tempfile
>>> from ads.evaluations.evaluator import Evaluator
>>> from sklearn.tree import DecisionTreeClassifier
>>> from sklearn.datasets import make_classification
>>> from sklearn.model_selection import train_test_split
>>> from ads.model.framework.sklearn_model import SklearnModel
>>> from ads.common.model_metadata import UseCaseType
>>>
>>> X, y = make_classification(n_samples=1000)
>>> X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
>>> est = DecisionTreeClassifier().fit(X_train, y_train)
>>> model = SklearnModel(estimator=est, artifact_dir=tempfile.mkdtemp())
>>> model.prepare(
...     inference_conda_env="generalml_p38_cpu_v1",
...     training_conda_env="generalml_p38_cpu_v1",
...     X_sample=X_test,
...     y_sample=y_test,
...     use_case_type=UseCaseType.BINARY_CLASSIFICATION,
... )
>>> model.evaluate(X_test, y_test, filename="report.html")
classmethod from_id(ocid: str, model_file_name: str | None = None, artifact_dir: str | None = None, auth: Dict | None = None, force_overwrite: bool | None = False, properties: ModelProperties | Dict | None = None, bucket_uri: str | None = None, remove_existing_artifact: bool | None = True, ignore_conda_error: bool | None = False, download_artifact: bool | None = True, **kwargs) Self#

Loads model from model OCID or model deployment OCID.

Parameters:
  • ocid (str) – The model OCID or model deployment OCID.

  • model_file_name ((str, optional). Defaults to None.) – The name of the serialized model.

  • artifact_dir ((str, optional). Defaults to None.) – The artifact directory to store the files needed for deployment. Will be created if not exists.

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.

  • force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files or not.

  • properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.

  • bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for downloading large artifacts whose size is greater than 2GB. Example: oci://<bucket_name>@<namespace>/prefix/.

  • remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to object storage bucket need to be removed or not.

  • ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore error when collecting conda information.

  • download_artifact ((bool, optional). Defaults to True.) – Whether to download the model pickle or checkpoints

  • kwargs

    compartment_id: (str, optional)

    Compartment OCID. If not specified, the value will be taken from the environment variables.

    timeout: (int, optional). Defaults to 10 seconds.

    The connection timeout in seconds for the client.

Returns:

An instance of GenericModel class.

Return type:

Self
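
An illustrative sketch; the OCID is a placeholder and may be either a model OCID or a model deployment OCID:

>>> import tempfile
>>> model = PyTorchModel.from_id(
...     "<model_ocid>",
...     artifact_dir=tempfile.mkdtemp(),
...     force_overwrite=True,
... )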

classmethod from_model_artifact(uri: str, model_file_name: str | None = None, artifact_dir: str | None = None, auth: Dict | None = None, force_overwrite: bool | None = False, properties: ModelProperties | None = None, ignore_conda_error: bool | None = False, **kwargs: dict) Self#

Loads model from a folder, or zip/tar archive.

Parameters:
  • uri (str) – The folder path, ZIP file path, or TAR file path. It could contain a serialized model (required) as well as any files needed for deployment, including: runtime.yaml, score.py, etc. The content of the folder will be copied to the artifact_dir folder.

  • model_file_name ((str, optional). Defaults to None.) – The serialized model file name. Will be extracted from artifacts if not provided.

  • artifact_dir ((str, optional). Defaults to None.) – The artifact directory to store the files needed for deployment. Will be created if not exists.

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.

  • force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files or not.

  • properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.

  • ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore error when collecting conda information.

Returns:

An instance of GenericModel class.

Return type:

Self

Raises:

ValueError – If model_file_name is not provided.
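
A hedged sketch; the artifact path and model file name are placeholders for an existing artifact folder or archive:

>>> import tempfile
>>> model = PyTorchModel.from_model_artifact(
...     uri="/path/to/artifact_dir_or_archive",
...     model_file_name="model.pt",
...     artifact_dir=tempfile.mkdtemp(),
...     force_overwrite=True,
... )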

classmethod from_model_catalog(model_id: str, model_file_name: str | None = None, artifact_dir: str | None = None, auth: Dict | None = None, force_overwrite: bool | None = False, properties: ModelProperties | Dict | None = None, bucket_uri: str | None = None, remove_existing_artifact: bool | None = True, ignore_conda_error: bool | None = False, download_artifact: bool | None = True, **kwargs) Self#

Loads model from model catalog.

Parameters:
  • model_id (str) – The model OCID.

  • model_file_name ((str, optional). Defaults to None.) – The name of the serialized model.

  • artifact_dir ((str, optional). Defaults to None.) – The artifact directory to store the files needed for deployment. Will be created if not exists.

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.

  • force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files or not.

  • properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.

  • bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for downloading large artifacts whose size is greater than 2GB. Example: oci://<bucket_name>@<namespace>/prefix/.

  • remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to object storage bucket need to be removed or not.

  • ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore error when collecting conda information.

  • download_artifact ((bool, optional). Defaults to True.) – Whether to download the model pickle or checkpoints

  • kwargs

    compartment_id: (str, optional)

    Compartment OCID. If not specified, the value will be taken from the environment variables.

    timeout: (int, optional). Defaults to 10 seconds.

    The connection timeout in seconds for the client.

    region: (str, optional). Defaults to None.

    The destination Object Storage bucket region. By default the value will be extracted from the OCI_REGION_METADATA environment variables.

Returns:

An instance of GenericModel class.

Return type:

Self
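
An illustrative sketch; the model OCID and serialized file name below are placeholders:

>>> import tempfile
>>> model = PyTorchModel.from_model_catalog(
...     model_id="<model_ocid>",
...     model_file_name="model.pt",
...     artifact_dir=tempfile.mkdtemp(),
... )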

classmethod from_model_deployment(model_deployment_id: str, model_file_name: str | None = None, artifact_dir: str | None = None, auth: Dict | None = None, force_overwrite: bool | None = False, properties: ModelProperties | Dict | None = None, bucket_uri: str | None = None, remove_existing_artifact: bool | None = True, ignore_conda_error: bool | None = False, download_artifact: bool | None = True, **kwargs) Self#

Loads model from model deployment.

Parameters:
  • model_deployment_id (str) – The model deployment OCID.

  • model_file_name ((str, optional). Defaults to None.) – The name of the serialized model.

  • artifact_dir ((str, optional). Defaults to None.) – The artifact directory to store the files needed for deployment. Will be created if not exists.

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.

  • force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files or not.

  • properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.

  • bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for downloading large artifacts whose size is greater than 2GB. Example: oci://<bucket_name>@<namespace>/prefix/.

  • remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to object storage bucket need to be removed or not.

  • ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore error when collecting conda information.

  • download_artifact ((bool, optional). Defaults to True.) – Whether to download the model pickle or checkpoints

  • kwargs

    compartment_id: (str, optional)

    Compartment OCID. If not specified, the value will be taken from the environment variables.

    timeout: (int, optional). Defaults to 10 seconds.

    The connection timeout in seconds for the client.

    region: (str, optional). Defaults to None.

    The destination Object Storage bucket region. By default the value will be extracted from the OCI_REGION_METADATA environment variables.

Returns:

An instance of GenericModel class.

Return type:

Self
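
An illustrative sketch; the deployment OCID is a placeholder:

>>> import tempfile
>>> model = PyTorchModel.from_model_deployment(
...     model_deployment_id="<model_deployment_ocid>",
...     artifact_dir=tempfile.mkdtemp(),
... )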

get_data_serializer()#

Gets data serializer.

Returns:

object

Return type:

ads.model.Serializer object.

get_model_serializer()#

Gets model serializer.

introspect() DataFrame#

Conducts introspection.

Returns:

A pandas DataFrame which contains the introspection results.

Return type:

pandas.DataFrame

property metadata_custom#
property metadata_provenance#
property metadata_taxonomy#
property model_deployment_id#
property model_id#
model_input_serializer_type#

alias of ModelInputSerializerType

model_save_serializer_type#

alias of PyTorchModelSerializerType

populate_metadata(use_case_type: str | None = None, data_sample: ADSData | None = None, X_sample: list | tuple | DataFrame | Series | ndarray | None = None, y_sample: list | tuple | DataFrame | Series | ndarray | None = None, training_script_path: str | None = None, training_id: str | None = None, ignore_pending_changes: bool = True, max_col_num: int = 2000, ignore_conda_error: bool = False, **kwargs)#

Populates input schema and output schema. If the schema exceeds the limit of 32kb, save as json files to the artifact directory.

Parameters:
  • use_case_type ((str, optional). Defaults to None.) – The use case type of the model.

  • data_sample ((ADSData, optional). Defaults to None.) – A sample of the data that will be used to generate input_schema and output_schema.

  • X_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of input data that will be used to generate input schema.

  • y_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of output data that will be used to generate output schema.

  • training_script_path (str. Defaults to None.) – Training script path.

  • training_id ((str, optional). Defaults to None.) – The training model OCID.

  • ignore_pending_changes (bool. Defaults to True.) – Ignore the pending changes in git.

  • max_col_num ((int, optional). Defaults to utils.DATA_SCHEMA_MAX_COL_NUM.) – The maximum number of columns allowed in auto generated schema.

Returns:

Nothing.

Return type:

None
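
A hedged sketch; X_test and y_test are placeholder samples, and the use case type is only an assumption for illustration:

>>> from ads.common.model_metadata import UseCaseType
>>> model.populate_metadata(
...     use_case_type=UseCaseType.BINARY_CLASSIFICATION,
...     X_sample=X_test,
...     y_sample=y_test,
... )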

populate_schema(data_sample: ADSData | None = None, X_sample: List | Tuple | DataFrame | Series | ndarray | None = None, y_sample: List | Tuple | DataFrame | Series | ndarray | None = None, max_col_num: int = 2000, **kwargs)#

Populate input and output schemas. If the schema exceeds the limit of 32kb, save as json files to the artifact dir.

Parameters:
  • data_sample (ADSData) – A sample of the data that will be used to generate input_schema and output_schema.

  • X_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]) – A sample of input data that will be used to generate the input schema.

  • y_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]) – A sample of output data that will be used to generate the output schema.

  • max_col_num ((int, optional). Defaults to utils.DATA_SCHEMA_MAX_COL_NUM.) – The maximum number of columns allowed in auto generated schema.

predict(data: Any | None = None, auto_serialize_data: bool = True, **kwargs) Dict[str, Any]#

Returns prediction of input data run against the model deployment endpoint.

Examples

>>> uri = "https://github.com/pytorch/hub/raw/master/images/dog.jpg"
>>> prediction = model.predict(image=uri)['prediction']
>>> # examples on storage options
>>> prediction = model.predict(
...        image="oci://<bucket>@<tenancy>/myimage.png",
...        storage_options=ads.auth.default_signer()
... )['prediction']
Parameters:
  • data (Any) – Data for the prediction for onnx models; for the local serialization method, data can be any of the data types that each framework supports.

  • auto_serialize_data (bool.) – Whether to auto serialize input data. Defaults to False for GenericModel, and True for other frameworks. data is required to be JSON serializable if auto_serialize_data=False. If auto_serialize_data is set to True, data will be serialized before sending to the model deployment endpoint.

  • kwargs

    content_type: str, used to indicate the media type of the resource.

    image: PIL.Image Object or uri for the image.

    A valid string path for image file can be local path, http(s), oci, s3, gs.

    storage_options: dict

    Passed to fsspec.open for a particular storage connection. Please see fsspec (https://filesystem-spec.readthedocs.io/en/latest/api.html#fsspec.open) for more details.

Returns:

Dictionary with the predicted values.

Return type:

Dict[str, Any]

Raises:

prepare(inference_conda_env: str | None = None, inference_python_version: str | None = None, training_conda_env: str | None = None, training_python_version: str | None = None, model_file_name: str | None = None, as_onnx: bool = False, initial_types: List[Tuple] | None = None, force_overwrite: bool = False, namespace: str = 'id19sfcrra6z', use_case_type: str | None = None, X_sample: list | tuple | DataFrame | Series | ndarray | None = None, y_sample: list | tuple | DataFrame | Series | ndarray | None = None, training_script_path: str | None = None, training_id: str | None = None, ignore_pending_changes: bool = True, max_col_num: int = 2000, ignore_conda_error: bool = False, score_py_uri: str | None = None, **kwargs: Dict) GenericModel#

Prepare and save the score.py, serialized model and runtime.yaml file.

Parameters:
  • inference_conda_env ((str, optional). Defaults to None.) – Can be either slug or object storage path of the conda pack. You can only pass in slugs if the conda pack is a service pack.

  • inference_python_version ((str, optional). Defaults to None.) – Python version which will be used in deployment.

  • training_conda_env ((str, optional). Defaults to None.) – Can be either slug or object storage path of the conda pack. You can only pass in slugs if the conda pack is a service pack. If training_conda_env is not provided, it defaults to the value of inference_conda_env.

  • training_python_version ((str, optional). Defaults to None.) – Python version used during training.

  • model_file_name ((str, optional). Defaults to None.) – Name of the serialized model. Will be auto generated if not provided.

  • as_onnx ((bool, optional). Defaults to False.) – Whether to serialize as onnx model.

  • initial_types ((list[Tuple], optional).) – Defaults to None. Only used for SklearnModel, LightGBMModel and XGBoostModel. Each element is a tuple of a variable name and a type. Check this link http://onnx.ai/sklearn-onnx/api_summary.html#id2 for more explanation and examples for initial_types.

  • force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files.

  • namespace ((str, optional).) – Namespace of region. This is used for identifying which region the service pack is from when you pass a slug to inference_conda_env and training_conda_env.

  • use_case_type (str) – The use case type of the model. Use it through the UseCaseType class or a string provided in UseCaseType. For example, use_case_type=UseCaseType.BINARY_CLASSIFICATION or use_case_type="binary_classification". Check with the UseCaseType class to see all supported types.

  • X_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of input data that will be used to generate input schema.

  • y_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of output data that will be used to generate output schema.

  • training_script_path (str. Defaults to None.) – Training script path.

  • training_id ((str, optional). Defaults to value from environment variables.) – The training OCID for model. Can be notebook session or job OCID.

  • ignore_pending_changes (bool. Defaults to True.) – Whether to ignore the pending changes in git.

  • max_col_num ((int, optional). Defaults to utils.DATA_SCHEMA_MAX_COL_NUM.) – Do not generate the input schema if the input has more than this number of features (columns).

  • ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore error when collecting conda information.

  • score_py_uri ((str, optional). Defaults to None.) – The uri of the customized score.py, which can be a local path or an OCI object storage URI. When provided with this attribute, the score.py will not be auto generated, and the provided score.py will be added into artifact_dir.

  • kwargs

    impute_values: (dict, optional).

    The dictionary where the key is the column index(or names is accepted for pandas dataframe) and the value is the impute value for the corresponding column.

Raises:
  • FileExistsError – If files already exist but force_overwrite is False.

  • ValueError – If inference_python_version is not provided, but also cannot be found through manifest file.

Returns:

An instance of GenericModel class.

Return type:

GenericModel
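
An illustrative sketch of preparing a PyTorch model's artifacts; the conda pack slug and model file name below are assumptions, not defaults:

>>> torch_model.prepare(
...     inference_conda_env="generalml_p38_cpu_v1",
...     model_file_name="model.pt",
...     force_overwrite=True,
... )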

prepare_save_deploy(inference_conda_env: str | None = None, inference_python_version: str | None = None, training_conda_env: str | None = None, training_python_version: str | None = None, model_file_name: str | None = None, as_onnx: bool = False, initial_types: List[Tuple] | None = None, force_overwrite: bool = False, namespace: str = 'id19sfcrra6z', use_case_type: str | None = None, X_sample: list | tuple | DataFrame | Series | ndarray | None = None, y_sample: list | tuple | DataFrame | Series | ndarray | None = None, training_script_path: str | None = None, training_id: str | None = None, ignore_pending_changes: bool = True, max_col_num: int = 2000, ignore_conda_error: bool = False, model_display_name: str | None = None, model_description: str | None = None, model_freeform_tags: dict | None = None, model_defined_tags: dict | None = None, ignore_introspection: bool | None = False, wait_for_completion: bool | None = True, deployment_display_name: str | None = None, deployment_description: str | None = None, deployment_instance_shape: str | None = None, deployment_instance_subnet_id: str | None = None, deployment_instance_count: int | None = None, deployment_bandwidth_mbps: int | None = None, deployment_log_group_id: str | None = None, deployment_access_log_id: str | None = None, deployment_predict_log_id: str | None = None, deployment_memory_in_gbs: float | None = None, deployment_ocpus: float | None = None, deployment_image: str | None = None, bucket_uri: str | None = None, overwrite_existing_artifact: bool | None = True, remove_existing_artifact: bool | None = True, model_version_set: str | ModelVersionSet | None = None, version_label: str | None = None, model_by_reference: bool | None = False, **kwargs: Dict) ModelDeployment#

Shortcut for prepare, save and deploy steps.

Parameters:
  • inference_conda_env ((str, optional). Defaults to None.) – Can be either slug or object storage path of the conda pack. You can only pass in slugs if the conda pack is a service pack.

  • inference_python_version ((str, optional). Defaults to None.) – Python version which will be used in deployment.

  • training_conda_env ((str, optional). Defaults to None.) – Can be either slug or object storage path of the conda pack. You can only pass in slugs if the conda pack is a service pack. If training_conda_env is not provided, it defaults to the value of inference_conda_env.

  • training_python_version ((str, optional). Defaults to None.) – Python version used during training.

  • model_file_name ((str, optional). Defaults to None.) – Name of the serialized model.

  • as_onnx ((bool, optional). Defaults to False.) – Whether to serialize as onnx model.

  • initial_types ((list[Tuple], optional).) – Defaults to None. Only used for SklearnModel, LightGBMModel and XGBoostModel. Each element is a tuple of a variable name and a type. Check this link http://onnx.ai/sklearn-onnx/api_summary.html#id2 for more explanation and examples for initial_types.

  • force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files.

  • namespace ((str, optional).) – Namespace of region. This is used for identifying which region the service pack is from when you pass a slug to inference_conda_env and training_conda_env.

  • use_case_type (str) – The use case type of the model. Use it through the UseCaseType class or a string provided in UseCaseType. For example, use_case_type=UseCaseType.BINARY_CLASSIFICATION or use_case_type="binary_classification". Check with the UseCaseType class to see all supported types.

  • X_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of input data that will be used to generate input schema.

  • y_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of output data that will be used to generate output schema.

  • training_script_path (str. Defaults to None.) – Training script path.

  • training_id ((str, optional). Defaults to value from environment variables.) – The training OCID for model. Can be notebook session or job OCID.

  • ignore_pending_changes (bool. Defaults to True.) – Whether to ignore the pending changes in git.

  • max_col_num ((int, optional). Defaults to utils.DATA_SCHEMA_MAX_COL_NUM.) – Do not generate the input schema if the input has more than this number of features (columns).

  • ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore error when collecting conda information.

  • model_display_name ((str, optional). Defaults to None.) – The name of the model. If a model_display_name is not provided in kwargs, a randomly generated easy to remember name with timestamp will be generated, like ‘strange-spider-2022-08-17-23:55.02’.

  • model_description ((str, optional). Defaults to None.) – The description of the model.

  • model_freeform_tags (Dict(str, str), Defaults to None.) – Freeform tags for the model.

  • model_defined_tags ((Dict(str, dict(str, object)), optional). Defaults to None.) – Defined tags for the model.

  • ignore_introspection ((bool, optional). Defaults to False.) – Determines whether to ignore the result of model introspection or not. If set to True, the save will ignore all model introspection errors.

  • wait_for_completion ((bool, optional). Defaults to True.) – Flag set for whether to wait for deployment to complete before proceeding.

  • deployment_display_name ((str, optional). Defaults to None.) – The name of the model deployment. If a deployment_display_name is not provided in kwargs, a randomly generated easy to remember name with timestamp will be generated, like ‘strange-spider-2022-08-17-23:55.02’.

  • description ((str, optional). Defaults to None.) – The description of the model.

  • deployment_instance_shape ((str, optional). Defaults to VM.Standard2.1.) – The shape of the instance used for deployment.

  • deployment_instance_subnet_id ((str, optional). Defaults to None.) – The subnet id of the instance used for deployment.

  • deployment_instance_count ((int, optional). Defaults to 1.) – The number of instances used for deployment.

  • deployment_bandwidth_mbps ((int, optional). Defaults to 10.) – The bandwidth limit on the load balancer in Mbps.

  • deployment_log_group_id ((str, optional). Defaults to None.) – The oci logging group id. The access log and predict log share the same log group.

  • deployment_access_log_id ((str, optional). Defaults to None.) – The access log OCID for the access logs. https://docs.oracle.com/en-us/iaas/data-science/using/model_dep_using_logging.htm

  • deployment_predict_log_id ((str, optional). Defaults to None.) – The predict log OCID for the predict logs. https://docs.oracle.com/en-us/iaas/data-science/using/model_dep_using_logging.htm

  • deployment_memory_in_gbs ((float, optional). Defaults to None.) – Specifies the size of the memory of the model deployment instance in GBs.

  • deployment_ocpus ((float, optional). Defaults to None.) – Specifies the ocpus count of the model deployment instance.

  • deployment_image ((str, optional). Defaults to None.) – The OCIR path of docker container image. Required for deploying model on container runtime.

  • bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for downloading large artifacts whose size is greater than 2GB. Example: oci://<bucket_name>@<namespace>/prefix/.

  • overwrite_existing_artifact ((bool, optional). Defaults to True.) – Overwrite target bucket artifact if exists.

  • remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to object storage bucket need to be removed or not.

  • model_version_set ((Union[str, ModelVersionSet], optional). Defaults to None.) – The Model version set OCID, or name, or ModelVersionSet instance.

  • version_label ((str, optional). Defaults to None.) – The model version label.

  • model_by_reference ((bool, optional)) – Whether model artifact is made available to Model Store by reference.

  • kwargs

    impute_values: (dict, optional).

    The dictionary where the key is the column index(or names is accepted for pandas dataframe) and the value is the impute value for the corresponding column.

    project_id: (str, optional).

    Project OCID. If not specified, the value will be taken either from the environment variables or model properties.

    compartment_id: (str, optional).

    Compartment OCID. If not specified, the value will be taken either from the environment variables or model properties.

    image_digest: (str, optional). Defaults to None.

    The digest of docker container image.

    cmd: (List, optional). Defaults to empty.

    The command line arguments for running docker container image.

    entrypoint: (List, optional). Defaults to empty.

    The entrypoint for running docker container image.

    server_port: (int, optional). Defaults to 8080.

    The server port for docker container image.

    health_check_port: (int, optional). Defaults to 8080.

    The health check port for docker container image.

    deployment_mode: (str, optional). Defaults to HTTPS_ONLY.

    The deployment mode. Allowed values are: HTTPS_ONLY and STREAM_ONLY.

    input_stream_ids: (List, optional). Defaults to empty.

    The input stream ids. Required for STREAM_ONLY mode.

    output_stream_ids: (List, optional). Defaults to empty.

    The output stream ids. Required for STREAM_ONLY mode.

    environment_variables: (Dict, optional). Defaults to empty.

    The environment variables for model deployment.

    timeout: (int, optional). Defaults to 10 seconds.

    The connection timeout in seconds for the client.

    max_wait_time: (int, optional). Defaults to 1200 seconds.

    Maximum amount of time to wait in seconds. Negative implies infinite wait time.

    poll_interval: (int, optional). Defaults to 10 seconds.

    Poll interval in seconds.

    freeform_tags: (Dict[str, str], optional). Defaults to None.

    Freeform tags of the model deployment.

    defined_tags: (Dict[str, dict[str, object]], optional). Defaults to None.

    Defined tags of the model deployment.

    region: (str, optional). Defaults to None.

    The destination Object Storage bucket region. By default the value will be extracted from the OCI_REGION_METADATA environment variables.

    Also can be any keyword argument for initializing the ads.model.deployment.ModelDeploymentProperties. See ads.model.deployment.ModelDeploymentProperties() for details.

Returns:

The ModelDeployment instance.

Return type:

ModelDeployment

Raises:
  • FileExistsError – If files already exist but force_overwrite is False.

  • ValueError – If inference_python_version is not provided and cannot be found through the manifest file.

reload() GenericModel#

Reloads the model artifact files: score.py and the runtime.yaml.

Returns:

An instance of GenericModel class.

Return type:

GenericModel

reload_runtime_info() None#

Reloads the model artifact file: runtime.yaml.

Returns:

Nothing.

Return type:

None

restart_deployment(max_wait_time: int = 1200, poll_interval: int = 10) ModelDeployment#

Restarts the current deployment.

Parameters:
  • max_wait_time ((int, optional). Defaults to 1200 seconds.) – Maximum amount of time to wait for activate or deactivate, in seconds. The total time to wait for a restart is twice this value. Negative implies infinite wait time.

  • poll_interval ((int, optional). Defaults to 10 seconds.) – Poll interval in seconds.

Returns:

The ModelDeployment instance.

Return type:

ModelDeployment
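
For illustration, a minimal sketch of restarting an existing deployment (assuming the model has already been deployed from this instance; the wait settings are placeholders):

>>> # Deactivates and then re-activates the current model deployment
>>> deployment = model.restart_deployment(
...     max_wait_time=1800,
...     poll_interval=30,
... )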

save(bucket_uri: str | None = None, defined_tags: dict | None = None, description: str | None = None, display_name: str | None = None, featurestore_dataset=None, freeform_tags: dict | None = None, ignore_introspection: bool | None = False, model_version_set: str | ModelVersionSet | None = None, overwrite_existing_artifact: bool | None = True, parallel_process_count: int = 9, remove_existing_artifact: bool | None = True, reload: bool | None = True, version_label: str | None = None, model_by_reference: bool | None = False, **kwargs) str#

Saves model artifacts to the model catalog.

Parameters:
  • display_name ((str, optional). Defaults to None.) – The name of the model. If a display_name is not provided in kwargs, a randomly generated easy to remember name with timestamp will be generated, like ‘strange-spider-2022-08-17-23:55.02’.

  • description ((str, optional). Defaults to None.) – The description of the model.

  • freeform_tags (Dict(str, str), Defaults to None.) – Freeform tags for the model.

  • defined_tags ((Dict(str, dict(str, object)), optional). Defaults to None.) – Defined tags for the model.

  • ignore_introspection ((bool, optional). Defaults to None.) – Determine whether to ignore the result of model introspection or not. If set to True, the save will ignore all model introspection errors.

  • bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for uploading large artifacts (size greater than 2 GB). Example: oci://<bucket_name>@<namespace>/prefix/.

  • overwrite_existing_artifact ((bool, optional). Defaults to True.) – Overwrite target bucket artifact if exists.

  • remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to object storage bucket need to be removed or not.

  • model_version_set ((Union[str, ModelVersionSet], optional). Defaults to None.) – The model version set OCID, or model version set name, or ModelVersionSet instance.

  • version_label ((str, optional). Defaults to None.) – The model version label.

  • featurestore_dataset ((Dataset, optional).) – The feature store dataset.

  • parallel_process_count ((int, optional)) – The number of worker processes to use in parallel for uploading individual parts of a multipart upload.

  • reload ((bool, optional)) – Whether to reload to check if load_model() works in score.py. Defaults to True.

  • model_by_reference ((bool, optional)) – Whether model artifact is made available to Model Store by reference.

  • kwargs

    project_id: (str, optional).

    Project OCID. If not specified, the value will be taken either from the environment variables or model properties.

    compartment_id: (str, optional).

    Compartment OCID. If not specified, the value will be taken either from the environment variables or model properties.

    region: (str, optional). Defaults to None.

    The destination Object Storage bucket region. By default the value will be extracted from the OCI_REGION_METADATA environment variables.

    timeout: (int, optional). Defaults to 10 seconds.

    The connection timeout in seconds for the client.

    Also can be any attribute that oci.data_science.models.Model accepts.

Raises:

RuntimeInfoInconsistencyError – When .runtime_info is not synchronized with the runtime.yaml file.

Returns:

The model id.

Return type:

str

Examples

Example for saving large model artifacts (>2GB):

>>> model.save(
...     bucket_uri="oci://my-bucket@my-tenancy/",
...     overwrite_existing_artifact=True,
...     remove_existing_artifact=True,
...     parallel_process_count=9,
... )

property schema_input#
property schema_output#
serialize_model(as_onnx: bool = False, force_overwrite: bool = False, X_sample: Dict | str | List | Tuple | ndarray | Series | DataFrame | None = None, use_torch_script: bool | None = None, **kwargs) None[source]#

Serialize and save the PyTorch model using ONNX or a model-specific method.

Parameters:
  • as_onnx ((bool, optional). Defaults to False.) – If set as True, convert into ONNX model.

  • force_overwrite ((bool, optional). Defaults to False.) – If set as True, overwrite serialized model if exists.

  • X_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of input data that will be used to generate input schema and detect onnx_args.

  • use_torch_script ((bool, optional). Defaults to None (If the default value has not been changed, it will be set as False).) – If set as True, the model will be serialized as a TorchScript program. Check https://pytorch.org/tutorials/beginner/saving_loading_models.html#export-load-model-in-torchscript-format for more details. If set as False, it will only save the trained model’s learned parameters, and the score.py needs to be modified to construct the model class instance first. Check https://pytorch.org/tutorials/beginner/saving_loading_models.html#save-load-state-dict-recommended for more details.

  • **kwargs – Optional parameters used to serialize the PyTorch model to ONNX, including the following:

    onnx_args: (tuple or torch.Tensor), defaults to None.

    Contains model inputs such that model(onnx_args) is a valid invocation of the model. Can be structured either as: 1) only a tuple of arguments; 2) a tensor; 3) a tuple of arguments ending with a dictionary of named arguments.

    input_names: (List[str], optional).

    Names to assign to the input nodes of the graph, in order.

    output_names: (List[str], optional).

    Names to assign to the output nodes of the graph, in order.

    dynamic_axes: (dict, optional), defaults to None.

    Specify axes of tensors as dynamic (i.e. known only at run-time).

Returns:

Nothing.

Return type:

None
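
serialize_model() is normally invoked for you by prepare(), but it can also be called directly. A minimal sketch, assuming model is a prepared framework model and sample_input is a hypothetical input batch used only to infer the ONNX signature:

>>> # Save the default serialization of the trained estimator into artifact_dir
>>> model.serialize_model(force_overwrite=True)
>>> # Or, hypothetically, export to ONNX using a sample input
>>> model.serialize_model(as_onnx=True, X_sample=sample_input, force_overwrite=True)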

set_model_input_serializer(model_input_serializer: str | SERDE)#

Registers serializer used for serializing data passed in verify/predict.

Examples

>>> generic_model.set_model_input_serializer(GenericModel.model_input_serializer_type.CLOUDPICKLE)
>>> # Register serializer by passing the name of it.
>>> generic_model.set_model_input_serializer("cloudpickle")
>>> # Example of creating customized model input serializer and registering it.
>>> from ads.model import SERDE
>>> from ads.model.generic_model import GenericModel
>>> class MySERDE(SERDE):
...     def __init__(self):
...         super().__init__()
...     def serialize(self, data):
...         serialized_data = 1
...         return serialized_data
...     def deserialize(self, data):
...         deserialized_data = 2
...         return deserialized_data
>>> class Toy:
...     def predict(self, x):
...         return x ** 2
>>> generic_model = GenericModel(
...    estimator=Toy(),
...    artifact_dir=tempfile.mkdtemp(),
...    model_input_serializer=MySERDE()
... )
>>> # Or register the serializer after creating model instance.
>>> generic_model.set_model_input_serializer(MySERDE())
Parameters:

model_input_serializer ((str, or ads.model.SERDE)) – name of the serializer, or instance of SERDE.

set_model_save_serializer(model_save_serializer: str | SERDE)#

Registers serializer used for saving model.

Examples

>>> generic_model.set_model_save_serializer(GenericModel.model_save_serializer_type.CLOUDPICKLE)
>>> # Register serializer by passing the name of it.
>>> generic_model.set_model_save_serializer("cloudpickle")
>>> # Example of creating a customized model save serializer and registering it.
>>> from ads.model import SERDE
>>> from ads.model.generic_model import GenericModel
>>> class MySERDE(SERDE):
...     def __init__(self):
...         super().__init__()
...     def serialize(self, data):
...         serialized_data = 1
...         return serialized_data
...     def deserialize(self, data):
...         deserialized_data = 2
...         return deserialized_data
>>> class Toy:
...     def predict(self, x):
...         return x ** 2
>>> generic_model = GenericModel(
...    estimator=Toy(),
...    artifact_dir=tempfile.mkdtemp(),
...    model_save_serializer=MySERDE()
... )
>>> # Or register the serializer after creating model instance.
>>> generic_model.set_model_save_serializer(MySERDE())
Parameters:

model_save_serializer ((ads.model.SERDE or str)) – name of the serializer or instance of SERDE.

summary_status() DataFrame#

A summary table of the current status.

Returns:

The summary table of the current status.

Return type:

pd.DataFrame

update(**kwargs) GenericModel#

Updates model metadata in the Model Catalog. Updates only metadata information. The model artifacts are immutable and cannot be updated.

Parameters:

kwargs

display_name: (str, optional). Defaults to None.

The name of the model.

description: (str, optional). Defaults to None.

The description of the model.

freeform_tags: Dict(str, str). Defaults to None.

Freeform tags for the model.

defined_tags: (Dict(str, dict(str, object)), optional). Defaults to None.

Defined tags for the model.

version_label: (str, optional). Defaults to None.

The model version label.

Additional kwargs arguments. Can be any attribute that oci.data_science.models.Model accepts.

Returns:

An instance of GenericModel (self).

Return type:

GenericModel

Raises:

ValueError – If the model is not saved to the Model Catalog.
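
For illustration, a minimal sketch of a metadata-only update after the model has been saved to the Model Catalog (the display name, description, and tags are placeholders):

>>> model.update(
...     display_name="my-model-v2",
...     description="Retrained with additional data",
...     freeform_tags={"project": "demo"},
... )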

classmethod update_deployment(model_deployment_id: str | None = None, properties: ModelDeploymentProperties | dict | None = None, wait_for_completion: bool = True, max_wait_time: int = 1200, poll_interval: int = 10, **kwargs) ModelDeployment#

Updates a model deployment.

You can update model_deployment_configuration_details and change instance_shape and model_id when the model deployment is in the ACTIVE lifecycle state. The bandwidth_mbps or instance_count can only be updated while the model deployment is in the INACTIVE state. Changes to the bandwidth_mbps or instance_count will take effect the next time the ActivateModelDeployment action is invoked on the model deployment resource.

Examples

>>> # Update access log id, freeform tags and description for the model deployment
>>> model.update_deployment(
...     access_log={
...         "log_id": "<log_ocid>"
...     },
...     description="Description for Custom Model",
...     freeform_tags={"key": "value"},
... )
Parameters:
  • model_deployment_id (str.) – The model deployment OCID. Defaults to None. If the method is called on the instance level, then self.model_deployment.model_deployment_id will be used.

  • properties (ModelDeploymentProperties or dict) – The properties for updating the deployment.

  • wait_for_completion (bool) – Flag set for whether to wait for deployment to complete before proceeding. Defaults to True.

  • max_wait_time (int) – Maximum amount of time to wait in seconds (Defaults to 1200). Negative implies infinite wait time.

  • poll_interval (int) – Poll interval in seconds (Defaults to 10).

  • kwargs

    auth: (Dict, optional). Defaults to None.

    The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.

    display_name: (str)

    Model deployment display name

    description: (str)

    Model deployment description

    freeform_tags: (dict)

    Model deployment freeform tags

    defined_tags: (dict)

    Model deployment defined tags

    Additional kwargs arguments. Can be any attribute that ads.model.deployment.ModelDeploymentCondaRuntime, ads.model.deployment.ModelDeploymentContainerRuntime and ads.model.deployment.ModelDeploymentInfrastructure accepts.

Returns:

An instance of ModelDeployment class.

Return type:

ModelDeployment

update_summary_action(detail: str, action: str)#

Update the actions needed from the user in the summary table.

Parameters:
  • detail ((str)) – value of the detail in the details column of the summary status table. Used to locate which row to update.

  • action ((str)) – new action to be updated for the row specified by detail.

Return type:

None

update_summary_status(detail: str, status: str)#

Update the status in the summary table.

Parameters:
  • detail ((str)) – value of the detail in the details column of the summary status table. Used to locate which row to update.

  • status ((str)) – new status to be updated for the row specified by detail.

Return type:

None
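
A minimal sketch of adjusting the summary table, assuming the detail string below matches an existing row returned by summary_status() (the exact wording is a placeholder):

>>> model.update_summary_action(
...     detail="Generated score.py",
...     action="Edit score.py to load the custom serialized model",
... )
>>> model.update_summary_status(detail="Generated score.py", status="Done")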

upload_artifact(uri: str, auth: Dict | None = None, force_overwrite: bool | None = False, parallel_process_count: int = 9) None#

Uploads model artifacts to the provided uri. The artifacts will be zipped before uploading.

Parameters:
  • uri (str) –

    The destination location for the model artifacts, which can be a local path or OCI object storage URI. Examples:

    >>> upload_artifact(uri="/some/local/folder/")
    >>> upload_artifact(uri="oci://bucket@namespace/prefix/")
    

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.

  • force_overwrite (bool) – Overwrite the target if it already exists.

  • parallel_process_count ((int, optional)) – The number of worker processes to use in parallel for uploading individual parts of a multipart upload.
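
For illustration, a minimal sketch of pushing the zipped artifact to Object Storage (bucket name, namespace, and prefix are placeholders):

>>> model.upload_artifact(
...     uri="oci://<bucket_name>@<namespace>/artifacts/",
...     force_overwrite=True,
...     parallel_process_count=9,
... )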

verify(data: Any | None = None, reload_artifacts: bool = True, auto_serialize_data: bool = True, **kwargs) Dict[str, Any]#

Test if deployment works in local environment.

Examples

>>> uri = "https://github.com/pytorch/hub/raw/master/images/dog.jpg"
>>> prediction = model.verify(image=uri)['prediction']
>>> # examples on storage options
>>> prediction = model.verify(
...        image="oci://<bucket>@<tenancy>/myimage.png",
...        storage_options=ads.auth.default_signer()
... )['prediction']
Parameters:
  • data (Any) – Data used to test if deployment works in local environment.

  • reload_artifacts (bool. Defaults to True.) – Whether to reload artifacts or not.

  • auto_serialize_data (bool.) – Whether to auto serialize input data. Defaults to False for GenericModel, and True for other frameworks. data is required to be JSON serializable if auto_serialize_data=False. If auto_serialize_data is set to True, data will be serialized before sending to the model deployment endpoint.

  • kwargs

    content_type: str, used to indicate the media type of the resource.

    image: PIL.Image Object or uri for the image.

    A valid string path for image file can be local path, http(s), oci, s3, gs.

    storage_options: dict

    Passed to fsspec.open for a particular storage connection. Please see fsspec (https://filesystem-spec.readthedocs.io/en/latest/api.html#fsspec.open) for more details.

Returns:

A dictionary which contains prediction results.

Return type:

Dict

ads.model.framework.sklearn_model module#

class ads.model.framework.sklearn_model.SklearnModel(estimator: Callable, artifact_dir: str | None = None, properties: ModelProperties | None = None, auth: Dict | None = None, model_save_serializer: SERDE | None = 'joblib', model_input_serializer: SERDE | None = None, **kwargs)[source]#

Bases: FrameworkSpecificModel

SklearnModel class for estimators from sklearn framework.

algorithm#

The algorithm of the model.

Type:

str

artifact_dir#

Artifact directory to store the files needed for deployment.

Type:

str

auth#

Default authentication is set using the ads.set_auth API. To override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create an authentication signer to instantiate an IdentityClient object.

Type:

Dict

estimator#

A trained sklearn estimator/model using scikit-learn.

Type:

Callable

framework#

“scikit-learn”, the framework name of the model.

Type:

str

hyperparameter#

The hyperparameters of the estimator.

Type:

dict

metadata_custom#

The model custom metadata.

Type:

ModelCustomMetadata

metadata_provenance#

The model provenance metadata.

Type:

ModelProvenanceMetadata

metadata_taxonomy#

The model taxonomy metadata.

Type:

ModelTaxonomyMetadata

model_artifact#

This is built by calling prepare.

Type:

ModelArtifact

model_deployment#

A ModelDeployment instance.

Type:

ModelDeployment

model_file_name#

Name of the serialized model.

Type:

str

model_id#

The model ID.

Type:

str

properties#

ModelProperties object required to save and deploy model. For more details, check https://accelerated-data-science.readthedocs.io/en/latest/ads.model.html#module-ads.model.model_properties.

Type:

ModelProperties

runtime_info#

A RuntimeInfo instance.

Type:

RuntimeInfo

schema_input#

Schema describes the structure of the input data.

Type:

Schema

schema_output#

Schema describes the structure of the output data.

Type:

Schema

serialize#

Whether to serialize the model to pkl file by default. If False, you need to serialize the model manually, save it under artifact_dir and update the score.py manually.

Type:

bool

version#

The framework version of the model.

Type:

str

delete_deployment(...)#

Deletes the current model deployment.

deploy(..., \*\*kwargs)#

Deploys a model.

from_model_artifact(uri, model_file_name, artifact_dir, ..., \*\*kwargs)#

Loads model from the specified folder, or zip/tar archive.

from_model_catalog(model_id, model_file_name, artifact_dir, ..., \*\*kwargs)#

Loads model from model catalog.

introspect(...)#

Runs model introspection.

predict(data, ...)#

Returns prediction of input data run against the model deployment endpoint.

prepare(..., \*\*kwargs)#

Prepare and save the score.py, serialized model and runtime.yaml file.

reload(...)#

Reloads the model artifact files: score.py and the runtime.yaml.

save(..., \*\*kwargs)#

Saves model artifacts to the model catalog.

summary_status(...)#

Gets a summary table of the current status.

verify(data, ...)#

Tests if deployment works in local environment.

Examples

>>> import tempfile
>>> from sklearn.model_selection import train_test_split
>>> from ads.model.framework.sklearn_model import SklearnModel
>>> from sklearn.linear_model import LogisticRegression
>>> from sklearn.datasets import load_iris
>>> iris = load_iris()
>>> X, y = iris.data, iris.target
>>> X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)
>>> sklearn_estimator = LogisticRegression()
>>> sklearn_estimator.fit(X_train, y_train)
>>> sklearn_model = SklearnModel(estimator=sklearn_estimator,
... artifact_dir=tempfile.mkdtemp())
>>> sklearn_model.prepare(inference_conda_env="generalml_p37_cpu_v1", force_overwrite=True)
>>> sklearn_model.reload()
>>> sklearn_model.verify(X_test)
>>> sklearn_model.save()
>>> model_deployment = sklearn_model.deploy(wait_for_completion=False)
>>> sklearn_model.predict(X_test)

Initializes a SklearnModel instance.

Parameters:
  • estimator (Callable) – Sklearn Model

  • artifact_dir (str) – Directory where the generated artifact files are stored.

  • properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.

  • model_save_serializer ((SERDE or str, optional). Defaults to None.) – Instance of ads.model.SERDE. Used for serialize/deserialize model.

  • model_input_serializer ((SERDE, optional). Defaults to None.) – Instance of ads.model.SERDE. Used for serialize/deserialize data.

Returns:

SklearnModel instance.

Return type:

SklearnModel

Examples

>>> import tempfile
>>> from sklearn.model_selection import train_test_split
>>> from ads.model.framework.sklearn_model import SklearnModel
>>> from sklearn.linear_model import LogisticRegression
>>> from sklearn.datasets import load_iris
>>> iris = load_iris()
>>> X, y = iris.data, iris.target
>>> X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)
>>> sklearn_estimator = LogisticRegression()
>>> sklearn_estimator.fit(X_train, y_train)
>>> sklearn_model = SklearnModel(estimator=sklearn_estimator, artifact_dir=tempfile.mkdtemp())
>>> sklearn_model.prepare(inference_conda_env="dataexpl_p37_cpu_v3")
>>> sklearn_model.verify(X_test)
>>> sklearn_model.save()
>>> model_deployment = sklearn_model.deploy()
>>> sklearn_model.predict(X_test)
>>> sklearn_model.delete_deployment()
classmethod delete(model_id: str | None = None, delete_associated_model_deployment: bool | None = False, delete_model_artifact: bool | None = False, artifact_dir: str | None = None, **kwargs: Dict) None#

Deletes a model from Model Catalog.

Parameters:
  • model_id ((str, optional). Defaults to None.) – The model OCID to be deleted. If the method is called on the instance level, then self.model_id will be used.

  • delete_associated_model_deployment ((bool, optional). Defaults to False.) – Whether associated model deployments need to be deleted or not.

  • delete_model_artifact ((bool, optional). Defaults to False.) – Whether associated model artifacts need to be deleted or not.

  • artifact_dir ((str, optional). Defaults to None) – The local path to the model artifacts folder. If the method is called on the instance level, self.artifact_dir will be used by default.

Return type:

None

Raises:

ValueError – If model_id not provided.
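
A minimal sketch of removing a catalog entry together with its deployment and local artifacts (the OCID and path are placeholders):

>>> SklearnModel.delete(
...     model_id="ocid1.datasciencemodel.oc1..<unique_id>",
...     delete_associated_model_deployment=True,
...     delete_model_artifact=True,
...     artifact_dir="/tmp/model_artifact",
... )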

delete_deployment(wait_for_completion: bool = True) None#

Deletes the current deployment.

Parameters:

wait_for_completion ((bool, optional). Defaults to True.) – Whether to wait till completion.

Return type:

None

Raises:

ValueError – If there is no deployment attached yet.
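
For illustration, a minimal sketch (assuming the model was deployed from this instance):

>>> model.delete_deployment(wait_for_completion=True)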

deploy(wait_for_completion: bool | None = True, display_name: str | None = None, description: str | None = None, deployment_instance_shape: str | None = None, deployment_instance_subnet_id: str | None = None, deployment_instance_count: int | None = None, deployment_bandwidth_mbps: int | None = None, deployment_log_group_id: str | None = None, deployment_access_log_id: str | None = None, deployment_predict_log_id: str | None = None, deployment_memory_in_gbs: float | None = None, deployment_ocpus: float | None = None, deployment_image: str | None = None, **kwargs: Dict) ModelDeployment#

Deploys a model. The model needs to be saved to the model catalog at first. You can deploy the model on either conda or container runtime. The customized runtime allows you to bring your own service container. To deploy model on container runtime, make sure to build the container and push it to OCIR. For more information, see https://docs.oracle.com/en-us/iaas/data-science/using/mod-dep-byoc.htm.

Example

>>> # This is an example to deploy model on container runtime
>>> model = GenericModel(estimator=estimator, artifact_dir=tempfile.mkdtemp())
>>> model.summary_status()
>>> model.prepare(
...     model_file_name="toy_model.pkl",
...     ignore_conda_error=True, # set ignore_conda_error=True for container runtime
...     force_overwrite=True
... )
>>> model.verify()
>>> model.save()
>>> model.deploy(
...     deployment_image="iad.ocir.io/<namespace>/<image>:<tag>",
...     entrypoint=["python", "/opt/ds/model/deployed_model/api.py"],
...     server_port=5000,
...     health_check_port=5000,
...     environment_variables={"key":"value"}
... )
Parameters:
  • wait_for_completion ((bool, optional). Defaults to True.) – Flag set for whether to wait for deployment to complete before proceeding.

  • display_name ((str, optional). Defaults to None.) – The name of the model. If a display_name is not provided in kwargs, a randomly generated easy to remember name with timestamp will be generated, like ‘strange-spider-2022-08-17-23:55.02’.

  • description ((str, optional). Defaults to None.) – The description of the model.

  • deployment_instance_shape ((str, optional). Default to VM.Standard2.1.) – The shape of the instance used for deployment.

  • deployment_instance_subnet_id ((str, optional). Default to None.) – The subnet id of the instance used for deployment.

  • deployment_instance_count ((int, optional). Defaults to 1.) – The number of instances used for deployment.

  • deployment_bandwidth_mbps ((int, optional). Defaults to 10.) – The bandwidth limit on the load balancer in Mbps.

  • deployment_memory_in_gbs ((float, optional). Defaults to None.) – Specifies the size of the memory of the model deployment instance in GBs.

  • deployment_ocpus ((float, optional). Defaults to None.) – Specifies the ocpus count of the model deployment instance.

  • deployment_log_group_id ((str, optional). Defaults to None.) – The oci logging group id. The access log and predict log share the same log group.

  • deployment_access_log_id ((str, optional). Defaults to None.) – The access log OCID for the access logs. https://docs.oracle.com/en-us/iaas/data-science/using/model_dep_using_logging.htm

  • deployment_predict_log_id ((str, optional). Defaults to None.) – The predict log OCID for the predict logs. https://docs.oracle.com/en-us/iaas/data-science/using/model_dep_using_logging.htm

  • deployment_image ((str, optional). Defaults to None.) – The OCIR path of docker container image. Required for deploying model on container runtime.

  • kwargs

    project_id: (str, optional).

    Project OCID. If not specified, the value will be taken from the environment variables.

    compartment_id: (str, optional).

    Compartment OCID. If not specified, the value will be taken from the environment variables.

    max_wait_time: (int, optional). Defaults to 1200 seconds.

    Maximum amount of time to wait in seconds. Negative implies infinite wait time.

    poll_interval: (int, optional). Defaults to 10 seconds.

    Poll interval in seconds.

    freeform_tags: (Dict[str, str], optional). Defaults to None.

    Freeform tags of the model deployment.

    defined_tags: (Dict[str, dict[str, object]], optional). Defaults to None.

    Defined tags of the model deployment.

    image_digest: (str, optional). Defaults to None.

    The digest of docker container image.

    cmd: (List, optional). Defaults to empty.

    The command line arguments for running docker container image.

    entrypoint: (List, optional). Defaults to empty.

    The entrypoint for running docker container image.

    server_port: (int, optional). Defaults to 8080.

    The server port for docker container image.

    health_check_port: (int, optional). Defaults to 8080.

    The health check port for docker container image.

    deployment_mode: (str, optional). Defaults to HTTPS_ONLY.

    The deployment mode. Allowed values are: HTTPS_ONLY and STREAM_ONLY.

    input_stream_ids: (List, optional). Defaults to empty.

    The input stream ids. Required for STREAM_ONLY mode.

    output_stream_ids: (List, optional). Defaults to empty.

    The output stream ids. Required for STREAM_ONLY mode.

    environment_variables: (Dict, optional). Defaults to empty.

    The environment variables for model deployment.

    Also can be any keyword argument for initializing the ads.model.deployment.ModelDeploymentProperties. See ads.model.deployment.ModelDeploymentProperties() for details.

Returns:

The ModelDeployment instance.

Return type:

ModelDeployment

Raises:

ValueError – If model_id is not specified.

download_artifact(artifact_dir: str | None = None, auth: Dict | None = None, force_overwrite: bool | None = False, bucket_uri: str | None = None, remove_existing_artifact: bool | None = True, **kwargs) GenericModel#

Downloads model artifacts from the model catalog.

Parameters:
  • artifact_dir ((str, optional). Defaults to None.) – The artifact directory to store the files needed for deployment. Will be created if not exists.

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.

  • force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files or not.

  • bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for downloading large artifacts (size greater than 2 GB). Example: oci://<bucket_name>@<namespace>/prefix/.

  • remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to object storage bucket need to be removed or not.

Returns:

An instance of GenericModel class.

Return type:

Self

Raises:

ValueError – If model_id is not available in the GenericModel object.
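
A minimal sketch of pulling the catalog artifact back into a local directory (the path is a placeholder):

>>> model.download_artifact(
...     artifact_dir="/tmp/model_artifact",
...     force_overwrite=True,
... )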

evaluate(X: _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | str | bytes | _NestedSequence[bool | int | float | complex | str | bytes], y: _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | str | bytes | _NestedSequence[bool | int | float | complex | str | bytes], y_pred: _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | str | bytes | _NestedSequence[bool | int | float | complex | str | bytes] | None = None, y_score: _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | str | bytes | _NestedSequence[bool | int | float | complex | str | bytes] | None = None, X_train: _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | str | bytes | _NestedSequence[bool | int | float | complex | str | bytes] | None = None, y_train: _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | str | bytes | _NestedSequence[bool | int | float | complex | str | bytes] | None = None, classes: List | None = None, positive_class: str | None = None, legend_labels: dict | None = None, perfect: bool = True, filename: str | None = None, use_case_type: str | None = None)#

Creates an ads evaluation report.

Parameters:
  • X (DataFrame-like) – The data used to make a prediction. Can be set to None if y_pred is given (and y_score for a more thorough analysis).

  • y (array-like) – The true values corresponding to the input data

  • y_pred (array-like, optional) – The predictions from each model in the same order as the models

  • y_score (array-like, optional) – The predict_probas from each model in the same order as the models

  • X_train (DataFrame-like, optional) – The data used to train the model

  • y_train (array-like, optional) – The true values corresponding to the input training data

  • classes (List or None, optional) – A List of the possible labels for y, when evaluating a classification use case

  • positive_class (str or int, optional) – The class to report metrics for in a binary dataset. If the target classes are True or False, positive_class will be set to True by default. If the dataset is multiclass or multilabel, this will be ignored.

  • legend_labels (dict, optional) – List of legend labels. Defaults to None. If legend_labels is not specified, class names will be used for plots.

  • use_case_type (str, optional) – The type of problem this model is solving. This can be set during prepare(). Examples: “binary_classification”, “regression”, “multinomial_classification” Full list of supported types can be found here: ads.common.model_metadata.UseCaseType

  • filename (str, optional) – If filename is given, the html report will be saved to the location specified.

Examples

>>> import tempfile
>>> from ads.evaluations.evaluator import Evaluator
>>> from sklearn.tree import DecisionTreeClassifier
>>> from sklearn.datasets import make_classification
>>> from sklearn.model_selection import train_test_split
>>> from ads.model.framework.sklearn_model import SklearnModel
>>> from ads.common.model_metadata import UseCaseType
>>>
>>> X, y = make_classification(n_samples=1000)
>>> X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
>>> est = DecisionTreeClassifier().fit(X_train, y_train)
>>> model = SklearnModel(estimator=est, artifact_dir=tempfile.mkdtemp())
>>> model.prepare(
...     inference_conda_env="generalml_p38_cpu_v1",
...     training_conda_env="generalml_p38_cpu_v1",
...     X_sample=X_test,
...     y_sample=y_test,
...     use_case_type=UseCaseType.BINARY_CLASSIFICATION,
... )
>>> model.evaluate(X_test, y_test, filename="report.html")
classmethod from_id(ocid: str, model_file_name: str | None = None, artifact_dir: str | None = None, auth: Dict | None = None, force_overwrite: bool | None = False, properties: ModelProperties | Dict | None = None, bucket_uri: str | None = None, remove_existing_artifact: bool | None = True, ignore_conda_error: bool | None = False, download_artifact: bool | None = True, **kwargs) Self#

Loads model from model OCID or model deployment OCID.

Parameters:
  • ocid (str) – The model OCID or model deployment OCID.

  • model_file_name ((str, optional). Defaults to None.) – The name of the serialized model.

  • artifact_dir ((str, optional). Defaults to None.) – The artifact directory to store the files needed for deployment. Will be created if not exists.

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.

  • force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files or not.

  • properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.

  • bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for downloading large artifacts (size greater than 2 GB). Example: oci://<bucket_name>@<namespace>/prefix/.

  • remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to the object storage bucket need to be removed or not.

  • ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore error when collecting conda information.

  • download_artifact ((bool, optional). Defaults to True.) – Whether to download the model pickle or checkpoints

  • kwargs

    compartment_id: (str, optional).

    Compartment OCID. If not specified, the value will be taken from the environment variables.

    timeout: (int, optional). Defaults to 10 seconds.

    The connection timeout in seconds for the client.

Returns:

An instance of GenericModel class.

Return type:

Self
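
For illustration, a minimal sketch of restoring a model from either a model OCID or a model deployment OCID (the OCID is a placeholder):

>>> import tempfile
>>> model = SklearnModel.from_id(
...     ocid="ocid1.datasciencemodel.oc1..<unique_id>",
...     artifact_dir=tempfile.mkdtemp(),
...     force_overwrite=True,
... )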

classmethod from_model_artifact(uri: str, model_file_name: str | None = None, artifact_dir: str | None = None, auth: Dict | None = None, force_overwrite: bool | None = False, properties: ModelProperties | None = None, ignore_conda_error: bool | None = False, **kwargs: dict) Self#

Loads model from a folder, or zip/tar archive.

Parameters:
  • uri (str) – The folder path, ZIP file path, or TAR file path. It could contain a serialized model (required) as well as any files needed for deployment, including: serialized model, runtime.yaml, score.py, etc. The content of the folder will be copied to the artifact_dir folder.

  • model_file_name ((str, optional). Defaults to None.) – The serialized model file name. Will be extracted from artifacts if not provided.

  • artifact_dir ((str, optional). Defaults to None.) – The artifact directory to store the files needed for deployment. Will be created if not exists.

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.

  • force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files or not.

  • properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.

  • ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore error when collecting conda information.

Returns:

An instance of GenericModel class.

Return type:

Self

Raises:

ValueError – If model_file_name not provided.
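
A minimal sketch of loading from a local artifact folder (the paths and the serialized file name are placeholders):

>>> import tempfile
>>> model = SklearnModel.from_model_artifact(
...     uri="/path/to/exported_artifact/",
...     model_file_name="model.joblib",
...     artifact_dir=tempfile.mkdtemp(),
...     force_overwrite=True,
... )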

classmethod from_model_catalog(model_id: str, model_file_name: str | None = None, artifact_dir: str | None = None, auth: Dict | None = None, force_overwrite: bool | None = False, properties: ModelProperties | Dict | None = None, bucket_uri: str | None = None, remove_existing_artifact: bool | None = True, ignore_conda_error: bool | None = False, download_artifact: bool | None = True, **kwargs) Self#

Loads model from model catalog.

Parameters:
  • model_id (str) – The model OCID.

  • model_file_name ((str, optional). Defaults to None.) – The name of the serialized model.

  • artifact_dir ((str, optional). Defaults to None.) – The artifact directory to store the files needed for deployment. Will be created if not exists.

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.

  • force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files or not.

  • properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.

  • bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for downloading large artifacts (size greater than 2 GB). Example: oci://<bucket_name>@<namespace>/prefix/.

  • remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to the object storage bucket need to be removed or not.

  • ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore error when collecting conda information.

  • download_artifact ((bool, optional). Defaults to True.) – Whether to download the model pickle or checkpoints

  • kwargs

    compartment_id: (str, optional).

    Compartment OCID. If not specified, the value will be taken from the environment variables.

    timeout: (int, optional). Defaults to 10 seconds.

    The connection timeout in seconds for the client.

    region: (str, optional). Defaults to None.

    The destination Object Storage bucket region. By default the value will be extracted from the OCI_REGION_METADATA environment variables.

Returns:

An instance of GenericModel class.

Return type:

Self
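
For illustration, a minimal sketch (the model OCID and bucket are placeholders; bucket_uri is only needed for artifacts larger than 2 GB):

>>> import tempfile
>>> model = SklearnModel.from_model_catalog(
...     model_id="ocid1.datasciencemodel.oc1..<unique_id>",
...     artifact_dir=tempfile.mkdtemp(),
...     bucket_uri="oci://<bucket_name>@<namespace>/prefix/",
... )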

classmethod from_model_deployment(model_deployment_id: str, model_file_name: str | None = None, artifact_dir: str | None = None, auth: Dict | None = None, force_overwrite: bool | None = False, properties: ModelProperties | Dict | None = None, bucket_uri: str | None = None, remove_existing_artifact: bool | None = True, ignore_conda_error: bool | None = False, download_artifact: bool | None = True, **kwargs) Self#

Loads model from model deployment.

Parameters:
  • model_deployment_id (str) – The model deployment OCID.

  • model_file_name ((str, optional). Defaults to None.) – The name of the serialized model.

  • artifact_dir ((str, optional). Defaults to None.) – The artifact directory to store the files needed for deployment. Will be created if not exists.

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.

  • force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files or not.

  • properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.

  • bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for downloading large artifacts (size greater than 2 GB). Example: oci://<bucket_name>@<namespace>/prefix/.

  • remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to the object storage bucket need to be removed or not.

  • ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore error when collecting conda information.

  • download_artifact ((bool, optional). Defaults to True.) – Whether to download the model pickle or checkpoints

  • kwargs

    compartment_id: (str, optional).

    Compartment OCID. If not specified, the value will be taken from the environment variables.

    timeout: (int, optional). Defaults to 10 seconds.

    The connection timeout in seconds for the client.

    region: (str, optional). Defaults to None.

    The destination Object Storage bucket region. By default the value will be extracted from the OCI_REGION_METADATA environment variables.

Returns:

An instance of GenericModel class.

Return type:

Self
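
A minimal sketch of recreating the model object from a running deployment (the deployment OCID is a placeholder):

>>> import tempfile
>>> model = SklearnModel.from_model_deployment(
...     model_deployment_id="ocid1.datasciencemodeldeployment.oc1..<unique_id>",
...     artifact_dir=tempfile.mkdtemp(),
... )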

get_data_serializer()#

Gets data serializer.

Returns:

object

Return type:

ads.model.Serializer object.

get_model_serializer()#

Gets model serializer.

introspect() DataFrame#

Conducts introspection.

Returns:

A pandas DataFrame which contains the introspection results.

Return type:

pandas.DataFrame

property metadata_custom#
property metadata_provenance#
property metadata_taxonomy#
property model_deployment_id#
property model_id#
model_input_serializer_type#

alias of ModelInputSerializerType

model_save_serializer_type#

alias of SklearnModelSerializerType

populate_metadata(use_case_type: str | None = None, data_sample: ADSData | None = None, X_sample: list | tuple | DataFrame | Series | ndarray | None = None, y_sample: list | tuple | DataFrame | Series | ndarray | None = None, training_script_path: str | None = None, training_id: str | None = None, ignore_pending_changes: bool = True, max_col_num: int = 2000, ignore_conda_error: bool = False, **kwargs)#

Populates input schema and output schema. If the schema exceeds the limit of 32kb, it is saved as JSON files to the artifact directory.

Parameters:
  • use_case_type ((str, optional). Defaults to None.) – The use case type of the model.

  • data_sample ((ADSData, optional). Defaults to None.) – A sample of the data that will be used to generate input_schema and output_schema.

  • X_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of input data that will be used to generate input schema.

  • y_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of output data that will be used to generate output schema.

  • training_script_path (str. Defaults to None.) – Training script path.

  • training_id ((str, optional). Defaults to None.) – The training model OCID.

  • ignore_pending_changes (bool. Defaults to False.) – Ignore the pending changes in git.

  • max_col_num ((int, optional). Defaults to utils.DATA_SCHEMA_MAX_COL_NUM.) – The maximum number of columns allowed in auto generated schema.

Returns:

Nothing.

Return type:

None

populate_schema(data_sample: ADSData | None = None, X_sample: List | Tuple | DataFrame | Series | ndarray | None = None, y_sample: List | Tuple | DataFrame | Series | ndarray | None = None, max_col_num: int = 2000, **kwargs)#

Populate input and output schemas. If the schema exceeds the limit of 32kb, it is saved as JSON files to the artifact dir.

Parameters:
  • data_sample (ADSData) – A sample of the data that will be used to generate input_schema and output_schema.

  • X_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]) – A sample of input data that will be used to generate the input schema.

  • y_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]) – A sample of output data that will be used to generate the output schema.

  • max_col_num ((int, optional). Defaults to utils.DATA_SCHEMA_MAX_COL_NUM.) – The maximum number of columns allowed in auto generated schema.
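
For illustration, a minimal sketch that reuses the X_test and y_test samples from the earlier sklearn example to populate both schemas:

>>> model.populate_schema(X_sample=X_test, y_sample=y_test)
>>> model.schema_input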

predict(data: Any | None = None, auto_serialize_data: bool = True, **kwargs) Dict[str, Any]#

Returns prediction of input data run against the model deployment endpoint.

Examples

>>> uri = "https://github.com/pytorch/hub/raw/master/images/dog.jpg"
>>> prediction = model.predict(image=uri)['prediction']
>>> # examples on storage options
>>> prediction = model.predict(
...        image="oci://<bucket>@<tenancy>/myimage.png",
...        storage_options=ads.auth.default_signer()
... )['prediction']
Parameters:
  • data (Any) – Data for the prediction for onnx models; for the local serialization method, data can be any of the data types that each framework supports.

  • auto_serialize_data (bool.) – Whether to auto serialize input data. Defaults to False for GenericModel, and True for other frameworks. data is required to be JSON serializable if auto_serialize_data=False. If auto_serialize_data is set to True, data will be serialized before sending to the model deployment endpoint.

  • kwargs

    content_type: str, used to indicate the media type of the resource.

    image: PIL.Image Object or uri for the image.

    A valid string path for image file can be local path, http(s), oci, s3, gs.

    storage_options: dict

    Passed to fsspec.open for a particular storage connection. Please see fsspec (https://filesystem-spec.readthedocs.io/en/latest/api.html#fsspec.open) for more details.

Returns:

Dictionary with the predicted values.

Return type:

Dict[str, Any]

Raises:

prepare(inference_conda_env: str | None = None, inference_python_version: str | None = None, training_conda_env: str | None = None, training_python_version: str | None = None, model_file_name: str | None = None, as_onnx: bool = False, initial_types: List[Tuple] | None = None, force_overwrite: bool = False, namespace: str = 'id19sfcrra6z', use_case_type: str | None = None, X_sample: list | tuple | DataFrame | Series | ndarray | None = None, y_sample: list | tuple | DataFrame | Series | ndarray | None = None, training_script_path: str | None = None, training_id: str | None = None, ignore_pending_changes: bool = True, max_col_num: int = 2000, ignore_conda_error: bool = False, score_py_uri: str | None = None, **kwargs: Dict) GenericModel#

Prepare and save the score.py, serialized model and runtime.yaml file.

Parameters:
  • inference_conda_env ((str, optional). Defaults to None.) – Can be either slug or object storage path of the conda pack. You can only pass in slugs if the conda pack is a service pack.

  • inference_python_version ((str, optional). Defaults to None.) – Python version which will be used in deployment.

  • training_conda_env ((str, optional). Defaults to None.) – Can be either slug or object storage path of the conda pack. You can only pass in slugs if the conda pack is a service pack. If training_conda_env is not provided, it will default to the value of inference_conda_env.

  • training_python_version ((str, optional). Defaults to None.) – Python version used during training.

  • model_file_name ((str, optional). Defaults to None.) – Name of the serialized model. Will be auto generated if not provided.

  • as_onnx ((bool, optional). Defaults to False.) – Whether to serialize as onnx model.

  • initial_types ((list[Tuple], optional).) – Defaults to None. Only used for SklearnModel, LightGBMModel and XGBoostModel. Each element is a tuple of a variable name and a type. Check this link http://onnx.ai/sklearn-onnx/api_summary.html#id2 for more explanation and examples for initial_types.

  • force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files.

  • namespace ((str, optional).) – Namespace of region. This is used for identifying which region the service pack is from when you pass a slug to inference_conda_env and training_conda_env.

  • use_case_type (str) – The use case type of the model. Use it through the UseCaseType class or a string provided in UseCaseType. For example, use_case_type=UseCaseType.BINARY_CLASSIFICATION or use_case_type=”binary_classification”. Check with the UseCaseType class to see all supported types.

  • X_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of input data that will be used to generate input schema.

  • y_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of output data that will be used to generate output schema.

  • training_script_path (str. Defaults to None.) – Training script path.

  • training_id ((str, optional). Defaults to value from environment variables.) – The training OCID for model. Can be notebook session or job OCID.

  • ignore_pending_changes (bool. Defaults to False.) – whether to ignore the pending changes in the git.

  • max_col_num ((int, optional). Defaults to utils.DATA_SCHEMA_MAX_COL_NUM.) – Do not generate the input schema if the input has more than this number of features(columns).

  • ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore error when collecting conda information.

  • score_py_uri ((str, optional). Defaults to None.) – The uri of the customized score.py, which can be a local path or OCI object storage URI. When provided with this attribute, the score.py will not be auto generated, and the provided score.py will be added into artifact_dir.

  • kwargs

    impute_values: (dict, optional).

    The dictionary where the key is the column index(or names is accepted for pandas dataframe) and the value is the impute value for the corresponding column.

Raises:
  • FileExistsError – If files already exist but force_overwrite is False.

  • ValueError – If inference_python_version is not provided and cannot be found through the manifest file.

Returns:

An instance of GenericModel class.

Return type:

GenericModel
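
For illustration, a minimal sketch of preparing a sklearn estimator as an ONNX artifact; the conda slug and the feature count in initial_types are placeholders that depend on your environment and training data:

>>> from skl2onnx.common.data_types import FloatTensorType
>>> sklearn_model.prepare(
...     inference_conda_env="generalml_p38_cpu_v1",
...     as_onnx=True,
...     initial_types=[("input", FloatTensorType([None, 4]))],
...     force_overwrite=True,
... )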

prepare_save_deploy(inference_conda_env: str | None = None, inference_python_version: str | None = None, training_conda_env: str | None = None, training_python_version: str | None = None, model_file_name: str | None = None, as_onnx: bool = False, initial_types: List[Tuple] | None = None, force_overwrite: bool = False, namespace: str = 'id19sfcrra6z', use_case_type: str | None = None, X_sample: list | tuple | DataFrame | Series | ndarray | None = None, y_sample: list | tuple | DataFrame | Series | ndarray | None = None, training_script_path: str | None = None, training_id: str | None = None, ignore_pending_changes: bool = True, max_col_num: int = 2000, ignore_conda_error: bool = False, model_display_name: str | None = None, model_description: str | None = None, model_freeform_tags: dict | None = None, model_defined_tags: dict | None = None, ignore_introspection: bool | None = False, wait_for_completion: bool | None = True, deployment_display_name: str | None = None, deployment_description: str | None = None, deployment_instance_shape: str | None = None, deployment_instance_subnet_id: str | None = None, deployment_instance_count: int | None = None, deployment_bandwidth_mbps: int | None = None, deployment_log_group_id: str | None = None, deployment_access_log_id: str | None = None, deployment_predict_log_id: str | None = None, deployment_memory_in_gbs: float | None = None, deployment_ocpus: float | None = None, deployment_image: str | None = None, bucket_uri: str | None = None, overwrite_existing_artifact: bool | None = True, remove_existing_artifact: bool | None = True, model_version_set: str | ModelVersionSet | None = None, version_label: str | None = None, model_by_reference: bool | None = False, **kwargs: Dict) ModelDeployment#

Shortcut for prepare, save and deploy steps.

Parameters:
  • inference_conda_env ((str, optional). Defaults to None.) – Can be either slug or object storage path of the conda pack. You can only pass in slugs if the conda pack is a service pack.

  • inference_python_version ((str, optional). Defaults to None.) – Python version which will be used in deployment.

  • training_conda_env ((str, optional). Defaults to None.) – Can be either slug or object storage path of the conda pack. You can only pass in slugs if the conda pack is a service pack. If training_conda_env is not provided, it will default to the value of inference_conda_env.

  • training_python_version ((str, optional). Defaults to None.) – Python version used during training.

  • model_file_name ((str, optional). Defaults to None.) – Name of the serialized model.

  • as_onnx ((bool, optional). Defaults to False.) – Whether to serialize as onnx model.

  • initial_types ((list[Tuple], optional).) – Defaults to None. Only used for SklearnModel, LightGBMModel and XGBoostModel. Each element is a tuple of a variable name and a type. Check this link http://onnx.ai/sklearn-onnx/api_summary.html#id2 for more explanation and examples for initial_types.

  • force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files.

  • namespace ((str, optional).) – Namespace of region. This is used for identifying which region the service pack is from when you pass a slug to inference_conda_env and training_conda_env.

  • use_case_type (str) – The use case type of the model. Use it through the UseCaseType class or a string provided in UseCaseType. For example, use_case_type=UseCaseType.BINARY_CLASSIFICATION or use_case_type="binary_classification". Check the UseCaseType class to see all supported types.

  • X_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of input data that will be used to generate input schema.

  • y_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of output data that will be used to generate output schema.

  • training_script_path (str. Defaults to None.) – Training script path.

  • training_id ((str, optional). Defaults to value from environment variables.) – The training OCID for model. Can be notebook session or job OCID.

  • ignore_pending_changes (bool. Defaults to True.) – Whether to ignore the pending changes in git.

  • max_col_num ((int, optional). Defaults to utils.DATA_SCHEMA_MAX_COL_NUM.) – Do not generate the input schema if the input has more than this number of features (columns).

  • ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore error when collecting conda information.

  • model_display_name ((str, optional). Defaults to None.) – The name of the model. If a model_display_name is not provided in kwargs, a randomly generated easy to remember name with timestamp will be generated, like ‘strange-spider-2022-08-17-23:55.02’.

  • model_description ((str, optional). Defaults to None.) – The description of the model.

  • model_freeform_tags (Dict(str, str), Defaults to None.) – Freeform tags for the model.

  • model_defined_tags ((Dict(str, dict(str, object)), optional). Defaults to None.) – Defined tags for the model.

  • ignore_introspection ((bool, optional). Defaults to False.) – Determines whether to ignore the result of model introspection. If set to True, the save will ignore all model introspection errors.

  • wait_for_completion ((bool, optional). Defaults to True.) – Flag set for whether to wait for deployment to complete before proceeding.

  • deployment_display_name ((str, optional). Defaults to None.) – The name of the model deployment. If a deployment_display_name is not provided in kwargs, a randomly generated easy to remember name with timestamp will be generated, like ‘strange-spider-2022-08-17-23:55.02’.

  • deployment_description ((str, optional). Defaults to None.) – The description of the model deployment.

  • deployment_instance_shape ((str, optional). Default to VM.Standard2.1.) – The shape of the instance used for deployment.

  • deployment_instance_subnet_id ((str, optional). Default to None.) – The subnet id of the instance used for deployment.

  • deployment_instance_count ((int, optional). Defaults to 1.) – The number of instances used for deployment.

  • deployment_bandwidth_mbps ((int, optional). Defaults to 10.) – The bandwidth limit on the load balancer in Mbps.

  • deployment_log_group_id ((str, optional). Defaults to None.) – The oci logging group id. The access log and predict log share the same log group.

  • deployment_access_log_id ((str, optional). Defaults to None.) – The access log OCID for the access logs. https://docs.oracle.com/en-us/iaas/data-science/using/model_dep_using_logging.htm

  • deployment_predict_log_id ((str, optional). Defaults to None.) – The predict log OCID for the predict logs. https://docs.oracle.com/en-us/iaas/data-science/using/model_dep_using_logging.htm

  • deployment_memory_in_gbs ((float, optional). Defaults to None.) – Specifies the size of the memory of the model deployment instance in GBs.

  • deployment_ocpus ((float, optional). Defaults to None.) – Specifies the ocpus count of the model deployment instance.

  • deployment_image ((str, optional). Defaults to None.) – The OCIR path of docker container image. Required for deploying model on container runtime.

  • bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for downloading large artifacts whose size is greater than 2GB. Example: oci://<bucket_name>@<namespace>/prefix/.

  • overwrite_existing_artifact ((bool, optional). Defaults to True.) – Overwrite target bucket artifact if exists.

  • remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to the object storage bucket need to be removed or not.

  • model_version_set ((Union[str, ModelVersionSet], optional). Defaults to None.) – The Model version set OCID, or name, or ModelVersionSet instance.

  • version_label ((str, optional). Defaults to None.) – The model version label.

  • model_by_reference ((bool, optional)) – Whether model artifact is made available to Model Store by reference.

  • kwargs

    impute_values: (dict, optional).

    The dictionary where the key is the column index (or the column name for a pandas DataFrame) and the value is the impute value for the corresponding column.

    project_id: (str, optional).

    Project OCID. If not specified, the value will be taken either from the environment variables or model properties.

    compartment_id: (str, optional).

    Compartment OCID. If not specified, the value will be taken either from the environment variables or model properties.

    image_digest: (str, optional). Defaults to None.

    The digest of docker container image.

    cmd: (List, optional). Defaults to empty.

    The command line arguments for running docker container image.

    entrypoint: (List, optional). Defaults to empty.

    The entrypoint for running docker container image.

    server_port: (int, optional). Defaults to 8080.

    The server port for docker container image.

    health_check_port: (int, optional). Defaults to 8080.

    The health check port for docker container image.

    deployment_mode: (str, optional). Defaults to HTTPS_ONLY.

    The deployment mode. Allowed values are: HTTPS_ONLY and STREAM_ONLY.

    input_stream_ids: (List, optional). Defaults to empty.

    The input stream ids. Required for STREAM_ONLY mode.

    output_stream_ids: (List, optional). Defaults to empty.

    The output stream ids. Required for STREAM_ONLY mode.

    environment_variables: (Dict, optional). Defaults to empty.

    The environment variables for model deployment.

    timeout: (int, optional). Defaults to 10 seconds.

    The connection timeout in seconds for the client.

    max_wait_time: (int, optional). Defaults to 1200 seconds.

    Maximum amount of time to wait in seconds. Negative implies infinite wait time.

    poll_interval: (int, optional). Defaults to 10 seconds.

    Poll interval in seconds.

    freeform_tags: (Dict[str, str], optional). Defaults to None.

    Freeform tags of the model deployment.

    defined_tags: (Dict[str, dict[str, object]], optional). Defaults to None.

    Defined tags of the model deployment.

    region: (str, optional). Defaults to None.

    The destination Object Storage bucket region. By default the value will be extracted from the OCI_REGION_METADATA environment variables.

    Also can be any keyword argument for initializing the ads.model.deployment.ModelDeploymentProperties. See ads.model.deployment.ModelDeploymentProperties() for details.

Returns:

The ModelDeployment instance.

Return type:

ModelDeployment

Raises:
  • FileExistsError – If files already exist but force_overwrite is False.

  • ValueError – If inference_python_version is not provided and cannot be found through the manifest file.
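
Example (a minimal sketch of the shortcut; the conda slug, shape, and display names below are placeholders):

>>> model_deployment = model.prepare_save_deploy(
...     inference_conda_env="generalml_p38_cpu_v1",
...     model_display_name="my-model",
...     deployment_display_name="my-model-deployment",
...     deployment_instance_shape="VM.Standard2.1",
...     deployment_instance_count=1,
... )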

reload() GenericModel#

Reloads the model artifact files: score.py and the runtime.yaml.

Returns:

An instance of GenericModel class.

Return type:

GenericModel

reload_runtime_info() None#

Reloads the model artifact file: runtime.yaml.

Returns:

Nothing.

Return type:

None

restart_deployment(max_wait_time: int = 1200, poll_interval: int = 10) ModelDeployment#

Restarts the current deployment.

Parameters:
  • max_wait_time ((int, optional). Defaults to 1200 seconds.) – Maximum amount of time to wait for activation or deactivation, in seconds. The total amount of time to wait for restarting the deployment is twice this value. Negative implies infinite wait time.

  • poll_interval ((int, optional). Defaults to 10 seconds.) – Poll interval in seconds.

Returns:

The ModelDeployment instance.

Return type:

ModelDeployment
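
Example (a minimal sketch; the wait and poll values below are illustrative only):

>>> model.restart_deployment(
...     max_wait_time=600,
...     poll_interval=10,
... )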

save(bucket_uri: str | None = None, defined_tags: dict | None = None, description: str | None = None, display_name: str | None = None, featurestore_dataset=None, freeform_tags: dict | None = None, ignore_introspection: bool | None = False, model_version_set: str | ModelVersionSet | None = None, overwrite_existing_artifact: bool | None = True, parallel_process_count: int = 9, remove_existing_artifact: bool | None = True, reload: bool | None = True, version_label: str | None = None, model_by_reference: bool | None = False, **kwargs) str#

Saves model artifacts to the model catalog.

Parameters:
  • display_name ((str, optional). Defaults to None.) – The name of the model. If a display_name is not provided in kwargs, a randomly generated, easy-to-remember name with a timestamp will be generated, like ‘strange-spider-2022-08-17-23:55.02’.

  • description ((str, optional). Defaults to None.) – The description of the model.

  • freeform_tags (Dict(str, str), Defaults to None.) – Freeform tags for the model.

  • defined_tags ((Dict(str, dict(str, object)), optional). Defaults to None.) – Defined tags for the model.

  • ignore_introspection ((bool, optional). Defaults to False.) – Determines whether to ignore the result of model introspection. If set to True, the save will ignore all model introspection errors.

  • bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for uploading large artifacts whose size is greater than 2GB. Example: oci://<bucket_name>@<namespace>/prefix/.

  • overwrite_existing_artifact ((bool, optional). Defaults to True.) – Overwrite target bucket artifact if exists.

  • remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to object storage bucket need to be removed or not.

  • model_version_set ((Union[str, ModelVersionSet], optional). Defaults to None.) – The model version set OCID, or model version set name, or ModelVersionSet instance.

  • version_label ((str, optional). Defaults to None.) – The model version label.

  • featurestore_dataset ((Dataset, optional).) – The feature store dataset

  • parallel_process_count ((int, optional)) – The number of worker processes to use in parallel for uploading individual parts of a multipart upload.

  • reload ((bool, optional)) – Whether to reload to check if load_model() works in score.py. Defaults to True.

  • model_by_reference ((bool, optional)) – Whether model artifact is made available to Model Store by reference.

  • kwargs

    project_id: (str, optional).

    Project OCID. If not specified, the value will be taken either from the environment variables or model properties.

    compartment_id: (str, optional).

    Compartment OCID. If not specified, the value will be taken either from the environment variables or model properties.

    region: (str, optional). Defaults to None.

    The destination Object Storage bucket region. By default the value will be extracted from the OCI_REGION_METADATA environment variables.

    timeout: (int, optional). Defaults to 10 seconds.

    The connection timeout in seconds for the client.

    Also can be any attribute that oci.data_science.models.Model accepts.

Raises:

RuntimeInfoInconsistencyError – When .runtime_info is not synced with the runtime.yaml file.

Returns:

The model id.

Return type:

str

Examples

Example for saving large model artifacts (>2GB):

>>> model.save(
...     bucket_uri="oci://my-bucket@my-tenancy/",
...     overwrite_existing_artifact=True,
...     remove_existing_artifact=True,
...     parallel_process_count=9,
... )

property schema_input#
property schema_output#
serialize_model(as_onnx: bool | None = False, initial_types: List[Tuple] | None = None, force_overwrite: bool | None = False, X_sample: Dict | str | List | Tuple | ndarray | Series | DataFrame | None = None, **kwargs: Dict)[source]#

Serialize and save the scikit-learn model using ONNX or a model-specific method.

Parameters:
  • as_onnx ((bool, optional). Defaults to False.) – If set as True, provide initial_types or X_sample to convert into ONNX.

  • initial_types ((List[Tuple], optional). Defaults to None.) – Each element is a tuple of a variable name and a type.

  • force_overwrite ((bool, optional). Defaults to False.) – If set as True, overwrite serialized model if exists.

  • X_sample (Union[Dict, str, List, np.ndarray, pd.core.series.Series, pd.core.frame.DataFrame]. Defaults to None.) – Contains model inputs such that model(X_sample) is a valid invocation of the model. Used to generate initial_types.

Returns:

Nothing.

Return type:

None
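
Example (a minimal sketch; assumes the model instance was already prepared so that model_file_name and artifact_dir are set):

>>> model.serialize_model(
...     as_onnx=False,
...     force_overwrite=True,
... )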

set_model_input_serializer(model_input_serializer: str | SERDE)#

Registers serializer used for serializing data passed in verify/predict.

Examples

>>> generic_model.set_model_input_serializer(GenericModel.model_input_serializer_type.CLOUDPICKLE)
>>> # Register serializer by passing the name of it.
>>> generic_model.set_model_input_serializer("cloudpickle")
>>> # Example of creating customized model input serializer and registering it.
>>> from ads.model import SERDE
>>> from ads.model.generic_model import GenericModel
>>> import tempfile
>>> class MySERDE(SERDE):
...     def __init__(self):
...         super().__init__()
...     def serialize(self, data):
...         serialized_data = 1
...         return serialized_data
...     def deserialize(self, data):
...         deserialized_data = 2
...         return deserialized_data
>>> class Toy:
...     def predict(self, x):
...         return x ** 2
>>> generic_model = GenericModel(
...    estimator=Toy(),
...    artifact_dir=tempfile.mkdtemp(),
...    model_input_serializer=MySERDE()
... )
>>> # Or register the serializer after creating model instance.
>>> generic_model.set_model_input_serializer(MySERDE())
Parameters:

model_input_serializer ((str, or ads.model.SERDE)) – name of the serializer, or instance of SERDE.

set_model_save_serializer(model_save_serializer: str | SERDE)#

Registers serializer used for saving model.

Examples

>>> generic_model.set_model_save_serializer(GenericModel.model_save_serializer_type.CLOUDPICKLE)
>>> # Register serializer by passing the name of it.
>>> generic_model.set_model_save_serializer("cloudpickle")
>>> # Example of creating customized model save serializer and registering it.
>>> from ads.model import SERDE
>>> from ads.model.generic_model import GenericModel
>>> import tempfile
>>> class MySERDE(SERDE):
...     def __init__(self):
...         super().__init__()
...     def serialize(self, data):
...         serialized_data = 1
...         return serialized_data
...     def deserialize(self, data):
...         deserialized_data = 2
...         return deserialized_data
>>> class Toy:
...     def predict(self, x):
...         return x ** 2
>>> generic_model = GenericModel(
...    estimator=Toy(),
...    artifact_dir=tempfile.mkdtemp(),
...    model_save_serializer=MySERDE()
... )
>>> # Or register the serializer after creating model instance.
>>> generic_model.set_model_save_serializer(MySERDE())
Parameters:

model_save_serializer ((ads.model.SERDE or str)) – name of the serializer or instance of SERDE.

summary_status() DataFrame#

A summary table of the current status.

Returns:

The summary table of the current status.

Return type:

pd.DataFrame

update(**kwargs) GenericModel#

Updates model metadata in the Model Catalog. Updates only metadata information. The model artifacts are immutable and cannot be updated.

Parameters:

kwargs

display_name: (str, optional). Defaults to None.

The name of the model.

description: (str, optional). Defaults to None.

The description of the model.

freeform_tags: (Dict(str, str), optional). Defaults to None.

Freeform tags for the model.

defined_tags: (Dict(str, dict(str, object)), optional). Defaults to None.

Defined tags for the model.

version_label: (str, optional). Defaults to None.

The model version label.

Additional kwargs arguments. Can be any attribute that oci.data_science.models.Model accepts.

Returns:

An instance of GenericModel (self).

Return type:

GenericModel

Raises:

ValueError – If the model has not been saved to the Model Catalog.
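
Example (a minimal sketch; the display name, description, and tag values below are placeholders):

>>> model.update(
...     display_name="new-display-name",
...     description="Updated model description",
...     freeform_tags={"key": "value"},
... )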

classmethod update_deployment(model_deployment_id: str | None = None, properties: ModelDeploymentProperties | dict | None = None, wait_for_completion: bool = True, max_wait_time: int = 1200, poll_interval: int = 10, **kwargs) ModelDeployment#

Updates a model deployment.

You can update model_deployment_configuration_details and change instance_shape and model_id when the model deployment is in the ACTIVE lifecycle state. The bandwidth_mbps or instance_count can only be updated while the model deployment is in the INACTIVE state. Changes to the bandwidth_mbps or instance_count will take effect the next time the ActivateModelDeployment action is invoked on the model deployment resource.

Examples

>>> # Update access log id, freeform tags and description for the model deployment
>>> model.update_deployment(
...     access_log={
...         "log_id": "<log_ocid>"
...     },
...     description="Description for Custom Model",
...     freeform_tags={"key": "value"},
... )
Parameters:
  • model_deployment_id (str.) – The model deployment OCID. Defaults to None. If the method is called on an instance, then self.model_deployment.model_deployment_id will be used.

  • properties (ModelDeploymentProperties or dict) – The properties for updating the deployment.

  • wait_for_completion (bool) – Flag set for whether to wait for deployment to complete before proceeding. Defaults to True.

  • max_wait_time (int) – Maximum amount of time to wait in seconds (Defaults to 1200). Negative implies infinite wait time.

  • poll_interval (int) – Poll interval in seconds (Defaults to 10).

  • kwargs

    auth: (Dict, optional). Defaults to None.

    The default authentication is set using the ads.set_auth API. If you need to override the default, use ads.common.auth.api_keys or ads.common.auth.resource_principal to create an appropriate authentication signer and the kwargs required to instantiate an IdentityClient object.

    display_name: (str)

    Model deployment display name

    description: (str)

    Model deployment description

    freeform_tags: (dict)

    Model deployment freeform tags

    defined_tags: (dict)

    Model deployment defined tags

    Additional kwargs arguments. Can be any attribute that ads.model.deployment.ModelDeploymentCondaRuntime, ads.model.deployment.ModelDeploymentContainerRuntime and ads.model.deployment.ModelDeploymentInfrastructure accepts.

Returns:

An instance of ModelDeployment class.

Return type:

ModelDeployment

update_summary_action(detail: str, action: str)#

Update the actions needed from the user in the summary table.

Parameters:
  • detail ((str)) – value of the detail in the details column of the summary status table. Used to locate which row to update.

  • action ((str)) – new action to be updated for the row specified by detail.

Return type:

None

update_summary_status(detail: str, status: str)#

Update the status in the summary table.

Parameters:
  • detail ((str)) – value of the detail in the details column of the summary status table. Used to locate which row to update.

  • status ((str)) – new status to be updated for the row specified by detail.

Return type:

None
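
Example (a minimal sketch; the detail, action, and status strings below are placeholders and must match rows in your summary status table):

>>> model.update_summary_action(detail="Generated score.py", action="Customize load_model in score.py")
>>> model.update_summary_status(detail="Generated score.py", status="Done")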

upload_artifact(uri: str, auth: Dict | None = None, force_overwrite: bool | None = False, parallel_process_count: int = 9) None#

Uploads model artifacts to the provided uri. The artifacts will be zipped before uploading.

Parameters:
  • uri (str) –

    The destination location for the model artifacts, which can be a local path or OCI object storage URI. Examples:

    >>> upload_artifact(uri="/some/local/folder/")
    >>> upload_artifact(uri="oci://bucket@namespace/prefix/")
    

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using the ads.set_auth API. If you need to override the default, use ads.common.auth.api_keys or ads.common.auth.resource_principal to create an appropriate authentication signer and the kwargs required to instantiate an IdentityClient object.

  • force_overwrite (bool) – Overwrite the target directory if it exists.

  • parallel_process_count ((int, optional)) – The number of worker processes to use in parallel for uploading individual parts of a multipart upload.

verify(data: Any | None = None, reload_artifacts: bool = True, auto_serialize_data: bool = True, **kwargs) Dict[str, Any]#

Test if deployment works in local environment.

Examples

>>> uri = "https://github.com/pytorch/hub/raw/master/images/dog.jpg"
>>> prediction = model.verify(image=uri)['prediction']
>>> # examples on storage options
>>> prediction = model.verify(
...        image="oci://<bucket>@<tenancy>/myimage.png",
...        storage_options=ads.auth.default_signer()
... )['prediction']
Parameters:
  • data (Any) – Data used to test if deployment works in local environment.

  • reload_artifacts (bool. Defaults to True.) – Whether to reload artifacts or not.

  • auto_serialize_data (bool.) – Whether to auto serialize input data. Defaults to False for GenericModel, and True for other frameworks. Data is required to be JSON serializable if auto_serialize_data=False. If auto_serialize_data is set to True, data will be serialized before being sent to the model deployment endpoint.

  • kwargs

    content_type: (str, optional). Used to indicate the media type of the resource.

    image: PIL.Image object or URI for the image.

    A valid string path for an image file can be a local path, http(s), oci, s3, or gs.

    storage_options: dict

    Passed to fsspec.open for a particular storage connection. Please see fsspec (https://filesystem-spec.readthedocs.io/en/latest/api.html#fsspec.open) for more details.

Returns:

A dictionary which contains prediction results.

Return type:

Dict

ads.model.framework.spark_model module#

class ads.model.framework.spark_model.SparkPipelineModel(estimator: Callable, artifact_dir: str | None = None, properties: ModelProperties | None = None, auth: Dict = None, model_save_serializer: SERDE | None = 'spark', model_input_serializer: SERDE | None = 'spark', **kwargs)[source]#

Bases: FrameworkSpecificModel

SparkPipelineModel class for estimators from the pyspark framework.

algorithm#

The algorithm of the model.

Type:

str

artifact_dir#

Artifact directory to store the files needed for deployment.

Type:

str

auth#

Default authentication is set using the ads.set_auth API. To override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create an authentication signer to instantiate an IdentityClient object.

Type:

Dict

estimator#

A trained pyspark estimator/model using pyspark.

Type:

Callable

framework#

“spark”, the framework name of the model.

Type:

str

hyperparameter#

The hyperparameters of the estimator.

Type:

dict

metadata_custom#

The model custom metadata.

Type:

ModelCustomMetadata

metadata_provenance#

The model provenance metadata.

Type:

ModelProvenanceMetadata

metadata_taxonomy#

The model taxonomy metadata.

Type:

ModelTaxonomyMetadata

model_artifact#

This is built by calling prepare.

Type:

ModelArtifact

model_deployment#

A ModelDeployment instance.

Type:

ModelDeployment

model_file_name#

Name of the serialized model.

Type:

str

model_id#

The model ID.

Type:

str

properties#

ModelProperties object required to save and deploy model. For more details, check https://accelerated-data-science.readthedocs.io/en/latest/ads.model.html#module-ads.model.model_properties.

Type:

ModelProperties

runtime_info#

A RuntimeInfo instance.

Type:

RuntimeInfo

schema_input#

Schema describes the structure of the input data.

Type:

Schema

schema_output#

Schema describes the structure of the output data.

Type:

Schema

serialize#

Whether to serialize the model to pkl file by default. If False, you need to serialize the model manually, save it under artifact_dir and update the score.py manually.

Type:

bool

version#

The framework version of the model.

Type:

str

delete_deployment(...)#

Deletes the current model deployment.

deploy(..., \*\*kwargs)#

Deploys a model.

from_model_artifact(uri, model_file_name, artifact_dir, ..., \*\*kwargs)#

Loads model from the specified folder, or zip/tar archive.

from_model_catalog(model_id, model_file_name, artifact_dir, ..., \*\*kwargs)#

Loads model from model catalog.

introspect(...)#

Runs model introspection.

predict(data, ...)#

Returns prediction of input data run against the model deployment endpoint.

prepare(..., \*\*kwargs)#

Prepare and save the score.py, serialized model and runtime.yaml file.

reload(...)#

Reloads the model artifact files: score.py and the runtime.yaml.

save(..., \*\*kwargs)#

Saves model artifacts to the model catalog.

summary_status(...)#

Gets a summary table of the current status.

verify(data, ...)#

Tests if deployment works in local environment.

Examples

>>> import tempfile
>>> from ads.model.framework.spark_model import SparkPipelineModel
>>> from pyspark.ml.linalg import Vectors
>>> from pyspark.ml.classification import LogisticRegression
>>> from pyspark.ml import Pipeline
>>> training = spark.createDataFrame([
>>>     (1.0, Vectors.dense([0.0, 1.1, 0.1])),
>>>     (0.0, Vectors.dense([2.0, 1.0, -1.0])),
>>>     (0.0, Vectors.dense([2.0, 1.3, 1.0])),
>>>     (1.0, Vectors.dense([0.0, 1.2, -0.5]))], ["label", "features"])
>>> lr_estimator = LogisticRegression(maxIter=10, regParam=0.001)
>>> pipeline = Pipeline(stages=[lr_estimator])
>>> pipeline_model = pipeline.fit(training)
>>> spark_model = SparkPipelineModel(estimator=pipeline_model, artifact_dir=tempfile.mkdtemp())
>>> spark_model.prepare(inference_conda_env="dataexpl_p37_cpu_v3")
>>> spark_model.verify(training)
>>> spark_model.save()
>>> model_deployment = spark_model.deploy()
>>> spark_model.predict(training)
>>> spark_model.delete_deployment()

Initializes a SparkPipelineModel instance.

Parameters:
  • estimator (Callable) – A trained pyspark estimator/model.

  • artifact_dir (str) – The URI for the generated artifact, which can be local path or OCI object storage URI.

  • properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using the ads.set_auth API. If you need to override the default, use ads.common.auth.api_keys or ads.common.auth.resource_principal to create an appropriate authentication signer and the kwargs required to instantiate an IdentityClient object.

  • model_save_serializer ((SERDE or str, optional). Defaults to None.) – Instance of ads.model.SERDE. Used for serialize/deserialize model.

  • model_input_serializer ((SERDE, optional). Defaults to ads.model.serde.model_input.SparkModelInputSERDE.) – Instance of ads.model.SERDE. Used for serialize/deserialize data.

Returns:

SparkPipelineModel instance.

Return type:

SparkPipelineModel

Examples

>>> import tempfile
>>> from ads.model.framework.spark_model import SparkPipelineModel
>>> from pyspark.ml.linalg import Vectors
>>> from pyspark.ml.classification import LogisticRegression
>>> from pyspark.ml import Pipeline
>>> training = spark.createDataFrame([
>>>     (1.0, Vectors.dense([0.0, 1.1, 0.1])),
>>>     (0.0, Vectors.dense([2.0, 1.0, -1.0])),
>>>     (0.0, Vectors.dense([2.0, 1.3, 1.0])),
>>>     (1.0, Vectors.dense([0.0, 1.2, -0.5]))], ["label", "features"])
>>> lr_estimator = LogisticRegression(maxIter=10, regParam=0.001)
>>> pipeline = Pipeline(stages=[lr_estimator])
>>> pipeline_model = pipeline.fit(training)
>>> spark_model = SparkPipelineModel(estimator=pipeline_model, artifact_dir=tempfile.mkdtemp())
>>> spark_model.prepare(inference_conda_env="pyspark30_p37_cpu_v5")
>>> spark_model.verify(training)
>>> spark_model.save()
>>> model_deployment = spark_model.deploy()
>>> spark_model.predict(training)
>>> spark_model.delete_deployment()
classmethod delete(model_id: str | None = None, delete_associated_model_deployment: bool | None = False, delete_model_artifact: bool | None = False, artifact_dir: str | None = None, **kwargs: Dict) None#

Deletes a model from Model Catalog.

Parameters:
  • model_id ((str, optional). Defaults to None.) – The model OCID to be deleted. If the method is called on an instance, then self.model_id will be used.

  • delete_associated_model_deployment ((bool, optional). Defaults to False.) – Whether associated model deployments need to be deleted or not.

  • delete_model_artifact ((bool, optional). Defaults to False.) – Whether associated model artifacts need to be deleted or not.

  • artifact_dir ((str, optional). Defaults to None.) – The local path to the model artifacts folder. If the method is called on an instance, self.artifact_dir will be used by default.

Return type:

None

Raises:

ValueError – If model_id not provided.
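
Example (a minimal sketch; <model_ocid> is a placeholder for an existing model OCID):

>>> SparkPipelineModel.delete(
...     model_id="<model_ocid>",
...     delete_associated_model_deployment=True,
... )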

delete_deployment(wait_for_completion: bool = True) None#

Deletes the current deployment.

Parameters:

wait_for_completion ((bool, optional). Defaults to True.) – Whether to wait till completion.

Return type:

None

Raises:

ValueError – If there is no deployment attached yet.

deploy(wait_for_completion: bool | None = True, display_name: str | None = None, description: str | None = None, deployment_instance_shape: str | None = None, deployment_instance_subnet_id: str | None = None, deployment_instance_count: int | None = None, deployment_bandwidth_mbps: int | None = None, deployment_log_group_id: str | None = None, deployment_access_log_id: str | None = None, deployment_predict_log_id: str | None = None, deployment_memory_in_gbs: float | None = None, deployment_ocpus: float | None = None, deployment_image: str | None = None, **kwargs: Dict) ModelDeployment#

Deploys a model. The model needs to be saved to the model catalog at first. You can deploy the model on either conda or container runtime. The customized runtime allows you to bring your own service container. To deploy model on container runtime, make sure to build the container and push it to OCIR. For more information, see https://docs.oracle.com/en-us/iaas/data-science/using/mod-dep-byoc.htm.

Example

>>> # This is an example to deploy model on container runtime
>>> model = GenericModel(estimator=estimator, artifact_dir=tempfile.mkdtemp())
>>> model.summary_status()
>>> model.prepare(
...     model_file_name="toy_model.pkl",
...     ignore_conda_error=True, # set ignore_conda_error=True for container runtime
...     force_overwrite=True
... )
>>> model.verify()
>>> model.save()
>>> model.deploy(
...     deployment_image="iad.ocir.io/<namespace>/<image>:<tag>",
...     entrypoint=["python", "/opt/ds/model/deployed_model/api.py"],
...     server_port=5000,
...     health_check_port=5000,
...     environment_variables={"key":"value"}
... )
Parameters:
  • wait_for_completion ((bool, optional). Defaults to True.) – Flag set for whether to wait for deployment to complete before proceeding.

  • display_name ((str, optional). Defaults to None.) – The name of the model. If a display_name is not provided in kwargs, a randomly generated easy to remember name with timestamp will be generated, like ‘strange-spider-2022-08-17-23:55.02’.

  • description ((str, optional). Defaults to None.) – The description of the model.

  • deployment_instance_shape ((str, optional). Default to VM.Standard2.1.) – The shape of the instance used for deployment.

  • deployment_instance_subnet_id ((str, optional). Default to None.) – The subnet id of the instance used for deployment.

  • deployment_instance_count ((int, optional). Defaults to 1.) – The number of instances used for deployment.

  • deployment_bandwidth_mbps ((int, optional). Defaults to 10.) – The bandwidth limit on the load balancer in Mbps.

  • deployment_memory_in_gbs ((float, optional). Defaults to None.) – Specifies the size of the memory of the model deployment instance in GBs.

  • deployment_ocpus ((float, optional). Defaults to None.) – Specifies the ocpus count of the model deployment instance.

  • deployment_log_group_id ((str, optional). Defaults to None.) – The oci logging group id. The access log and predict log share the same log group.

  • deployment_access_log_id ((str, optional). Defaults to None.) – The access log OCID for the access logs. https://docs.oracle.com/en-us/iaas/data-science/using/model_dep_using_logging.htm

  • deployment_predict_log_id ((str, optional). Defaults to None.) – The predict log OCID for the predict logs. https://docs.oracle.com/en-us/iaas/data-science/using/model_dep_using_logging.htm

  • deployment_image ((str, optional). Defaults to None.) – The OCIR path of docker container image. Required for deploying model on container runtime.

  • kwargs

    project_id: (str, optional).

    Project OCID. If not specified, the value will be taken from the environment variables.

    compartment_id: (str, optional).

    Compartment OCID. If not specified, the value will be taken from the environment variables.

    max_wait_time: (int, optional). Defaults to 1200 seconds.

    Maximum amount of time to wait in seconds. Negative implies infinite wait time.

    poll_interval: (int, optional). Defaults to 10 seconds.

    Poll interval in seconds.

    freeform_tags: (Dict[str, str], optional). Defaults to None.

    Freeform tags of the model deployment.

    defined_tags: (Dict[str, dict[str, object]], optional). Defaults to None.

    Defined tags of the model deployment.

    image_digest: (str, optional). Defaults to None.

    The digest of docker container image.

    cmd: (List, optional). Defaults to empty.

    The command line arguments for running docker container image.

    entrypoint: (List, optional). Defaults to empty.

    The entrypoint for running docker container image.

    server_port: (int, optional). Defaults to 8080.

    The server port for docker container image.

    health_check_port: (int, optional). Defaults to 8080.

    The health check port for docker container image.

    deployment_mode: (str, optional). Defaults to HTTPS_ONLY.

    The deployment mode. Allowed values are: HTTPS_ONLY and STREAM_ONLY.

    input_stream_ids: (List, optional). Defaults to empty.

    The input stream ids. Required for STREAM_ONLY mode.

    output_stream_ids: (List, optional). Defaults to empty.

    The output stream ids. Required for STREAM_ONLY mode.

    environment_variables: (Dict, optional). Defaults to empty.

    The environment variables for model deployment.

    Also can be any keyword argument for initializing the ads.model.deployment.ModelDeploymentProperties. See ads.model.deployment.ModelDeploymentProperties() for details.

Returns:

The ModelDeployment instance.

Return type:

ModelDeployment

Raises:

ValueError – If model_id is not specified.

download_artifact(artifact_dir: str | None = None, auth: Dict | None = None, force_overwrite: bool | None = False, bucket_uri: str | None = None, remove_existing_artifact: bool | None = True, **kwargs) GenericModel#

Downloads model artifacts from the model catalog.

Parameters:
  • artifact_dir ((str, optional). Defaults to None.) – The artifact directory to store the files needed for deployment. Will be created if not exists.

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.

  • force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files or not.

  • bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for downloading large artifacts whose size is greater than 2GB. Example: oci://<bucket_name>@<namespace>/prefix/.

  • remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to object storage bucket need to be removed or not.

Returns:

An instance of GenericModel class.

Return type:

Self

Raises:

ValueError – If model_id is not available in the GenericModel object.
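
Example (a minimal sketch; assumes the model was saved or loaded so that model_id is populated):

>>> import tempfile
>>> model.download_artifact(
...     artifact_dir=tempfile.mkdtemp(),
...     force_overwrite=True,
... )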

evaluate(X: ArrayLike, y: ArrayLike, y_pred: ArrayLike | None = None, y_score: ArrayLike | None = None, X_train: ArrayLike | None = None, y_train: ArrayLike | None = None, classes: List | None = None, positive_class: str | None = None, legend_labels: dict | None = None, perfect: bool = True, filename: str | None = None, use_case_type: str | None = None)#

Creates an ads evaluation report.

Parameters:
  • X (DataFrame-like) – The data used to make a prediction. Can be set to None if y_pred is given (and y_score for more thorough analysis).

  • y (array-like) – The true values corresponding to the input data

  • y_pred (array-like, optional) – The predictions from each model in the same order as the models

  • y_score (array-like, optional) – The predict_probas from each model in the same order as the models

  • X_train (DataFrame-like, optional) – The data used to train the model

  • y_train (array-like, optional) – The true values corresponding to the input training data

  • classes (List or None, optional) – A List of the possible labels for y, when evaluating a classification use case

  • positive_class (str or int, optional) – The class to report metrics for in a binary dataset. If the target classes are True or False, positive_class will be set to True by default. If the dataset is multiclass or multilabel, this will be ignored.

  • legend_labels (dict, optional) – Dictionary of legend labels. Defaults to None. If legend_labels is not specified, class names will be used for plots.

  • use_case_type (str, optional) – The type of problem this model is solving. This can be set during prepare(). Examples: "binary_classification", "regression", "multinomial_classification". The full list of supported types can be found here: ads.common.model_metadata.UseCaseType.

  • filename (str, optional) – If filename is given, the html report will be saved to the location specified.

Examples

>>> import tempfile
>>> from ads.evaluations.evaluator import Evaluator
>>> from sklearn.tree import DecisionTreeClassifier
>>> from sklearn.datasets import make_classification
>>> from sklearn.model_selection import train_test_split
>>> from ads.model.framework.sklearn_model import SklearnModel
>>> from ads.common.model_metadata import UseCaseType
>>>
>>> X, y = make_classification(n_samples=1000)
>>> X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
>>> est = DecisionTreeClassifier().fit(X_train, y_train)
>>> model = SklearnModel(estimator=est, artifact_dir=tempfile.mkdtemp())
>>> model.prepare(
...     inference_conda_env="generalml_p38_cpu_v1",
...     training_conda_env="generalml_p38_cpu_v1",
...     X_sample=X_test,
...     y_sample=y_test,
...     use_case_type=UseCaseType.BINARY_CLASSIFICATION,
... )
>>> model.evaluate(X_test, y_test, filename="report.html")
classmethod from_id(ocid: str, model_file_name: str | None = None, artifact_dir: str | None = None, auth: Dict | None = None, force_overwrite: bool | None = False, properties: ModelProperties | Dict | None = None, bucket_uri: str | None = None, remove_existing_artifact: bool | None = True, ignore_conda_error: bool | None = False, download_artifact: bool | None = True, **kwargs) Self#

Loads model from model OCID or model deployment OCID.

Parameters:
  • ocid (str) – The model OCID or model deployment OCID.

  • model_file_name ((str, optional). Defaults to None.) – The name of the serialized model.

  • artifact_dir ((str, optional). Defaults to None.) – The artifact directory to store the files needed for deployment. Will be created if not exists.

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using the ads.set_auth API. If you need to override the default, use ads.common.auth.api_keys or ads.common.auth.resource_principal to create an appropriate authentication signer and the kwargs required to instantiate an IdentityClient object.

  • force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files or not.

  • properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.

  • bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for downloading large artifacts whose size is greater than 2GB. Example: oci://<bucket_name>@<namespace>/prefix/.

  • remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to the object storage bucket need to be removed or not.

  • ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore error when collecting conda information.

  • download_artifact ((bool, optional). Defaults to True.) – Whether to download the model pickle or checkpoints

  • kwargs

    compartment_id: (str, optional).

    Compartment OCID. If not specified, the value will be taken from the environment variables.

    timeout: (int, optional). Defaults to 10 seconds.

    The connection timeout in seconds for the client.

Returns:

An instance of GenericModel class.

Return type:

Self
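
Example (a minimal sketch; <model_ocid> is a placeholder and can also be a model deployment OCID):

>>> import tempfile
>>> model = SparkPipelineModel.from_id(
...     "<model_ocid>",
...     artifact_dir=tempfile.mkdtemp(),
...     force_overwrite=True,
... )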

classmethod from_model_artifact(uri: str, model_file_name: str | None = None, artifact_dir: str | None = None, auth: Dict | None = None, force_overwrite: bool | None = False, properties: ModelProperties | None = None, ignore_conda_error: bool | None = False, **kwargs: dict) Self#

Loads model from a folder, or zip/tar archive.

Parameters:
  • uri (str) – The folder path, ZIP file path, or TAR file path. It could contain a serialized model (required) as well as any files needed for deployment, including the serialized model, runtime.yaml, score.py, etc. The content of the folder will be copied to the artifact_dir folder.

  • model_file_name ((str, optional). Defaults to None.) – The serialized model file name. Will be extracted from artifacts if not provided.

  • artifact_dir ((str, optional). Defaults to None.) – The artifact directory to store the files needed for deployment. Will be created if not exists.

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using the ads.set_auth API. If you need to override the default, use ads.common.auth.api_keys or ads.common.auth.resource_principal to create an appropriate authentication signer and the kwargs required to instantiate an IdentityClient object.

  • force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files or not.

  • properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.

  • ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore error when collecting conda information.

Returns:

An instance of GenericModel class.

Return type:

Self

Raises:

ValueError – If model_file_name not provided.
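
Example (a minimal sketch; the uri below is a placeholder for a folder or archive that contains a serialized model):

>>> import tempfile
>>> model = SparkPipelineModel.from_model_artifact(
...     uri="/path/to/model/artifacts/",
...     artifact_dir=tempfile.mkdtemp(),
...     force_overwrite=True,
... )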

classmethod from_model_catalog(model_id: str, model_file_name: str | None = None, artifact_dir: str | None = None, auth: Dict | None = None, force_overwrite: bool | None = False, properties: ModelProperties | Dict | None = None, bucket_uri: str | None = None, remove_existing_artifact: bool | None = True, ignore_conda_error: bool | None = False, download_artifact: bool | None = True, **kwargs) Self#

Loads model from model catalog.

Parameters:
  • model_id (str) – The model OCID.

  • model_file_name ((str, optional). Defaults to None.) – The name of the serialized model.

  • artifact_dir ((str, optional). Defaults to None.) – The artifact directory to store the files needed for deployment. Will be created if not exists.

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using the ads.set_auth API. If you need to override the default, use ads.common.auth.api_keys or ads.common.auth.resource_principal to create an appropriate authentication signer and the kwargs required to instantiate an IdentityClient object.

  • force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files or not.

  • properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.

  • bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for downloading large artifacts whose size is greater than 2GB. Example: oci://<bucket_name>@<namespace>/prefix/.

  • remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to the object storage bucket need to be removed or not.

  • ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore error when collecting conda information.

  • download_artifact ((bool, optional). Defaults to True.) – Whether to download the model pickle or checkpoints

  • kwargs

    compartment_id: (str, optional).

    Compartment OCID. If not specified, the value will be taken from the environment variables.

    timeout: (int, optional). Defaults to 10 seconds.

    The connection timeout in seconds for the client.

    region: (str, optional). Defaults to None.

    The destination Object Storage bucket region. By default the value will be extracted from the OCI_REGION_METADATA environment variables.

Returns:

An instance of GenericModel class.

Return type:

Self
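
Example (a minimal sketch; <model_ocid> is a placeholder for a model saved in the model catalog):

>>> import tempfile
>>> model = SparkPipelineModel.from_model_catalog(
...     model_id="<model_ocid>",
...     artifact_dir=tempfile.mkdtemp(),
...     force_overwrite=True,
... )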

classmethod from_model_deployment(model_deployment_id: str, model_file_name: str | None = None, artifact_dir: str | None = None, auth: Dict | None = None, force_overwrite: bool | None = False, properties: ModelProperties | Dict | None = None, bucket_uri: str | None = None, remove_existing_artifact: bool | None = True, ignore_conda_error: bool | None = False, download_artifact: bool | None = True, **kwargs) Self#

Loads model from model deployment.

Parameters:
  • model_deployment_id (str) – The model deployment OCID.

  • model_file_name ((str, optional). Defaults to None.) – The name of the serialized model.

  • artifact_dir ((str, optional). Defaults to None.) – The artifact directory to store the files needed for deployment. Will be created if not exists.

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using the ads.set_auth API. If you need to override the default, use ads.common.auth.api_keys or ads.common.auth.resource_principal to create an appropriate authentication signer and the kwargs required to instantiate an IdentityClient object.

  • force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files or not.

  • properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.

  • bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for downloading large artifacts whose size is greater than 2GB. Example: oci://<bucket_name>@<namespace>/prefix/.

  • remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to the object storage bucket need to be removed or not.

  • ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore error when collecting conda information.

  • download_artifact ((bool, optional). Defaults to True.) – Whether to download the model pickle or checkpoints

  • kwargs

    compartment_id: (str, optional).

    Compartment OCID. If not specified, the value will be taken from the environment variables.

    timeout: (int, optional). Defaults to 10 seconds.

    The connection timeout in seconds for the client.

    region: (str, optional). Defaults to None.

    The destination Object Storage bucket region. By default the value will be extracted from the OCI_REGION_METADATA environment variables.

Returns:

An instance of GenericModel class.

Return type:

Self
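
Example (a minimal sketch; <model_deployment_ocid> is a placeholder for an existing model deployment):

>>> import tempfile
>>> model = SparkPipelineModel.from_model_deployment(
...     model_deployment_id="<model_deployment_ocid>",
...     artifact_dir=tempfile.mkdtemp(),
...     force_overwrite=True,
... )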

get_data_serializer()#

Gets data serializer.

Returns:

object

Return type:

ads.model.Serializer object.

get_model_serializer()#

Gets model serializer.

introspect() DataFrame#

Conducts introspection.

Returns:

A pandas DataFrame which contains the introspection results.

Return type:

pandas.DataFrame

property metadata_custom#
property metadata_provenance#
property metadata_taxonomy#
property model_deployment_id#
property model_id#
model_input_serializer_type#

alias of SparkModelInputSerializerType

model_save_serializer_type#

alias of SparkModelSerializerType

populate_metadata(use_case_type: str | None = None, data_sample: ADSData | None = None, X_sample: list | tuple | DataFrame | Series | ndarray | None = None, y_sample: list | tuple | DataFrame | Series | ndarray | None = None, training_script_path: str | None = None, training_id: str | None = None, ignore_pending_changes: bool = True, max_col_num: int = 2000, ignore_conda_error: bool = False, **kwargs)#

Populates input schema and output schema. If the schema exceeds the limit of 32kb, save as json files to the artifact directory.

Parameters:
  • use_case_type ((str, optional). Defaults to None.) – The use case type of the model.

  • data_sample ((ADSData, optional). Defaults to None.) – A sample of the data that will be used to generate input_schema and output_schema.

  • X_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of input data that will be used to generate input schema.

  • y_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of output data that will be used to generate output schema.

  • training_script_path (str. Defaults to None.) – Training script path.

  • training_id ((str, optional). Defaults to None.) – The training model OCID.

  • ignore_pending_changes (bool. Defaults to True.) – Ignore the pending changes in git.

  • max_col_num ((int, optional). Defaults to utils.DATA_SCHEMA_MAX_COL_NUM.) – The maximum number of columns allowed in auto generated schema.

Returns:

Nothing.

Return type:

None

populate_schema(data_sample: ADSData | None = None, X_sample: List | Tuple | DataFrame | Series | ndarray | None = None, y_sample: List | Tuple | DataFrame | Series | ndarray | None = None, max_col_num: int = 2000, **kwargs)#

Populate input and output schemas. If the schema exceeds the limit of 32kb, save as json files to the artifact dir.

Parameters:
  • data_sample (ADSData) – A sample of the data that will be used to generate input_schema and output_schema.

  • X_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]) – A sample of input data that will be used to generate the input schema.

  • y_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]) – A sample of output data that will be used to generate the output schema.

  • max_col_num ((int, optional). Defaults to utils.DATA_SCHEMA_MAX_COL_NUM.) – The maximum number of columns allowed in auto generated schema.
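
Example (a minimal sketch; X_sample and y_sample are assumed to be small pandas or numpy samples of the training data):

>>> model.populate_schema(X_sample=X_sample, y_sample=y_sample)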

predict(data: Any | None = None, auto_serialize_data: bool = True, **kwargs) Dict[str, Any]#

Returns prediction of input data run against the model deployment endpoint.

Examples

>>> uri = "https://github.com/pytorch/hub/raw/master/images/dog.jpg"
>>> prediction = model.predict(image=uri)['prediction']
>>> # examples on storage options
>>> prediction = model.predict(
...        image="oci://<bucket>@<tenancy>/myimage.png",
...        storage_options=ads.auth.default_signer()
... )['prediction']
Parameters:
  • data (Any) – Data for the prediction. For ONNX models and the local serialization method, data can be the data types that each framework supports.

  • auto_serialize_data (bool.) – Whether to auto serialize input data. Defaults to False for GenericModel, and True for other frameworks. Data is required to be JSON serializable if auto_serialize_data=False. If auto_serialize_data is set to True, data will be serialized before being sent to the model deployment endpoint.

  • kwargs

    content_type: (str, optional). Used to indicate the media type of the resource.

    image: PIL.Image object or URI for the image.

    A valid string path for an image file can be a local path, http(s), oci, s3, or gs.

    storage_options: dict

    Passed to fsspec.open for a particular storage connection. Please see fsspec (https://filesystem-spec.readthedocs.io/en/latest/api.html#fsspec.open) for more details.

Returns:

Dictionary with the predicted values.

Return type:

Dict[str, Any]

prepare(inference_conda_env: str | None = None, inference_python_version: str | None = None, training_conda_env: str | None = None, training_python_version: str | None = None, model_file_name: str | None = None, as_onnx: bool = False, initial_types: List[Tuple] | None = None, force_overwrite: bool = False, namespace: str = 'id19sfcrra6z', use_case_type: str | None = None, X_sample: list | tuple | DataFrame | Series | ndarray | None = None, y_sample: list | tuple | DataFrame | Series | ndarray | None = None, training_script_path: str | None = None, training_id: str | None = None, ignore_pending_changes: bool = True, max_col_num: int = 2000, ignore_conda_error: bool = False, score_py_uri: str | None = None, **kwargs: Dict) GenericModel#

Prepare and save the score.py, serialized model and runtime.yaml file.

Parameters:
  • inference_conda_env ((str, optional). Defaults to None.) – Can be either slug or object storage path of the conda pack. You can only pass in slugs if the conda pack is a service pack.

  • inference_python_version ((str, optional). Defaults to None.) – Python version which will be used in deployment.

  • training_conda_env ((str, optional). Defaults to None.) – Can be either slug or object storage path of the conda pack. You can only pass in slugs if the conda pack is a service pack. If training_conda_env is not provided, training_conda_env will use the same value as inference_conda_env.

  • training_python_version ((str, optional). Defaults to None.) – Python version used during training.

  • model_file_name ((str, optional). Defaults to None.) – Name of the serialized model. Will be auto generated if not provided.

  • as_onnx ((bool, optional). Defaults to False.) – Whether to serialize as onnx model.

  • initial_types ((list[Tuple], optional).) – Defaults to None. Only used for SklearnModel, LightGBMModel and XGBoostModel. Each element is a tuple of a variable name and a type. Check this link http://onnx.ai/sklearn-onnx/api_summary.html#id2 for more explanation and examples for initial_types.

  • force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files.

  • namespace ((str, optional).) – Namespace of region. This is used for identifying which region the service pack is from when you pass a slug to inference_conda_env and training_conda_env.

  • use_case_type (str) – The use case type of the model. Use it through the UseCaseType class or a string provided in UseCaseType. For example, use_case_type=UseCaseType.BINARY_CLASSIFICATION or use_case_type="binary_classification". Check the UseCaseType class to see all supported types.

  • X_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of input data that will be used to generate input schema.

  • y_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of output data that will be used to generate output schema.

  • training_script_path (str. Defaults to None.) – Training script path.

  • training_id ((str, optional). Defaults to value from environment variables.) – The training OCID for model. Can be notebook session or job OCID.

  • ignore_pending_changes (bool. Defaults to True.) – Whether to ignore the pending changes in git.

  • max_col_num ((int, optional). Defaults to utils.DATA_SCHEMA_MAX_COL_NUM.) – Do not generate the input schema if the input has more than this number of features (columns).

  • ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore error when collecting conda information.

  • score_py_uri ((str, optional). Defaults to None.) – The URI of the customized score.py, which can be a local path or an OCI object storage URI. When this attribute is provided, the score.py will not be auto generated, and the provided score.py will be added into artifact_dir.

  • kwargs

    impute_values: (dict, optional).

    The dictionary where the key is the column index (or the column name for a pandas DataFrame) and the value is the impute value for the corresponding column.

Raises:
  • FileExistsError – If files already exist but force_overwrite is False.

  • ValueError – If inference_python_version is not provided, but also cannot be found through manifest file.

Returns:

An instance of GenericModel class.

Return type:

GenericModel
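
Example

An illustrative, hedged sketch of a typical call; the conda pack slug, Python version, and file name below are placeholders rather than values prescribed by this method:

>>> model.prepare(
...     inference_conda_env="<slug_or_object_storage_path_of_conda_pack>",
...     inference_python_version="3.8",
...     model_file_name="<serialized_model_file_name>",
...     force_overwrite=True,
... )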

prepare_save_deploy(inference_conda_env: str | None = None, inference_python_version: str | None = None, training_conda_env: str | None = None, training_python_version: str | None = None, model_file_name: str | None = None, as_onnx: bool = False, initial_types: List[Tuple] | None = None, force_overwrite: bool = False, namespace: str = 'id19sfcrra6z', use_case_type: str | None = None, X_sample: list | tuple | DataFrame | Series | ndarray | None = None, y_sample: list | tuple | DataFrame | Series | ndarray | None = None, training_script_path: str | None = None, training_id: str | None = None, ignore_pending_changes: bool = True, max_col_num: int = 2000, ignore_conda_error: bool = False, model_display_name: str | None = None, model_description: str | None = None, model_freeform_tags: dict | None = None, model_defined_tags: dict | None = None, ignore_introspection: bool | None = False, wait_for_completion: bool | None = True, deployment_display_name: str | None = None, deployment_description: str | None = None, deployment_instance_shape: str | None = None, deployment_instance_subnet_id: str | None = None, deployment_instance_count: int | None = None, deployment_bandwidth_mbps: int | None = None, deployment_log_group_id: str | None = None, deployment_access_log_id: str | None = None, deployment_predict_log_id: str | None = None, deployment_memory_in_gbs: float | None = None, deployment_ocpus: float | None = None, deployment_image: str | None = None, bucket_uri: str | None = None, overwrite_existing_artifact: bool | None = True, remove_existing_artifact: bool | None = True, model_version_set: str | ModelVersionSet | None = None, version_label: str | None = None, model_by_reference: bool | None = False, **kwargs: Dict) ModelDeployment#

Shortcut for prepare, save and deploy steps.

Parameters:
  • inference_conda_env ((str, optional). Defaults to None.) – Can be either slug or object storage path of the conda pack. You can only pass in slugs if the conda pack is a service pack.

  • inference_python_version ((str, optional). Defaults to None.) – Python version which will be used in deployment.

  • training_conda_env ((str, optional). Defaults to None.) – Can be either slug or object storage path of the conda pack. You can only pass in slugs if the conda pack is a service pack. If training_conda_env is not provided, it will use the same value as inference_conda_env.

  • training_python_version ((str, optional). Defaults to None.) – Python version used during training.

  • model_file_name ((str, optional). Defaults to None.) – Name of the serialized model.

  • as_onnx ((bool, optional). Defaults to False.) – Whether to serialize as onnx model.

  • initial_types ((list[Tuple], optional).) – Defaults to None. Only used for SklearnModel, LightGBMModel and XGBoostModel. Each element is a tuple of a variable name and a type. Check this link http://onnx.ai/sklearn-onnx/api_summary.html#id2 for more explanation and examples for initial_types.

  • force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files.

  • namespace ((str, optional).) – Namespace of the region. This is used to identify which region the service pack is from when you pass a slug to inference_conda_env and training_conda_env.

  • use_case_type (str) – The use case type of the model. Use it through the UseCaseType class or a string provided in UseCaseType. For example, use_case_type=UseCaseType.BINARY_CLASSIFICATION or use_case_type="binary_classification". Check the UseCaseType class to see all supported types.

  • X_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of input data that will be used to generate input schema.

  • y_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of output data that will be used to generate output schema.

  • training_script_path (str. Defaults to None.) – Training script path.

  • training_id ((str, optional). Defaults to value from environment variables.) – The training OCID for model. Can be notebook session or job OCID.

  • ignore_pending_changes (bool. Defaults to True.) – Whether to ignore the pending changes in git.

  • max_col_num ((int, optional). Defaults to utils.DATA_SCHEMA_MAX_COL_NUM.) – Do not generate the input schema if the input has more than this number of features (columns).

  • ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore error when collecting conda information.

  • model_display_name ((str, optional). Defaults to None.) – The name of the model. If a model_display_name is not provided in kwargs, a random, easy-to-remember name with a timestamp will be generated, like 'strange-spider-2022-08-17-23:55.02'.

  • model_description ((str, optional). Defaults to None.) – The description of the model.

  • model_freeform_tags (Dict(str, str), Defaults to None.) – Freeform tags for the model.

  • model_defined_tags ((Dict(str, dict(str, object)), optional). Defaults to None.) – Defined tags for the model.

  • ignore_introspection ((bool, optional). Defaults to False.) – Determine whether to ignore the result of model introspection or not. If set to True, the save will ignore all model introspection errors.

  • wait_for_completion ((bool, optional). Defaults to True.) – Flag set for whether to wait for deployment to complete before proceeding.

  • deployment_display_name ((str, optional). Defaults to None.) – The name of the model deployment. If a deployment_display_name is not provided in kwargs, a random, easy-to-remember name with a timestamp will be generated, like 'strange-spider-2022-08-17-23:55.02'.

  • deployment_description ((str, optional). Defaults to None.) – The description of the model deployment.

  • deployment_instance_shape ((str, optional). Default to VM.Standard2.1.) – The shape of the instance used for deployment.

  • deployment_instance_subnet_id ((str, optional). Default to None.) – The subnet id of the instance used for deployment.

  • deployment_instance_count ((int, optional). Defaults to 1.) – The number of instances used for deployment.

  • deployment_bandwidth_mbps ((int, optional). Defaults to 10.) – The bandwidth limit on the load balancer in Mbps.

  • deployment_log_group_id ((str, optional). Defaults to None.) – The OCI logging group id. The access log and predict log share the same log group.

  • deployment_access_log_id ((str, optional). Defaults to None.) – The access log OCID for the access logs. https://docs.oracle.com/en-us/iaas/data-science/using/model_dep_using_logging.htm

  • deployment_predict_log_id ((str, optional). Defaults to None.) – The predict log OCID for the predict logs. https://docs.oracle.com/en-us/iaas/data-science/using/model_dep_using_logging.htm

  • deployment_memory_in_gbs ((float, optional). Defaults to None.) – Specifies the size of the memory of the model deployment instance in GBs.

  • deployment_ocpus ((float, optional). Defaults to None.) – Specifies the ocpus count of the model deployment instance.

  • deployment_image ((str, optional). Defaults to None.) – The OCIR path of docker container image. Required for deploying model on container runtime.

  • bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for downloading large artifacts whose size is greater than 2GB. Example: oci://<bucket_name>@<namespace>/prefix/.

  • overwrite_existing_artifact ((bool, optional). Defaults to True.) – Overwrite target bucket artifact if exists.

  • remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to the object storage bucket need to be removed or not.

  • model_version_set ((Union[str, ModelVersionSet], optional). Defaults to None.) – The Model version set OCID, or name, or ModelVersionSet instance.

  • version_label ((str, optional). Defaults to None.) – The model version label.

  • model_by_reference ((bool, optional)) – Whether model artifact is made available to Model Store by reference.

  • kwargs

    impute_values: (dict, optional).

    The dictionary where the key is the column index (or the column name for a pandas DataFrame) and the value is the impute value for the corresponding column.

    project_id: (str, optional).

    Project OCID. If not specified, the value will be taken either from the environment variables or model properties.

    compartment_id(str, optional).

    Compartment OCID. If not specified, the value will be taken either from the environment variables or model properties.

    image_digest: (str, optional). Defaults to None.

    The digest of docker container image.

    cmd: (List, optional). Defaults to empty.

    The command line arguments for running docker container image.

    entrypoint: (List, optional). Defaults to empty.

    The entrypoint for running docker container image.

    server_port: (int, optional). Defaults to 8080.

    The server port for docker container image.

    health_check_port: (int, optional). Defaults to 8080.

    The health check port for docker container image.

    deployment_mode: (str, optional). Defaults to HTTPS_ONLY.

    The deployment mode. Allowed values are: HTTPS_ONLY and STREAM_ONLY.

    input_stream_ids: (List, optional). Defaults to empty.

    The input stream ids. Required for STREAM_ONLY mode.

    output_stream_ids: (List, optional). Defaults to empty.

    The output stream ids. Required for STREAM_ONLY mode.

    environment_variables: (Dict, optional). Defaults to empty.

    The environment variables for model deployment.

    timeout: (int, optional). Defaults to 10 seconds.

    The connection timeout in seconds for the client.

    max_wait_time(int, optional). Defaults to 1200 seconds.

    Maximum amount of time to wait in seconds. Negative implies infinite wait time.

    poll_interval(int, optional). Defaults to 10 seconds.

    Poll interval in seconds.

    freeform_tags: (Dict[str, str], optional). Defaults to None.

    Freeform tags of the model deployment.

    defined_tags: (Dict[str, dict[str, object]], optional). Defaults to None.

    Defined tags of the model deployment.

    region: (str, optional). Defaults to None.

    The destination Object Storage bucket region. By default the value will be extracted from the OCI_REGION_METADATA environment variables.

    Also can be any keyword argument for initializing the ads.model.deployment.ModelDeploymentProperties. See ads.model.deployment.ModelDeploymentProperties() for details.

Returns:

The ModelDeployment instance.

Return type:

ModelDeployment

Raises:
  • FileExistsError – If files already exist but force_overwrite is False.

  • ValueError – If inference_python_version is not provided, but also cannot be found through manifest file.
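
Example

A hedged sketch of the combined call; every literal below (conda pack slug, display names, shape) is a placeholder:

>>> model_deployment = model.prepare_save_deploy(
...     inference_conda_env="<slug_or_object_storage_path_of_conda_pack>",
...     model_display_name="<model_display_name>",
...     deployment_display_name="<deployment_display_name>",
...     deployment_instance_shape="VM.Standard2.1",
...     deployment_instance_count=1,
...     force_overwrite=True,
... )
>>> model.summary_status()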

reload() GenericModel#

Reloads the model artifact files: score.py and the runtime.yaml.

Returns:

An instance of GenericModel class.

Return type:

GenericModel

reload_runtime_info() None#

Reloads the model artifact file: runtime.yaml.

Returns:

Nothing.

Return type:

None

restart_deployment(max_wait_time: int = 1200, poll_interval: int = 10) ModelDeployment#

Restarts the current deployment.

Parameters:
  • max_wait_time ((int, optional). Defaults to 1200 seconds.) – Maximum amount of time to wait for activate or deactivate in seconds. Total amount of time to wait for restart deployment is twice as the value. Negative implies infinite wait time.

  • poll_interval ((int, optional). Defaults to 10 seconds.) – Poll interval in seconds.

Returns:

The ModelDeployment instance.

Return type:

ModelDeployment
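
Example

For illustration, restarting the attached deployment with the documented defaults:

>>> model.restart_deployment(max_wait_time=1200, poll_interval=10)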

save(bucket_uri: str | None = None, defined_tags: dict | None = None, description: str | None = None, display_name: str | None = None, featurestore_dataset=None, freeform_tags: dict | None = None, ignore_introspection: bool | None = False, model_version_set: str | ModelVersionSet | None = None, overwrite_existing_artifact: bool | None = True, parallel_process_count: int = 9, remove_existing_artifact: bool | None = True, reload: bool | None = True, version_label: str | None = None, model_by_reference: bool | None = False, **kwargs) str#

Saves model artifacts to the model catalog.

Parameters:
  • display_name ((str, optional). Defaults to None.) – The name of the model. If a display_name is not provided in kwargs, a random, easy-to-remember name with a timestamp will be generated, like 'strange-spider-2022-08-17-23:55.02'.

  • description ((str, optional). Defaults to None.) – The description of the model.

  • freeform_tags (Dict(str, str), Defaults to None.) – Freeform tags for the model.

  • defined_tags ((Dict(str, dict(str, object)), optional). Defaults to None.) – Defined tags for the model.

  • ignore_introspection ((bool, optional). Defaults to False.) – Determine whether to ignore the result of model introspection or not. If set to True, the save will ignore all model introspection errors.

  • bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for uploading large artifacts whose size is greater than 2GB. Example: oci://<bucket_name>@<namespace>/prefix/.

  • overwrite_existing_artifact ((bool, optional). Defaults to True.) – Overwrite target bucket artifact if exists.

  • remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to object storage bucket need to be removed or not.

  • model_version_set ((Union[str, ModelVersionSet], optional). Defaults to None.) – The model version set OCID, or model version set name, or ModelVersionSet instance.

  • version_label ((str, optional). Defaults to None.) – The model version label.

  • featurestore_dataset ((Dataset, optional).) – The feature store dataset

  • parallel_process_count ((int, optional)) – The number of worker processes to use in parallel for uploading individual parts of a multipart upload.

  • reload ((bool, optional)) – Whether to reload to check if load_model() works in score.py. Defaults to True.

  • model_by_reference ((bool, optional)) – Whether model artifact is made available to Model Store by reference.

  • kwargs

    project_id: (str, optional).

    Project OCID. If not specified, the value will be taken either from the environment variables or model properties.

    compartment_id(str, optional).

    Compartment OCID. If not specified, the value will be taken either from the environment variables or model properties.

    region: (str, optional). Defaults to None.

    The destination Object Storage bucket region. By default the value will be extracted from the OCI_REGION_METADATA environment variables.

    timeout: (int, optional). Defaults to 10 seconds.

    The connection timeout in seconds for the client.

    Also can be any attribute that oci.data_science.models.Model accepts.

Raises:

RuntimeInfoInconsistencyError – When .runtime_info is not in sync with the runtime.yaml file.

Returns:

The model id.

Return type:

str

Examples

Example for saving large model artifacts (>2GB):

>>> model.save(
...     bucket_uri="oci://my-bucket@my-tenancy/",
...     overwrite_existing_artifact=True,
...     remove_existing_artifact=True,
...     parallel_process_count=9,
... )

property schema_input#
property schema_output#
serialize_model(as_onnx: bool = False, X_sample: Dict | str | List | ndarray | Series | DataFrame | pyspark.sql.DataFrame | pyspark.pandas.DataFrame | None = None, force_overwrite: bool = False, **kwargs) None[source]#

Serialize and save pyspark model using spark serialization.

Parameters:

force_overwrite ((bool, optional). Defaults to False.) – If set to True, overwrite the serialized model if it exists.

Return type:

None
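
Example

A minimal, hedged sketch; this method is typically invoked for you by prepare(), so calling it directly is shown only for illustration:

>>> model.serialize_model(force_overwrite=True)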

set_model_input_serializer(model_input_serializer: str | SERDE)#

Registers serializer used for serializing data passed in verify/predict.

Examples

>>> generic_model.set_model_input_serializer(GenericModel.model_input_serializer_type.CLOUDPICKLE)
>>> # Register serializer by passing the name of it.
>>> generic_model.set_model_input_serializer("cloudpickle")
>>> # Example of creating customized model input serializer and registering it.
>>> from ads.model import SERDE
>>> from ads.model.generic_model import GenericModel
>>> class MySERDE(SERDE):
...     def __init__(self):
...         super().__init__()
...     def serialize(self, data):
...         serialized_data = 1
...         return serialized_data
...     def deserialize(self, data):
...         deserialized_data = 2
...         return deserialized_data
>>> class Toy:
...     def predict(self, x):
...         return x ** 2
>>> generic_model = GenericModel(
...    estimator=Toy(),
...    artifact_dir=tempfile.mkdtemp(),
...    model_input_serializer=MySERDE()
... )
>>> # Or register the serializer after creating model instance.
>>> generic_model.set_model_input_serializer(MySERDE())
Parameters:

model_input_serializer ((str, or ads.model.SERDE)) – name of the serializer, or instance of SERDE.

set_model_save_serializer(model_save_serializer: str | SERDE)#

Registers serializer used for saving model.

Examples

>>> generic_model.set_model_save_serializer(GenericModel.model_save_serializer_type.CLOUDPICKLE)
>>> # Register serializer by passing the name of it.
>>> generic_model.set_model_save_serializer("cloudpickle")
>>> # Example of creating customized model save serializer and registering it.
>>> from ads.model import SERDE
>>> from ads.model.generic_model import GenericModel
>>> class MySERDE(SERDE):
...     def __init__(self):
...         super().__init__()
...     def serialize(self, data):
...         serialized_data = 1
...         return serialized_data
...     def deserialize(self, data):
...         deserialized_data = 2
...         return deserialized_data
>>> class Toy:
...     def predict(self, x):
...         return x ** 2
>>> generic_model = GenericModel(
...    estimator=Toy(),
...    artifact_dir=tempfile.mkdtemp(),
...    model_save_serializer=MySERDE()
... )
>>> # Or register the serializer after creating model instance.
>>> generic_model.set_model_save_serializer(MySERDE())
Parameters:

model_save_serializer ((ads.model.SERDE or str)) – name of the serializer or instance of SERDE.

summary_status() DataFrame#

A summary table of the current status.

Returns:

The summary table of the current status.

Return type:

pd.DataFrame

update(**kwargs) GenericModel#

Updates model metadata in the Model Catalog. Updates only metadata information. The model artifacts are immutable and cannot be updated.

Parameters:

kwargs

display_name: (str, optional). Defaults to None.

The name of the model.

description: (str, optional). Defaults to None.

The description of the model.

freeform_tags: Dict(str, str). Defaults to None.

Freeform tags for the model.

defined_tags: (Dict(str, dict(str, object)), optional). Defaults to None.

Defined tags for the model.

version_label: (str, optional). Defaults to None.

The model version label.

Additional kwargs arguments. Can be any attribute that oci.data_science.models.Model accepts.

Returns:

An instance of GenericModel (self).

Return type:

GenericModel

Raises:

ValueError – If the model is not saved to the Model Catalog.
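
Example

A hedged sketch of a metadata-only update after the model has been saved; the values below are placeholders:

>>> model.update(
...     display_name="<new_display_name>",
...     description="<new_description>",
...     freeform_tags={"<key>": "<value>"},
...     version_label="<new_version_label>",
... )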

classmethod update_deployment(model_deployment_id: str | None = None, properties: ModelDeploymentProperties | dict | None = None, wait_for_completion: bool = True, max_wait_time: int = 1200, poll_interval: int = 10, **kwargs) ModelDeployment#

Updates a model deployment.

You can update model_deployment_configuration_details and change instance_shape and model_id when the model deployment is in the ACTIVE lifecycle state. The bandwidth_mbps or instance_count can only be updated while the model deployment is in the INACTIVE state. Changes to the bandwidth_mbps or instance_count will take effect the next time the ActivateModelDeployment action is invoked on the model deployment resource.

Examples

>>> # Update access log id, freeform tags and description for the model deployment
>>> model.update_deployment(
...     access_log={
...         "log_id": "<log_ocid>"
...     },
...     description="Description for Custom Model",
...     freeform_tags={"key": "value"},
... )
Parameters:
  • model_deployment_id (str.) – The model deployment OCID. Defaults to None. If the method called on instance level, then self.model_deployment.model_deployment_id will be used.

  • properties (ModelDeploymentProperties or dict) – The properties for updating the deployment.

  • wait_for_completion (bool) – Flag set for whether to wait for deployment to complete before proceeding. Defaults to True.

  • max_wait_time (int) – Maximum amount of time to wait in seconds (Defaults to 1200). Negative implies infinite wait time.

  • poll_interval (int) – Poll interval in seconds (Defaults to 10).

  • kwargs

    auth: (Dict, optional). Defaults to None.

    The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create an appropriate authentication signer and kwargs required to instantiate IdentityClient object.

    display_name: (str)

    Model deployment display name

    description: (str)

    Model deployment description

    freeform_tags: (dict)

    Model deployment freeform tags

    defined_tags: (dict)

    Model deployment defined tags

    Additional kwargs arguments. Can be any attribute that ads.model.deployment.ModelDeploymentCondaRuntime, ads.model.deployment.ModelDeploymentContainerRuntime and ads.model.deployment.ModelDeploymentInfrastructure accepts.

Returns:

An instance of ModelDeployment class.

Return type:

ModelDeployment

update_summary_action(detail: str, action: str)#

Update the actions needed from the user in the summary table.

Parameters:
  • detail ((str)) – value of the detail in the details column of the summary status table. Used to locate which row to update.

  • action ((str)) – new action to be updated for the row specified by detail.

Return type:

None

update_summary_status(detail: str, status: str)#

Update the status in the summary table.

Parameters:
  • detail ((str)) – value of the detail in the details column of the summary status table. Used to locate which row to update.

  • status ((str)) – new status to be updated for the row specified by detail.

Return type:

None
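
Example

For illustration only; the detail value must match a row shown by summary_status(), and the strings below are placeholders:

>>> model.update_summary_status(detail="<detail_value_from_summary_status>", status="Done")
>>> model.update_summary_action(detail="<detail_value_from_summary_status>", action="<new_action_text>")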

upload_artifact(uri: str, auth: Dict | None = None, force_overwrite: bool | None = False, parallel_process_count: int = 9) None#

Uploads model artifacts to the provided uri. The artifacts will be zipped before uploading.

Parameters:
  • uri (str) –

    The destination location for the model artifacts, which can be a local path or OCI object storage URI. Examples:

    >>> upload_artifact(uri="/some/local/folder/")
    >>> upload_artifact(uri="oci://bucket@namespace/prefix/")
    

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create an appropriate authentication signer and kwargs required to instantiate IdentityClient object.

  • force_overwrite (bool) – Overwrite the target directory if it exists.

  • parallel_process_count ((int, optional)) – The number of worker processes to use in parallel for uploading individual parts of a multipart upload.

verify(data: Any | None = None, reload_artifacts: bool = True, auto_serialize_data: bool = True, **kwargs) Dict[str, Any]#

Test if deployment works in local environment.

Examples

>>> uri = "https://github.com/pytorch/hub/raw/master/images/dog.jpg"
>>> prediction = model.verify(image=uri)['prediction']
>>> # examples on storage options
>>> prediction = model.verify(
...        image="oci://<bucket>@<tenancy>/myimage.png",
...        storage_options=ads.auth.default_signer()
... )['prediction']
Parameters:
  • data (Any) – Data used to test if deployment works in local environment.

  • reload_artifacts (bool. Defaults to True.) – Whether to reload artifacts or not.

  • auto_serialize_data (bool.) – Whether to auto serialize input data. Defaults to False for GenericModel, and True for other frameworks. data must be JSON serializable if auto_serialize_data=False. If auto_serialize_data is set to True, data will be serialized before being sent to the model deployment endpoint.

  • kwargs

    content_type: str. Used to indicate the media type of the resource.

    image: PIL.Image object or URI for the image.

    A valid string path for an image file can be a local path, or an http(s), oci, s3, or gs URI.

    storage_options: dict

    Passed to fsspec.open for a particular storage connection. Please see fsspec (https://filesystem-spec.readthedocs.io/en/latest/api.html#fsspec.open) for more details.

Returns:

A dictionary which contains prediction results.

Return type:

Dict

ads.model.framework.tensorflow_model module#

class ads.model.framework.tensorflow_model.TensorFlowModel(estimator: callable, artifact_dir: str | None = None, properties: ModelProperties | None = None, auth: Dict = None, model_save_serializer: SERDE | None = 'tf', model_input_serializer: SERDE | None = None, **kwargs)[source]#

Bases: FrameworkSpecificModel

TensorFlowModel class for estimators from Tensorflow framework.

algorithm#

The algorithm of the model.

Type:

str

artifact_dir#

Directory for the generated artifact.

Type:

str

auth#

Default authentication is set using the ads.set_auth API. To override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create an authentication signer to instantiate an IdentityClient object.

Type:

Dict

estimator#

A trained tensorflow estimator/model using Tensorflow.

Type:

Callable

framework#

“tensorflow”, the framework name of the model.

Type:

str

hyperparameter#

The hyperparameters of the estimator.

Type:

dict

metadata_custom#

The model custom metadata.

Type:

ModelCustomMetadata

metadata_provenance#

The model provenance metadata.

Type:

ModelProvenanceMetadata

metadata_taxonomy#

The model taxonomy metadata.

Type:

ModelTaxonomyMetadata

model_artifact#

This is built by calling prepare.

Type:

ModelArtifact

model_deployment#

A ModelDeployment instance.

Type:

ModelDeployment

model_file_name#

Name of the serialized model.

Type:

str

model_id#

The model ID.

Type:

str

properties#

ModelProperties object required to save and deploy model. For more details, check https://accelerated-data-science.readthedocs.io/en/latest/ads.model.html#module-ads.model.model_properties.

Type:

ModelProperties

runtime_info#

A RuntimeInfo instance.

Type:

RuntimeInfo

schema_input#

Schema describes the structure of the input data.

Type:

Schema

schema_output#

Schema describes the structure of the output data.

Type:

Schema

serialize#

Whether to serialize the model to pkl file by default. If False, you need to serialize the model manually, save it under artifact_dir and update the score.py manually.

Type:

bool

version#

The framework version of the model.

Type:

str

delete_deployment(...)#

Deletes the current model deployment.

deploy(..., \*\*kwargs)#

Deploys a model.

from_model_artifact(uri, model_file_name, artifact_dir, ..., \*\*kwargs)#

Loads model from the specified folder, or zip/tar archive.

from_model_catalog(model_id, model_file_name, artifact_dir, ..., \*\*kwargs)#

Loads model from model catalog.

introspect(...)#

Runs model introspection.

predict(data, ...)#

Returns prediction of input data run against the model deployment endpoint.

prepare(..., \*\*kwargs)#

Prepare and save the score.py, serialized model and runtime.yaml file.

reload(...)#

Reloads the model artifact files: score.py and the runtime.yaml.

save(..., \*\*kwargs)#

Saves model artifacts to the model catalog.

summary_status(...)#

Gets a summary table of the current status.

verify(data, ...)#

Tests if deployment works in local environment.

Examples

>>> from ads.model.framework.tensorflow_model import TensorFlowModel
>>> import tempfile
>>> import tensorflow as tf
>>> mnist = tf.keras.datasets.mnist
>>> (x_train, y_train), (x_test, y_test) = mnist.load_data()
>>> x_train, x_test = x_train / 255.0, x_test / 255.0
>>> tf_estimator = tf.keras.models.Sequential(
...                [
...                    tf.keras.layers.Flatten(input_shape=(28, 28)),
...                    tf.keras.layers.Dense(128, activation="relu"),
...                    tf.keras.layers.Dropout(0.2),
...                    tf.keras.layers.Dense(10),
...                ]
...            )
>>> loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
>>> tf_estimator.compile(optimizer="adam", loss=loss_fn, metrics=["accuracy"])
>>> tf_estimator.fit(x_train, y_train, epochs=1)
>>> tf_model = TensorFlowModel(estimator=tf_estimator,
... artifact_dir=tempfile.mkdtemp())
>>> inference_conda_env = "generalml_p37_cpu_v1"
>>> tf_model.prepare(inference_conda_env="generalml_p37_cpu_v1", force_overwrite=True)
>>> tf_model.verify(x_test[:1])
>>> tf_model.save()
>>> model_deployment = tf_model.deploy(wait_for_completion=False)
>>> tf_model.predict(x_test[:1])

Initiates a TensorFlowModel instance.

Parameters:
  • estimator (callable) – Any model object generated by the TensorFlow framework.

  • artifact_dir (str) – Directory for the generated artifact.

  • properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create an appropriate authentication signer and kwargs required to instantiate IdentityClient object.

  • model_save_serializer ((SERDE or str, optional). Defaults to "tf".) – Instance of ads.model.SERDE. Used for serialize/deserialize model.

  • model_input_serializer ((SERDE, optional). Defaults to None.) – Instance of ads.model.SERDE. Used for serialize/deserialize data.

Returns:

TensorFlowModel instance.

Return type:

TensorFlowModel

classmethod delete(model_id: str | None = None, delete_associated_model_deployment: bool | None = False, delete_model_artifact: bool | None = False, artifact_dir: str | None = None, **kwargs: Dict) None#

Deletes a model from Model Catalog.

Parameters:
  • model_id ((str, optional). Defaults to None.) – The model OCID to be deleted. If the method called on instance level, then self.model_id will be used.

  • delete_associated_model_deployment ((bool, optional). Defaults to False.) – Whether associated model deployments need to be deleted or not.

  • delete_model_artifact ((bool, optional). Defaults to False.) – Whether associated model artifacts need to be deleted or not.

  • artifact_dir ((str, optional). Defaults to None) – The local path to the model artifacts folder. If the method called on instance level, the self.artifact_dir will be used by default.

Return type:

None

Raises:

ValueError – If model_id is not provided.
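
Example

A hedged sketch with placeholder values:

>>> TensorFlowModel.delete(
...     model_id="<model_ocid>",
...     delete_associated_model_deployment=True,
...     delete_model_artifact=True,
...     artifact_dir="<local_artifact_dir>",
... )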

delete_deployment(wait_for_completion: bool = True) None#

Deletes the current deployment.

Parameters:

wait_for_completion ((bool, optional). Defaults to True.) – Whether to wait till completion.

Return type:

None

Raises:

ValueError – If there is no deployment attached yet.

deploy(wait_for_completion: bool | None = True, display_name: str | None = None, description: str | None = None, deployment_instance_shape: str | None = None, deployment_instance_subnet_id: str | None = None, deployment_instance_count: int | None = None, deployment_bandwidth_mbps: int | None = None, deployment_log_group_id: str | None = None, deployment_access_log_id: str | None = None, deployment_predict_log_id: str | None = None, deployment_memory_in_gbs: float | None = None, deployment_ocpus: float | None = None, deployment_image: str | None = None, **kwargs: Dict) ModelDeployment#

Deploys a model. The model needs to be saved to the model catalog at first. You can deploy the model on either conda or container runtime. The customized runtime allows you to bring your own service container. To deploy model on container runtime, make sure to build the container and push it to OCIR. For more information, see https://docs.oracle.com/en-us/iaas/data-science/using/mod-dep-byoc.htm.

Example

>>> # This is an example to deploy model on container runtime
>>> model = GenericModel(estimator=estimator, artifact_dir=tempfile.mkdtemp())
>>> model.summary_status()
>>> model.prepare(
...     model_file_name="toy_model.pkl",
...     ignore_conda_error=True, # set ignore_conda_error=True for container runtime
...     force_overwrite=True
... )
>>> model.verify()
>>> model.save()
>>> model.deploy(
...     deployment_image="iad.ocir.io/<namespace>/<image>:<tag>",
...     entrypoint=["python", "/opt/ds/model/deployed_model/api.py"],
...     server_port=5000,
...     health_check_port=5000,
...     environment_variables={"key":"value"}
... )
Parameters:
  • wait_for_completion ((bool, optional). Defaults to True.) – Flag set for whether to wait for deployment to complete before proceeding.

  • display_name ((str, optional). Defaults to None.) – The name of the model. If a display_name is not provided in kwargs, a random, easy-to-remember name with a timestamp will be generated, like 'strange-spider-2022-08-17-23:55.02'.

  • description ((str, optional). Defaults to None.) – The description of the model.

  • deployment_instance_shape ((str, optional). Default to VM.Standard2.1.) – The shape of the instance used for deployment.

  • deployment_instance_subnet_id ((str, optional). Default to None.) – The subnet id of the instance used for deployment.

  • deployment_instance_count ((int, optional). Defaults to 1.) – The number of instances used for deployment.

  • deployment_bandwidth_mbps ((int, optional). Defaults to 10.) – The bandwidth limit on the load balancer in Mbps.

  • deployment_memory_in_gbs ((float, optional). Defaults to None.) – Specifies the size of the memory of the model deployment instance in GBs.

  • deployment_ocpus ((float, optional). Defaults to None.) – Specifies the ocpus count of the model deployment instance.

  • deployment_log_group_id ((str, optional). Defaults to None.) – The OCI logging group id. The access log and predict log share the same log group.

  • deployment_access_log_id ((str, optional). Defaults to None.) – The access log OCID for the access logs. https://docs.oracle.com/en-us/iaas/data-science/using/model_dep_using_logging.htm

  • deployment_predict_log_id ((str, optional). Defaults to None.) – The predict log OCID for the predict logs. https://docs.oracle.com/en-us/iaas/data-science/using/model_dep_using_logging.htm

  • deployment_image ((str, optional). Defaults to None.) – The OCIR path of docker container image. Required for deploying model on container runtime.

  • kwargs

    project_id: (str, optional).

    Project OCID. If not specified, the value will be taken from the environment variables.

    compartment_id(str, optional).

    Compartment OCID. If not specified, the value will be taken from the environment variables.

    max_wait_time(int, optional). Defaults to 1200 seconds.

    Maximum amount of time to wait in seconds. Negative implies infinite wait time.

    poll_interval(int, optional). Defaults to 10 seconds.

    Poll interval in seconds.

    freeform_tags: (Dict[str, str], optional). Defaults to None.

    Freeform tags of the model deployment.

    defined_tags: (Dict[str, dict[str, object]], optional). Defaults to None.

    Defined tags of the model deployment.

    image_digest: (str, optional). Defaults to None.

    The digest of docker container image.

    cmd: (List, optional). Defaults to empty.

    The command line arguments for running docker container image.

    entrypoint: (List, optional). Defaults to empty.

    The entrypoint for running docker container image.

    server_port: (int, optional). Defaults to 8080.

    The server port for docker container image.

    health_check_port: (int, optional). Defaults to 8080.

    The health check port for docker container image.

    deployment_mode: (str, optional). Defaults to HTTPS_ONLY.

    The deployment mode. Allowed values are: HTTPS_ONLY and STREAM_ONLY.

    input_stream_ids: (List, optional). Defaults to empty.

    The input stream ids. Required for STREAM_ONLY mode.

    output_stream_ids: (List, optional). Defaults to empty.

    The output stream ids. Required for STREAM_ONLY mode.

    environment_variables: (Dict, optional). Defaults to empty.

    The environment variables for model deployment.

    Also can be any keyword argument for initializing the ads.model.deployment.ModelDeploymentProperties. See ads.model.deployment.ModelDeploymentProperties() for details.

Returns:

The ModelDeployment instance.

Return type:

ModelDeployment

Raises:

ValueError – If model_id is not specified.

download_artifact(artifact_dir: str | None = None, auth: Dict | None = None, force_overwrite: bool | None = False, bucket_uri: str | None = None, remove_existing_artifact: bool | None = True, **kwargs) GenericModel#

Downloads model artifacts from the model catalog.

Parameters:
  • artifact_dir ((str, optional). Defaults to None.) – The artifact directory to store the files needed for deployment. Will be created if not exists.

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.

  • force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files or not.

  • bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for downloading large artifacts whose size is greater than 2GB. Example: oci://<bucket_name>@<namespace>/prefix/.

  • remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to object storage bucket need to be removed or not.

Returns:

An instance of GenericModel class.

Return type:

Self

Raises:

ValueError – If model_id is not available in the GenericModel object.
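
Example

A hedged sketch, assuming the model was already saved so that model_id is populated; the paths below are placeholders:

>>> model.download_artifact(
...     artifact_dir="<local_artifact_dir>",
...     force_overwrite=True,
...     bucket_uri="oci://<bucket_name>@<namespace>/prefix/",  # only needed for artifacts larger than 2GB
... )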

evaluate(X: _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | str | bytes | _NestedSequence[bool | int | float | complex | str | bytes], y: _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | str | bytes | _NestedSequence[bool | int | float | complex | str | bytes], y_pred: _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | str | bytes | _NestedSequence[bool | int | float | complex | str | bytes] | None = None, y_score: _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | str | bytes | _NestedSequence[bool | int | float | complex | str | bytes] | None = None, X_train: _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | str | bytes | _NestedSequence[bool | int | float | complex | str | bytes] | None = None, y_train: _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | str | bytes | _NestedSequence[bool | int | float | complex | str | bytes] | None = None, classes: List | None = None, positive_class: str | None = None, legend_labels: dict | None = None, perfect: bool = True, filename: str | None = None, use_case_type: str | None = None)#

Creates an ads evaluation report.

Parameters:
  • X (DataFrame-like) – The data used to make a prediction. Can be set to None if y_pred is given (and y_score for more thorough analysis).

  • y (array-like) – The true values corresponding to the input data

  • y_pred (array-like, optional) – The predictions from each model in the same order as the models

  • y_score (array-like, optional) – The predict_probas from each model in the same order as the models

  • X_train (DataFrame-like, optional) – The data used to train the model

  • y_train (array-like, optional) – The true values corresponding to the input training data

  • classes (List or None, optional) – A List of the possible labels for y, when evaluating a classification use case

  • positive_class (str or int, optional) – The class to report metrics for in a binary dataset. If the target classes are True or False, positive_class will be set to True by default. If the dataset is multiclass or multilabel, this will be ignored.

  • legend_labels (dict, optional) – List of legend labels. Defaults to None. If legend_labels is not specified, class names will be used for plots.

  • use_case_type (str, optional) – The type of problem this model is solving. This can be set during prepare(). Examples: “binary_classification”, “regression”, “multinomial_classification” Full list of supported types can be found here: ads.common.model_metadata.UseCaseType

  • filename (str, optional) – If filename is given, the html report will be saved to the location specified.

Examples

>>> import tempfile
>>> from ads.evaluations.evaluator import Evaluator
>>> from sklearn.tree import DecisionTreeClassifier
>>> from sklearn.datasets import make_classification
>>> from sklearn.model_selection import train_test_split
>>> from ads.model.framework.sklearn_model import SklearnModel
>>> from ads.common.model_metadata import UseCaseType
>>>
>>> X, y = make_classification(n_samples=1000)
>>> X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
>>> est = DecisionTreeClassifier().fit(X_train, y_train)
>>> model = SklearnModel(estimator=est, artifact_dir=tempfile.mkdtemp())
>>> model.prepare(
...     inference_conda_env="generalml_p38_cpu_v1",
...     training_conda_env="generalml_p38_cpu_v1",
...     X_sample=X_test,
...     y_sample=y_test,
...     use_case_type=UseCaseType.BINARY_CLASSIFICATION,
... )
>>> model.evaluate(X_test, y_test, filename="report.html")

classmethod from_id(ocid: str, model_file_name: str | None = None, artifact_dir: str | None = None, auth: Dict | None = None, force_overwrite: bool | None = False, properties: ModelProperties | Dict | None = None, bucket_uri: str | None = None, remove_existing_artifact: bool | None = True, ignore_conda_error: bool | None = False, download_artifact: bool | None = True, **kwargs) Self#

Loads model from model OCID or model deployment OCID.

Parameters:
  • ocid (str) – The model OCID or model deployment OCID.

  • model_file_name ((str, optional). Defaults to None.) – The name of the serialized model.

  • artifact_dir ((str, optional). Defaults to None.) – The artifact directory to store the files needed for deployment. Will be created if not exists.

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create an appropriate authentication signer and kwargs required to instantiate IdentityClient object.

  • force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files or not.

  • properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.

  • bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for downloading large artifacts whose size is greater than 2GB. Example: oci://<bucket_name>@<namespace>/prefix/.

  • remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to the object storage bucket need to be removed or not.

  • ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore error when collecting conda information.

  • download_artifact ((bool, optional). Defaults to True.) – Whether to download the model pickle or checkpoints

  • kwargs

    compartment_id(str, optional)

    Compartment OCID. If not specified, the value will be taken from the environment variables.

    timeout(int, optional). Defaults to 10 seconds.

    The connection timeout in seconds for the client.

Returns:

An instance of GenericModel class.

Return type:

Self
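
Example

For illustration, with placeholder OCID and path; from_id accepts either a model OCID or a model deployment OCID:

>>> tf_model = TensorFlowModel.from_id(
...     ocid="<model_or_model_deployment_ocid>",
...     artifact_dir="<local_artifact_dir>",
...     force_overwrite=True,
... )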

classmethod from_model_artifact(uri: str, model_file_name: str | None = None, artifact_dir: str | None = None, auth: Dict | None = None, force_overwrite: bool | None = False, properties: ModelProperties | None = None, ignore_conda_error: bool | None = False, **kwargs: dict) Self#

Loads model from a folder, or zip/tar archive.

Parameters:
  • uri (str) – The folder path, ZIP file path, or TAR file path. It could contain a serialized model (required) as well as any files needed for deployment, including runtime.yaml, score.py, etc. The content of the folder will be copied to the artifact_dir folder.

  • model_file_name ((str, optional). Defaults to None.) – The serialized model file name. Will be extracted from artifacts if not provided.

  • artifact_dir ((str, optional). Defaults to None.) – The artifact directory to store the files needed for deployment. Will be created if not exists.

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create an appropriate authentication signer and kwargs required to instantiate IdentityClient object.

  • force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files or not.

  • properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.

  • ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore error when collecting conda information.

Returns:

An instance of GenericModel class.

Return type:

Self

Raises:

ValueError – If model_file_name not provided.
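
Example

A hedged sketch; the paths and file name below are placeholders:

>>> tf_model = TensorFlowModel.from_model_artifact(
...     uri="<folder_or_zip_or_tar_path>",
...     model_file_name="<serialized_model_file_name>",
...     artifact_dir="<local_artifact_dir>",
...     force_overwrite=True,
... )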

classmethod from_model_catalog(model_id: str, model_file_name: str | None = None, artifact_dir: str | None = None, auth: Dict | None = None, force_overwrite: bool | None = False, properties: ModelProperties | Dict | None = None, bucket_uri: str | None = None, remove_existing_artifact: bool | None = True, ignore_conda_error: bool | None = False, download_artifact: bool | None = True, **kwargs) Self#

Loads model from model catalog.

Parameters:
  • model_id (str) – The model OCID.

  • model_file_name ((str, optional). Defaults to None.) – The name of the serialized model.

  • artifact_dir ((str, optional). Defaults to None.) – The artifact directory to store the files needed for deployment. Will be created if not exists.

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create an appropriate authentication signer and kwargs required to instantiate IdentityClient object.

  • force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files or not.

  • properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.

  • bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for downloading large artifacts whose size is greater than 2GB. Example: oci://<bucket_name>@<namespace>/prefix/.

  • remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to the object storage bucket need to be removed or not.

  • ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore error when collecting conda information.

  • download_artifact ((bool, optional). Defaults to True.) – Whether to download the model pickle or checkpoints

  • kwargs

    compartment_id(str, optional)

    Compartment OCID. If not specified, the value will be taken from the environment variables.

    timeout(int, optional). Defaults to 10 seconds.

    The connection timeout in seconds for the client.

    region: (str, optional). Defaults to None.

    The destination Object Storage bucket region. By default the value will be extracted from the OCI_REGION_METADATA environment variables.

Returns:

An instance of GenericModel class.

Return type:

Self
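
Example

A hedged sketch with placeholder values; bucket_uri is only needed when the artifact is larger than 2GB:

>>> tf_model = TensorFlowModel.from_model_catalog(
...     model_id="<model_ocid>",
...     artifact_dir="<local_artifact_dir>",
...     bucket_uri="oci://<bucket_name>@<namespace>/prefix/",
... )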

classmethod from_model_deployment(model_deployment_id: str, model_file_name: str | None = None, artifact_dir: str | None = None, auth: Dict | None = None, force_overwrite: bool | None = False, properties: ModelProperties | Dict | None = None, bucket_uri: str | None = None, remove_existing_artifact: bool | None = True, ignore_conda_error: bool | None = False, download_artifact: bool | None = True, **kwargs) Self#

Loads model from model deployment.

Parameters:
  • model_deployment_id (str) – The model deployment OCID.

  • model_file_name ((str, optional). Defaults to None.) – The name of the serialized model.

  • artifact_dir ((str, optional). Defaults to None.) – The artifact directory to store the files needed for deployment. Will be created if not exists.

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create an appropriate authentication signer and kwargs required to instantiate IdentityClient object.

  • force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files or not.

  • properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.

  • bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for downloading large artifacts whose size is greater than 2GB. Example: oci://<bucket_name>@<namespace>/prefix/.

  • remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to the object storage bucket need to be removed or not.

  • ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore error when collecting conda information.

  • download_artifact ((bool, optional). Defaults to True.) – Whether to download the model pickle or checkpoints

  • kwargs

    compartment_id(str, optional)

    Compartment OCID. If not specified, the value will be taken from the environment variables.

    timeout(int, optional). Defaults to 10 seconds.

    The connection timeout in seconds for the client.

    region: (str, optional). Defaults to None.

    The destination Object Storage bucket region. By default the value will be extracted from the OCI_REGION_METADATA environment variables.

Returns:

An instance of GenericModel class.

Return type:

Self
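
Example

A hedged sketch with placeholder values:

>>> tf_model = TensorFlowModel.from_model_deployment(
...     model_deployment_id="<model_deployment_ocid>",
...     artifact_dir="<local_artifact_dir>",
... )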

get_data_serializer()#

Gets data serializer.

Returns:

The data serializer object.

Return type:

ads.model.Serializer

get_model_serializer()#

Gets model serializer.

introspect() DataFrame#

Conducts introspection.

Returns:

A pandas DataFrame which contains the introspection results.

Return type:

pandas.DataFrame
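
Example

For illustration, after prepare() has generated the artifacts:

>>> result_df = tf_model.introspect()  # returns a pandas DataFrame of introspection results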

property metadata_custom#
property metadata_provenance#
property metadata_taxonomy#
property model_deployment_id#
property model_id#
model_input_serializer_type#

alias of ModelInputSerializerType

model_save_serializer_type#

alias of TensorflowModelSerializerType

populate_metadata(use_case_type: str | None = None, data_sample: ADSData | None = None, X_sample: list | tuple | DataFrame | Series | ndarray | None = None, y_sample: list | tuple | DataFrame | Series | ndarray | None = None, training_script_path: str | None = None, training_id: str | None = None, ignore_pending_changes: bool = True, max_col_num: int = 2000, ignore_conda_error: bool = False, **kwargs)#

Populates input schema and output schema. If the schema exceeds the limit of 32kb, save as json files to the artifact directory.

Parameters:
  • use_case_type ((str, optional). Defaults to None.) – The use case type of the model.

  • data_sample ((ADSData, optional). Defaults to None.) – A sample of the data that will be used to generate input_schema and output_schema.

  • X_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of input data that will be used to generate input schema.

  • y_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of output data that will be used to generate output schema.

  • training_script_path (str. Defaults to None.) – Training script path.

  • training_id ((str, optional). Defaults to None.) – The training model OCID.

  • ignore_pending_changes (bool. Defaults to True.) – Ignore the pending changes in git.

  • max_col_num ((int, optional). Defaults to utils.DATA_SCHEMA_MAX_COL_NUM.) – The maximum number of columns allowed in auto generated schema.

Returns:

Nothing.

Return type:

None

populate_schema(data_sample: ADSData | None = None, X_sample: List | Tuple | DataFrame | Series | ndarray | None = None, y_sample: List | Tuple | DataFrame | Series | ndarray | None = None, max_col_num: int = 2000, **kwargs)#

Populate input and output schemas. If the schema exceeds the limit of 32kb, save as json files to the artifact dir.

Parameters:
  • data_sample (ADSData) – A sample of the data that will be used to generate input_schema and output_schema.

  • X_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]) – A sample of input data that will be used to generate the input schema.

  • y_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]) – A sample of output data that will be used to generate the output schema.

  • max_col_num ((int, optional). Defaults to utils.DATA_SCHEMA_MAX_COL_NUM.) – The maximum number of columns allowed in auto generated schema.

predict(data: Any | None = None, auto_serialize_data: bool = True, **kwargs) Dict[str, Any]#

Returns prediction of input data run against the model deployment endpoint.

Examples

>>> uri = "https://github.com/pytorch/hub/raw/master/images/dog.jpg"
>>> prediction = model.predict(image=uri)['prediction']
>>> # examples on storage options
>>> prediction = model.predict(
...        image="oci://<bucket>@<tenancy>/myimage.png",
...        storage_options=ads.auth.default_signer()
... )['prediction']
Parameters:
  • data (Any) – Data for the prediction. For ONNX models and for the local serialization method, data can be any of the data types that each framework supports.

  • auto_serialize_data (bool.) – Whether to auto serialize input data. Defaults to False for GenericModel, and True for other frameworks. data must be JSON serializable if auto_serialize_data=False. If auto_serialize_data is set to True, data will be serialized before being sent to the model deployment endpoint.

  • kwargs

    content_type: str. Used to indicate the media type of the resource.

    image: PIL.Image object or URI for the image.

    A valid string path for an image file can be a local path, or an http(s), oci, s3, or gs URI.

    storage_options: dict

    Passed to fsspec.open for a particular storage connection. Please see fsspec (https://filesystem-spec.readthedocs.io/en/latest/api.html#fsspec.open) for more details.

Returns:

Dictionary with the predicted values.

Return type:

Dict[str, Any]

prepare(inference_conda_env: str | None = None, inference_python_version: str | None = None, training_conda_env: str | None = None, training_python_version: str | None = None, model_file_name: str | None = None, as_onnx: bool = False, initial_types: List[Tuple] | None = None, force_overwrite: bool = False, namespace: str = 'id19sfcrra6z', use_case_type: str | None = None, X_sample: list | tuple | DataFrame | Series | ndarray | None = None, y_sample: list | tuple | DataFrame | Series | ndarray | None = None, training_script_path: str | None = None, training_id: str | None = None, ignore_pending_changes: bool = True, max_col_num: int = 2000, ignore_conda_error: bool = False, score_py_uri: str | None = None, **kwargs: Dict) GenericModel#

Prepare and save the score.py, serialized model and runtime.yaml file.

Parameters:
  • inference_conda_env ((str, optional). Defaults to None.) – Can be either slug or object storage path of the conda pack. You can only pass in slugs if the conda pack is a service pack.

  • inference_python_version ((str, optional). Defaults to None.) – Python version which will be used in deployment.

  • training_conda_env ((str, optional). Defaults to None.) – Can be either slug or object storage path of the conda pack. You can only pass in slugs if the conda pack is a service pack. If training_conda_env is not provided, it defaults to the value of inference_conda_env.

  • training_python_version ((str, optional). Defaults to None.) – Python version used during training.

  • model_file_name ((str, optional). Defaults to None.) – Name of the serialized model. Will be auto generated if not provided.

  • as_onnx ((bool, optional). Defaults to False.) – Whether to serialize as onnx model.

  • initial_types ((list[Tuple], optional).) – Defaults to None. Only used for SklearnModel, LightGBMModel and XGBoostModel. Each element is a tuple of a variable name and a type. Check this link http://onnx.ai/sklearn-onnx/api_summary.html#id2 for more explanation and examples for initial_types.

  • force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files.

  • namespace ((str, optional).) – Namespace of region. This is used for identifying which region the service pack is from when you pass a slug to inference_conda_env and training_conda_env.

  • use_case_type (str) – The use case type of the model. Assign it using the UseCaseType class or a string listed in UseCaseType, for example use_case_type=UseCaseType.BINARY_CLASSIFICATION or use_case_type="binary_classification". Check the UseCaseType class to see all supported types.

  • X_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of input data that will be used to generate input schema.

  • y_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of output data that will be used to generate output schema.

  • training_script_path (str. Defaults to None.) – Training script path.

  • training_id ((str, optional). Defaults to value from environment variables.) – The training OCID for model. Can be notebook session or job OCID.

  • ignore_pending_changes ((bool, optional). Defaults to True.) – Whether to ignore pending changes in git.

  • max_col_num ((int, optional). Defaults to utils.DATA_SCHEMA_MAX_COL_NUM.) – Do not generate the input schema if the input has more than this number of features(columns).

  • ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore error when collecting conda information.

  • score_py_uri ((str, optional). Defaults to None.) – The URI of a customized score.py, which can be a local path or an OCI Object Storage URI. When this attribute is provided, score.py will not be auto generated, and the provided score.py will be added to artifact_dir.

  • kwargs

    impute_values: (dict, optional).

    The dictionary where the key is the column index (or the column name for a pandas DataFrame) and the value is the impute value for the corresponding column.

Raises:
  • FileExistsError – If files already exist but force_overwrite is False.

  • ValueError – If inference_python_version is not provided, but also cannot be found through manifest file.

Returns:

An instance of GenericModel class.

Return type:

GenericModel
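
Example (an illustrative sketch; the conda slug, Python version, file name and sample data below are placeholders, not values from the library docs):

>>> from ads.common.model_metadata import UseCaseType
>>> model.prepare(
...     inference_conda_env="generalml_p38_cpu_v1",
...     inference_python_version="3.8",
...     model_file_name="model.pkl",
...     force_overwrite=True,
...     use_case_type=UseCaseType.BINARY_CLASSIFICATION,
...     X_sample=X_test,
...     y_sample=y_test,
... )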

prepare_save_deploy(inference_conda_env: str | None = None, inference_python_version: str | None = None, training_conda_env: str | None = None, training_python_version: str | None = None, model_file_name: str | None = None, as_onnx: bool = False, initial_types: List[Tuple] | None = None, force_overwrite: bool = False, namespace: str = 'id19sfcrra6z', use_case_type: str | None = None, X_sample: list | tuple | DataFrame | Series | ndarray | None = None, y_sample: list | tuple | DataFrame | Series | ndarray | None = None, training_script_path: str | None = None, training_id: str | None = None, ignore_pending_changes: bool = True, max_col_num: int = 2000, ignore_conda_error: bool = False, model_display_name: str | None = None, model_description: str | None = None, model_freeform_tags: dict | None = None, model_defined_tags: dict | None = None, ignore_introspection: bool | None = False, wait_for_completion: bool | None = True, deployment_display_name: str | None = None, deployment_description: str | None = None, deployment_instance_shape: str | None = None, deployment_instance_subnet_id: str | None = None, deployment_instance_count: int | None = None, deployment_bandwidth_mbps: int | None = None, deployment_log_group_id: str | None = None, deployment_access_log_id: str | None = None, deployment_predict_log_id: str | None = None, deployment_memory_in_gbs: float | None = None, deployment_ocpus: float | None = None, deployment_image: str | None = None, bucket_uri: str | None = None, overwrite_existing_artifact: bool | None = True, remove_existing_artifact: bool | None = True, model_version_set: str | ModelVersionSet | None = None, version_label: str | None = None, model_by_reference: bool | None = False, **kwargs: Dict) ModelDeployment#

Shortcut for prepare, save and deploy steps.

Parameters:
  • inference_conda_env ((str, optional). Defaults to None.) – Can be either slug or object storage path of the conda pack. You can only pass in slugs if the conda pack is a service pack.

  • inference_python_version ((str, optional). Defaults to None.) – Python version which will be used in deployment.

  • training_conda_env ((str, optional). Defaults to None.) – Can be either slug or object storage path of the conda pack. You can only pass in slugs if the conda pack is a service pack. If training_conda_env is not provided, it defaults to the value of inference_conda_env.

  • training_python_version ((str, optional). Defaults to None.) – Python version used during training.

  • model_file_name ((str, optional). Defaults to None.) – Name of the serialized model.

  • as_onnx ((bool, optional). Defaults to False.) – Whether to serialize as onnx model.

  • initial_types ((list[Tuple], optional).) – Defaults to None. Only used for SklearnModel, LightGBMModel and XGBoostModel. Each element is a tuple of a variable name and a type. Check this link http://onnx.ai/sklearn-onnx/api_summary.html#id2 for more explanation and examples for initial_types.

  • force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files.

  • namespace ((str, optional).) – Namespace of region. This is used for identifying which region the service pack is from when you pass a slug to inference_conda_env and training_conda_env.

  • use_case_type (str) – The use case type of the model. Assign it using the UseCaseType class or a string listed in UseCaseType, for example use_case_type=UseCaseType.BINARY_CLASSIFICATION or use_case_type="binary_classification". Check the UseCaseType class to see all supported types.

  • X_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of input data that will be used to generate input schema.

  • y_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of output data that will be used to generate output schema.

  • training_script_path (str. Defaults to None.) – Training script path.

  • training_id ((str, optional). Defaults to value from environment variables.) – The training OCID for model. Can be notebook session or job OCID.

  • ignore_pending_changes ((bool, optional). Defaults to True.) – Whether to ignore pending changes in git.

  • max_col_num ((int, optional). Defaults to utils.DATA_SCHEMA_MAX_COL_NUM.) – Do not generate the input schema if the input has more than this number of features(columns).

  • ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore error when collecting conda information.

  • model_display_name ((str, optional). Defaults to None.) – The name of the model. If a model_display_name is not provided in kwargs, a randomly generated easy to remember name with timestamp will be generated, like ‘strange-spider-2022-08-17-23:55.02’.

  • model_description ((str, optional). Defaults to None.) – The description of the model.

  • model_freeform_tags (Dict(str, str), Defaults to None.) – Freeform tags for the model.

  • model_defined_tags ((Dict(str, dict(str, object)), optional). Defaults to None.) – Defined tags for the model.

  • ignore_introspection ((bool, optional). Defaults to False.) – Determine whether to ignore the result of model introspection or not. If set to True, the save will ignore all model introspection errors.

  • wait_for_completion ((bool, optional). Defaults to True.) – Flag set for whether to wait for deployment to complete before proceeding.

  • deployment_display_name ((str, optional). Defaults to None.) – The name of the model deployment. If a deployment_display_name is not provided in kwargs, a randomly generated easy to remember name with timestamp will be generated, like ‘strange-spider-2022-08-17-23:55.02’.

  • deployment_description ((str, optional). Defaults to None.) – The description of the model deployment.

  • deployment_instance_shape ((str, optional). Default to VM.Standard2.1.) – The shape of the instance used for deployment.

  • deployment_instance_subnet_id ((str, optional). Default to None.) – The subnet id of the instance used for deployment.

  • deployment_instance_count ((int, optional). Defaults to 1.) – The number of instances used for deployment.

  • deployment_bandwidth_mbps ((int, optional). Defaults to 10.) – The bandwidth limit on the load balancer in Mbps.

  • deployment_log_group_id ((str, optional). Defaults to None.) – The oci logging group id. The access log and predict log share the same log group.

  • deployment_access_log_id ((str, optional). Defaults to None.) – The access log OCID for the access logs. https://docs.oracle.com/en-us/iaas/data-science/using/model_dep_using_logging.htm

  • deployment_predict_log_id ((str, optional). Defaults to None.) – The predict log OCID for the predict logs. https://docs.oracle.com/en-us/iaas/data-science/using/model_dep_using_logging.htm

  • deployment_memory_in_gbs ((float, optional). Defaults to None.) – Specifies the size of the memory of the model deployment instance in GBs.

  • deployment_ocpus ((float, optional). Defaults to None.) – Specifies the ocpus count of the model deployment instance.

  • deployment_image ((str, optional). Defaults to None.) – The OCIR path of docker container image. Required for deploying model on container runtime.

  • bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for downloading large artifacts with a size greater than 2GB. Example: oci://<bucket_name>@<namespace>/prefix/.

  • overwrite_existing_artifact ((bool, optional). Defaults to True.) – Overwrite target bucket artifact if exists.

  • remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to the object storage bucket need to be removed or not.

  • model_version_set ((Union[str, ModelVersionSet], optional). Defaults to None.) – The Model version set OCID, or name, or ModelVersionSet instance.

  • version_label ((str, optional). Defaults to None.) – The model version label.

  • model_by_reference ((bool, optional)) – Whether model artifact is made available to Model Store by reference.

  • kwargs

    impute_values: (dict, optional).

    The dictionary where the key is the column index (or the column name for a pandas DataFrame) and the value is the impute value for the corresponding column.

    project_id: (str, optional).

    Project OCID. If not specified, the value will be taken either from the environment variables or model properties.

    compartment_id(str, optional).

    Compartment OCID. If not specified, the value will be taken either from the environment variables or model properties.

    image_digest: (str, optional). Defaults to None.

    The digest of docker container image.

    cmd: (List, optional). Defaults to empty.

    The command line arguments for running docker container image.

    entrypoint: (List, optional). Defaults to empty.

    The entrypoint for running docker container image.

    server_port: (int, optional). Defaults to 8080.

    The server port for docker container image.

    health_check_port: (int, optional). Defaults to 8080.

    The health check port for docker container image.

    deployment_mode: (str, optional). Defaults to HTTPS_ONLY.

    The deployment mode. Allowed values are: HTTPS_ONLY and STREAM_ONLY.

    input_stream_ids: (List, optional). Defaults to empty.

    The input stream ids. Required for STREAM_ONLY mode.

    output_stream_ids: (List, optional). Defaults to empty.

    The output stream ids. Required for STREAM_ONLY mode.

    environment_variables: (Dict, optional). Defaults to empty.

    The environment variables for model deployment.

    timeout: (int, optional). Defaults to 10 seconds.

    The connection timeout in seconds for the client.

    max_wait_time(int, optional). Defaults to 1200 seconds.

    Maximum amount of time to wait in seconds. Negative implies infinite wait time.

    poll_interval(int, optional). Defaults to 10 seconds.

    Poll interval in seconds.

    freeform_tags: (Dict[str, str], optional). Defaults to None.

    Freeform tags of the model deployment.

    defined_tags: (Dict[str, dict[str, object]], optional). Defaults to None.

    Defined tags of the model deployment.

    region: (str, optional). Defaults to None.

    The destination Object Storage bucket region. By default the value will be extracted from the OCI_REGION_METADATA environment variables.

    Also can be any keyword argument for initializing the ads.model.deployment.ModelDeploymentProperties. See ads.model.deployment.ModelDeploymentProperties() for details.

Returns:

The ModelDeployment instance.

Return type:

ModelDeployment

Raises:
  • FileExistsError – If files already exist but force_overwrite is False.

  • ValueError – If inference_python_version is not provided, but also cannot be found through manifest file.
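
Example (an illustrative sketch; the display names, conda slug and shape below are placeholders):

>>> deployment = model.prepare_save_deploy(
...     inference_conda_env="generalml_p38_cpu_v1",
...     model_display_name="my-model",
...     deployment_display_name="my-model-deployment",
...     deployment_instance_shape="VM.Standard2.1",
...     deployment_instance_count=1,
...     wait_for_completion=False,
... )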

reload() GenericModel#

Reloads the model artifact files: score.py and the runtime.yaml.

Returns:

An instance of GenericModel class.

Return type:

GenericModel

reload_runtime_info() None#

Reloads the model artifact file: runtime.yaml.

Returns:

Nothing.

Return type:

None

restart_deployment(max_wait_time: int = 1200, poll_interval: int = 10) ModelDeployment#

Restarts the current deployment.

Parameters:
  • max_wait_time ((int, optional). Defaults to 1200 seconds.) – Maximum amount of time to wait for activate or deactivate in seconds. Total amount of time to wait for restart deployment is twice as the value. Negative implies infinite wait time.

  • poll_interval ((int, optional). Defaults to 10 seconds.) – Poll interval in seconds.

Returns:

The ModelDeployment instance.

Return type:

ModelDeployment

save(bucket_uri: str | None = None, defined_tags: dict | None = None, description: str | None = None, display_name: str | None = None, featurestore_dataset=None, freeform_tags: dict | None = None, ignore_introspection: bool | None = False, model_version_set: str | ModelVersionSet | None = None, overwrite_existing_artifact: bool | None = True, parallel_process_count: int = 9, remove_existing_artifact: bool | None = True, reload: bool | None = True, version_label: str | None = None, model_by_reference: bool | None = False, **kwargs) str#

Saves model artifacts to the model catalog.

Parameters:
  • display_name ((str, optional). Defaults to None.) – The name of the model. If a display_name is not provided in kwargs, randomly generated easy to remember name with timestamp will be generated, like ‘strange-spider-2022-08-17-23:55.02’.

  • description ((str, optional). Defaults to None.) – The description of the model.

  • freeform_tags (Dict(str, str), Defaults to None.) – Freeform tags for the model.

  • defined_tags ((Dict(str, dict(str, object)), optional). Defaults to None.) – Defined tags for the model.

  • ignore_introspection ((bool, optional). Defaults to False.) – Determine whether to ignore the result of model introspection or not. If set to True, the save will ignore all model introspection errors.

  • bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for uploading large artifacts with a size greater than 2GB. Example: oci://<bucket_name>@<namespace>/prefix/.

  • overwrite_existing_artifact ((bool, optional). Defaults to True.) – Overwrite target bucket artifact if exists.

  • remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to object storage bucket need to be removed or not.

  • model_version_set ((Union[str, ModelVersionSet], optional). Defaults to None.) – The model version set OCID, or model version set name, or ModelVersionSet instance.

  • version_label ((str, optional). Defaults to None.) – The model version label.

  • featurestore_dataset ((Dataset, optional).) – The feature store dataset

  • parallel_process_count ((int, optional)) – The number of worker processes to use in parallel for uploading individual parts of a multipart upload.

  • reload ((bool, optional)) – Whether to reload to check if load_model() works in score.py. Default to True.

  • model_by_reference ((bool, optional)) – Whether model artifact is made available to Model Store by reference.

  • kwargs

    project_id: (str, optional).

    Project OCID. If not specified, the value will be taken either from the environment variables or model properties.

    compartment_id(str, optional).

    Compartment OCID. If not specified, the value will be taken either from the environment variables or model properties.

    region: (str, optional). Defaults to None.

    The destination Object Storage bucket region. By default the value will be extracted from the OCI_REGION_METADATA environment variables.

    timeout: (int, optional). Defaults to 10 seconds.

    The connection timeout in seconds for the client.

    Also can be any attribute that oci.data_science.models.Model accepts.

Raises:

RuntimeInfoInconsistencyError – When .runtime_info is not synched with runtime.yaml file.

Returns:

The model id.

Return type:

str

Examples

Example for saving large model artifacts (>2GB):

>>> model.save(
...     bucket_uri="oci://my-bucket@my-tenancy/",
...     overwrite_existing_artifact=True,
...     remove_existing_artifact=True,
...     parallel_process_count=9,
... )

property schema_input#
property schema_output#
serialize_model(as_onnx: bool = False, X_sample: Dict | str | List | Tuple | ndarray | Series | DataFrame | None = None, force_overwrite: bool = False, **kwargs) None[source]#

Serialize and save Tensorflow model using ONNX or model specific method.

Parameters:
  • as_onnx ((bool, optional). Defaults to False.) – If set as True, convert into ONNX model.

  • X_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of input data that will be used to generate input schema and detect input_signature.

  • force_overwrite ((bool, optional). Defaults to False.) – If set as True, overwrite serialized model if exists.

  • **kwargs – Optional params used to serialize the TensorFlow model to ONNX, including the following:

    input_signature: a tuple or a list of tf.TensorSpec objects. Defaults to None. Define the shape/dtype of the input so that model(input_signature) is a valid invocation of the model.

    opset_version: int. Defaults to None. Used for the ONNX model.

Returns:

Nothing.

Return type:

None
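
Example (an illustrative sketch; assumes `tf_model` is a TensorFlow-based ADS model instance and the input shape below is hypothetical):

>>> import tensorflow as tf
>>> tf_model.serialize_model(as_onnx=False, force_overwrite=True)
>>> # Or convert to ONNX with an explicit input signature.
>>> tf_model.serialize_model(
...     as_onnx=True,
...     input_signature=[tf.TensorSpec([None, 28, 28], tf.float32)],
... )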

set_model_input_serializer(model_input_serializer: str | SERDE)#

Registers serializer used for serializing data passed in verify/predict.

Examples

>>> generic_model.set_model_input_serializer(GenericModel.model_input_serializer_type.CLOUDPICKLE)
>>> # Register serializer by passing the name of it.
>>> generic_model.set_model_input_serializer("cloudpickle")
>>> # Example of creating customized model input serializer and registering it.
>>> from ads.model import SERDE
>>> from ads.model.generic_model import GenericModel
>>> class MySERDE(SERDE):
...     def __init__(self):
...         super().__init__()
...     def serialize(self, data):
...         serialized_data = 1
...         return serialized_data
...     def deserialize(self, data):
...         deserialized_data = 2
...         return deserialized_data
>>> class Toy:
...     def predict(self, x):
...         return x ** 2
>>> generic_model = GenericModel(
...    estimator=Toy(),
...    artifact_dir=tempfile.mkdtemp(),
...    model_input_serializer=MySERDE()
... )
>>> # Or register the serializer after creating model instance.
>>> generic_model.set_model_input_serializer(MySERDE())
Parameters:

model_input_serializer ((str, or ads.model.SERDE)) – name of the serializer, or instance of SERDE.

set_model_save_serializer(model_save_serializer: str | SERDE)#

Registers serializer used for saving model.

Examples

>>> generic_model.set_model_save_serializer(GenericModel.model_save_serializer_type.CLOUDPICKLE)
>>> # Register serializer by passing the name of it.
>>> generic_model.set_model_save_serializer("cloudpickle")
>>> # Example of creating customized model save serializer and registering it.
>>> from ads.model import SERDE
>>> from ads.model.generic_model import GenericModel
>>> class MySERDE(SERDE):
...     def __init__(self):
...         super().__init__()
...     def serialize(self, data):
...         serialized_data = 1
...         return serialized_data
...     def deserialize(self, data):
...         deserialized_data = 2
...         return deserialized_data
>>> class Toy:
...     def predict(self, x):
...         return x ** 2
>>> generic_model = GenericModel(
...    estimator=Toy(),
...    artifact_dir=tempfile.mkdtemp(),
...    model_save_serializer=MySERDE()
... )
>>> # Or register the serializer after creating model instance.
>>> generic_model.set_model_save_serializer(MySERDE())
Parameters:

model_save_serializer ((ads.model.SERDE or str)) – name of the serializer or instance of SERDE.

summary_status() DataFrame#

A summary table of the current status.

Returns:

The summary table of the current status.

Return type:

pd.DataFrame

update(**kwargs) GenericModel#

Updates model metadata in the Model Catalog. Updates only metadata information. The model artifacts are immutable and cannot be updated.

Parameters:

kwargs

display_name: (str, optional). Defaults to None.

The name of the model.

description: (str, optional). Defaults to None.

The description of the model.

freeform_tags: (Dict(str, str), optional). Defaults to None.

Freeform tags for the model.

defined_tags: (Dict(str, dict(str, object)), optional). Defaults to None.

Defined tags for the model.

version_label: (str, optional). Defaults to None.

The model version label.

Additional kwargs arguments. Can be any attribute that oci.data_science.models.Model accepts.

Returns:

An instance of GenericModel (self).

Return type:

GenericModel

Raises:

ValueError – If the model is not saved to the Model Catalog.
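
Example (an illustrative sketch; assumes the model has already been saved to the Model Catalog):

>>> model.update(
...     display_name="new display name",
...     description="updated description",
...     freeform_tags={"key": "value"},
... )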

classmethod update_deployment(model_deployment_id: str | None = None, properties: ModelDeploymentProperties | dict | None = None, wait_for_completion: bool = True, max_wait_time: int = 1200, poll_interval: int = 10, **kwargs) ModelDeployment#

Updates a model deployment.

You can update model_deployment_configuration_details and change instance_shape and model_id when the model deployment is in the ACTIVE lifecycle state. The bandwidth_mbps or instance_count can only be updated while the model deployment is in the INACTIVE state. Changes to the bandwidth_mbps or instance_count will take effect the next time the ActivateModelDeployment action is invoked on the model deployment resource.

Examples

>>> # Update access log id, freeform tags and description for the model deployment
>>> model.update_deployment(
...     access_log={
...         "log_id": "<log_ocid>"
...     },
...     description="Description for Custom Model",
...     freeform_tags={"key": "value"},
... )
Parameters:
  • model_deployment_id (str.) – The model deployment OCID. Defaults to None. If the method called on instance level, then self.model_deployment.model_deployment_id will be used.

  • properties (ModelDeploymentProperties or dict) – The properties for updating the deployment.

  • wait_for_completion (bool) – Flag set for whether to wait for deployment to complete before proceeding. Defaults to True.

  • max_wait_time (int) – Maximum amount of time to wait in seconds (Defaults to 1200). Negative implies infinite wait time.

  • poll_interval (int) – Poll interval in seconds (Defaults to 10).

  • kwargs

    auth: (Dict, optional). Defaults to None.

    The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.

    display_name: (str)

    Model deployment display name

    description: (str)

    Model deployment description

    freeform_tags: (dict)

    Model deployment freeform tags

    defined_tags: (dict)

    Model deployment defined tags

    Additional kwargs arguments. Can be any attribute that ads.model.deployment.ModelDeploymentCondaRuntime, ads.model.deployment.ModelDeploymentContainerRuntime and ads.model.deployment.ModelDeploymentInfrastructure accepts.

Returns:

An instance of ModelDeployment class.

Return type:

ModelDeployment

update_summary_action(detail: str, action: str)#

Update the actions needed from the user in the summary table.

Parameters:
  • detail ((str)) – value of the detail in the details column of the summary status table. Used to locate which row to update.

  • action ((str)) – new action to be updated for the row specified by detail.

Return type:

None

update_summary_status(detail: str, status: str)#

Update the status in the summary table.

Parameters:
  • detail ((str)) – value of the detail in the details column of the summary status table. Used to locate which row to update.

  • status ((str)) – new status to be updated for the row specified by detail.

Return type:

None

upload_artifact(uri: str, auth: Dict | None = None, force_overwrite: bool | None = False, parallel_process_count: int = 9) None#

Uploads model artifacts to the provided uri. The artifacts will be zipped before uploading.

Parameters:
  • uri (str) –

    The destination location for the model artifacts, which can be a local path or OCI object storage URI. Examples:

    >>> upload_artifact(uri="/some/local/folder/")
    >>> upload_artifact(uri="oci://bucket@namespace/prefix/")
    

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.

  • force_overwrite (bool) – Overwrite target_dir if exists.

  • parallel_process_count ((int, optional)) – The number of worker processes to use in parallel for uploading individual parts of a multipart upload.

verify(data: Any | None = None, reload_artifacts: bool = True, auto_serialize_data: bool = True, **kwargs) Dict[str, Any]#

Test if deployment works in local environment.

Examples

>>> uri = "https://github.com/pytorch/hub/raw/master/images/dog.jpg"
>>> prediction = model.verify(image=uri)['prediction']
>>> # examples on storage options
>>> prediction = model.verify(
...        image="oci://<bucket>@<tenancy>/myimage.png",
...        storage_options=ads.auth.default_signer()
... )['prediction']
Parameters:
  • data (Any) – Data used to test if deployment works in local environment.

  • reload_artifacts (bool. Defaults to True.) – Whether to reload artifacts or not.

  • auto_serialize_data (bool.) – Whether to auto serialize input data. Defaults to False for GenericModel, and True for other frameworks. data must be JSON serializable if auto_serialize_data=False. If auto_serialize_data is set to True, data will be serialized before being sent to the model deployment endpoint.

  • kwargs

    content_type: str, used to indicate the media type of the resource.

    image: PIL.Image Object or uri for the image.

    A valid string path for image file can be local path, http(s), oci, s3, gs.

    storage_options: dict

    Passed to fsspec.open for a particular storage connection. Please see fsspec (https://filesystem-spec.readthedocs.io/en/latest/api.html#fsspec.open) for more details.

Returns:

A dictionary which contains prediction results.

Return type:

Dict

ads.model.framework.xgboost_model module#

class ads.model.framework.xgboost_model.XGBoostModel(estimator: callable, artifact_dir: str | None = None, properties: ModelProperties | None = None, auth: Dict = None, model_save_serializer: SERDE | None = 'xgboost', model_input_serializer: SERDE | None = None, **kwargs)[source]#

Bases: FrameworkSpecificModel

XGBoostModel class for estimators from xgboost framework.

algorithm#

The algorithm of the model.

Type:

str

artifact_dir#

Artifact directory to store the files needed for deployment.

Type:

str

auth#

Default authentication is set using the ads.set_auth API. To override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create an authentication signer to instantiate an IdentityClient object.

Type:

Dict

estimator#

A trained estimator/model built with the XGBoost framework.

Type:

Callable

framework#

“xgboost”, the framework name of the model.

Type:

str

hyperparameter#

The hyperparameters of the estimator.

Type:

dict

metadata_custom#

The model custom metadata.

Type:

ModelCustomMetadata

metadata_provenance#

The model provenance metadata.

Type:

ModelProvenanceMetadata

metadata_taxonomy#

The model taxonomy metadata.

Type:

ModelTaxonomyMetadata

model_artifact#

This is built by calling prepare.

Type:

ModelArtifact

model_deployment#

A ModelDeployment instance.

Type:

ModelDeployment

model_file_name#

Name of the serialized model.

Type:

str

model_id#

The model ID.

Type:

str

properties#

ModelProperties object required to save and deploy model. For more details, check https://accelerated-data-science.readthedocs.io/en/latest/ads.model.html#module-ads.model.model_properties.

Type:

ModelProperties

runtime_info#

A RuntimeInfo instance.

Type:

RuntimeInfo

schema_input#

Schema describes the structure of the input data.

Type:

Schema

schema_output#

Schema describes the structure of the output data.

Type:

Schema

serialize#

Whether to serialize the model to pkl file by default. If False, you need to serialize the model manually, save it under artifact_dir and update the score.py manually.

Type:

bool

version#

The framework version of the model.

Type:

str

delete_deployment(...)#

Deletes the current model deployment.

deploy(..., \*\*kwargs)#

Deploys a model.

from_model_artifact(uri, model_file_name, artifact_dir, ..., \*\*kwargs)#

Loads model from the specified folder, or zip/tar archive.

from_model_catalog(model_id, model_file_name, artifact_dir, ..., \*\*kwargs)#

Loads model from model catalog.

introspect(...)#

Runs model introspection.

predict(data, ...)#

Returns prediction of input data run against the model deployment endpoint.

prepare(..., \*\*kwargs)#

Prepare and save the score.py, serialized model and runtime.yaml file.

reload(...)#

Reloads the model artifact files: score.py and the runtime.yaml.

save(..., \*\*kwargs)#

Saves model artifacts to the model catalog.

summary_status(...)#

Gets a summary table of the current status.

verify(data, ...)#

Tests if deployment works in local environment.

Examples

>>> import xgboost as xgb
>>> import tempfile
>>> from sklearn.datasets import make_classification
>>> from sklearn.model_selection import train_test_split
>>> from sklearn.datasets import load_iris
>>> from ads.model.framework.xgboost_model import XGBoostModel
>>> iris = load_iris()
>>> X, y = iris.data, iris.target
>>> X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)
>>> xgboost_estimator = xgb.XGBClassifier()
>>> xgboost_estimator.fit(X_train, y_train)
>>> xgboost_model = XGBoostModel(estimator=xgboost_estimator, artifact_dir=tempfile.mkdtemp())
>>> xgboost_model.prepare(inference_conda_env="generalml_p37_cpu_v1", force_overwrite=True)
>>> xgboost_model.reload()
>>> xgboost_model.verify(X_test)
>>> xgboost_model.save()
>>> model_deployment = xgboost_model.deploy(wait_for_completion=False)
>>> xgboost_model.predict(X_test)

Initiates an XGBoostModel instance. This class wraps the XGBoost model as an estimator. Its primary purpose is to hold the trained model and do serialization.

Parameters:
  • estimator ((Callable).) – A trained XGBoost estimator/model to be saved and deployed.

  • artifact_dir (str) – artifact directory to store the files needed for deployment.

  • properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.

  • model_save_serializer ((SERDE or str, optional). Defaults to None.) – Instance of ads.model.SERDE. Used for serialize/deserialize model.

  • model_input_serializer ((SERDE, optional). Defaults to None.) – Instance of ads.model.SERDE. Used for serialize/deserialize data.

Returns:

XGBoostModel instance.

Return type:

XGBoostModel

Examples

>>> import xgboost as xgb
>>> import tempfile
>>> from sklearn.datasets import make_classification
>>> from sklearn.model_selection import train_test_split
>>> from sklearn.datasets import load_iris
>>> from ads.model.framework.xgboost_model import XGBoostModel
>>> iris = load_iris()
>>> X, y = iris.data, iris.target
>>> X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)
>>> train = xgb.DMatrix(X_train, y_train)
>>> test = xgb.DMatrix(X_test, y_test)
>>> xgboost_estimator = xgb.XGBClassifier()
>>> xgboost_estimator.fit(X_train, y_train)
>>> xgboost_model = XGBoostModel(estimator=xgboost_estimator, artifact_dir=tempfile.mkdtemp())
>>> xgboost_model.prepare(inference_conda_env="generalml_p37_cpu_v1")
>>> xgboost_model.verify(X_test)
>>> xgboost_model.save()
>>> model_deployment = xgboost_model.deploy()
>>> xgboost_model.predict(X_test)
>>> xgboost_model.delete_deployment()
classmethod delete(model_id: str | None = None, delete_associated_model_deployment: bool | None = False, delete_model_artifact: bool | None = False, artifact_dir: str | None = None, **kwargs: Dict) None#

Deletes a model from Model Catalog.

Parameters:
  • model_id ((str, optional). Defaults to None.) – The model OCID to be deleted. If the method called on instance level, then self.model_id will be used.

  • delete_associated_model_deployment ((bool, optional). Defaults to False.) – Whether associated model deployments need to be deleted or not.

  • delete_model_artifact ((bool, optional). Defaults to False.) – Whether associated model artifacts need to be deleted or not.

  • artifact_dir ((str, optional). Defaults to None) – The local path to the model artifacts folder. If the method called on instance level, the self.artifact_dir will be used by default.

Return type:

None

Raises:

ValueError – If model_id not provided.
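
Example (an illustrative sketch; the OCID is a placeholder):

>>> XGBoostModel.delete(
...     model_id="<model_ocid>",
...     delete_associated_model_deployment=True,
... )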

delete_deployment(wait_for_completion: bool = True) None#

Deletes the current deployment.

Parameters:

wait_for_completion ((bool, optional). Defaults to True.) – Whether to wait till completion.

Return type:

None

Raises:

ValueError – If there is no deployment attached yet.

deploy(wait_for_completion: bool | None = True, display_name: str | None = None, description: str | None = None, deployment_instance_shape: str | None = None, deployment_instance_subnet_id: str | None = None, deployment_instance_count: int | None = None, deployment_bandwidth_mbps: int | None = None, deployment_log_group_id: str | None = None, deployment_access_log_id: str | None = None, deployment_predict_log_id: str | None = None, deployment_memory_in_gbs: float | None = None, deployment_ocpus: float | None = None, deployment_image: str | None = None, **kwargs: Dict) ModelDeployment#

Deploys a model. The model needs to be saved to the model catalog at first. You can deploy the model on either conda or container runtime. The customized runtime allows you to bring your own service container. To deploy model on container runtime, make sure to build the container and push it to OCIR. For more information, see https://docs.oracle.com/en-us/iaas/data-science/using/mod-dep-byoc.htm.

Example

>>> # This is an example to deploy model on container runtime
>>> model = GenericModel(estimator=estimator, artifact_dir=tempfile.mkdtemp())
>>> model.summary_status()
>>> model.prepare(
...     model_file_name="toy_model.pkl",
...     ignore_conda_error=True, # set ignore_conda_error=True for container runtime
...     force_overwrite=True
... )
>>> model.verify()
>>> model.save()
>>> model.deploy(
...     deployment_image="iad.ocir.io/<namespace>/<image>:<tag>",
...     entrypoint=["python", "/opt/ds/model/deployed_model/api.py"],
...     server_port=5000,
...     health_check_port=5000,
...     environment_variables={"key":"value"}
... )
Parameters:
  • wait_for_completion ((bool, optional). Defaults to True.) – Flag set for whether to wait for deployment to complete before proceeding.

  • display_name ((str, optional). Defaults to None.) – The name of the model. If a display_name is not provided in kwargs, a randomly generated easy to remember name with timestamp will be generated, like ‘strange-spider-2022-08-17-23:55.02’.

  • description ((str, optional). Defaults to None.) – The description of the model.

  • deployment_instance_shape ((str, optional). Default to VM.Standard2.1.) – The shape of the instance used for deployment.

  • deployment_instance_subnet_id ((str, optional). Default to None.) – The subnet id of the instance used for deployment.

  • deployment_instance_count ((int, optional). Defaults to 1.) – The number of instances used for deployment.

  • deployment_bandwidth_mbps ((int, optional). Defaults to 10.) – The bandwidth limit on the load balancer in Mbps.

  • deployment_memory_in_gbs ((float, optional). Defaults to None.) – Specifies the size of the memory of the model deployment instance in GBs.

  • deployment_ocpus ((float, optional). Defaults to None.) – Specifies the ocpus count of the model deployment instance.

  • deployment_log_group_id ((str, optional). Defaults to None.) – The oci logging group id. The access log and predict log share the same log group.

  • deployment_access_log_id ((str, optional). Defaults to None.) – The access log OCID for the access logs. https://docs.oracle.com/en-us/iaas/data-science/using/model_dep_using_logging.htm

  • deployment_predict_log_id ((str, optional). Defaults to None.) – The predict log OCID for the predict logs. https://docs.oracle.com/en-us/iaas/data-science/using/model_dep_using_logging.htm

  • deployment_image ((str, optional). Defaults to None.) – The OCIR path of docker container image. Required for deploying model on container runtime.

  • kwargs

    project_id: (str, optional).

    Project OCID. If not specified, the value will be taken from the environment variables.

    compartment_id(str, optional).

    Compartment OCID. If not specified, the value will be taken from the environment variables.

    max_wait_time(int, optional). Defaults to 1200 seconds.

    Maximum amount of time to wait in seconds. Negative implies infinite wait time.

    poll_interval(int, optional). Defaults to 10 seconds.

    Poll interval in seconds.

    freeform_tags: (Dict[str, str], optional). Defaults to None.

    Freeform tags of the model deployment.

    defined_tags: (Dict[str, dict[str, object]], optional). Defaults to None.

    Defined tags of the model deployment.

    image_digest: (str, optional). Defaults to None.

    The digest of docker container image.

    cmd: (List, optional). Defaults to empty.

    The command line arguments for running docker container image.

    entrypoint: (List, optional). Defaults to empty.

    The entrypoint for running docker container image.

    server_port: (int, optional). Defaults to 8080.

    The server port for docker container image.

    health_check_port: (int, optional). Defaults to 8080.

    The health check port for docker container image.

    deployment_mode: (str, optional). Defaults to HTTPS_ONLY.

    The deployment mode. Allowed values are: HTTPS_ONLY and STREAM_ONLY.

    input_stream_ids: (List, optional). Defaults to empty.

    The input stream ids. Required for STREAM_ONLY mode.

    output_stream_ids: (List, optional). Defaults to empty.

    The output stream ids. Required for STREAM_ONLY mode.

    environment_variables: (Dict, optional). Defaults to empty.

    The environment variables for model deployment.

    Also can be any keyword argument for initializing the ads.model.deployment.ModelDeploymentProperties. See ads.model.deployment.ModelDeploymentProperties() for details.

Returns:

The ModelDeployment instance.

Return type:

ModelDeployment

Raises:

ValueError – If model_id is not specified.

download_artifact(artifact_dir: str | None = None, auth: Dict | None = None, force_overwrite: bool | None = False, bucket_uri: str | None = None, remove_existing_artifact: bool | None = True, **kwargs) GenericModel#

Downloads model artifacts from the model catalog.

Parameters:
  • artifact_dir ((str, optional). Defaults to None.) – The artifact directory to store the files needed for deployment. Will be created if not exists.

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.

  • force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files or not.

  • bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for downloading large artifacts with a size greater than 2GB. Example: oci://<bucket_name>@<namespace>/prefix/.

  • remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to object storage bucket need to be removed or not.

Returns:

An instance of GenericModel class.

Return type:

Self

Raises:

ValueError – If model_id is not available in the GenericModel object.
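
Example (an illustrative sketch; downloads the artifacts of an already loaded model into a temporary directory):

>>> import tempfile
>>> model.download_artifact(
...     artifact_dir=tempfile.mkdtemp(),
...     force_overwrite=True,
... )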

evaluate(X: ArrayLike, y: ArrayLike, y_pred: ArrayLike | None = None, y_score: ArrayLike | None = None, X_train: ArrayLike | None = None, y_train: ArrayLike | None = None, classes: List | None = None, positive_class: str | None = None, legend_labels: dict | None = None, perfect: bool = True, filename: str | None = None, use_case_type: str | None = None)#

Creates an ads evaluation report.

Parameters:
  • X (DataFrame-like) – The data used to make a prediction. Can be set to None if y_pred is given (and y_score for a more thorough analysis).

  • y (array-like) – The true values corresponding to the input data

  • y_pred (array-like, optional) – The predictions from each model in the same order as the models

  • y_score (array-like, optional) – The predict_probas from each model in the same order as the models

  • X_train (DataFrame-like, optional) – The data used to train the model

  • y_train (array-like, optional) – The true values corresponding to the input training data

  • classes (List or None, optional) – A List of the possible labels for y, when evaluating a classification use case

  • positive_class (str or int, optional) – The class to report metrics for in a binary dataset. If the target classes are True or False, positive_class will be set to True by default. If the dataset is multiclass or multilabel, this will be ignored.

  • legend_labels (dict, optional) – Legend labels to use in plots. Defaults to None. If legend_labels is not specified, class names will be used for plots.

  • use_case_type (str, optional) – The type of problem this model is solving. This can be set during prepare(). Examples: “binary_classification”, “regression”, “multinomial_classification” Full list of supported types can be found here: ads.common.model_metadata.UseCaseType

  • filename (str, optional) – If filename is given, the html report will be saved to the location specified.

Examples

>>> import tempfile
>>> from ads.evaluations.evaluator import Evaluator
>>> from sklearn.tree import DecisionTreeClassifier
>>> from sklearn.datasets import make_classification
>>> from sklearn.model_selection import train_test_split
>>> from ads.model.framework.sklearn_model import SklearnModel
>>> from ads.common.model_metadata import UseCaseType
>>>
>>> X, y = make_classification(n_samples=1000)
>>> X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
>>> est = DecisionTreeClassifier().fit(X_train, y_train)
>>> model = SklearnModel(estimator=est, artifact_dir=tempfile.mkdtemp())
>>> model.prepare(
...     inference_conda_env="generalml_p38_cpu_v1",
...     training_conda_env="generalml_p38_cpu_v1",
...     X_sample=X_test,
...     y_sample=y_test,
...     use_case_type=UseCaseType.BINARY_CLASSIFICATION,
... )
>>> model.evaluate(X_test, y_test, filename="report.html")
classmethod from_id(ocid: str, model_file_name: str | None = None, artifact_dir: str | None = None, auth: Dict | None = None, force_overwrite: bool | None = False, properties: ModelProperties | Dict | None = None, bucket_uri: str | None = None, remove_existing_artifact: bool | None = True, ignore_conda_error: bool | None = False, download_artifact: bool | None = True, **kwargs) Self#

Loads model from model OCID or model deployment OCID.

Parameters:
  • ocid (str) – The model OCID or model deployment OCID.

  • model_file_name ((str, optional). Defaults to None.) – The name of the serialized model.

  • artifact_dir ((str, optional). Defaults to None.) – The artifact directory to store the files needed for deployment. Will be created if not exists.

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.

  • force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files or not.

  • properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.

  • bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for downloading large artifacts with a size greater than 2GB. Example: oci://<bucket_name>@<namespace>/prefix/.

  • remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to the object storage bucket need to be removed or not.

  • ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore error when collecting conda information.

  • download_artifact ((bool, optional). Defaults to True.) – Whether to download the model pickle or checkpoints

  • kwargs

    compartment_id(str, optional)

    Compartment OCID. If not specified, the value will be taken from the environment variables.

    timeout(int, optional). Defaults to 10 seconds.

    The connection timeout in seconds for the client.

Returns:

An instance of GenericModel class.

Return type:

Self
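
Example (an illustrative sketch; the OCID is a placeholder):

>>> import tempfile
>>> model = XGBoostModel.from_id(
...     ocid="<model_or_deployment_ocid>",
...     artifact_dir=tempfile.mkdtemp(),
...     force_overwrite=True,
... )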

classmethod from_model_artifact(uri: str, model_file_name: str | None = None, artifact_dir: str | None = None, auth: Dict | None = None, force_overwrite: bool | None = False, properties: ModelProperties | None = None, ignore_conda_error: bool | None = False, **kwargs: dict) Self#

Loads model from a folder, or zip/tar archive.

Parameters:
  • uri (str) – The folder path, ZIP file path, or TAR file path. It could contain a serialized model (required) as well as any files needed for deployment, including: serialized model, runtime.yaml, score.py, etc. The content of the folder will be copied to the artifact_dir folder.

  • model_file_name ((str, optional). Defaults to None.) – The serialized model file name. Will be extracted from artifacts if not provided.

  • artifact_dir ((str, optional). Defaults to None.) – The artifact directory to store the files needed for deployment. Will be created if not exists.

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.

  • force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files or not.

  • properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.

  • ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore error when collecting conda information.

Returns:

An instance of GenericModel class.

Return type:

Self

Raises:

ValueError – If model_file_name not provided.
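
Example (an illustrative sketch; the paths and file name are placeholders):

>>> import tempfile
>>> model = XGBoostModel.from_model_artifact(
...     uri="./model_artifacts/",
...     model_file_name="model.json",
...     artifact_dir=tempfile.mkdtemp(),
...     force_overwrite=True,
... )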

classmethod from_model_catalog(model_id: str, model_file_name: str | None = None, artifact_dir: str | None = None, auth: Dict | None = None, force_overwrite: bool | None = False, properties: ModelProperties | Dict | None = None, bucket_uri: str | None = None, remove_existing_artifact: bool | None = True, ignore_conda_error: bool | None = False, download_artifact: bool | None = True, **kwargs) Self#

Loads model from model catalog.

Parameters:
  • model_id (str) – The model OCID.

  • model_file_name ((str, optional). Defaults to None.) – The name of the serialized model.

  • artifact_dir ((str, optional). Defaults to None.) – The artifact directory to store the files needed for deployment. Will be created if not exists.

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.

  • force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files or not.

  • properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.

  • bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for downloading large artifacts with a size greater than 2GB. Example: oci://<bucket_name>@<namespace>/prefix/.

  • remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to the object storage bucket need to be removed or not.

  • ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore error when collecting conda information.

  • download_artifact ((bool, optional). Defaults to True.) – Whether to download the model pickle or checkpoints

  • kwargs

    compartment_id(str, optional)

    Compartment OCID. If not specified, the value will be taken from the environment variables.

    timeout(int, optional). Defaults to 10 seconds.

    The connection timeout in seconds for the client.

    region: (str, optional). Defaults to None.

    The destination Object Storage bucket region. By default the value will be extracted from the OCI_REGION_METADATA environment variables.

Returns:

An instance of GenericModel class.

Return type:

Self

classmethod from_model_deployment(model_deployment_id: str, model_file_name: str | None = None, artifact_dir: str | None = None, auth: Dict | None = None, force_overwrite: bool | None = False, properties: ModelProperties | Dict | None = None, bucket_uri: str | None = None, remove_existing_artifact: bool | None = True, ignore_conda_error: bool | None = False, download_artifact: bool | None = True, **kwargs) Self#

Loads model from model deployment.

Parameters:
  • model_deployment_id (str) – The model deployment OCID.

  • model_file_name ((str, optional). Defaults to None.) – The name of the serialized model.

  • artifact_dir ((str, optional). Defaults to None.) – The artifact directory to store the files needed for deployment. Will be created if not exists.

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.

  • force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files or not.

  • properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.

  • bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for downloading large artifacts with a size greater than 2GB. Example: oci://<bucket_name>@<namespace>/prefix/.

  • remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to the object storage bucket need to be removed or not.

  • ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore error when collecting conda information.

  • download_artifact ((bool, optional). Defaults to True.) – Whether to download the model pickle or checkpoints

  • kwargs

    compartment_id(str, optional)

    Compartment OCID. If not specified, the value will be taken from the environment variables.

    timeout(int, optional). Defaults to 10 seconds.

    The connection timeout in seconds for the client.

    region: (str, optional). Defaults to None.

    The destination Object Storage bucket region. By default the value will be extracted from the OCI_REGION_METADATA environment variables.

Returns:

An instance of GenericModel class.

Return type:

Self
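
A minimal, illustrative sketch of loading a model from an existing deployment. The OCID below is a placeholder, and GenericModel stands in for the framework-specific subclass documented here; the same call works on that subclass.

>>> import tempfile
>>> from ads.model.generic_model import GenericModel
>>> model = GenericModel.from_model_deployment(
...     model_deployment_id="ocid1.datasciencemodeldeployment.oc1..<unique_id>",
...     artifact_dir=tempfile.mkdtemp(),
...     force_overwrite=True,
... )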

get_data_serializer()#

Gets data serializer.

Returns:

object

Return type:

ads.model.Serializer object.

get_model_serializer()#

Gets model serializer.

introspect() DataFrame#

Conducts introspection.

Returns:

A pandas DataFrame which contains the introspection results.

Return type:

pandas.DataFrame

property metadata_custom#
property metadata_provenance#
property metadata_taxonomy#
property model_deployment_id#
property model_id#
model_input_serializer_type#

alias of ModelInputSerializerType

model_save_serializer_type#

alias of XgboostModelSerializerType

populate_metadata(use_case_type: str | None = None, data_sample: ADSData | None = None, X_sample: list | tuple | DataFrame | Series | ndarray | None = None, y_sample: list | tuple | DataFrame | Series | ndarray | None = None, training_script_path: str | None = None, training_id: str | None = None, ignore_pending_changes: bool = True, max_col_num: int = 2000, ignore_conda_error: bool = False, **kwargs)#

Populates input schema and output schema. If the schema exceeds the limit of 32kb, save as json files to the artifact directory.

Parameters:
  • use_case_type ((str, optional). Defaults to None.) – The use case type of the model.

  • data_sample ((ADSData, optional). Defaults to None.) – A sample of the data that will be used to generate input_schema and output_schema.

  • X_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of input data that will be used to generate input schema.

  • y_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of output data that will be used to generate output schema.

  • training_script_path (str. Defaults to None.) – Training script path.

  • training_id ((str, optional). Defaults to None.) – The training model OCID.

  • ignore_pending_changes (bool. Defaults to True.) – Whether to ignore the pending changes in git.

  • max_col_num ((int, optional). Defaults to utils.DATA_SCHEMA_MAX_COL_NUM.) – The maximum number of columns allowed in auto generated schema.

Returns:

Nothing.

Return type:

None
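
For illustration, a hedged sketch of populating metadata from small in-memory samples; the sample values are arbitrary and model is assumed to be an instance of this class.

>>> import pandas as pd
>>> X_sample = pd.DataFrame({"feature_1": [1.0, 2.0, 3.0], "feature_2": [0.1, 0.2, 0.3]})
>>> y_sample = pd.Series([0, 1, 0])
>>> model.populate_metadata(
...     use_case_type="binary_classification",
...     X_sample=X_sample,
...     y_sample=y_sample,
... )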

populate_schema(data_sample: ADSData | None = None, X_sample: List | Tuple | DataFrame | Series | ndarray | None = None, y_sample: List | Tuple | DataFrame | Series | ndarray | None = None, max_col_num: int = 2000, **kwargs)#

Populate input and output schemas. If the schema exceeds the limit of 32kb, save as json files to the artifact dir.

Parameters:
  • data_sample (ADSData) – A sample of the data that will be used to generate input_schema and output_schema.

  • X_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]) – A sample of input data that will be used to generate the input schema.

  • y_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]) – A sample of output data that will be used to generate the output schema.

  • max_col_num ((int, optional). Defaults to utils.DATA_SCHEMA_MAX_COL_NUM.) – The maximum number of columns allowed in auto generated schema.

predict(data: Any | None = None, auto_serialize_data: bool = True, **kwargs) Dict[str, Any]#

Returns prediction of input data run against the model deployment endpoint.

Examples

>>> uri = "https://github.com/pytorch/hub/raw/master/images/dog.jpg"
>>> prediction = model.predict(image=uri)['prediction']
>>> # examples on storage options
>>> prediction = model.predict(
...        image="oci://<bucket>@<tenancy>/myimage.png",
...        storage_options=ads.auth.default_signer()
... )['prediction']
Parameters:
  • data (Any) – Data for the prediction. For ONNX models, and for the local serialization method, data can be any of the data types that each framework supports.

  • auto_serialize_data (bool.) – Whether to auto serialize input data. Defaults to False for GenericModel and True for other frameworks. data must be JSON serializable if auto_serialize_data=False. If auto_serialize_data is set to True, data will be serialized before being sent to the model deployment endpoint.

  • kwargs

    content_type: str, used to indicate the media type of the resource.

    image: PIL.Image object or URI for the image.

    A valid string path for an image file can be a local path, http(s), oci, s3, or gs.

    storage_options: dict

    Passed to fsspec.open for a particular storage connection. Please see fsspec (https://filesystem-spec.readthedocs.io/en/latest/api.html#fsspec.open) for more details.

Returns:

Dictionary with the predicted values.

Return type:

Dict[str, Any]

Raises:
prepare(inference_conda_env: str | None = None, inference_python_version: str | None = None, training_conda_env: str | None = None, training_python_version: str | None = None, model_file_name: str | None = None, as_onnx: bool = False, initial_types: List[Tuple] | None = None, force_overwrite: bool = False, namespace: str = 'id19sfcrra6z', use_case_type: str | None = None, X_sample: list | tuple | DataFrame | Series | ndarray | None = None, y_sample: list | tuple | DataFrame | Series | ndarray | None = None, training_script_path: str | None = None, training_id: str | None = None, ignore_pending_changes: bool = True, max_col_num: int = 2000, ignore_conda_error: bool = False, score_py_uri: str | None = None, **kwargs: Dict) GenericModel#

Prepare and save the score.py, serialized model and runtime.yaml file.

Parameters:
  • inference_conda_env ((str, optional). Defaults to None.) – Can be either slug or object storage path of the conda pack. You can only pass in slugs if the conda pack is a service pack.

  • inference_python_version ((str, optional). Defaults to None.) – Python version which will be used in deployment.

  • training_conda_env ((str, optional). Defaults to None.) – Can be either slug or object storage path of the conda pack. You can only pass in slugs if the conda pack is a service pack. If training_conda_env is not provided, it will use the same value as inference_conda_env.

  • training_python_version ((str, optional). Defaults to None.) – Python version used during training.

  • model_file_name ((str, optional). Defaults to None.) – Name of the serialized model. Will be auto generated if not provided.

  • as_onnx ((bool, optional). Defaults to False.) – Whether to serialize as onnx model.

  • initial_types ((list[Tuple], optional).) – Defaults to None. Only used for SklearnModel, LightGBMModel and XGBoostModel. Each element is a tuple of a variable name and a type. Check this link http://onnx.ai/sklearn-onnx/api_summary.html#id2 for more explanation and examples for initial_types.

  • force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files.

  • namespace ((str, optional).) – Namespace of region. This is used for identifying which region the service pack is from when you pass a slug to inference_conda_env and training_conda_env.

  • use_case_type (str) – The use case type of the model. Use it through the UseCaseType class or a string provided in UseCaseType. For example, use_case_type=UseCaseType.BINARY_CLASSIFICATION or use_case_type="binary_classification". Check the UseCaseType class to see all supported types.

  • X_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of input data that will be used to generate input schema.

  • y_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of output data that will be used to generate output schema.

  • training_script_path (str. Defaults to None.) – Training script path.

  • training_id ((str, optional). Defaults to value from environment variables.) – The training OCID for model. Can be notebook session or job OCID.

  • ignore_pending_changes (bool. Defaults to True.) – Whether to ignore the pending changes in git.

  • max_col_num ((int, optional). Defaults to utils.DATA_SCHEMA_MAX_COL_NUM.) – Do not generate the input schema if the input has more than this number of features (columns).

  • ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore errors when collecting conda information.

  • score_py_uri ((str, optional). Defaults to None.) – The URI of the customized score.py, which can be a local path or an OCI Object Storage URI. When this attribute is provided, the score.py will not be auto generated, and the provided score.py will be added to the artifact_dir.

  • kwargs

    impute_values: (dict, optional).

    The dictionary where the key is the column index (column names are also accepted for a pandas DataFrame) and the value is the impute value for the corresponding column.

Raises:
  • FileExistsError – If files already exist but force_overwrite is False.

  • ValueError – If inference_python_version is not provided, but also cannot be found through manifest file.

Returns:

An instance of GenericModel class.

Return type:

GenericModel
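
A hedged example of a typical prepare() call. The conda pack slug and Python version are placeholders and should be replaced with values available in your tenancy; X_sample and y_sample are small illustrative samples as in the populate_metadata() sketch above.

>>> model.prepare(
...     inference_conda_env="generalml_p38_cpu_v1",  # placeholder service pack slug
...     inference_python_version="3.8",
...     force_overwrite=True,
...     use_case_type="binary_classification",
...     X_sample=X_sample,
...     y_sample=y_sample,
... )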

prepare_save_deploy(inference_conda_env: str | None = None, inference_python_version: str | None = None, training_conda_env: str | None = None, training_python_version: str | None = None, model_file_name: str | None = None, as_onnx: bool = False, initial_types: List[Tuple] | None = None, force_overwrite: bool = False, namespace: str = 'id19sfcrra6z', use_case_type: str | None = None, X_sample: list | tuple | DataFrame | Series | ndarray | None = None, y_sample: list | tuple | DataFrame | Series | ndarray | None = None, training_script_path: str | None = None, training_id: str | None = None, ignore_pending_changes: bool = True, max_col_num: int = 2000, ignore_conda_error: bool = False, model_display_name: str | None = None, model_description: str | None = None, model_freeform_tags: dict | None = None, model_defined_tags: dict | None = None, ignore_introspection: bool | None = False, wait_for_completion: bool | None = True, deployment_display_name: str | None = None, deployment_description: str | None = None, deployment_instance_shape: str | None = None, deployment_instance_subnet_id: str | None = None, deployment_instance_count: int | None = None, deployment_bandwidth_mbps: int | None = None, deployment_log_group_id: str | None = None, deployment_access_log_id: str | None = None, deployment_predict_log_id: str | None = None, deployment_memory_in_gbs: float | None = None, deployment_ocpus: float | None = None, deployment_image: str | None = None, bucket_uri: str | None = None, overwrite_existing_artifact: bool | None = True, remove_existing_artifact: bool | None = True, model_version_set: str | ModelVersionSet | None = None, version_label: str | None = None, model_by_reference: bool | None = False, **kwargs: Dict) ModelDeployment#

Shortcut for prepare, save and deploy steps.

Parameters:
  • inference_conda_env ((str, optional). Defaults to None.) – Can be either slug or object storage path of the conda pack. You can only pass in slugs if the conda pack is a service pack.

  • inference_python_version ((str, optional). Defaults to None.) – Python version which will be used in deployment.

  • training_conda_env ((str, optional). Defaults to None.) – Can be either slug or object storage path of the conda pack. You can only pass in slugs if the conda pack is a service pack. If training_conda_env is not provided, it will use the same value as inference_conda_env.

  • training_python_version ((str, optional). Defaults to None.) – Python version used during training.

  • model_file_name ((str, optional). Defaults to None.) – Name of the serialized model.

  • as_onnx ((bool, optional). Defaults to False.) – Whether to serialize as onnx model.

  • initial_types ((list[Tuple], optional).) – Defaults to None. Only used for SklearnModel, LightGBMModel and XGBoostModel. Each element is a tuple of a variable name and a type. Check this link http://onnx.ai/sklearn-onnx/api_summary.html#id2 for more explanation and examples for initial_types.

  • force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files.

  • namespace ((str, optional).) – Namespace of region. This is used for identifying which region the service pack is from when you pass a slug to inference_conda_env and training_conda_env.

  • use_case_type (str) – The use case type of the model. Use it through the UseCaseType class or a string provided in UseCaseType. For example, use_case_type=UseCaseType.BINARY_CLASSIFICATION or use_case_type="binary_classification". Check the UseCaseType class to see all supported types.

  • X_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of input data that will be used to generate input schema.

  • y_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of output data that will be used to generate output schema.

  • training_script_path (str. Defaults to None.) – Training script path.

  • training_id ((str, optional). Defaults to value from environment variables.) – The training OCID for model. Can be notebook session or job OCID.

  • ignore_pending_changes (bool. Defaults to True.) – Whether to ignore the pending changes in git.

  • max_col_num ((int, optional). Defaults to utils.DATA_SCHEMA_MAX_COL_NUM.) – Do not generate the input schema if the input has more than this number of features (columns).

  • ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore error when collecting conda information.

  • model_display_name ((str, optional). Defaults to None.) – The name of the model. If a model_display_name is not provided in kwargs, a randomly generated, easy-to-remember name with a timestamp will be generated, like ‘strange-spider-2022-08-17-23:55.02’.

  • model_description ((str, optional). Defaults to None.) – The description of the model.

  • model_freeform_tags (Dict(str, str), Defaults to None.) – Freeform tags for the model.

  • model_defined_tags ((Dict(str, dict(str, object)), optional). Defaults to None.) – Defined tags for the model.

  • ignore_introspection ((bool, optional). Defaults to False.) – Determine whether to ignore the result of model introspection or not. If set to True, the save will ignore all model introspection errors.

  • wait_for_completion ((bool, optional). Defaults to True.) – Flag set for whether to wait for deployment to complete before proceeding.

  • deployment_display_name ((str, optional). Defaults to None.) – The name of the model deployment. If a deployment_display_name is not provided in kwargs, a randomly generated, easy-to-remember name with a timestamp will be generated, like ‘strange-spider-2022-08-17-23:55.02’.

  • deployment_description ((str, optional). Defaults to None.) – The description of the model deployment.

  • deployment_instance_shape ((str, optional). Default to VM.Standard2.1.) – The shape of the instance used for deployment.

  • deployment_instance_subnet_id ((str, optional). Default to None.) – The subnet id of the instance used for deployment.

  • deployment_instance_count ((int, optional). Defaults to 1.) – The number of instances used for deployment.

  • deployment_bandwidth_mbps ((int, optional). Defaults to 10.) – The bandwidth limit on the load balancer in Mbps.

  • deployment_log_group_id ((str, optional). Defaults to None.) – The oci logging group id. The access log and predict log share the same log group.

  • deployment_access_log_id ((str, optional). Defaults to None.) – The access log OCID for the access logs. https://docs.oracle.com/en-us/iaas/data-science/using/model_dep_using_logging.htm

  • deployment_predict_log_id ((str, optional). Defaults to None.) – The predict log OCID for the predict logs. https://docs.oracle.com/en-us/iaas/data-science/using/model_dep_using_logging.htm

  • deployment_memory_in_gbs ((float, optional). Defaults to None.) – Specifies the size of the memory of the model deployment instance in GBs.

  • deployment_ocpus ((float, optional). Defaults to None.) – Specifies the ocpus count of the model deployment instance.

  • deployment_image ((str, optional). Defaults to None.) – The OCIR path of docker container image. Required for deploying model on container runtime.

  • bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for downloading large artifacts whose size is greater than 2GB. Example: oci://<bucket_name>@<namespace>/prefix/.

  • overwrite_existing_artifact ((bool, optional). Defaults to True.) – Overwrite target bucket artifact if exists.

  • remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to the object storage bucket need to be removed or not.

  • model_version_set ((Union[str, ModelVersionSet], optional). Defaults to None.) – The Model version set OCID, or name, or ModelVersionSet instance.

  • version_label ((str, optional). Defaults to None.) – The model version label.

  • model_by_reference ((bool, optional)) – Whether model artifact is made available to Model Store by reference.

  • kwargs

    impute_values: (dict, optional).

    The dictionary where the key is the column index (column names are also accepted for a pandas DataFrame) and the value is the impute value for the corresponding column.

    project_id: (str, optional).

    Project OCID. If not specified, the value will be taken either from the environment variables or model properties.

    compartment_id(str, optional).

    Compartment OCID. If not specified, the value will be taken either from the environment variables or model properties.

    image_digest: (str, optional). Defaults to None.

    The digest of docker container image.

    cmd: (List, optional). Defaults to empty.

    The command line arguments for running docker container image.

    entrypoint: (List, optional). Defaults to empty.

    The entrypoint for running docker container image.

    server_port: (int, optional). Defaults to 8080.

    The server port for docker container image.

    health_check_port: (int, optional). Defaults to 8080.

    The health check port for docker container image.

    deployment_mode: (str, optional). Defaults to HTTPS_ONLY.

    The deployment mode. Allowed values are: HTTPS_ONLY and STREAM_ONLY.

    input_stream_ids: (List, optional). Defaults to empty.

    The input stream ids. Required for STREAM_ONLY mode.

    output_stream_ids: (List, optional). Defaults to empty.

    The output stream ids. Required for STREAM_ONLY mode.

    environment_variables: (Dict, optional). Defaults to empty.

    The environment variables for model deployment.

    timeout: (int, optional). Defaults to 10 seconds.

    The connection timeout in seconds for the client.

    max_wait_time(int, optional). Defaults to 1200 seconds.

    Maximum amount of time to wait in seconds. Negative implies infinite wait time.

    poll_interval(int, optional). Defaults to 10 seconds.

    Poll interval in seconds.

    freeform_tags: (Dict[str, str], optional). Defaults to None.

    Freeform tags of the model deployment.

    defined_tags: (Dict[str, dict[str, object]], optional). Defaults to None.

    Defined tags of the model deployment.

    region: (str, optional). Defaults to None.

    The destination Object Storage bucket region. By default the value will be extracted from the OCI_REGION_METADATA environment variables.

    Also can be any keyword argument for initializing the ads.model.deployment.ModelDeploymentProperties. See ads.model.deployment.ModelDeploymentProperties() for details.

Returns:

The ModelDeployment instance.

Return type:

ModelDeployment

Raises:
  • FileExistsError – If files already exist but force_overwrite is False.

  • ValueError – If inference_python_version is not provided, but also cannot be found through manifest file.
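
The sketch below chains all three steps with a minimal deployment configuration; every value shown (conda slug, shape, sizes, display name) is illustrative only and should be adapted to your tenancy.

>>> deployment = model.prepare_save_deploy(
...     inference_conda_env="generalml_p38_cpu_v1",   # placeholder service pack slug
...     inference_python_version="3.8",
...     deployment_display_name="demo-deployment",    # illustrative name
...     deployment_instance_shape="VM.Standard.E4.Flex",
...     deployment_ocpus=1,
...     deployment_memory_in_gbs=16,
...     deployment_instance_count=1,
...     force_overwrite=True,
... )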

reload() GenericModel#

Reloads the model artifact files: score.py and the runtime.yaml.

Returns:

An instance of GenericModel class.

Return type:

GenericModel

reload_runtime_info() None#

Reloads the model artifact file: runtime.yaml.

Returns:

Nothing.

Return type:

None

restart_deployment(max_wait_time: int = 1200, poll_interval: int = 10) ModelDeployment#

Restarts the current deployment.

Parameters:
  • max_wait_time ((int, optional). Defaults to 1200 seconds.) – Maximum amount of time to wait for activate or deactivate in seconds. The total amount of time to wait for the restart is twice this value. Negative implies infinite wait time.

  • poll_interval ((int, optional). Defaults to 10 seconds.) – Poll interval in seconds.

Returns:

The ModelDeployment instance.

Return type:

ModelDeployment
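
For example, allowing up to 30 minutes for each of the deactivate and activate phases (the values below are illustrative):

>>> model.restart_deployment(max_wait_time=1800, poll_interval=15)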

save(bucket_uri: str | None = None, defined_tags: dict | None = None, description: str | None = None, display_name: str | None = None, featurestore_dataset=None, freeform_tags: dict | None = None, ignore_introspection: bool | None = False, model_version_set: str | ModelVersionSet | None = None, overwrite_existing_artifact: bool | None = True, parallel_process_count: int = 9, remove_existing_artifact: bool | None = True, reload: bool | None = True, version_label: str | None = None, model_by_reference: bool | None = False, **kwargs) str#

Saves model artifacts to the model catalog.

Parameters:
  • display_name ((str, optional). Defaults to None.) – The name of the model. If a display_name is not provided in kwargs, a randomly generated, easy-to-remember name with a timestamp will be generated, like ‘strange-spider-2022-08-17-23:55.02’.

  • description ((str, optional). Defaults to None.) – The description of the model.

  • freeform_tags (Dict(str, str), Defaults to None.) – Freeform tags for the model.

  • defined_tags ((Dict(str, dict(str, object)), optional). Defaults to None.) – Defined tags for the model.

  • ignore_introspection ((bool, optional). Defaults to False.) – Determine whether to ignore the result of model introspection or not. If set to True, the save will ignore all model introspection errors.

  • bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for uploading large artifacts whose size is greater than 2GB. Example: oci://<bucket_name>@<namespace>/prefix/.

  • overwrite_existing_artifact ((bool, optional). Defaults to True.) – Overwrite target bucket artifact if exists.

  • remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to object storage bucket need to be removed or not.

  • model_version_set ((Union[str, ModelVersionSet], optional). Defaults to None.) – The model version set OCID, or model version set name, or ModelVersionSet instance.

  • version_label ((str, optional). Defaults to None.) – The model version label.

  • featurestore_dataset ((Dataset, optional).) – The feature store dataset.

  • parallel_process_count ((int, optional)) – The number of worker processes to use in parallel for uploading individual parts of a multipart upload.

  • reload ((bool, optional)) – Whether to reload to check if load_model() works in score.py. Defaults to True.

  • model_by_reference ((bool, optional)) – Whether model artifact is made available to Model Store by reference.

  • kwargs

    project_id: (str, optional).

    Project OCID. If not specified, the value will be taken either from the environment variables or model properties.

    compartment_id(str, optional).

    Compartment OCID. If not specified, the value will be taken either from the environment variables or model properties.

    region: (str, optional). Defaults to None.

    The destination Object Storage bucket region. By default the value will be extracted from the OCI_REGION_METADATA environment variables.

    timeout: (int, optional). Defaults to 10 seconds.

    The connection timeout in seconds for the client.

    Also can be any attribute that oci.data_science.models.Model accepts.

Raises:

RuntimeInfoInconsistencyError – When .runtime_info is not synced with the runtime.yaml file.

Returns:

The model id.

Return type:

str

Examples

Example for saving large model artifacts (>2GB):

>>> model.save(
...     bucket_uri="oci://my-bucket@my-tenancy/",
...     overwrite_existing_artifact=True,
...     remove_existing_artifact=True,
...     parallel_process_count=9,
... )

property schema_input#
property schema_output#
serialize_model(as_onnx: bool = False, initial_types: List[Tuple] = None, force_overwrite: bool = False, X_sample: Dict | str | List | Tuple | ndarray | Series | DataFrame | None = None, **kwargs)[source]#

Serialize and save an XGBoost model using ONNX or a model-specific method.

Parameters:
  • artifact_dir (str) – Directory for the generated artifacts.

  • as_onnx ((boolean, optional). Defaults to False.) – If set as True, provide initial_types or X_sample to convert into ONNX.

  • initial_types ((List[Tuple], optional). Defaults to None.) – Each element is a tuple of a variable name and a type.

  • force_overwrite ((boolean, optional). Defaults to False.) – If set as True, overwrite serialized model if exists.

  • X_sample (Union[Dict, str, List, np.ndarray, pd.core.series.Series, pd.core.frame.DataFrame]. Defaults to None.) – Contains model inputs such that model(X_sample) is a valid invocation of the model. Used to generate initial_types.

Returns:

Nothing.

Return type:

None
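
A hedged sketch of serializing with ONNX conversion, assuming X_sample is a small sample of the model inputs (as in the earlier examples) so that initial_types can be inferred:

>>> model.serialize_model(
...     as_onnx=True,        # convert to ONNX; initial_types inferred from X_sample
...     X_sample=X_sample,
...     force_overwrite=True,
... )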

set_model_input_serializer(model_input_serializer: str | SERDE)#

Registers serializer used for serializing data passed in verify/predict.

Examples

>>> generic_model.set_model_input_serializer(GenericModel.model_input_serializer_type.CLOUDPICKLE)
>>> # Register serializer by passing the name of it.
>>> generic_model.set_model_input_serializer("cloudpickle")
>>> # Example of creating a customized model input serializer and registering it.
>>> import tempfile
>>> from ads.model import SERDE
>>> from ads.model.generic_model import GenericModel
>>> class MySERDE(SERDE):
...     def __init__(self):
...         super().__init__()
...     def serialize(self, data):
...         serialized_data = 1
...         return serialized_data
...     def deserialize(self, data):
...         deserialized_data = 2
...         return deserialized_data
>>> class Toy:
...     def predict(self, x):
...         return x ** 2
>>> generic_model = GenericModel(
...    estimator=Toy(),
...    artifact_dir=tempfile.mkdtemp(),
...    model_input_serializer=MySERDE()
... )
>>> # Or register the serializer after creating model instance.
>>> generic_model.set_model_input_serializer(MySERDE())
Parameters:

model_input_serializer ((str, or ads.model.SERDE)) – name of the serializer, or instance of SERDE.

set_model_save_serializer(model_save_serializer: str | SERDE)#

Registers serializer used for saving model.

Examples

>>> generic_model.set_model_save_serializer(GenericModel.model_save_serializer_type.CLOUDPICKLE)
>>> # Register serializer by passing the name of it.
>>> generic_model.set_model_save_serializer("cloudpickle")
>>> # Example of creating a customized model save serializer and registering it.
>>> import tempfile
>>> from ads.model import SERDE
>>> from ads.model.generic_model import GenericModel
>>> class MySERDE(SERDE):
...     def __init__(self):
...         super().__init__()
...     def serialize(self, data):
...         serialized_data = 1
...         return serialized_data
...     def deserialize(self, data):
...         deserialized_data = 2
...         return deserialized_data
>>> class Toy:
...     def predict(self, x):
...         return x ** 2
>>> generic_model = GenericModel(
...    estimator=Toy(),
...    artifact_dir=tempfile.mkdtemp(),
...    model_save_serializer=MySERDE()
... )
>>> # Or register the serializer after creating model instance.
>>> generic_model.set_model_save_serializer(MySERDE())
Parameters:

model_save_serializer ((ads.model.SERDE or str)) – name of the serializer or instance of SERDE.

summary_status() DataFrame#

A summary table of the current status.

Returns:

The summary table of the current status.

Return type:

pd.DataFrame
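
For example, the returned DataFrame can be inspected after prepare(), save(), or deploy() to see which steps are done and which actions remain (a minimal sketch; model is assumed to be an instance of this class):

>>> status_df = model.summary_status()
>>> print(status_df)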

update(**kwargs) GenericModel#

Updates model metadata in the Model Catalog. Updates only metadata information. The model artifacts are immutable and cannot be updated.

Parameters:

kwargs

display_name: (str, optional). Defaults to None.

The name of the model.

description: (str, optional). Defaults to None.

The description of the model.

freeform_tags: (Dict(str, str), optional). Defaults to None.

Freeform tags for the model.

defined_tags: (Dict(str, dict(str, object)), optional). Defaults to None.

Defined tags for the model.

version_label: (str, optional). Defaults to None.

The model version label.

Additional kwargs arguments. Can be any attribute that oci.data_science.models.Model accepts.

Returns:

An instance of GenericModel (self).

Return type:

GenericModel

Raises:

ValueError – If the model has not been saved to the Model Catalog.
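
An illustrative metadata-only update on a previously saved model; all values below are placeholders:

>>> model.update(
...     display_name="my-model-v2",
...     description="Retrained with a newer dataset",
...     freeform_tags={"project": "demo"},
... )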

classmethod update_deployment(model_deployment_id: str | None = None, properties: ModelDeploymentProperties | dict | None = None, wait_for_completion: bool = True, max_wait_time: int = 1200, poll_interval: int = 10, **kwargs) ModelDeployment#

Updates a model deployment.

You can update model_deployment_configuration_details and change instance_shape and model_id when the model deployment is in the ACTIVE lifecycle state. The bandwidth_mbps or instance_count can only be updated while the model deployment is in the INACTIVE state. Changes to the bandwidth_mbps or instance_count will take effect the next time the ActivateModelDeployment action is invoked on the model deployment resource.

Examples

>>> # Update access log id, freeform tags and description for the model deployment
>>> model.update_deployment(
...     access_log={
...         "log_id": "<log_ocid>"
...     },
...     description="Description for Custom Model",
...     freeform_tags={"key": "value"},
... )
Parameters:
  • model_deployment_id (str.) – The model deployment OCID. Defaults to None. If the method is called at the instance level, then self.model_deployment.model_deployment_id will be used.

  • properties (ModelDeploymentProperties or dict) – The properties for updating the deployment.

  • wait_for_completion (bool) – Flag set for whether to wait for deployment to complete before proceeding. Defaults to True.

  • max_wait_time (int) – Maximum amount of time to wait in seconds (Defaults to 1200). Negative implies infinite wait time.

  • poll_interval (int) – Poll interval in seconds (Defaults to 10).

  • kwargs

    auth: (Dict, optional). Defaults to None.

    The default authentication is set using the ads.set_auth API. If you need to override the default, use ads.common.auth.api_keys or ads.common.auth.resource_principal to create an appropriate authentication signer and the kwargs required to instantiate an IdentityClient object.

    display_name: (str)

    Model deployment display name

    description: (str)

    Model deployment description

    freeform_tags: (dict)

    Model deployment freeform tags

    defined_tags: (dict)

    Model deployment defined tags

    Additional kwargs arguments. Can be any attribute that ads.model.deployment.ModelDeploymentCondaRuntime, ads.model.deployment.ModelDeploymentContainerRuntime and ads.model.deployment.ModelDeploymentInfrastructure accepts.

Returns:

An instance of ModelDeployment class.

Return type:

ModelDeployment

update_summary_action(detail: str, action: str)#

Update the actions needed from the user in the summary table.

Parameters:
  • detail ((str)) – Value of the detail in the details column of the summary status table. Used to locate which row to update.

  • action ((str)) – New action to be updated for the row specified by detail.

Return type:

None

update_summary_status(detail: str, status: str)#

Update the status in the summary table.

Parameters:
  • detail ((str)) – Value of the detail in the details column of the summary status table. Used to locate which row to update.

  • status ((str)) – New status to be updated for the row specified by detail.

Return type:

None
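
As an illustration only: the detail and status strings below are hypothetical; use the exact detail value shown by summary_status() for the row you want to change.

>>> model.update_summary_status(
...     detail="Serialized model",   # hypothetical detail value
...     status="Done",               # hypothetical status value
... )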

upload_artifact(uri: str, auth: Dict | None = None, force_overwrite: bool | None = False, parallel_process_count: int = 9) None#

Uploads model artifacts to the provided uri. The artifacts will be zipped before uploading.

Parameters:
  • uri (str) –

    The destination location for the model artifacts, which can be a local path or OCI object storage URI. Examples:

    >>> upload_artifact(uri="/some/local/folder/")
    >>> upload_artifact(uri="oci://bucket@namespace/prefix/")
    

  • auth ((Dict, optional). Defaults to None.) – The default authentication is set using the ads.set_auth API. If you need to override the default, use ads.common.auth.api_keys or ads.common.auth.resource_principal to create an appropriate authentication signer and the kwargs required to instantiate an IdentityClient object.

  • force_overwrite (bool) – Whether to overwrite the target directory if it exists.

  • parallel_process_count ((int, optional)) – The number of worker processes to use in parallel for uploading individual parts of a multipart upload.
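
For instance, a hedged sketch uploading the zipped artifacts to Object Storage; the bucket and namespace are placeholders:

>>> model.upload_artifact(
...     uri="oci://<bucket_name>@<namespace>/model-artifacts/",
...     force_overwrite=True,
... )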

verify(data: Any | None = None, reload_artifacts: bool = True, auto_serialize_data: bool = True, **kwargs) Dict[str, Any]#

Test if deployment works in local environment.

Examples

>>> uri = "https://github.com/pytorch/hub/raw/master/images/dog.jpg"
>>> prediction = model.verify(image=uri)['prediction']
>>> # examples on storage options
>>> prediction = model.verify(
...        image="oci://<bucket>@<tenancy>/myimage.png",
...        storage_options=ads.auth.default_signer()
... )['prediction']
Parameters:
  • data (Any) – Data used to test if deployment works in local environment.

  • reload_artifacts (bool. Defaults to True.) – Whether to reload artifacts or not.

  • auto_serialize_data (bool.) – Whether to auto serialize input data. Defaults to False for GenericModel and True for other frameworks. data must be JSON serializable if auto_serialize_data=False. If auto_serialize_data is set to True, data will be serialized before being sent to the model deployment endpoint.

  • kwargs

    content_type: str, used to indicate the media type of the resource.

    image: PIL.Image object or URI for the image.

    A valid string path for an image file can be a local path, http(s), oci, s3, or gs.

    storage_options: dict

    Passed to fsspec.open for a particular storage connection. Please see fsspec (https://filesystem-spec.readthedocs.io/en/latest/api.html#fsspec.open) for more details.

Returns:

A dictionary which contains prediction results.

Return type:

Dict

Module contents#