ads.model package¶
Subpackages¶
- ads.model.common package
- ads.model.deployment package
- Subpackages
- Submodules
- ads.model.deployment.model_deployer module
ModelDeployer
ModelDeployer.config
ModelDeployer.ds_client
ModelDeployer.ds_composite_client
ModelDeployer.delete()
ModelDeployer.deploy()
ModelDeployer.deploy_from_model_uri()
ModelDeployer.get_model_deployment()
ModelDeployer.get_model_deployment_state()
ModelDeployer.list_deployments()
ModelDeployer.show_deployments()
ModelDeployer.update()
- ads.model.deployment.model_deployment module
LogNotConfiguredError
ModelDeployment
ModelDeployment.config
ModelDeployment.properties
ModelDeployment.workflow_state_progress
ModelDeployment.workflow_steps
ModelDeployment.dsc_model_deployment
ModelDeployment.state
ModelDeployment.created_by
ModelDeployment.lifecycle_state
ModelDeployment.lifecycle_details
ModelDeployment.time_created
ModelDeployment.display_name
ModelDeployment.description
ModelDeployment.freeform_tags
ModelDeployment.defined_tags
ModelDeployment.runtime
ModelDeployment.infrastructure
ModelDeployment.CONST_CREATED_BY
ModelDeployment.CONST_DEFINED_TAG
ModelDeployment.CONST_DESCRIPTION
ModelDeployment.CONST_DISPLAY_NAME
ModelDeployment.CONST_FREEFORM_TAG
ModelDeployment.CONST_ID
ModelDeployment.CONST_INFRASTRUCTURE
ModelDeployment.CONST_LIFECYCLE_DETAILS
ModelDeployment.CONST_LIFECYCLE_STATE
ModelDeployment.CONST_MODEL_DEPLOYMENT_URL
ModelDeployment.CONST_RUNTIME
ModelDeployment.CONST_TIME_CREATED
ModelDeployment.access_log
ModelDeployment.activate()
ModelDeployment.attribute_map
ModelDeployment.build()
ModelDeployment.created_by
ModelDeployment.deactivate()
ModelDeployment.defined_tags
ModelDeployment.delete()
ModelDeployment.deploy()
ModelDeployment.description
ModelDeployment.display_name
ModelDeployment.freeform_tags
ModelDeployment.from_dict()
ModelDeployment.from_id()
ModelDeployment.id
ModelDeployment.infrastructure
ModelDeployment.initialize_spec_attributes
ModelDeployment.kind
ModelDeployment.lifecycle_details
ModelDeployment.lifecycle_state
ModelDeployment.list()
ModelDeployment.list_df()
ModelDeployment.logs()
ModelDeployment.model_deployment_id
ModelDeployment.model_input_serializer
ModelDeployment.predict()
ModelDeployment.predict_log
ModelDeployment.runtime
ModelDeployment.show_logs()
ModelDeployment.state
ModelDeployment.status
ModelDeployment.sync()
ModelDeployment.time_created
ModelDeployment.to_dict()
ModelDeployment.type
ModelDeployment.update()
ModelDeployment.url
ModelDeployment.watch()
ModelDeployment.with_defined_tags()
ModelDeployment.with_description()
ModelDeployment.with_display_name()
ModelDeployment.with_freeform_tags()
ModelDeployment.with_infrastructure()
ModelDeployment.with_runtime()
ModelDeploymentLogType
ModelDeploymentPredictError
- ads.model.deployment.model_deployment_properties module
ModelDeploymentProperties
ModelDeploymentProperties.swagger_types
ModelDeploymentProperties.model_id
ModelDeploymentProperties.model_uri
ModelDeploymentProperties.build()
ModelDeploymentProperties.sub_properties
ModelDeploymentProperties.to_oci_model()
ModelDeploymentProperties.to_update_deployment()
ModelDeploymentProperties.with_access_log()
ModelDeploymentProperties.with_category_log()
ModelDeploymentProperties.with_instance_configuration()
ModelDeploymentProperties.with_logging_configuration()
ModelDeploymentProperties.with_predict_log()
ModelDeploymentProperties.with_prop()
- Module contents
- ads.model.extractor package
- Submodules
- ads.model.extractor.keras_extractor module
- ads.model.extractor.lightgbm_extractor module
- ads.model.extractor.model_info_extractor module
ModelInfoExtractor
ModelInfoExtractor.algorithm()
ModelInfoExtractor.framework()
ModelInfoExtractor.hyperparameter()
ModelInfoExtractor.info()
ModelInfoExtractor.version()
normalize_hyperparameter()
- ads.model.extractor.model_info_extractor_factory module
- ads.model.extractor.pytorch_extractor module
- ads.model.extractor.sklearn_extractor module
- ads.model.extractor.spark_extractor module
- ads.model.extractor.tensorflow_extractor module
TensorflowExtractor
TensorflowExtractor.model
TensorflowExtractor.estimator
TensorflowExtractor.framework()
TensorflowExtractor.algorithm()
TensorflowExtractor.version()
TensorflowExtractor.hyperparameter()
TensorflowExtractor.algorithm
TensorflowExtractor.framework
TensorflowExtractor.hyperparameter
TensorflowExtractor.version
- ads.model.extractor.xgboost_extractor module
- Module contents
- ads.model.framework package
- Submodules
- ads.model.framework.huggingface_model module
HuggingFacePipelineModel
HuggingFacePipelineModel.algorithm
HuggingFacePipelineModel.artifact_dir
HuggingFacePipelineModel.auth
HuggingFacePipelineModel.estimator
HuggingFacePipelineModel.framework
HuggingFacePipelineModel.hyperparameter
HuggingFacePipelineModel.metadata_custom
HuggingFacePipelineModel.metadata_provenance
HuggingFacePipelineModel.metadata_taxonomy
HuggingFacePipelineModel.model_artifact
HuggingFacePipelineModel.model_deployment
HuggingFacePipelineModel.model_file_name
HuggingFacePipelineModel.model_id
HuggingFacePipelineModel.properties
HuggingFacePipelineModel.runtime_info
HuggingFacePipelineModel.schema_input
HuggingFacePipelineModel.schema_output
HuggingFacePipelineModel.serialize
HuggingFacePipelineModel.version
HuggingFacePipelineModel.delete()
HuggingFacePipelineModel.delete_deployment()
HuggingFacePipelineModel.deploy()
HuggingFacePipelineModel.download_artifact()
HuggingFacePipelineModel.evaluate()
HuggingFacePipelineModel.from_id()
HuggingFacePipelineModel.from_model_artifact()
HuggingFacePipelineModel.from_model_catalog()
HuggingFacePipelineModel.from_model_deployment()
HuggingFacePipelineModel.get_data_serializer()
HuggingFacePipelineModel.get_model_serializer()
HuggingFacePipelineModel.introspect()
HuggingFacePipelineModel.metadata_custom
HuggingFacePipelineModel.metadata_provenance
HuggingFacePipelineModel.metadata_taxonomy
HuggingFacePipelineModel.model_deployment_id
HuggingFacePipelineModel.model_id
HuggingFacePipelineModel.model_input_serializer_type
HuggingFacePipelineModel.model_save_serializer_type
HuggingFacePipelineModel.populate_metadata()
HuggingFacePipelineModel.populate_schema()
HuggingFacePipelineModel.predict()
HuggingFacePipelineModel.prepare()
HuggingFacePipelineModel.prepare_save_deploy()
HuggingFacePipelineModel.reload()
HuggingFacePipelineModel.reload_runtime_info()
HuggingFacePipelineModel.restart_deployment()
HuggingFacePipelineModel.save()
HuggingFacePipelineModel.schema_input
HuggingFacePipelineModel.schema_output
HuggingFacePipelineModel.serialize_model()
HuggingFacePipelineModel.set_model_input_serializer()
HuggingFacePipelineModel.set_model_save_serializer()
HuggingFacePipelineModel.summary_status()
HuggingFacePipelineModel.update()
HuggingFacePipelineModel.update_deployment()
HuggingFacePipelineModel.update_summary_action()
HuggingFacePipelineModel.update_summary_status()
HuggingFacePipelineModel.upload_artifact()
HuggingFacePipelineModel.verify()
- ads.model.framework.lightgbm_model module
LightGBMModel
LightGBMModel.algorithm
LightGBMModel.artifact_dir
LightGBMModel.auth
LightGBMModel.estimator
LightGBMModel.framework
LightGBMModel.hyperparameter
LightGBMModel.metadata_custom
LightGBMModel.metadata_provenance
LightGBMModel.metadata_taxonomy
LightGBMModel.model_artifact
LightGBMModel.model_deployment
LightGBMModel.model_file_name
LightGBMModel.model_id
LightGBMModel.properties
LightGBMModel.runtime_info
LightGBMModel.schema_input
LightGBMModel.schema_output
LightGBMModel.serialize
LightGBMModel.version
LightGBMModel.delete()
LightGBMModel.delete_deployment()
LightGBMModel.deploy()
LightGBMModel.download_artifact()
LightGBMModel.evaluate()
LightGBMModel.from_id()
LightGBMModel.from_model_artifact()
LightGBMModel.from_model_catalog()
LightGBMModel.from_model_deployment()
LightGBMModel.get_data_serializer()
LightGBMModel.get_model_serializer()
LightGBMModel.introspect()
LightGBMModel.metadata_custom
LightGBMModel.metadata_provenance
LightGBMModel.metadata_taxonomy
LightGBMModel.model_deployment_id
LightGBMModel.model_id
LightGBMModel.model_input_serializer_type
LightGBMModel.model_save_serializer_type
LightGBMModel.populate_metadata()
LightGBMModel.populate_schema()
LightGBMModel.predict()
LightGBMModel.prepare()
LightGBMModel.prepare_save_deploy()
LightGBMModel.reload()
LightGBMModel.reload_runtime_info()
LightGBMModel.restart_deployment()
LightGBMModel.save()
LightGBMModel.schema_input
LightGBMModel.schema_output
LightGBMModel.serialize_model()
LightGBMModel.set_model_input_serializer()
LightGBMModel.set_model_save_serializer()
LightGBMModel.summary_status()
LightGBMModel.update()
LightGBMModel.update_deployment()
LightGBMModel.update_summary_action()
LightGBMModel.update_summary_status()
LightGBMModel.upload_artifact()
LightGBMModel.verify()
- ads.model.framework.pytorch_model module
PyTorchModel
PyTorchModel.algorithm
PyTorchModel.artifact_dir
PyTorchModel.auth
PyTorchModel.estimator
PyTorchModel.framework
PyTorchModel.hyperparameter
PyTorchModel.metadata_custom
PyTorchModel.metadata_provenance
PyTorchModel.metadata_taxonomy
PyTorchModel.model_artifact
PyTorchModel.model_deployment
PyTorchModel.model_file_name
PyTorchModel.model_id
PyTorchModel.properties
PyTorchModel.runtime_info
PyTorchModel.schema_input
PyTorchModel.schema_output
PyTorchModel.serialize
PyTorchModel.version
PyTorchModel.delete()
PyTorchModel.delete_deployment()
PyTorchModel.deploy()
PyTorchModel.download_artifact()
PyTorchModel.evaluate()
PyTorchModel.from_id()
PyTorchModel.from_model_artifact()
PyTorchModel.from_model_catalog()
PyTorchModel.from_model_deployment()
PyTorchModel.get_data_serializer()
PyTorchModel.get_model_serializer()
PyTorchModel.introspect()
PyTorchModel.metadata_custom
PyTorchModel.metadata_provenance
PyTorchModel.metadata_taxonomy
PyTorchModel.model_deployment_id
PyTorchModel.model_id
PyTorchModel.model_input_serializer_type
PyTorchModel.model_save_serializer_type
PyTorchModel.populate_metadata()
PyTorchModel.populate_schema()
PyTorchModel.predict()
PyTorchModel.prepare()
PyTorchModel.prepare_save_deploy()
PyTorchModel.reload()
PyTorchModel.reload_runtime_info()
PyTorchModel.restart_deployment()
PyTorchModel.save()
PyTorchModel.schema_input
PyTorchModel.schema_output
PyTorchModel.serialize_model()
PyTorchModel.set_model_input_serializer()
PyTorchModel.set_model_save_serializer()
PyTorchModel.summary_status()
PyTorchModel.update()
PyTorchModel.update_deployment()
PyTorchModel.update_summary_action()
PyTorchModel.update_summary_status()
PyTorchModel.upload_artifact()
PyTorchModel.verify()
- ads.model.framework.sklearn_model module
SklearnModel
SklearnModel.algorithm
SklearnModel.artifact_dir
SklearnModel.auth
SklearnModel.estimator
SklearnModel.framework
SklearnModel.hyperparameter
SklearnModel.metadata_custom
SklearnModel.metadata_provenance
SklearnModel.metadata_taxonomy
SklearnModel.model_artifact
SklearnModel.model_deployment
SklearnModel.model_file_name
SklearnModel.model_id
SklearnModel.properties
SklearnModel.runtime_info
SklearnModel.schema_input
SklearnModel.schema_output
SklearnModel.serialize
SklearnModel.version
SklearnModel.delete()
SklearnModel.delete_deployment()
SklearnModel.deploy()
SklearnModel.download_artifact()
SklearnModel.evaluate()
SklearnModel.from_id()
SklearnModel.from_model_artifact()
SklearnModel.from_model_catalog()
SklearnModel.from_model_deployment()
SklearnModel.get_data_serializer()
SklearnModel.get_model_serializer()
SklearnModel.introspect()
SklearnModel.metadata_custom
SklearnModel.metadata_provenance
SklearnModel.metadata_taxonomy
SklearnModel.model_deployment_id
SklearnModel.model_id
SklearnModel.model_input_serializer_type
SklearnModel.model_save_serializer_type
SklearnModel.populate_metadata()
SklearnModel.populate_schema()
SklearnModel.predict()
SklearnModel.prepare()
SklearnModel.prepare_save_deploy()
SklearnModel.reload()
SklearnModel.reload_runtime_info()
SklearnModel.restart_deployment()
SklearnModel.save()
SklearnModel.schema_input
SklearnModel.schema_output
SklearnModel.serialize_model()
SklearnModel.set_model_input_serializer()
SklearnModel.set_model_save_serializer()
SklearnModel.summary_status()
SklearnModel.update()
SklearnModel.update_deployment()
SklearnModel.update_summary_action()
SklearnModel.update_summary_status()
SklearnModel.upload_artifact()
SklearnModel.verify()
- ads.model.framework.spark_model module
SparkPipelineModel
SparkPipelineModel.algorithm
SparkPipelineModel.artifact_dir
SparkPipelineModel.auth
SparkPipelineModel.estimator
SparkPipelineModel.framework
SparkPipelineModel.hyperparameter
SparkPipelineModel.metadata_custom
SparkPipelineModel.metadata_provenance
SparkPipelineModel.metadata_taxonomy
SparkPipelineModel.model_artifact
SparkPipelineModel.model_file_name
SparkPipelineModel.model_id
SparkPipelineModel.properties
SparkPipelineModel.runtime_info
SparkPipelineModel.schema_input
SparkPipelineModel.schema_output
SparkPipelineModel.serialize
SparkPipelineModel.version
SparkPipelineModel.delete()
SparkPipelineModel.delete_deployment()
SparkPipelineModel.deploy()
SparkPipelineModel.download_artifact()
SparkPipelineModel.evaluate()
SparkPipelineModel.from_id()
SparkPipelineModel.from_model_artifact()
SparkPipelineModel.from_model_catalog()
SparkPipelineModel.from_model_deployment()
SparkPipelineModel.get_data_serializer()
SparkPipelineModel.get_model_serializer()
SparkPipelineModel.introspect()
SparkPipelineModel.metadata_custom
SparkPipelineModel.metadata_provenance
SparkPipelineModel.metadata_taxonomy
SparkPipelineModel.model_deployment_id
SparkPipelineModel.model_id
SparkPipelineModel.model_input_serializer_type
SparkPipelineModel.model_save_serializer_type
SparkPipelineModel.populate_metadata()
SparkPipelineModel.populate_schema()
SparkPipelineModel.predict()
SparkPipelineModel.prepare()
SparkPipelineModel.prepare_save_deploy()
SparkPipelineModel.reload()
SparkPipelineModel.reload_runtime_info()
SparkPipelineModel.restart_deployment()
SparkPipelineModel.save()
SparkPipelineModel.schema_input
SparkPipelineModel.schema_output
SparkPipelineModel.serialize_model()
SparkPipelineModel.set_model_input_serializer()
SparkPipelineModel.set_model_save_serializer()
SparkPipelineModel.summary_status()
SparkPipelineModel.update()
SparkPipelineModel.update_deployment()
SparkPipelineModel.update_summary_action()
SparkPipelineModel.update_summary_status()
SparkPipelineModel.upload_artifact()
SparkPipelineModel.verify()
- ads.model.framework.tensorflow_model module
TensorFlowModel
TensorFlowModel.algorithm
TensorFlowModel.artifact_dir
TensorFlowModel.auth
TensorFlowModel.estimator
TensorFlowModel.framework
TensorFlowModel.hyperparameter
TensorFlowModel.metadata_custom
TensorFlowModel.metadata_provenance
TensorFlowModel.metadata_taxonomy
TensorFlowModel.model_artifact
TensorFlowModel.model_deployment
TensorFlowModel.model_file_name
TensorFlowModel.model_id
TensorFlowModel.properties
TensorFlowModel.runtime_info
TensorFlowModel.schema_input
TensorFlowModel.schema_output
TensorFlowModel.serialize
TensorFlowModel.version
TensorFlowModel.delete()
TensorFlowModel.delete_deployment()
TensorFlowModel.deploy()
TensorFlowModel.download_artifact()
TensorFlowModel.evaluate()
TensorFlowModel.from_id()
TensorFlowModel.from_model_artifact()
TensorFlowModel.from_model_catalog()
TensorFlowModel.from_model_deployment()
TensorFlowModel.get_data_serializer()
TensorFlowModel.get_model_serializer()
TensorFlowModel.introspect()
TensorFlowModel.metadata_custom
TensorFlowModel.metadata_provenance
TensorFlowModel.metadata_taxonomy
TensorFlowModel.model_deployment_id
TensorFlowModel.model_id
TensorFlowModel.model_input_serializer_type
TensorFlowModel.model_save_serializer_type
TensorFlowModel.populate_metadata()
TensorFlowModel.populate_schema()
TensorFlowModel.predict()
TensorFlowModel.prepare()
TensorFlowModel.prepare_save_deploy()
TensorFlowModel.reload()
TensorFlowModel.reload_runtime_info()
TensorFlowModel.restart_deployment()
TensorFlowModel.save()
TensorFlowModel.schema_input
TensorFlowModel.schema_output
TensorFlowModel.serialize_model()
TensorFlowModel.set_model_input_serializer()
TensorFlowModel.set_model_save_serializer()
TensorFlowModel.summary_status()
TensorFlowModel.update()
TensorFlowModel.update_deployment()
TensorFlowModel.update_summary_action()
TensorFlowModel.update_summary_status()
TensorFlowModel.upload_artifact()
TensorFlowModel.verify()
- ads.model.framework.xgboost_model module
XGBoostModel
XGBoostModel.algorithm
XGBoostModel.artifact_dir
XGBoostModel.auth
XGBoostModel.estimator
XGBoostModel.framework
XGBoostModel.hyperparameter
XGBoostModel.metadata_custom
XGBoostModel.metadata_provenance
XGBoostModel.metadata_taxonomy
XGBoostModel.model_artifact
XGBoostModel.model_deployment
XGBoostModel.model_file_name
XGBoostModel.model_id
XGBoostModel.properties
XGBoostModel.runtime_info
XGBoostModel.schema_input
XGBoostModel.schema_output
XGBoostModel.serialize
XGBoostModel.version
XGBoostModel.delete()
XGBoostModel.delete_deployment()
XGBoostModel.deploy()
XGBoostModel.download_artifact()
XGBoostModel.evaluate()
XGBoostModel.from_id()
XGBoostModel.from_model_artifact()
XGBoostModel.from_model_catalog()
XGBoostModel.from_model_deployment()
XGBoostModel.get_data_serializer()
XGBoostModel.get_model_serializer()
XGBoostModel.introspect()
XGBoostModel.metadata_custom
XGBoostModel.metadata_provenance
XGBoostModel.metadata_taxonomy
XGBoostModel.model_deployment_id
XGBoostModel.model_id
XGBoostModel.model_input_serializer_type
XGBoostModel.model_save_serializer_type
XGBoostModel.populate_metadata()
XGBoostModel.populate_schema()
XGBoostModel.predict()
XGBoostModel.prepare()
XGBoostModel.prepare_save_deploy()
XGBoostModel.reload()
XGBoostModel.reload_runtime_info()
XGBoostModel.restart_deployment()
XGBoostModel.save()
XGBoostModel.schema_input
XGBoostModel.schema_output
XGBoostModel.serialize_model()
XGBoostModel.set_model_input_serializer()
XGBoostModel.set_model_save_serializer()
XGBoostModel.summary_status()
XGBoostModel.update()
XGBoostModel.update_deployment()
XGBoostModel.update_summary_action()
XGBoostModel.update_summary_status()
XGBoostModel.upload_artifact()
XGBoostModel.verify()
- Module contents
- ads.model.model_artifact_boilerplate package
- ads.model.runtime package
- Submodules
- ads.model.runtime.env_info module
- ads.model.runtime.model_deployment_details module
- ads.model.runtime.model_provenance_details module
ModelProvenanceDetails
ModelProvenanceDetails.project_ocid
ModelProvenanceDetails.tenancy_ocid
ModelProvenanceDetails.training_code
ModelProvenanceDetails.training_compartment_ocid
ModelProvenanceDetails.training_conda_env
ModelProvenanceDetails.training_region
ModelProvenanceDetails.training_resource_ocid
ModelProvenanceDetails.user_ocid
ModelProvenanceDetails.vm_image_internal_id
TrainingCode
- ads.model.runtime.runtime_info module
- ads.model.runtime.utils module
- Module contents
- ads.model.service package
- Submodules
- ads.model.service.oci_datascience_model module
ModelArtifactNotFoundError
ModelMetadataArtifactDetails
ModelMetadataArtifactNotFoundError
ModelNotSavedError
ModelProvenanceNotFoundError
ModelWithActiveDeploymentError
OCIDataScienceModel
OCIDataScienceModel.create()
OCIDataScienceModel.create_custom_metadata_artifact()
OCIDataScienceModel.create_defined_metadata_artifact()
OCIDataScienceModel.create_model_artifact()
OCIDataScienceModel.create_model_provenance()
OCIDataScienceModel.delete()
OCIDataScienceModel.delete_custom_metadata_artifact()
OCIDataScienceModel.delete_defined_metadata_artifact()
OCIDataScienceModel.export_model_artifact()
OCIDataScienceModel.from_id()
OCIDataScienceModel.get_artifact_info()
OCIDataScienceModel.get_custom_metadata_artifact()
OCIDataScienceModel.get_defined_metadata_artifact()
OCIDataScienceModel.get_metadata_content()
OCIDataScienceModel.get_model_artifact_content()
OCIDataScienceModel.get_model_provenance()
OCIDataScienceModel.head_custom_metadata_artifact()
OCIDataScienceModel.head_defined_metadata_artifact()
OCIDataScienceModel.import_model_artifact()
OCIDataScienceModel.is_model_created_by_reference()
OCIDataScienceModel.model_deployment()
OCIDataScienceModel.restore_archived_model_artifact()
OCIDataScienceModel.update()
OCIDataScienceModel.update_custom_metadata_artifact()
OCIDataScienceModel.update_defined_metadata_artifact()
OCIDataScienceModel.update_model_provenance()
check_for_model_id()
convert_model_metadata_response()
- ads.model.service.oci_datascience_model_version_set module
- Module contents
- ads.model.transformer package
Submodules¶
ads.model.artifact module¶
- exception ads.model.artifact.AritfactFolderStructureError(required_files: Tuple[str])[source]¶
Bases:
Exception
- exception ads.model.artifact.ArtifactRequiredFilesError(required_files: Tuple[str])[source]¶
Bases:
Exception
- class ads.model.artifact.ModelArtifact(artifact_dir: str, model_file_name: str | None = None, reload: bool | None = False, ignore_conda_error: bool | None = False, local_copy_dir: str | None = None, auth: dict | None = None)[source]¶
Bases:
object
The class that represents model artifacts. It helps generate and manage model artifacts.
Initializes a ModelArtifact instance.
- Parameters:
artifact_dir (str) – The artifact folder to store the files needed for deployment.
model_file_name ((str, optional). Defaults to None.) – The file name of the serialized model.
reload ((bool, optional). Defaults to False.) – Whether to reload the model into the environment.
ignore_conda_error ((bool, optional). Defaults to False.) – Whether to ignore errors when collecting conda information.
local_copy_dir ((str, optional). Defaults to None.) – The local backup directory for the model artifacts.
auth ((Dict, optional). Defaults to None.) – The default authentication is set using the ads.set_auth API. To override the default, use ads.common.auth.api_keys or ads.common.auth.resource_principal to create an authentication signer and the kwargs required to instantiate an IdentityClient object.
- Returns:
A ModelArtifact instance.
- Return type:
- Raises:
ValueError – If artifact_dir is not provided.
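For orientation, the artifact directory that ModelArtifact manages typically holds the serialized model next to a score.py and a runtime.yaml. A minimal stdlib-only sketch of that layout (the file names besides score.py and runtime.yaml are assumptions; ads generates the real contents via prepare()):

```python
# Sketch of the artifact folder layout ModelArtifact works with.
# "model.pkl" is an assumed default; pass your own via model_file_name.
from pathlib import Path


def scaffold_artifact_dir(artifact_dir: str, model_file_name: str = "model.pkl") -> list:
    """Create an empty artifact skeleton and return the created file names."""
    root = Path(artifact_dir)
    root.mkdir(parents=True, exist_ok=True)
    files = ["score.py", "runtime.yaml", model_file_name]
    for name in files:
        (root / name).touch()  # placeholder files only; ads fills in real contents
    return files
```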
- classmethod from_uri(uri: str, artifact_dir: str, model_file_name: str | None = None, force_overwrite: bool | None = False, auth: Dict | None = None, ignore_conda_error: bool | None = False, reload: bool | None = False)[source]¶
Constructs a ModelArtifact object from the existing model artifacts.
- Parameters:
uri (str) – The URI of the source artifact folder or archive. Can be a local path or an OCI Object Storage URI.
artifact_dir (str) – The local artifact folder to store the files needed for deployment.
auth ((Dict, optional). Defaults to None.) – The default authentication is set using the ads.set_auth API. To override the default, use ads.common.auth.api_keys or ads.common.auth.resource_principal to create an authentication signer and the kwargs required to instantiate an IdentityClient object.
force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files.
ignore_conda_error ((bool, optional). Defaults to False.) – Whether to ignore errors when collecting conda information.
model_file_name ((str, optional). Defaults to None.) – The file name of the serialized model.
reload ((bool, optional). Defaults to False.) – Whether to reload the Model into the environment.
- Returns:
A ModelArtifact instance
- Return type:
- Raises:
ValueError – If uri equals artifact_dir and it does not exist, or if artifact_dir is not provided.
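Conceptually, the archive branch of from_uri amounts to an extract-into-artifact_dir step. A stdlib-only sketch under that assumption (the real method additionally handles OCI Object Storage URIs and plain folders, which this helper does not):

```python
import os
import zipfile


def extract_artifact(archive_path: str, artifact_dir: str, force_overwrite: bool = False) -> None:
    """Unpack a local zip archive into artifact_dir, refusing to clobber existing files."""
    if os.path.isdir(artifact_dir) and os.listdir(artifact_dir) and not force_overwrite:
        raise ValueError(f"{artifact_dir} is not empty. Set force_overwrite=True to overwrite.")
    os.makedirs(artifact_dir, exist_ok=True)
    with zipfile.ZipFile(archive_path) as zf:
        zf.extractall(artifact_dir)
```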
- prepare_runtime_yaml(inference_conda_env: str, inference_python_version: str | None = None, training_conda_env: str | None = None, training_python_version: str | None = None, force_overwrite: bool = False, namespace: str = 'id19sfcrra6z', bucketname: str = 'service-conda-packs', auth: dict | None = None, ignore_conda_error: bool = False) None [source]¶
Generate a runtime yaml file and save it to the artifact directory.
- Parameters:
inference_conda_env (str) – The Object Storage path of the conda pack that will be used in deployment. Can be either the slug or the Object Storage path of the conda pack. Slugs can only be passed for service conda packs.
inference_python_version ((str, optional). Defaults to None.) – The Python version that will be used in deployment.
training_conda_env ((str, optional). Defaults to None.) – The Object Storage path of the conda pack used during training. Can be either the slug or the Object Storage path of the conda pack. Slugs can only be passed for service conda packs.
training_python_version ((str, optional). Defaults to None.) – The Python version used during training.
force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files.
namespace ((str, optional)) – The Object Storage namespace of the region. Defaults to the CONDA_BUCKET_NS environment variable.
bucketname ((str, optional)) – The bucket name of the service conda packs. Defaults to the CONDA_BUCKET_NAME environment variable.
auth ((Dict, optional). Defaults to None.) – The default authentication is set using the ads.set_auth API. To override the default, use ads.common.auth.api_keys or ads.common.auth.resource_principal to create an authentication signer and the kwargs required to instantiate an IdentityClient object.
- Raises:
ValueError – If neither slug nor conda_env_uri is provided.
- Returns:
A RuntimeInfo instance.
- Return type:
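For orientation, a runtime.yaml produced by this method has roughly the following shape. The field names below follow the common ADS artifact format, but treat the exact keys and values as an illustrative assumption rather than a specification:

```yaml
MODEL_ARTIFACT_VERSION: '3.0'
MODEL_DEPLOYMENT:
  INFERENCE_CONDA_ENV:
    INFERENCE_ENV_PATH: oci://service-conda-packs@<namespace>/path/to/pack
    INFERENCE_ENV_SLUG: mypack_slug
    INFERENCE_ENV_TYPE: data_science
    INFERENCE_PYTHON_VERSION: '3.8'
```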
- prepare_schema(schema_name: str)[source]¶
Copies the schema file to the artifact directory.
- Parameters:
schema_name (str) – The schema file name.
- Return type:
None
- Raises:
FileExistsError – If the schema_name file does not exist.
- prepare_score_py(jinja_template_filename: str, model_file_name: str | None = None, **kwargs)[source]¶
Prepares the score.py file.
- Parameters:
jinja_template_filename (str) – The Jinja template file name.
model_file_name ((str, optional). Defaults to None.) – The file name of the serialized model.
**kwargs ((dict)) – Supported keys: use_torch_script (bool), data_deserializer (str).
- Return type:
None
- Raises:
ValueError – If model_file_name is not provided.
ads.model.artifact_downloader module¶
- class ads.model.artifact_downloader.ArtifactDownloader(dsc_model: OCIDataScienceModel, target_dir: str, force_overwrite: bool | None = False)[source]¶
Bases:
ABC
The abstract class to download model artifacts.
Initializes an ArtifactDownloader instance.
- Parameters:
dsc_model (OCIDataScienceModel) – The data science model instance.
target_dir (str) – The target location of the model after download.
force_overwrite (bool) – Overwrite target_dir if it exists.
- PROGRESS_STEPS_COUNT = 1¶
- download()[source]¶
Downloads model artifacts.
- Return type:
None
- Raises:
ValueError – If target directory does not exist.
- class ads.model.artifact_downloader.LargeArtifactDownloader(dsc_model: OCIDataScienceModel, target_dir: str, auth: Dict | None = None, force_overwrite: bool | None = False, region: str | None = None, bucket_uri: str | None = None, overwrite_existing_artifact: bool | None = True, remove_existing_artifact: bool | None = True, model_file_description: dict | None = None)[source]¶
Bases:
ArtifactDownloader
Initializes a LargeArtifactDownloader instance.
- Parameters:
dsc_model (OCIDataScienceModel) – The data science model instance.
target_dir (str) – The target location of the model after download.
auth ((Dict, optional). Defaults to None.) – The default authentication is set using the ads.set_auth API. To override the default, use ads.common.auth.api_keys or ads.common.auth.resource_principal to create an authentication signer and the kwargs required to instantiate an IdentityClient object.
force_overwrite ((bool, optional). Defaults to False.) – Overwrite target_dir if it exists.
region ((str, optional). Defaults to None.) – The destination Object Storage bucket region. By default the value is extracted from the OCI_REGION_METADATA environment variable.
bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for large artifacts whose size is greater than 2 GB. Example: oci://<bucket_name>@<namespace>/prefix/.
overwrite_existing_artifact ((bool, optional). Defaults to True.) – Overwrite the target bucket artifact if it exists.
remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to the Object Storage bucket should be removed.
model_file_description ((dict, optional). Defaults to None.) – Contains object path details for models created by reference.
- PROGRESS_STEPS_COUNT = 4¶
- class ads.model.artifact_downloader.SmallArtifactDownloader(dsc_model: OCIDataScienceModel, target_dir: str, force_overwrite: bool | None = False)[source]¶
Bases:
ArtifactDownloader
Initializes ArtifactDownloader instance.
- Parameters:
dsc_model (OCIDataScienceModel) – The data science model instance.
target_dir (str) – The target location of model after download.
force_overwrite (bool) – Overwrite target_dir if exists.
- PROGRESS_STEPS_COUNT = 3¶
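The two downloader variants above split on artifact size: small artifacts come straight down, while large ones are staged through Object Storage and require a bucket_uri. A minimal sketch of that dispatch, assuming the documented 2GB threshold (the function and return values here are illustrative, not the ADS implementation):

```python
from typing import Optional

# 2GB threshold taken from the bucket_uri documentation above.
_SIZE_THRESHOLD_BYTES = 2 * 1024 ** 3


def choose_downloader(artifact_size_bytes: int, bucket_uri: Optional[str] = None) -> str:
    """Illustrative: pick a downloader variant for an artifact of this size."""
    if artifact_size_bytes <= _SIZE_THRESHOLD_BYTES:
        return "SmallArtifactDownloader"
    if bucket_uri is None:
        # Large artifacts are staged through an Object Storage bucket,
        # so a destination bucket_uri must be supplied.
        raise ValueError("bucket_uri is required for artifacts larger than 2GB")
    return "LargeArtifactDownloader"
```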
ads.model.artifact_uploader module¶
- class ads.model.artifact_uploader.ArtifactUploader(dsc_model: OCIDataScienceModel, artifact_path: str)[source]¶
Bases:
ABC
The abstract class to upload model artifacts.
Initializes ArtifactUploader instance.
- Parameters:
dsc_model (OCIDataScienceModel) – The data science model instance.
artifact_path (str) – The model artifact location.
- PROGRESS_STEPS_COUNT = 3¶
- class ads.model.artifact_uploader.LargeArtifactUploader(dsc_model: OCIDataScienceModel, artifact_path: str, bucket_uri: str | None = None, auth: Dict | None = None, region: str | None = None, overwrite_existing_artifact: bool | None = True, remove_existing_artifact: bool | None = True, parallel_process_count: int = 9)[source]¶
Bases:
ArtifactUploader
The class helper to upload large model artifacts.
- artifact_path¶
- The model artifact location. Possible values are:
object storage path to zip archive. Example: oci://<bucket_name>@<namespace>/prefix/mymodel.zip.
local path to zip archive. Example: ./mymodel.zip.
local path to folder with artifacts. Example: ./mymodel.
- Type:
str
- auth¶
The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.
- Type:
Dict
- bucket_uri¶
The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for uploading large artifacts whose size is greater than 2GB. Example: oci://<bucket_name>@<namespace>/prefix/.
Added in version 2.8.10: If artifact_path is an object storage path to a zip archive, bucket_uri will be ignored.
- Type:
str
- dsc_model¶
The data science model instance.
- Type:
OCIDataScienceModel
- progress¶
An instance of the TqdmProgressBar.
- Type:
TqdmProgressBar
- region¶
The destination Object Storage bucket region. By default the value will be extracted from the OCI_REGION_METADATA environment variable.
- Type:
str
- remove_existing_artifact¶
Whether artifacts uploaded to object storage bucket need to be removed or not.
- Type:
bool
- upload_manager¶
The UploadManager simplifies interaction with the Object Storage service.
- Type:
UploadManager
Initializes LargeArtifactUploader instance.
- Parameters:
dsc_model (OCIDataScienceModel) – The data science model instance.
artifact_path (str) –
- The model artifact location. Possible values are:
object storage path to zip archive. Example: oci://<bucket_name>@<namespace>/prefix/mymodel.zip.
local path to zip archive. Example: ./mymodel.zip.
local path to folder with artifacts. Example: ./mymodel.
bucket_uri ((str, optional). Defaults to None.) –
The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for uploading large artifacts from a local path whose size is greater than 2GB. Example: oci://<bucket_name>@<namespace>/prefix/.
Added in version 2.8.10: If artifact_path is an object storage path to a zip archive, bucket_uri will be ignored.
auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.
region ((str, optional). Defaults to None.) – The destination Object Storage bucket region. By default the value will be extracted from the OCI_REGION_METADATA environment variable.
overwrite_existing_artifact ((bool, optional). Defaults to True.) – Overwrite target bucket artifact if exists.
remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to object storage bucket need to be removed or not.
parallel_process_count ((int, optional).) – The number of worker processes to use in parallel for uploading individual parts of a multipart upload.
- PROGRESS_STEPS_COUNT = 4¶
- class ads.model.artifact_uploader.SmallArtifactUploader(dsc_model: OCIDataScienceModel, artifact_path: str)[source]¶
Bases:
ArtifactUploader
The class helper to upload small model artifacts.
Initializes ArtifactUploader instance.
- Parameters:
dsc_model (OCIDataScienceModel) – The data science model instance.
artifact_path (str) – The model artifact location.
- PROGRESS_STEPS_COUNT = 1¶
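The parallel_process_count parameter above refers to workers of a multipart upload: the artifact is cut into fixed-size parts that workers upload concurrently. A minimal sketch of the part computation, assuming a simple fixed part size (this helper is illustrative, not the OCI UploadManager itself):

```python
from typing import List, Tuple


def split_into_parts(total_size: int, part_size: int) -> List[Tuple[int, int]]:
    """Illustrative: return (offset, length) pairs covering total_size bytes.

    Each pair is one part of a multipart upload; a pool of
    parallel_process_count workers would consume these parts concurrently.
    """
    parts = []
    offset = 0
    while offset < total_size:
        length = min(part_size, total_size - offset)
        parts.append((offset, length))
        offset += length
    return parts
```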
ads.model.base_properties module¶
- class ads.model.base_properties.BaseProperties[source]¶
Bases:
Serializable
Represents base properties class.
- with_prop(name: str, value: Any) BaseProperties [source]¶
Sets property value.
- with_dict(obj_dict: Dict) BaseProperties [source]¶
Populates properties values from dict.
- with_env() BaseProperties [source]¶
Populates properties values from environment variables.
- with_config(config: ads.config.ConfigSection) BaseProperties [source]¶
Sets properties values from the config profile.
- from_dict(obj_dict: Dict[str, Any]) 'BaseProperties' [source]¶
Creates an instance of the properties class from a dictionary.
- from_config(uri: str, profile: str, auth: Dict | None = None) "BaseProperties" [source]¶
Loads properties from the config file.
- to_config(uri: str, profile: str, force_overwrite: bool | None = False, auth: Dict | None = None) None [source]¶
Saves properties to the config file.
- classmethod from_config(uri: str, profile: str, auth: Dict | None = None) BaseProperties [source]¶
Loads properties from the config file.
- Parameters:
uri (str) – The URI of the config file. Can be local path or OCI object storage URI.
profile (str) – The config profile name.
auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.
- Returns:
Instance of the BaseProperties.
- Return type:
BaseProperties
- classmethod from_dict(obj_dict: Dict[str, Any]) BaseProperties [source]¶
Creates an instance of the properties class from a dictionary.
- Parameters:
obj_dict (Dict[str, Any]) – List of properties and values in dictionary format.
- Returns:
Instance of the BaseProperties.
- Return type:
BaseProperties
- to_config(uri: str, profile: str, force_overwrite: bool | None = False, auth: Dict | None = None) None [source]¶
Saves properties to the config file.
- Parameters:
uri (str) – The URI of the config file. Can be local path or OCI object storage URI.
profile (str) – The config profile name.
force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files or not.
auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.
- Returns:
Nothing
- Return type:
None
- to_dict(**kwargs)[source]¶
Serializes instance of class into a dictionary.
- Returns:
A dictionary.
- Return type:
Dict
- with_config(config: ConfigSection) BaseProperties [source]¶
Sets properties values from the config profile.
- Returns:
Instance of the BaseProperties.
- Return type:
BaseProperties
- with_dict(obj_dict: Dict[str, Any]) BaseProperties [source]¶
Sets properties from a dict.
- with_env() BaseProperties [source]¶
Sets properties values from environment variables.
- Returns:
Instance of the BaseProperties.
- Return type:
BaseProperties
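All of the with_* methods above return the instance itself so that calls can be chained. A standalone sketch of that fluent builder pattern, with illustrative names and an assumed ADS_ prefix for environment variables (this is not the ADS source):

```python
import os
from typing import Any, Dict


class PropertiesSketch:
    """Minimal illustration of the BaseProperties builder pattern."""

    def with_prop(self, name: str, value: Any) -> "PropertiesSketch":
        # Set one property, then return self so calls chain.
        setattr(self, name, value)
        return self

    def with_dict(self, obj_dict: Dict[str, Any]) -> "PropertiesSketch":
        # Populate several properties from a dictionary.
        for name, value in obj_dict.items():
            self.with_prop(name, value)
        return self

    def with_env(self, prefix: str = "ADS_") -> "PropertiesSketch":
        # Populate properties from environment variables with a given
        # prefix (the prefix convention is an assumption for this sketch).
        for key, value in os.environ.items():
            if key.startswith(prefix):
                self.with_prop(key[len(prefix):].lower(), value)
        return self
```

Because each method returns self, a caller can write `props.with_dict({...}).with_env()` in one expression.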
- exception ads.model.generic_model.ArtifactsNotAvailableError(msg='Model artifacts are either not generated or not available locally.')[source]¶
Bases:
Exception
- class ads.model.generic_model.DataScienceModelType[source]¶
Bases:
ExtendedEnum
- MODEL = 'datasciencemodel'¶
- MODEL_DEPLOYMENT = 'datasciencemodeldeployment'¶
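These two enum values match the resource-type field embedded in an OCID (the second dot-separated component), which is what lets from_id() accept either a model OCID or a model deployment OCID. A hedged sketch of that check (the parsing helper is illustrative, not the ADS implementation):

```python
MODEL = "datasciencemodel"
MODEL_DEPLOYMENT = "datasciencemodeldeployment"


def ocid_resource_type(ocid: str) -> str:
    """Illustrative: extract the resource type embedded in an OCID."""
    parts = ocid.split(".")
    if len(parts) < 2 or parts[0] != "ocid1":
        raise ValueError(f"Not a valid OCID: {ocid!r}")
    # e.g. "ocid1.datasciencemodel.oc1.iad.aaaa..." -> "datasciencemodel"
    return parts[1]
```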
- class ads.model.generic_model.FrameworkSpecificModel(estimator: Callable | None = None, artifact_dir: str | None = None, properties: ModelProperties | None = None, auth: Dict | None = None, serialize: bool = True, model_save_serializer: SERDE | None = None, model_input_serializer: SERDE | None = None, **kwargs: dict)[source]¶
Bases:
GenericModel
GenericModel Constructor.
- Parameters:
estimator ((Callable).) – Trained model.
artifact_dir ((str, optional). Defaults to None.) – Artifact directory to store the files needed for deployment.
properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.
auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.
serialize ((bool, optional). Defaults to True.) – Whether to serialize the model to pkl file by default. If False, you need to serialize the model manually, save it under artifact_dir and update the score.py manually.
model_save_serializer ((SERDE or str, optional). Defaults to None.) – Instance of ads.model.SERDE. Used for serialize/deserialize model.
model_input_serializer ((SERDE or str, optional). Defaults to None.) – Instance of ads.model.SERDE. Used for serialize/deserialize model input.
- predict(data: Any | None = None, auto_serialize_data: bool = True, **kwargs) Dict[str, Any] [source]¶
Returns prediction of input data run against the model deployment endpoint.
Examples
>>> uri = "https://github.com/pytorch/hub/raw/master/images/dog.jpg"
>>> prediction = model.predict(image=uri)['prediction']
>>> # examples on storage options
>>> prediction = model.predict(
...     image="oci://<bucket>@<tenancy>/myimage.png",
...     storage_options=ads.auth.default_signer()
... )['prediction']
- Parameters:
data (Any) – Data for the prediction for onnx models, for local serialization method, data can be the data types that each framework support.
auto_serialize_data (bool.) – Whether to auto serialize input data. Defaults to False for GenericModel, and True for other frameworks. data is required to be JSON serializable if auto_serialize_data=False. If auto_serialize_data is set to True, data will be serialized before being sent to the model deployment endpoint.
kwargs –
- content_type: str
Used to indicate the media type of the resource.
- image: PIL.Image object or uri for the image
A valid string path for an image file can be a local path, http(s), oci, s3, or gs.
- storage_options: dict
Passed to fsspec.open for a particular storage connection. Please see fsspec (https://filesystem-spec.readthedocs.io/en/latest/api.html#fsspec.open) for more details.
- Returns:
Dictionary with the predicted values.
- Return type:
Dict[str, Any]
- Raises:
NotActiveDeploymentError – If model deployment process was not started or not finished yet.
ValueError – If data is empty or not JSON serializable.
- verify(data: Any | None = None, reload_artifacts: bool = True, auto_serialize_data: bool = True, **kwargs) Dict[str, Any] [source]¶
Test if deployment works in local environment.
Examples
>>> uri = "https://github.com/pytorch/hub/raw/master/images/dog.jpg"
>>> prediction = model.verify(image=uri)['prediction']
>>> # examples on storage options
>>> prediction = model.verify(
...     image="oci://<bucket>@<tenancy>/myimage.png",
...     storage_options=ads.auth.default_signer()
... )['prediction']
- Parameters:
data (Any) – Data used to test if deployment works in local environment.
reload_artifacts (bool. Defaults to True.) – Whether to reload artifacts or not.
auto_serialize_data (bool.) – Whether to auto serialize input data. Defaults to False for GenericModel, and True for other frameworks. data is required to be JSON serializable if auto_serialize_data=False. If auto_serialize_data is set to True, data will be serialized before being sent to the model deployment endpoint.
kwargs –
- content_type: str
Used to indicate the media type of the resource.
- image: PIL.Image object or uri for the image
A valid string path for an image file can be a local path, http(s), oci, s3, or gs.
- storage_options: dict
Passed to fsspec.open for a particular storage connection. Please see fsspec (https://filesystem-spec.readthedocs.io/en/latest/api.html#fsspec.open) for more details.
- Returns:
A dictionary which contains prediction results.
- Return type:
Dict
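Conceptually, verify() exercises the scoring logic in-process so the artifact can be tested before save() and deploy(), while predict() goes through the deployed endpoint. A self-contained sketch of that local round trip, where the Score class is a stand-in for a generated score.py (names and structure here are assumptions, not ADS code):

```python
import json
from typing import Any, Dict


class Score:
    """Stand-in for a generated score.py: load_model() plus predict()."""

    def load_model(self):
        # A toy estimator, mirroring the Toy example used elsewhere in these docs.
        return lambda x: x ** 2

    def predict(self, data: Any, model=None) -> Dict[str, Any]:
        model = model or self.load_model()
        return {"prediction": model(data)}


def local_verify(score: Score, data: Any) -> Dict[str, Any]:
    """Illustrative: test scoring locally, like verify() does conceptually."""
    # Round-trip the payload through JSON to mimic the serialization a
    # real deployment endpoint would perform.
    payload = json.loads(json.dumps(data))
    return score.predict(payload)
```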
- class ads.model.generic_model.GenericModel(estimator: Callable | None = None, artifact_dir: str | None = None, properties: ModelProperties | None = None, auth: Dict | None = None, serialize: bool = True, model_save_serializer: SERDE | None = None, model_input_serializer: SERDE | None = None, **kwargs: dict)[source]¶
Bases:
MetadataMixin
,Introspectable
,EvaluatorMixin
Generic Model class which is the base class for all the frameworks including the unsupported frameworks.
- auth¶
Default authentication is set using the ads.set_auth API. To override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create an authentication signer to instantiate an IdentityClient object.
- Type:
Dict
- estimator¶
Any model object generated by sklearn framework
- Type:
Callable
- metadata_custom¶
The model custom metadata.
- Type:
- metadata_provenance¶
The model provenance metadata.
- Type:
- metadata_taxonomy¶
The model taxonomy metadata.
- Type:
- model_artifact¶
This is built by calling prepare.
- Type:
- model_deployment¶
A ModelDeployment instance.
- Type:
ModelDeployment
- model_input_serializer¶
Instance of ads.model.SERDE. Used for serialize/deserialize data.
- Type:
SERDE
- properties¶
ModelProperties object required to save and deploy model.
- Type:
ModelProperties
- runtime_info¶
A RuntimeInfo instance.
- Type:
RuntimeInfo
- serialize¶
Whether to serialize the model to pkl file by default. If False, you need to serialize the model manually, save it under artifact_dir and update the score.py manually.
- Type:
bool
- from_model_artifact(uri, ..., **kwargs)[source]¶
Loads model from the specified folder, or zip/tar archive.
- from_model_deployment(model_deployment_id, ..., **kwargs)[source]¶
Loads model from model deployment.
- predict(data, ...)[source]¶
Returns prediction of input data run against the model deployment endpoint.
- prepare(..., **kwargs)[source]¶
Prepare and save the score.py, serialized model and runtime.yaml file.
- set_model_input_serializer(serde)[source]¶
Registers serializer used for serializing data passed in verify/predict.
Examples
>>> import tempfile
>>> from ads.model.generic_model import GenericModel
>>> class Toy:
...     def predict(self, x):
...         return x ** 2
>>> estimator = Toy()
>>> model = GenericModel(estimator=estimator, artifact_dir=tempfile.mkdtemp())
>>> model.summary_status()
>>> model.prepare(
...     inference_conda_env="dbexp_p38_cpu_v1",
...     inference_python_version="3.8",
...     model_file_name="toy_model.pkl",
...     training_id=None,
...     force_overwrite=True
... )
>>> model.verify(2)
>>> model.save()
>>> model.deploy()
>>> # Update access log id, freeform tags and description for the model deployment
>>> model.update_deployment(
...     access_log={"log_id": "<log_ocid>"},
...     description="Description for Custom Model",
...     freeform_tags={"key": "value"},
... )
>>> model.predict(2)
>>> # Uncomment the line below to delete the model and the associated model deployment
>>> # model.delete(delete_associated_model_deployment=True)
GenericModel Constructor.
- Parameters:
estimator ((Callable).) – Trained model.
artifact_dir ((str, optional). Defaults to None.) – Artifact directory to store the files needed for deployment.
properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.
auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.
serialize ((bool, optional). Defaults to True.) – Whether to serialize the model to pkl file by default. If False, you need to serialize the model manually, save it under artifact_dir and update the score.py manually.
model_save_serializer ((SERDE or str, optional). Defaults to None.) – Instance of ads.model.SERDE. Used for serialize/deserialize model.
model_input_serializer ((SERDE or str, optional). Defaults to None.) – Instance of ads.model.SERDE. Used for serialize/deserialize model input.
- classmethod delete(model_id: str | None = None, delete_associated_model_deployment: bool | None = False, delete_model_artifact: bool | None = False, artifact_dir: str | None = None, **kwargs: Dict) None [source]¶
Deletes a model from Model Catalog.
- Parameters:
model_id ((str, optional). Defaults to None.) – The model OCID to be deleted. If the method is called on the instance level, then self.model_id will be used.
delete_associated_model_deployment ((bool, optional). Defaults to False.) – Whether associated model deployments need to be deleted or not.
delete_model_artifact ((bool, optional). Defaults to False.) – Whether associated model artifacts need to be deleted or not.
artifact_dir ((str, optional). Defaults to None) – The local path to the model artifacts folder. If the method is called on the instance level, then self.artifact_dir will be used by default.
- Return type:
None
- Raises:
ValueError – If model_id not provided.
- delete_deployment(wait_for_completion: bool = True) None [source]¶
Deletes the current deployment.
- Parameters:
wait_for_completion ((bool, optional). Defaults to True.) – Whether to wait till completion.
- Return type:
None
- Raises:
ValueError – If there is no deployment attached yet.
- deploy(wait_for_completion: bool | None = True, display_name: str | None = None, description: str | None = None, deployment_instance_shape: str | None = None, deployment_instance_subnet_id: str | None = None, deployment_instance_private_endpoint_id: str | None = None, deployment_instance_count: int | None = None, deployment_bandwidth_mbps: int | None = None, deployment_log_group_id: str | None = None, deployment_access_log_id: str | None = None, deployment_predict_log_id: str | None = None, deployment_memory_in_gbs: float | None = None, deployment_ocpus: float | None = None, deployment_image: str | None = None, **kwargs: Dict) ModelDeployment [source]¶
Deploys a model. The model needs to be saved to the model catalog at first. You can deploy the model on either conda or container runtime. The customized runtime allows you to bring your own service container. To deploy model on container runtime, make sure to build the container and push it to OCIR. For more information, see https://docs.oracle.com/en-us/iaas/data-science/using/mod-dep-byoc.htm.
Example
>>> # This is an example to deploy model on container runtime
>>> model = GenericModel(estimator=estimator, artifact_dir=tempfile.mkdtemp())
>>> model.summary_status()
>>> model.prepare(
...     model_file_name="toy_model.pkl",
...     ignore_conda_error=True,  # set ignore_conda_error=True for container runtime
...     force_overwrite=True
... )
>>> model.verify()
>>> model.save()
>>> model.deploy(
...     deployment_image="iad.ocir.io/<namespace>/<image>:<tag>",
...     entrypoint=["python", "/opt/ds/model/deployed_model/api.py"],
...     server_port=5000,
...     health_check_port=5000,
...     environment_variables={"key": "value"}
... )
- Parameters:
wait_for_completion ((bool, optional). Defaults to True.) – Flag set for whether to wait for deployment to complete before proceeding.
display_name ((str, optional). Defaults to None.) – The name of the model deployment. If a display_name is not provided in kwargs, a randomly generated, easy-to-remember name with timestamp will be generated, like ‘strange-spider-2022-08-17-23:55.02’.
description ((str, optional). Defaults to None.) – The description of the model.
deployment_instance_shape ((str, optional). Default to VM.Standard2.1.) – The shape of the instance used for deployment.
deployment_instance_subnet_id ((str, optional). Default to None.) – The subnet id of the instance used for deployment.
deployment_instance_private_endpoint_id ((str, optional). Default to None.) – The private endpoint id of instance used for deployment.
deployment_instance_count ((int, optional). Defaults to 1.) – The number of instances used for deployment.
deployment_bandwidth_mbps ((int, optional). Defaults to 10.) – The bandwidth limit on the load balancer in Mbps.
deployment_memory_in_gbs ((float, optional). Defaults to None.) – Specifies the size of the memory of the model deployment instance in GBs.
deployment_ocpus ((float, optional). Defaults to None.) – Specifies the ocpus count of the model deployment instance.
deployment_log_group_id ((str, optional). Defaults to None.) – The oci logging group id. The access log and predict log share the same log group.
deployment_access_log_id ((str, optional). Defaults to None.) – The access log OCID for the access logs. https://docs.oracle.com/en-us/iaas/data-science/using/model_dep_using_logging.htm
deployment_predict_log_id ((str, optional). Defaults to None.) – The predict log OCID for the predict logs. https://docs.oracle.com/en-us/iaas/data-science/using/model_dep_using_logging.htm
deployment_image ((str, optional). Defaults to None.) – The OCIR path of the Docker container image. Required for deploying a model on container runtime.
kwargs –
- project_id: (str, optional).
Project OCID. If not specified, the value will be taken from the environment variables.
- compartment_id(str, optional).
Compartment OCID. If not specified, the value will be taken from the environment variables.
- max_wait_time(int, optional). Defaults to 1200 seconds.
Maximum amount of time to wait in seconds. Negative implies infinite wait time.
- poll_interval(int, optional). Defaults to 10 seconds.
Poll interval in seconds.
- freeform_tags: (Dict[str, str], optional). Defaults to None.
Freeform tags of the model deployment.
- defined_tags: (Dict[str, dict[str, object]], optional). Defaults to None.
Defined tags of the model deployment.
- image_digest: (str, optional). Defaults to None.
The digest of docker container image.
- cmd: (List, optional). Defaults to empty.
The command line arguments for running docker container image.
- entrypoint: (List, optional). Defaults to empty.
The entrypoint for running docker container image.
- server_port: (int, optional). Defaults to 8080.
The server port for docker container image.
- health_check_port: (int, optional). Defaults to 8080.
The health check port for docker container image.
- deployment_mode: (str, optional). Defaults to HTTPS_ONLY.
The deployment mode. Allowed values are: HTTPS_ONLY and STREAM_ONLY.
- input_stream_ids: (List, optional). Defaults to empty.
The input stream ids. Required for STREAM_ONLY mode.
- output_stream_ids: (List, optional). Defaults to empty.
The output stream ids. Required for STREAM_ONLY mode.
- environment_variables: (Dict, optional). Defaults to empty.
The environment variables for model deployment.
Also can be any keyword argument for initializing the ads.model.deployment.ModelDeploymentProperties. See ads.model.deployment.ModelDeploymentProperties() for details.
- Returns:
The ModelDeployment instance.
- Return type:
ModelDeployment
- Raises:
ValueError – If model_id is not specified.
- download_artifact(artifact_dir: str | None = None, auth: Dict | None = None, force_overwrite: bool | None = False, bucket_uri: str | None = None, remove_existing_artifact: bool | None = True, **kwargs) GenericModel [source]¶
Downloads model artifacts from the model catalog.
- Parameters:
artifact_dir ((str, optional). Defaults to None.) – The artifact directory to store the files needed for deployment. Will be created if not exists.
auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.
force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files or not.
bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for downloading large artifacts whose size is greater than 2GB. Example: oci://<bucket_name>@<namespace>/prefix/.
remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to object storage bucket need to be removed or not.
- Returns:
An instance of GenericModel class.
- Return type:
GenericModel
- Raises:
ValueError – If model_id is not available in the GenericModel object.
- classmethod from_id(ocid: str, model_file_name: str | None = None, artifact_dir: str | None = None, auth: Dict | None = None, force_overwrite: bool | None = False, properties: ModelProperties | Dict | None = None, bucket_uri: str | None = None, remove_existing_artifact: bool | None = True, ignore_conda_error: bool | None = False, download_artifact: bool | None = True, **kwargs) Self [source]¶
Loads model from model OCID or model deployment OCID.
- Parameters:
ocid (str) – The model OCID or model deployment OCID.
model_file_name ((str, optional). Defaults to None.) – The name of the serialized model.
artifact_dir ((str, optional). Defaults to None.) – The artifact directory to store the files needed for deployment. Will be created if not exists.
auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.
force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files or not.
properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.
bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for downloading large artifacts whose size is greater than 2GB. Example: oci://<bucket_name>@<namespace>/prefix/.
remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to object storage bucket need to be removed or not.
ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore error when collecting conda information.
download_artifact ((bool, optional). Defaults to True.) – Whether to download the model pickle or checkpoints.
kwargs –
- compartment_id(str, optional)
Compartment OCID. If not specified, the value will be taken from the environment variables.
- timeout(int, optional). Defaults to 10 seconds.
The connection timeout in seconds for the client.
- Returns:
An instance of GenericModel class.
- Return type:
Self
- classmethod from_model_artifact(uri: str, model_file_name: str | None = None, artifact_dir: str | None = None, auth: Dict | None = None, force_overwrite: bool | None = False, properties: ModelProperties | None = None, ignore_conda_error: bool | None = False, **kwargs: dict) Self [source]¶
Loads model from a folder, or zip/tar archive.
- Parameters:
uri (str) – The folder path, ZIP file path, or TAR file path. It could contain a serialized model (required) as well as any files needed for deployment, including: runtime.yaml, score.py, etc. The content of the folder will be copied to the artifact_dir folder.
model_file_name ((str, optional). Defaults to None.) – The serialized model file name. Will be extracted from artifacts if not provided.
artifact_dir ((str, optional). Defaults to None.) – The artifact directory to store the files needed for deployment. Will be created if not exists.
auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.
force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files or not.
properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.
ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore error when collecting conda information.
- Returns:
An instance of GenericModel class.
- Return type:
Self
- Raises:
ValueError – If model_file_name not provided.
- classmethod from_model_catalog(model_id: str, model_file_name: str | None = None, artifact_dir: str | None = None, auth: Dict | None = None, force_overwrite: bool | None = False, properties: ModelProperties | Dict | None = None, bucket_uri: str | None = None, remove_existing_artifact: bool | None = True, ignore_conda_error: bool | None = False, download_artifact: bool | None = True, **kwargs) Self [source]¶
Loads model from model catalog.
- Parameters:
model_id (str) – The model OCID.
model_file_name ((str, optional). Defaults to None.) – The name of the serialized model.
artifact_dir ((str, optional). Defaults to None.) – The artifact directory to store the files needed for deployment. Will be created if not exists.
auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.
force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files or not.
properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.
bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for downloading large artifacts whose size is greater than 2GB. Example: oci://<bucket_name>@<namespace>/prefix/.
remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to object storage bucket need to be removed or not.
ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore error when collecting conda information.
download_artifact ((bool, optional). Defaults to True.) – Whether to download the model pickle or checkpoints.
kwargs –
- compartment_id(str, optional)
Compartment OCID. If not specified, the value will be taken from the environment variables.
- timeout(int, optional). Defaults to 10 seconds.
The connection timeout in seconds for the client.
- region: (str, optional). Defaults to None.
The destination Object Storage bucket region. By default the value will be extracted from the OCI_REGION_METADATA environment variable.
- Returns:
An instance of GenericModel class.
- Return type:
Self
- classmethod from_model_deployment(model_deployment_id: str, model_file_name: str | None = None, artifact_dir: str | None = None, auth: Dict | None = None, force_overwrite: bool | None = False, properties: ModelProperties | Dict | None = None, bucket_uri: str | None = None, remove_existing_artifact: bool | None = True, ignore_conda_error: bool | None = False, download_artifact: bool | None = True, **kwargs) Self [source]¶
Loads model from model deployment.
- Parameters:
model_deployment_id (str) – The model deployment OCID.
model_file_name ((str, optional). Defaults to None.) – The name of the serialized model.
artifact_dir ((str, optional). Defaults to None.) – The artifact directory to store the files needed for deployment. Will be created if it does not exist.
auth ((Dict, optional). Defaults to None.) – The default authentication is set using the ads.set_auth API. If you need to override the default, use ads.common.auth.api_keys or ads.common.auth.resource_principal to create an appropriate authentication signer and the kwargs required to instantiate the IdentityClient object.
force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files or not.
properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.
bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for downloading large artifacts whose size is greater than 2 GB. Example: oci://<bucket_name>@<namespace>/prefix/.
remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to the object storage bucket need to be removed or not.
ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore errors when collecting conda information.
download_artifact ((bool, optional). Defaults to True.) – Whether to download the model pickle or checkpoints.
kwargs –
- compartment_id(str, optional)
Compartment OCID. If not specified, the value will be taken from the environment variables.
- timeout(int, optional). Defaults to 10 seconds.
The connection timeout in seconds for the client.
- region: (str, optional). Defaults to None.
The destination Object Storage bucket region. By default the value will be extracted from the OCI_REGION_METADATA environment variables.
- Returns:
An instance of GenericModel class.
- Return type:
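A minimal usage sketch (the deployment OCID and artifact directory below are placeholders, not real values):

```
# Hedged sketch: load a model back from an existing model deployment.
from ads.model.generic_model import GenericModel

model = GenericModel.from_model_deployment(
    model_deployment_id="<model_deployment_ocid>",  # placeholder OCID
    artifact_dir="/tmp/model_artifacts",            # created if it does not exist
    force_overwrite=True,
)
```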
- get_data_serializer()[source]¶
Gets data serializer.
- Returns:
object
- Return type:
ads.model.Serializer object.
- introspect() DataFrame [source]¶
Conducts introspection.
- Returns:
A pandas DataFrame which contains the introspection results.
- Return type:
pandas.DataFrame
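A minimal usage sketch (assumes a prepared model; the "Result" column name follows the introspection table shown later in this document):

```
# Hedged sketch: run introspection and list tests that did not pass.
result_df = model.introspect()          # pandas.DataFrame of test results
failed = result_df[result_df["Result"] != "Passed"]
print(failed)
```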
- property metadata_custom¶
- property metadata_provenance¶
- property metadata_taxonomy¶
- property model_deployment_id¶
- property model_id¶
- model_input_serializer_type¶
alias of
ModelInputSerializerType
- model_save_serializer_type¶
alias of
ModelSerializerType
- predict(data: Any | None = None, auto_serialize_data: bool = False, local: bool = False, **kwargs) Dict[str, Any] [source]¶
Returns prediction of input data run against the model deployment endpoint.
Examples
>>> uri = "https://github.com/pytorch/hub/raw/master/images/dog.jpg"
>>> prediction = model.predict(image=uri)['prediction']
>>> # examples on storage options
>>> prediction = model.predict(
...     image="oci://<bucket>@<tenancy>/myimage.png",
...     storage_options=ads.auth.default_signer()
... )['prediction']
- Parameters:
data (Any) – Data for the prediction. For ONNX models and the local serialization method, data can be any of the data types that each framework supports.
auto_serialize_data (bool.) – Whether to auto serialize input data. Defaults to False for GenericModel, and True for other frameworks. data is required to be JSON serializable if auto_serialize_data=False. If auto_serialize_data is set to True, data will be serialized before being sent to the model deployment endpoint.
local (bool.) – Whether to invoke the prediction locally. Defaults to False.
kwargs –
- content_type: str
Used to indicate the media type of the resource.
- image: PIL.Image object or URI for the image.
A valid string path for an image file can be a local path, http(s), oci, s3, or gs.
- storage_options: dict
Passed to fsspec.open for a particular storage connection. Please see fsspec (https://filesystem-spec.readthedocs.io/en/latest/api.html#fsspec.open) for more details.
- Returns:
Dictionary with the predicted values.
- Return type:
Dict[str, Any]
- Raises:
NotActiveDeploymentError – If model deployment process was not started or not finished yet.
ValueError – If model is not deployed yet or the endpoint information is not available.
- prepare(inference_conda_env: str | None = None, inference_python_version: str | None = None, training_conda_env: str | None = None, training_python_version: str | None = None, model_file_name: str | None = None, as_onnx: bool = False, initial_types: List[Tuple] | None = None, force_overwrite: bool = False, namespace: str = 'id19sfcrra6z', use_case_type: str | None = None, X_sample: list | tuple | DataFrame | Series | ndarray | None = None, y_sample: list | tuple | DataFrame | Series | ndarray | None = None, training_script_path: str | None = None, training_id: str | None = None, ignore_pending_changes: bool = True, max_col_num: int = 2000, ignore_conda_error: bool = False, score_py_uri: str | None = None, **kwargs: Dict) GenericModel [source]¶
Prepare and save the score.py, serialized model and runtime.yaml file.
- Parameters:
inference_conda_env ((str, optional). Defaults to None.) – Can be either slug or object storage path of the conda pack. You can only pass in slugs if the conda pack is a service pack.
inference_python_version ((str, optional). Defaults to None.) – Python version which will be used in deployment.
training_conda_env ((str, optional). Defaults to None.) – Can be either slug or object storage path of the conda pack. You can only pass in slugs if the conda pack is a service pack. If training_conda_env is not provided, training_conda_env will use the same value as inference_conda_env.
training_python_version ((str, optional). Defaults to None.) – Python version used during training.
model_file_name ((str, optional). Defaults to None.) – Name of the serialized model. Will be auto generated if not provided.
as_onnx ((bool, optional). Defaults to False.) – Whether to serialize as onnx model.
initial_types ((list[Tuple], optional).) – Defaults to None. Only used for SklearnModel, LightGBMModel and XGBoostModel. Each element is a tuple of a variable name and a type. Check this link http://onnx.ai/sklearn-onnx/api_summary.html#id2 for more explanation and examples for initial_types.
force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files.
namespace ((str, optional).) – Namespace of region. This is used for identifying which region the service pack is from when you pass a slug to inference_conda_env and training_conda_env.
use_case_type (str) – The use case type of the model. Use it through the UseCaseType class or a string provided in UseCaseType. For example, use_case_type=UseCaseType.BINARY_CLASSIFICATION or use_case_type="binary_classification". Check the UseCaseType class to see all supported types.
X_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of input data that will be used to generate input schema.
y_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of output data that will be used to generate output schema.
training_script_path (str. Defaults to None.) – Training script path.
training_id ((str, optional). Defaults to value from environment variables.) – The training OCID for model. Can be notebook session or job OCID.
ignore_pending_changes (bool. Defaults to True.) – Whether to ignore pending changes in git.
max_col_num ((int, optional). Defaults to utils.DATA_SCHEMA_MAX_COL_NUM.) – Do not generate the input schema if the input has more than this number of features (columns).
ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore error when collecting conda information.
score_py_uri ((str, optional). Defaults to None.) – The URI of the customized score.py, which can be a local path or OCI object storage URI. When this attribute is provided, score.py will not be auto generated, and the provided score.py will be added into artifact_dir.
kwargs –
- impute_values: (dict, optional).
The dictionary where the key is the column index (column name is also accepted for a pandas DataFrame) and the value is the impute value for the corresponding column.
- Raises:
FileExistsError – If files already exist but force_overwrite is False.
ValueError – If inference_python_version is not provided, but also cannot be found through manifest file.
- Returns:
An instance of GenericModel class.
- Return type:
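A minimal usage sketch (the conda slug, Python version, file name, and sample data are illustrative placeholders):

```
# Hedged sketch: prepare score.py, the serialized model, and runtime.yaml.
model.prepare(
    inference_conda_env="generalml_p38_cpu_v1",  # placeholder service pack slug
    inference_python_version="3.8",
    model_file_name="model.joblib",
    force_overwrite=True,
    X_sample=X_test[:5],                         # placeholder sample input
)
```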
- prepare_save_deploy(inference_conda_env: str | None = None, inference_python_version: str | None = None, training_conda_env: str | None = None, training_python_version: str | None = None, model_file_name: str | None = None, as_onnx: bool = False, initial_types: List[Tuple] | None = None, force_overwrite: bool = False, namespace: str = 'id19sfcrra6z', use_case_type: str | None = None, X_sample: list | tuple | DataFrame | Series | ndarray | None = None, y_sample: list | tuple | DataFrame | Series | ndarray | None = None, training_script_path: str | None = None, training_id: str | None = None, ignore_pending_changes: bool = True, max_col_num: int = 2000, ignore_conda_error: bool = False, model_display_name: str | None = None, model_description: str | None = None, model_freeform_tags: dict | None = None, model_defined_tags: dict | None = None, ignore_introspection: bool | None = False, wait_for_completion: bool | None = True, deployment_display_name: str | None = None, deployment_description: str | None = None, deployment_instance_shape: str | None = None, deployment_instance_subnet_id: str | None = None, deployment_instance_private_endpoint_id: str | None = None, deployment_instance_count: int | None = None, deployment_bandwidth_mbps: int | None = None, deployment_log_group_id: str | None = None, deployment_access_log_id: str | None = None, deployment_predict_log_id: str | None = None, deployment_memory_in_gbs: float | None = None, deployment_ocpus: float | None = None, deployment_image: str | None = None, bucket_uri: str | None = None, overwrite_existing_artifact: bool | None = True, remove_existing_artifact: bool | None = True, model_version_set: str | ModelVersionSet | None = None, version_label: str | None = None, model_by_reference: bool | None = False, **kwargs: Dict) ModelDeployment [source]¶
Shortcut for prepare, save and deploy steps.
- Parameters:
inference_conda_env ((str, optional). Defaults to None.) – Can be either slug or object storage path of the conda pack. You can only pass in slugs if the conda pack is a service pack.
inference_python_version ((str, optional). Defaults to None.) – Python version which will be used in deployment.
training_conda_env ((str, optional). Defaults to None.) – Can be either slug or object storage path of the conda pack. You can only pass in slugs if the conda pack is a service pack. If training_conda_env is not provided, training_conda_env will use the same value as inference_conda_env.
training_python_version ((str, optional). Defaults to None.) – Python version used during training.
model_file_name ((str, optional). Defaults to None.) – Name of the serialized model.
as_onnx ((bool, optional). Defaults to False.) – Whether to serialize as onnx model.
initial_types ((list[Tuple], optional).) – Defaults to None. Only used for SklearnModel, LightGBMModel and XGBoostModel. Each element is a tuple of a variable name and a type. Check this link http://onnx.ai/sklearn-onnx/api_summary.html#id2 for more explanation and examples for initial_types.
force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files.
namespace ((str, optional).) – Namespace of region. This is used for identifying which region the service pack is from when you pass a slug to inference_conda_env and training_conda_env.
use_case_type (str) – The use case type of the model. Use it through the UseCaseType class or a string provided in UseCaseType. For example, use_case_type=UseCaseType.BINARY_CLASSIFICATION or use_case_type="binary_classification". Check the UseCaseType class to see all supported types.
X_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of input data that will be used to generate input schema.
y_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of output data that will be used to generate output schema.
training_script_path (str. Defaults to None.) – Training script path.
training_id ((str, optional). Defaults to value from environment variables.) – The training OCID for model. Can be notebook session or job OCID.
ignore_pending_changes (bool. Defaults to True.) – Whether to ignore pending changes in git.
max_col_num ((int, optional). Defaults to utils.DATA_SCHEMA_MAX_COL_NUM.) – Do not generate the input schema if the input has more than this number of features (columns).
ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore error when collecting conda information.
model_display_name ((str, optional). Defaults to None.) – The name of the model. If model_display_name is not provided, a randomly generated, easy-to-remember name with a timestamp will be used, like 'strange-spider-2022-08-17-23:55.02'.
model_description ((str, optional). Defaults to None.) – The description of the model.
model_freeform_tags (Dict(str, str), Defaults to None.) – Freeform tags for the model.
model_defined_tags ((Dict(str, dict(str, object)), optional). Defaults to None.) – Defined tags for the model.
ignore_introspection ((bool, optional). Defaults to False.) – Determines whether to ignore the result of model introspection. If set to True, the save will ignore all model introspection errors.
wait_for_completion ((bool, optional). Defaults to True.) – Flag set for whether to wait for deployment to complete before proceeding.
deployment_display_name ((str, optional). Defaults to None.) – The name of the model deployment. If deployment_display_name is not provided, a randomly generated, easy-to-remember name with a timestamp will be used, like 'strange-spider-2022-08-17-23:55.02'.
deployment_description ((str, optional). Defaults to None.) – The description of the model deployment.
deployment_instance_shape ((str, optional). Defaults to VM.Standard2.1.) – The shape of the instance used for deployment.
deployment_instance_subnet_id ((str, optional). Defaults to None.) – The subnet id of the instance used for deployment.
deployment_instance_private_endpoint_id ((str, optional). Defaults to None.) – The private endpoint id of the instance used for deployment.
deployment_instance_count ((int, optional). Defaults to 1.) – The number of instances used for deployment.
deployment_bandwidth_mbps ((int, optional). Defaults to 10.) – The bandwidth limit on the load balancer in Mbps.
deployment_log_group_id ((str, optional). Defaults to None.) – The oci logging group id. The access log and predict log share the same log group.
deployment_access_log_id ((str, optional). Defaults to None.) – The access log OCID for the access logs. https://docs.oracle.com/en-us/iaas/data-science/using/model_dep_using_logging.htm
deployment_predict_log_id ((str, optional). Defaults to None.) – The predict log OCID for the predict logs. https://docs.oracle.com/en-us/iaas/data-science/using/model_dep_using_logging.htm
deployment_memory_in_gbs ((float, optional). Defaults to None.) – Specifies the size of the memory of the model deployment instance in GBs.
deployment_ocpus ((float, optional). Defaults to None.) – Specifies the ocpus count of the model deployment instance.
deployment_image ((str, optional). Defaults to None.) – The OCIR path of docker container image. Required for deploying model on container runtime.
bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for downloading large artifacts whose size is greater than 2 GB. Example: oci://<bucket_name>@<namespace>/prefix/.
overwrite_existing_artifact ((bool, optional). Defaults to True.) – Overwrite target bucket artifact if exists.
remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to the object storage bucket need to be removed or not.
model_version_set ((Union[str, ModelVersionSet], optional). Defaults to None.) – The Model version set OCID, or name, or ModelVersionSet instance.
version_label ((str, optional). Defaults to None.) – The model version label.
model_by_reference ((bool, optional)) – Whether model artifact is made available to Model Store by reference.
kwargs –
- impute_values: (dict, optional).
The dictionary where the key is the column index (column name is also accepted for a pandas DataFrame) and the value is the impute value for the corresponding column.
- project_id: (str, optional).
Project OCID. If not specified, the value will be taken either from the environment variables or model properties.
- compartment_id(str, optional).
Compartment OCID. If not specified, the value will be taken either from the environment variables or model properties.
- image_digest: (str, optional). Defaults to None.
The digest of docker container image.
- cmd: (List, optional). Defaults to empty.
The command line arguments for running docker container image.
- entrypoint: (List, optional). Defaults to empty.
The entrypoint for running docker container image.
- server_port: (int, optional). Defaults to 8080.
The server port for docker container image.
- health_check_port: (int, optional). Defaults to 8080.
The health check port for docker container image.
- deployment_mode: (str, optional). Defaults to HTTPS_ONLY.
The deployment mode. Allowed values are: HTTPS_ONLY and STREAM_ONLY.
- input_stream_ids: (List, optional). Defaults to empty.
The input stream ids. Required for STREAM_ONLY mode.
- output_stream_ids: (List, optional). Defaults to empty.
The output stream ids. Required for STREAM_ONLY mode.
- environment_variables: (Dict, optional). Defaults to empty.
The environment variables for model deployment.
- timeout: (int, optional). Defaults to 10 seconds.
The connection timeout in seconds for the client.
- max_wait_time(int, optional). Defaults to 1200 seconds.
Maximum amount of time to wait in seconds. Negative implies infinite wait time.
- poll_interval(int, optional). Defaults to 10 seconds.
Poll interval in seconds.
- freeform_tags: (Dict[str, str], optional). Defaults to None.
Freeform tags of the model deployment.
- defined_tags: (Dict[str, dict[str, object]], optional). Defaults to None.
Defined tags of the model deployment.
- region: (str, optional). Defaults to None.
The destination Object Storage bucket region. By default the value will be extracted from the OCI_REGION_METADATA environment variables.
Also can be any keyword argument for initializing the ads.model.deployment.ModelDeploymentProperties. See ads.model.deployment.ModelDeploymentProperties() for details.
- Returns:
The ModelDeployment instance.
- Return type:
- Raises:
FileExistsError – If files already exist but force_overwrite is False.
ValueError – If inference_python_version is not provided, but also cannot be found through manifest file.
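A minimal usage sketch (the conda slug, shape, and display names are placeholders; adjust to your tenancy):

```
# Hedged sketch: prepare, save, and deploy in a single call.
deployment = model.prepare_save_deploy(
    inference_conda_env="generalml_p38_cpu_v1",  # placeholder slug
    model_display_name="my-model",
    deployment_display_name="my-model-deployment",
    deployment_instance_shape="VM.Standard2.1",
    deployment_instance_count=1,
)
print(deployment.state)
```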
- reload() GenericModel [source]¶
Reloads the model artifact files: score.py and the runtime.yaml.
- Returns:
An instance of GenericModel class.
- Return type:
- reload_runtime_info() None [source]¶
Reloads the model artifact file: runtime.yaml.
- Returns:
Nothing.
- Return type:
None
- restart_deployment(max_wait_time: int = 1200, poll_interval: int = 10) ModelDeployment [source]¶
Restarts the current deployment.
- Parameters:
max_wait_time ((int, optional). Defaults to 1200 seconds.) – Maximum amount of time to wait for activate or deactivate in seconds. The total time to wait for a restart is twice this value. Negative implies infinite wait time.
poll_interval ((int, optional). Defaults to 10 seconds.) – Poll interval in seconds.
- Returns:
The ModelDeployment instance.
- Return type:
- save(bucket_uri: str | None = None, defined_tags: dict | None = None, description: str | None = None, display_name: str | None = None, featurestore_dataset=None, freeform_tags: dict | None = None, ignore_introspection: bool | None = False, model_version_set: str | ModelVersionSet | None = None, overwrite_existing_artifact: bool | None = True, parallel_process_count: int = 9, remove_existing_artifact: bool | None = True, reload: bool | None = True, version_label: str | None = None, model_by_reference: bool | None = False, **kwargs) str [source]¶
Saves model artifacts to the model catalog.
- Parameters:
display_name ((str, optional). Defaults to None.) – The name of the model. If display_name is not provided, a randomly generated, easy-to-remember name with a timestamp will be used, like 'strange-spider-2022-08-17-23:55.02'.
description ((str, optional). Defaults to None.) – The description of the model.
freeform_tags (Dict(str, str), Defaults to None.) – Freeform tags for the model.
defined_tags ((Dict(str, dict(str, object)), optional). Defaults to None.) – Defined tags for the model.
ignore_introspection ((bool, optional). Defaults to False.) – Determines whether to ignore the result of model introspection. If set to True, the save will ignore all model introspection errors.
bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for uploading large artifacts whose size is greater than 2 GB. Example: oci://<bucket_name>@<namespace>/prefix/.
overwrite_existing_artifact ((bool, optional). Defaults to True.) – Overwrite target bucket artifact if exists.
remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to object storage bucket need to be removed or not.
model_version_set ((Union[str, ModelVersionSet], optional). Defaults to None.) – The model version set OCID, or model version set name, or ModelVersionSet instance.
version_label ((str, optional). Defaults to None.) – The model version label.
featurestore_dataset ((Dataset, optional).) – The feature store dataset.
parallel_process_count ((int, optional)) – The number of worker processes to use in parallel for uploading individual parts of a multipart upload.
reload ((bool, optional)) – Whether to reload to check if load_model() works in score.py. Defaults to True.
model_by_reference ((bool, optional)) – Whether model artifact is made available to Model Store by reference.
kwargs –
- project_id: (str, optional).
Project OCID. If not specified, the value will be taken either from the environment variables or model properties.
- compartment_id(str, optional).
Compartment OCID. If not specified, the value will be taken either from the environment variables or model properties.
- region: (str, optional). Defaults to None.
The destination Object Storage bucket region. By default the value will be extracted from the OCI_REGION_METADATA environment variables.
- timeout: (int, optional). Defaults to 10 seconds.
The connection timeout in seconds for the client.
Also can be any attribute that oci.data_science.models.Model accepts.
- Raises:
RuntimeInfoInconsistencyError – When .runtime_info is not in sync with the runtime.yaml file.
- Returns:
The model id.
- Return type:
Examples
Example for saving large model artifacts (>2GB):
>>> model.save(
...     bucket_uri="oci://my-bucket@my-tenancy/",
...     overwrite_existing_artifact=True,
...     remove_existing_artifact=True,
...     parallel_process_count=9,
... )
- property schema_input¶
- property schema_output¶
- serialize_model(as_onnx: bool = False, initial_types: List[Tuple] | None = None, force_overwrite: bool = False, X_sample: any | None = None, **kwargs)[source]¶
Serialize and save model using ONNX or model specific method.
- Parameters:
as_onnx ((boolean, optional)) – If set as True, convert into ONNX model.
initial_types ((List[Tuple], optional)) – a python list. Each element is a tuple of a variable name and a data type.
force_overwrite ((boolean, optional)) – If set as True, overwrite serialized model if exists.
X_sample ((any, optional). Defaults to None.) – Contains model inputs such that model(X_sample) is a valid invocation of the model; used to validate the model input type.
- Returns:
Nothing
- Return type:
None
- set_model_input_serializer(model_input_serializer: str | SERDE)[source]¶
Registers serializer used for serializing data passed in verify/predict.
Examples
>>> generic_model.set_model_input_serializer(GenericModel.model_input_serializer_type.CLOUDPICKLE)
>>> # Register serializer by passing the name of it.
>>> generic_model.set_model_input_serializer("cloudpickle")
>>> # Example of creating customized model input serializer and registering it.
>>> from ads.model import SERDE
>>> from ads.model.generic_model import GenericModel
>>> class MySERDE(SERDE):
...     def __init__(self):
...         super().__init__()
...     def serialize(self, data):
...         serialized_data = 1
...         return serialized_data
...     def deserialize(self, data):
...         deserialized_data = 2
...         return deserialized_data
>>> class Toy:
...     def predict(self, x):
...         return x ** 2
>>> generic_model = GenericModel(
...     estimator=Toy(),
...     artifact_dir=tempfile.mkdtemp(),
...     model_input_serializer=MySERDE()
... )
>>> # Or register the serializer after creating model instance.
>>> generic_model.set_model_input_serializer(MySERDE())
- Parameters:
model_input_serializer ((str, or ads.model.SERDE)) – name of the serializer, or instance of SERDE.
- set_model_save_serializer(model_save_serializer: str | SERDE)[source]¶
Registers serializer used for saving model.
Examples
>>> generic_model.set_model_save_serializer(GenericModel.model_save_serializer_type.CLOUDPICKLE)
>>> # Register serializer by passing the name of it.
>>> generic_model.set_model_save_serializer("cloudpickle")
>>> # Example of creating customized model save serializer and registering it.
>>> from ads.model import SERDE
>>> from ads.model.generic_model import GenericModel
>>> class MySERDE(SERDE):
...     def __init__(self):
...         super().__init__()
...     def serialize(self, data):
...         serialized_data = 1
...         return serialized_data
...     def deserialize(self, data):
...         deserialized_data = 2
...         return deserialized_data
>>> class Toy:
...     def predict(self, x):
...         return x ** 2
>>> generic_model = GenericModel(
...     estimator=Toy(),
...     artifact_dir=tempfile.mkdtemp(),
...     model_save_serializer=MySERDE()
... )
>>> # Or register the serializer after creating model instance.
>>> generic_model.set_model_save_serializer(MySERDE())
- Parameters:
model_save_serializer ((ads.model.SERDE or str)) – name of the serializer or instance of SERDE.
- summary_status() DataFrame [source]¶
A summary table of the current status.
- Returns:
The summary table of the current status.
- Return type:
pd.DataFrame
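A minimal usage sketch:

```
# Hedged sketch: inspect the status of the prepare/save/deploy workflow.
status_df = model.summary_status()  # pandas.DataFrame
print(status_df)
```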
- update(**kwargs) GenericModel [source]¶
Updates model metadata in the Model Catalog. Updates only metadata information. The model artifacts are immutable and cannot be updated.
- Parameters:
kwargs –
- display_name: (str, optional). Defaults to None.
The name of the model.
- description: (str, optional). Defaults to None.
The description of the model.
- freeform_tags: Dict(str, str). Defaults to None.
Freeform tags for the model.
- defined_tags: (Dict(str, dict(str, object)), optional). Defaults to None.
Defined tags for the model.
- version_label: (str, optional). Defaults to None.
The model version label.
Additional kwargs arguments. Can be any attribute that oci.data_science.models.Model accepts.
- Returns:
An instance of GenericModel (self).
- Return type:
- Raises:
ValueError – if model not saved to the Model Catalog.
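A minimal usage sketch (assumes the model was already saved to the Model Catalog; the tag values are placeholders):

```
# Hedged sketch: update catalog metadata; artifacts stay unchanged.
model.update(
    display_name="renamed-model",
    description="Updated description",
    freeform_tags={"project": "demo"},
)
```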
- classmethod update_deployment(model_deployment_id: str | None = None, properties: ModelDeploymentProperties | dict | None = None, wait_for_completion: bool = True, max_wait_time: int = 1200, poll_interval: int = 10, **kwargs) ModelDeployment [source]¶
Updates a model deployment.
You can update model_deployment_configuration_details and change instance_shape and model_id when the model deployment is in the ACTIVE lifecycle state. The bandwidth_mbps or instance_count can only be updated while the model deployment is in the INACTIVE state. Changes to the bandwidth_mbps or instance_count will take effect the next time the ActivateModelDeployment action is invoked on the model deployment resource.
Examples
>>> # Update access log id, freeform tags and description for the model deployment
>>> model.update_deployment(
...     access_log={
...         "log_id": "<log_ocid>"
...     },
...     description="Description for Custom Model",
...     freeform_tags={"key": "value"},
... )
- Parameters:
model_deployment_id (str.) – The model deployment OCID. Defaults to None. If the method is called at the instance level, then self.model_deployment.model_deployment_id will be used.
properties (ModelDeploymentProperties or dict) – The properties for updating the deployment.
wait_for_completion (bool) – Flag set for whether to wait for deployment to complete before proceeding. Defaults to True.
max_wait_time (int) – Maximum amount of time to wait in seconds (Defaults to 1200). Negative implies infinite wait time.
poll_interval (int) – Poll interval in seconds (Defaults to 10).
kwargs –
- auth: (Dict, optional). Defaults to None.
The default authentication is set using the ads.set_auth API. If you need to override the default, use ads.common.auth.api_keys or ads.common.auth.resource_principal to create an appropriate authentication signer and the kwargs required to instantiate the IdentityClient object.
- display_name: (str)
Model deployment display name
- description: (str)
Model deployment description
- freeform_tags: (dict)
Model deployment freeform tags
- defined_tags: (dict)
Model deployment defined tags
Additional kwargs arguments. Can be any attribute that ads.model.deployment.ModelDeploymentCondaRuntime, ads.model.deployment.ModelDeploymentContainerRuntime and ads.model.deployment.ModelDeploymentInfrastructure accept.
- Returns:
An instance of ModelDeployment class.
- Return type:
- update_summary_action(detail: str, action: str)[source]¶
Update the actions needed from the user in the summary table.
- upload_artifact(uri: str, auth: Dict | None = None, force_overwrite: bool | None = False, parallel_process_count: int = 9) None [source]¶
Uploads model artifacts to the provided uri. The artifacts will be zipped before uploading.
- Parameters:
uri (str) –
The destination location for the model artifacts, which can be a local path or OCI object storage URI. Examples:
>>> upload_artifact(uri="/some/local/folder/")
>>> upload_artifact(uri="oci://bucket@namespace/prefix/")
auth ((Dict, optional). Defaults to None.) – The default authentication is set using the ads.set_auth API. If you need to override the default, use ads.common.auth.api_keys or ads.common.auth.resource_principal to create an appropriate authentication signer and the kwargs required to instantiate the IdentityClient object.
force_overwrite (bool) – Overwrite the target directory if it exists.
parallel_process_count ((int, optional)) – The number of worker processes to use in parallel for uploading individual parts of a multipart upload.
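A minimal usage sketch (bucket and namespace are placeholders):

```
# Hedged sketch: zip the artifact directory and upload it to Object Storage.
model.upload_artifact(
    uri="oci://<bucket>@<namespace>/artifacts/",
    force_overwrite=True,
    parallel_process_count=9,
)
```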
- verify(data: Any | None = None, reload_artifacts: bool = True, auto_serialize_data: bool = False, **kwargs) Dict[str, Any] [source]¶
Test if deployment works in local environment.
Examples
>>> uri = "https://github.com/pytorch/hub/raw/master/images/dog.jpg"
>>> prediction = model.verify(image=uri)['prediction']
>>> # examples on storage options
>>> prediction = model.verify(
...     image="oci://<bucket>@<tenancy>/myimage.png",
...     storage_options=ads.auth.default_signer()
... )['prediction']
- Parameters:
data (Any) – Data used to test if deployment works in local environment.
reload_artifacts (bool. Defaults to True.) – Whether to reload artifacts or not.
is_json_payload (bool) – Defaults to False. Indicates whether to send data with an application/json MIME type.
auto_serialize_data (bool.) – Whether to auto serialize input data. Defaults to False for GenericModel, and True for other frameworks. data is required to be JSON serializable if auto_serialize_data=False. If auto_serialize_data is set to True, data will be serialized before being sent to the model deployment endpoint.
kwargs –
content_type: str, used to indicate the media type of the resource. image: a PIL.Image object or a URI for the image.
A valid string path for an image file can be a local path or an http(s), oci, s3, or gs URI.
- storage_options: dict
Passed to fsspec.open for a particular storage connection. Please see fsspec (https://filesystem-spec.readthedocs.io/en/latest/api.html#fsspec.open) for more details.
- Returns:
A dictionary which contains prediction results.
- Return type:
Dict
- class ads.model.generic_model.ModelDeploymentRuntimeType[source]¶
Bases:
object
- CONDA = 'conda'¶
- CONTAINER = 'container'¶
- class ads.model.generic_model.ModelState(value)[source]¶
Bases:
Enum
An enumeration.
- AVAILABLE = 'Available'¶
- DONE = 'Done'¶
- NEEDSACTION = 'Needs Action'¶
- NOTAPPLICABLE = 'Not Applicable'¶
- NOTAVAILABLE = 'Not Available'¶
- exception ads.model.generic_model.SerializeInputNotImplementedError[source]¶
Bases:
NotImplementedError
- exception ads.model.generic_model.SerializeModelNotImplementedError[source]¶
Bases:
NotImplementedError
- class ads.model.generic_model.SummaryStatus[source]¶
Bases:
object
SummaryStatus class which tracks the status of the model frameworks.
- update_action(detail: str, action: str) None [source]¶
Updates the action of the summary status table of the corresponding detail.
ads.model.model_introspect module¶
The module that helps to minimize the number of errors of the model post-deployment process. The model provides a simple testing harness to ensure that model artifacts are thoroughly tested before being saved to the model catalog.
Classes¶
- ModelIntrospect
Class to introspect model artifacts.
Examples
>>> model_introspect = ModelIntrospect(artifact=model_artifact)
>>> model_introspect()
... Test key Test name Result Message
... ----------------------------------------------------------------------------
... test_key_1 test_name_1 Passed test passed
... test_key_2 test_name_2 Not passed some error occurred
>>> model_introspect.status
... Passed
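The testing-harness idea can be sketched as follows. This is a hypothetical simplification, not the real ModelIntrospect internals: `run_introspection` and its `tests` mapping are made up for illustration.

```python
def run_introspection(tests):
    """Run a mapping of test_key -> (test_name, callable) and collect
    one result row per check: a passing check records 'Passed', a
    failing one records 'Not passed' with its error message."""
    rows = []
    for key, (name, check) in tests.items():
        try:
            check()
            rows.append((key, name, "Passed", "test passed"))
        except Exception as exc:
            rows.append((key, name, "Not passed", str(exc)))
    return rows
```

The overall status would then be "Passed" only if every row passed, mirroring the `model_introspect.status` property shown above.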
- class ads.model.model_introspect.Introspectable[source]¶
Bases:
ABC
Base class that represents an introspectable object.
- exception ads.model.model_introspect.IntrospectionNotPassed[source]¶
Bases:
ValueError
- class ads.model.model_introspect.ModelIntrospect(artifact: Introspectable)[source]¶
Bases:
object
Class to introspect model artifacts.
- Parameters:
Examples
>>> model_introspect = ModelIntrospect(artifact=model_artifact)
>>> result = model_introspect()
...     Test key        Test name       Result       Message
... ----------------------------------------------------------------------------
...     test_key_1      test_name_1     Passed       test passed
...     test_key_2      test_name_2     Not passed   some error occurred
Initializes the Model Introspect.
- Parameters:
artifact (Introspectable) – The instance of ModelArtifact object.
- Raises:
ValueError – If the model artifact object is not provided.
TypeError – If the provided input parameter is not a ModelArtifact instance.
- property failures: int¶
Calculates the number of failures.
- Returns:
The number of failures.
- Return type:
ads.model.model_metadata module¶
- class ads.model.model_metadata.Framework[source]¶
Bases:
ExtendedEnum
- BERT = 'bert'¶
- CUML = 'cuml'¶
- EMBEDDING_ONNX = 'embedding_onnx'¶
- EMCEE = 'emcee'¶
- ENSEMBLE = 'ensemble'¶
- FLAIR = 'flair'¶
- GENSIM = 'gensim'¶
- H20 = 'h2o'¶
- KERAS = 'keras'¶
- LIGHT_GBM = 'lightgbm'¶
- MXNET = 'mxnet'¶
- NLTK = 'nltk'¶
- ORACLE_AUTOML = 'oracle_automl'¶
- OTHER = 'other'¶
- PROPHET = 'prophet'¶
- PYMC3 = 'pymc3'¶
- PYOD = 'pyod'¶
- PYSTAN = 'pystan'¶
- PYTORCH = 'pytorch'¶
- SCIKIT_LEARN = 'scikit-learn'¶
- SKTIME = 'sktime'¶
- SPACY = 'spacy'¶
- SPARK = 'pyspark'¶
- STATSMODELS = 'statsmodels'¶
- TENSORFLOW = 'tensorflow'¶
- TRANSFORMERS = 'transformers'¶
- WORD2VEC = 'word2vec'¶
- XGBOOST = 'xgboost'¶
- class ads.model.model_metadata.MetadataCustomCategory[source]¶
Bases:
ExtendedEnum
- OTHER = 'Other'¶
- PERFORMANCE = 'Performance'¶
- TRAINING_AND_VALIDATION_DATASETS = 'Training and Validation Datasets'¶
- TRAINING_ENV = 'Training Environment'¶
- TRAINING_PROFILE = 'Training Profile'¶
- class ads.model.model_metadata.MetadataCustomKeys[source]¶
Bases:
ExtendedEnum
- CLIENT_LIBRARY = 'ClientLibrary'¶
- CONDA_ENVIRONMENT = 'CondaEnvironment'¶
- CONDA_ENVIRONMENT_PATH = 'CondaEnvironmentPath'¶
- ENVIRONMENT_TYPE = 'EnvironmentType'¶
- MODEL_ARTIFACTS = 'ModelArtifacts'¶
- MODEL_FILE_NAME = 'ModelFileName'¶
- MODEL_SERIALIZATION_FORMAT = 'ModelSerializationFormat'¶
- SLUG_NAME = 'SlugName'¶
- TRAINING_DATASET = 'TrainingDataset'¶
- TRAINING_DATASET_NUMBER_OF_COLS = 'TrainingDatasetNumberOfCols'¶
- TRAINING_DATASET_NUMBER_OF_ROWS = 'TrainingDatasetNumberOfRows'¶
- TRAINING_DATASET_SIZE = 'TrainingDatasetSize'¶
- VALIDATION_DATASET = 'ValidationDataset'¶
- VALIDATION_DATASET_NUMBER_OF_COLS = 'ValidationDataSetNumberOfCols'¶
- VALIDATION_DATASET_NUMBER_OF_ROWS = 'ValidationDatasetNumberOfRows'¶
- VALIDATION_DATASET_SIZE = 'ValidationDatasetSize'¶
- class ads.model.model_metadata.MetadataCustomPrintColumns[source]¶
Bases:
ExtendedEnum
- CATEGORY = 'Category'¶
- DESCRIPTION = 'Description'¶
- HAS_ARTIFACT = 'HasArtifact'¶
- KEY = 'Key'¶
- VALUE = 'Value'¶
- exception ads.model.model_metadata.MetadataDescriptionTooLong(key: str, length: int)[source]¶
Bases:
ValueError
Maximum allowed length of metadata description has been exceeded. See https://docs.oracle.com/en-us/iaas/data-science/using/models_saving_catalog.htm for more details.
- exception ads.model.model_metadata.MetadataSizeTooLarge(size: int)[source]¶
Bases:
ValueError
Maximum allowed size for model metadata has been exceeded. See https://docs.oracle.com/en-us/iaas/data-science/using/models_saving_catalog.htm for more details.
- class ads.model.model_metadata.MetadataTaxonomyKeys[source]¶
Bases:
ExtendedEnum
- ALGORITHM = 'Algorithm'¶
- ARTIFACT_TEST_RESULT = 'ArtifactTestResults'¶
- FRAMEWORK = 'Framework'¶
- FRAMEWORK_VERSION = 'FrameworkVersion'¶
- HYPERPARAMETERS = 'Hyperparameters'¶
- USE_CASE_TYPE = 'UseCaseType'¶
- class ads.model.model_metadata.MetadataTaxonomyPrintColumns[source]¶
Bases:
ExtendedEnum
- HAS_ARTIFACT = 'HasArtifact'¶
- KEY = 'Key'¶
- VALUE = 'Value'¶
- exception ads.model.model_metadata.MetadataValueTooLong(key: str, length: int)[source]¶
Bases:
ValueError
Maximum allowed length of metadata value has been exceeded. See https://docs.oracle.com/en-us/iaas/data-science/using/models_saving_catalog.htm for more details.
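The 255-character limit mentioned in the add() documentation below suggests a check along these lines. This is a hypothetical sketch, not the ADS source; the authoritative limits live in the OCI model catalog documentation.

```python
# Assumed limit, taken from the add() docstring in this module.
METADATA_VALUE_MAX_LEN = 255

class MetadataValueTooLong(ValueError):
    """Raised when a metadata value exceeds the allowed length."""
    def __init__(self, key, length):
        super().__init__(
            f"The value of metadata item '{key}' is {length} characters "
            f"long, which exceeds the limit of {METADATA_VALUE_MAX_LEN}."
        )

def check_value(key, value):
    """Validate a single metadata value against the length limit."""
    if len(value) > METADATA_VALUE_MAX_LEN:
        raise MetadataValueTooLong(key, len(value))
```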
- class ads.model.model_metadata.ModelCustomMetadata[source]¶
Bases:
ModelMetadata
Class that represents Model Custom Metadata.
- get(self, key: str) ModelCustomMetadataItem ¶
Returns the model metadata item by provided key.
- to_dict(self)¶
Serializes model metadata into a dictionary.
- from_dict(cls) ModelCustomMetadata [source]¶
Constructs model metadata from dictionary.
- to_yaml(self)¶
Serializes model metadata into a YAML.
- add(self, key: str, value: str, description: str = '', category: str = MetadataCustomCategory.OTHER, replace: bool = False) None: [source]¶
Adds a new model metadata item. Replaces existing one if replace flag is True.
- to_json(self)¶
Serializes model metadata into a JSON.
- to_json_file(self, file_path: str, storage_options: dict = None) None ¶
Saves the metadata to a local file or object storage.
Examples
>>> metadata_custom = ModelCustomMetadata()
>>> metadata_custom.add(key="format", value="pickle")
>>> metadata_custom.add(key="note", value="important note", description="some description")
>>> metadata_custom["format"].description = "some description"
>>> metadata_custom.to_dataframe()
       Key           Value       Description      Category
----------------------------------------------------------------------------
0   format          pickle  some description  user defined
1     note  important note  some description  user defined
>>> metadata_custom
metadata:
- category: user defined
  description: some description
  key: format
  value: pickle
- category: user defined
  description: some description
  key: note
  value: important note
>>> metadata_custom.remove("format")
>>> metadata_custom
metadata:
- category: user defined
  description: some description
  key: note
  value: important note
>>> metadata_custom.to_dict()
{'metadata': [{'key': 'note', 'value': 'important note', 'category': 'user defined', 'description': 'some description'}]}
>>> metadata_custom.reset()
>>> metadata_custom
metadata:
- category: None
  description: None
  key: note
  value: None
>>> metadata_custom.clear()
>>> metadata_custom.to_dataframe()
       Key           Value       Description      Category
----------------------------------------------------------------------------
Initializes custom model metadata.
- add(key: str, value: str, description: str = '', category: str = 'Other', replace: bool = False) None [source]¶
Adds a new model metadata item. Overrides the existing one if replace flag is True.
- Parameters:
- Returns:
Nothing.
- Return type:
None
- Raises:
TypeError – If the provided key is not a string. If the provided description is not a string.
ValueError – If the provided key is empty. If the provided value is empty. If the provided value cannot be serialized to JSON. If an item with the provided key is already registered and the replace flag is False. If the provided category is not supported.
MetadataValueTooLong – If the length of the provided value exceeds 255 characters.
MetadataDescriptionTooLong – If the length of the provided description exceeds 255 characters.
- classmethod from_dict(data: Dict) ModelCustomMetadata [source]¶
Constructs an instance of ModelCustomMetadata from a dictionary.
- Parameters:
data (Dict) – Model metadata in a dictionary format.
- Returns:
An instance of model custom metadata.
- Return type:
- Raises:
ValueError – In case of the wrong input data format.
- isempty() bool [source]¶
Checks if metadata is empty.
- Returns:
True if metadata is empty, False otherwise.
- Return type:
- remove(key: str) None [source]¶
Removes a model metadata item.
- Parameters:
key (str) – The key of the metadata item that should be removed.
- Returns:
Nothing.
- Return type:
None
- set_training_data(path: str, data_size: str | None = None)[source]¶
Adds training_data path and data size information into model custom metadata.
- class ads.model.model_metadata.ModelCustomMetadataItem(key: str, value: str | None = None, description: str | None = None, category: str | None = None, has_artifact: bool = False)[source]¶
Bases:
ModelTaxonomyMetadataItem
Class that represents model custom metadata item.
- from_dict(cls) ModelCustomMetadataItem ¶
Constructs model metadata item from dictionary.
- to_yaml(self)¶
Serializes model metadata item to YAML.
- update(self, value: str = '', description: str = '', category: str = '') None [source]¶
Updates metadata item information.
- to_json(self) JSON ¶
Serializes metadata item into a JSON.
- to_json_file(self, file_path: str, storage_options: dict = None) None ¶
Saves the metadata item value to a local file or object storage.
- reset() None [source]¶
Resets model metadata item.
Resets value, description and category to None.
- Returns:
Nothing.
- Return type:
None
- update(value: str, description: str, category: str, has_artifact: bool = False) None [source]¶
Updates metadata item.
- validate() bool [source]¶
Validates metadata item.
- Returns:
True if validation passed.
- Return type:
- Raises:
ValueError – If invalid category provided.
MetadataValueTooLong – If value exceeds the length limit.
- class ads.model.model_metadata.ModelMetadata[source]¶
Bases:
ABC
The base abstract class representing model metadata.
- get(self, key: str) ModelMetadataItem [source]¶
Returns the model metadata item by provided key.
- from_dict(cls) ModelMetadata [source]¶
Constructs model metadata from dictionary.
- to_json_file(self, file_path: str, storage_options: dict = None) None [source]¶
Saves the metadata to a local file or object storage.
Initializes Model Metadata.
- abstract from_dict(data: Dict) ModelMetadata [source]¶
Constructs an instance of ModelMetadata from a dictionary.
- Parameters:
data (Dict) – Model metadata in a dictionary format.
- Returns:
An instance of model metadata.
- Return type:
- get(key: str, value: ~typing.Any | None = <object object>) ModelMetadataItem | Any [source]¶
Returns the model metadata item by provided key.
- Parameters:
- Returns:
The model metadata item.
- Return type:
- Raises:
ValueError – If provided key is empty or metadata item not found.
- property keys: Tuple[str]¶
Returns all registered metadata keys.
- Returns:
The list of metadata keys.
- Return type:
Tuple[str]
- reset() None [source]¶
Resets all model metadata items to empty values.
Resets value, description and category to None for every metadata item.
- size() int [source]¶
Returns the size of the model metadata in bytes.
- Returns:
The size of model metadata in bytes.
- Return type:
- abstract to_dataframe() DataFrame [source]¶
Returns the model metadata list in a data frame format.
- Returns:
The model metadata in a dataframe format.
- Return type:
pandas.DataFrame
- to_dict()[source]¶
Serializes model metadata into a dictionary.
- Returns:
The model metadata in a dictionary representation.
- Return type:
Dict
- to_json()[source]¶
Serializes model metadata into a JSON.
- Returns:
The model metadata in a JSON representation.
- Return type:
JSON
- to_json_file(file_path: str, storage_options: dict | None = None) None [source]¶
Saves the metadata to a local file or object storage.
- Parameters:
file_path (str) – The file path to store the data. “oci://bucket_name@namespace/folder_name/” “oci://bucket_name@namespace/folder_name/metadata.json” “path/to/local/folder” “path/to/local/folder/metadata.json”
storage_options (dict. Default None) – Parameters passed on to the backend filesystem class. Defaults to options set using DatasetFactory.set_default_storage().
- Returns:
Nothing.
- Return type:
None
- Raises:
ValueError – When the file path is empty.
TypeError – When the file path is not a string.
Examples
>>> metadata = ModelTaxonomyMetadataItem()
>>> storage_options = {"config": oci.config.from_file(os.path.join("~/.oci", "config"))}
>>> storage_options
{'log_requests': False,
 'additional_user_agent': '',
 'pass_phrase': None,
 'user': '<user-id>',
 'fingerprint': '05:15:2b:b1:46:8a:32:ec:e2:69:5b:32:01:**:**:**',
 'tenancy': '<tenancy-id>',
 'region': 'us-ashburn-1',
 'key_file': '/home/datascience/.oci/oci_api_key.pem'}
>>> metadata.to_json_file(file_path='oci://bucket_name@namespace/folder_name/metadata_taxonomy.json', storage_options=storage_options)
>>> metadata.to_json_file("path/to/local/folder/metadata_taxonomy.json")
- to_yaml()[source]¶
Serializes model metadata into a YAML.
- Returns:
The model metadata in a YAML representation.
- Return type:
Yaml
- validate() bool [source]¶
Validates model metadata.
- Returns:
True if metadata is valid.
- Return type:
- validate_size() bool [source]¶
Validates model metadata size.
Validates the size of metadata. Throws an error if the size of the metadata exceeds expected value.
- Returns:
True if metadata size is valid.
- Return type:
- Raises:
MetadataSizeTooLarge – If the size of the metadata exceeds expected value.
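A plausible reading of size() and validate_size() — an assumption for illustration, not the exact ADS code — is that the metadata is serialized to JSON and measured in bytes, then compared against a catalog limit. The limit value below is hypothetical.

```python
import json

METADATA_SIZE_LIMIT = 32000  # hypothetical limit, for illustration only

class MetadataSizeTooLarge(ValueError):
    """Raised when serialized metadata exceeds the allowed size."""

def metadata_size_bytes(metadata):
    """Size of the metadata as the byte length of its JSON serialization."""
    return len(json.dumps(metadata).encode("utf-8"))

def validate_size(metadata):
    """Return True if the metadata fits under the limit, else raise."""
    size = metadata_size_bytes(metadata)
    if size > METADATA_SIZE_LIMIT:
        raise MetadataSizeTooLarge(f"Metadata is {size} bytes.")
    return True
```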
- class ads.model.model_metadata.ModelMetadataItem[source]¶
Bases:
ABC
The base abstract class representing model metadata item.
- from_dict(cls, data: Dict) ModelMetadataItem [source]¶
Constructs an instance of ModelMetadataItem from a dictionary.
- to_json_file(self, file_path: str, storage_options: dict = None) None [source]¶
Saves the metadata item value to a local file or object storage.
- classmethod from_dict(data: Dict) ModelMetadataItem [source]¶
Constructs an instance of ModelMetadataItem from a dictionary.
- Parameters:
data (Dict) – Metadata item in a dictionary format.
- Returns:
An instance of model metadata item.
- Return type:
- size() int [source]¶
Returns the size of the model metadata in bytes.
- Returns:
The size of model metadata in bytes.
- Return type:
- to_dict() dict [source]¶
Serializes model metadata item to dictionary.
- Returns:
The dictionary representation of model metadata item.
- Return type:
- to_json()[source]¶
Serializes metadata item into a JSON.
- Returns:
The metadata item in a JSON representation.
- Return type:
JSON
- to_json_file(file_path: str, storage_options: dict | None = None) None [source]¶
Saves the metadata item value to a local file or object storage.
- Parameters:
file_path (str) – The file path to store the data. “oci://bucket_name@namespace/folder_name/” “oci://bucket_name@namespace/folder_name/result.json” “path/to/local/folder” “path/to/local/folder/result.json”
storage_options (dict. Default None) – Parameters passed on to the backend filesystem class. Defaults to options set using DatasetFactory.set_default_storage().
- Returns:
Nothing.
- Return type:
None
- Raises:
ValueError – When the file path is empty.
TypeError – When the file path is not a string.
Examples
>>> metadata_item = ModelCustomMetadataItem(key="key1", value="value1")
>>> storage_options = {"config": oci.config.from_file(os.path.join("~/.oci", "config"))}
>>> storage_options
{'log_requests': False,
 'additional_user_agent': '',
 'pass_phrase': None,
 'user': '<user-id>',
 'fingerprint': '05:15:2b:b1:46:8a:32:ec:e2:69:5b:32:01:**:**:**',
 'tenancy': '<tenancy-id>',
 'region': 'us-ashburn-1',
 'key_file': '/home/datascience/.oci/oci_api_key.pem'}
>>> metadata_item.to_json_file(file_path='oci://bucket_name@namespace/folder_name/file.json', storage_options=storage_options)
>>> metadata_item.to_json_file("path/to/local/folder/file.json")
- class ads.model.model_metadata.ModelProvenanceMetadata(repo: str | None = None, git_branch: str | None = None, git_commit: str | None = None, repository_url: str | None = None, training_script_path: str | None = None, training_id: str | None = None, artifact_dir: str | None = None)[source]¶
Bases:
DataClassSerializable
ModelProvenanceMetadata class.
Examples
>>> provenance_metadata = ModelProvenanceMetadata.fetch_training_code_details()
ModelProvenanceMetadata(repo=<git.repo.base.Repo '/home/datascience/.git'>, git_branch='master', git_commit='99ad04c31803f1d4ffcc3bf4afbd6bcf69a06af2', repository_url='file:///home/datascience', "", "")
>>> provenance_metadata.assert_path_not_dirty("your_path", ignore=False)
- assert_path_not_dirty(path: str, ignore: bool)[source]¶
Checks if all the changes in this path have been committed.
- Parameters:
path (str) – The path to check.
ignore (bool) – Whether to ignore the changes or not.
- Raises:
ChangesNotCommitted – If there are changes that have not been committed.
- Returns:
Nothing.
- Return type:
None
- classmethod fetch_training_code_details(training_script_path: str | None = None, training_id: str | None = None, artifact_dir: str | None = None)[source]¶
Fetches the training code details: repo, git_branch, git_commit, repository_url, training_script_path and training_id.
- Parameters:
- Returns:
A ModelProvenanceMetadata instance.
- Return type:
- classmethod from_dict(data: Dict[str, str]) ModelProvenanceMetadata [source]¶
Constructs an instance of ModelProvenanceMetadata from a dictionary.
- class ads.model.model_metadata.ModelTaxonomyMetadata[source]¶
Bases:
ModelMetadata
Class that represents Model Taxonomy Metadata.
- get(self, key: str) ModelTaxonomyMetadataItem ¶
Returns the model metadata item by provided key.
- to_dict(self)¶
Serializes model metadata into a dictionary.
- from_dict(cls) ModelTaxonomyMetadata [source]¶
Constructs model metadata from dictionary.
- to_yaml(self)¶
Serializes model metadata into a YAML.
- to_json(self)¶
Serializes model metadata into a JSON.
- to_json_file(self, file_path: str, storage_options: dict = None) None ¶
Saves the metadata to a local file or object storage.
Examples
>>> metadata_taxonomy = ModelTaxonomyMetadata()
>>> metadata_taxonomy.to_dataframe()
                Key                   Value
--------------------------------------------
0       UseCaseType   binary_classification
1         Framework                 sklearn
2  FrameworkVersion                   0.2.2
3         Algorithm               algorithm
4   Hyperparameters                      {}
>>> metadata_taxonomy.reset()
>>> metadata_taxonomy.to_dataframe()
                Key    Value
--------------------------------------------
0       UseCaseType     None
1         Framework     None
2  FrameworkVersion     None
3         Algorithm     None
4   Hyperparameters     None
>>> metadata_taxonomy
metadata:
- key: UseCaseType
  category: None
  description: None
  value: None
Initializes Model Metadata.
- classmethod from_dict(data: Dict) ModelTaxonomyMetadata [source]¶
Constructs an instance of ModelTaxonomyMetadata from a dictionary.
- Parameters:
data (Dict) – Model metadata in a dictionary format.
- Returns:
An instance of model taxonomy metadata.
- Return type:
- Raises:
ValueError – In case of the wrong input data format.
- class ads.model.model_metadata.ModelTaxonomyMetadataItem(key: str, value: str | None = None, has_artifact: bool = False)[source]¶
Bases:
ModelMetadataItem
Class that represents model taxonomy metadata item.
- to_dict(self) Dict ¶
Serializes model metadata item to dictionary.
- from_dict(cls) ModelTaxonomyMetadataItem ¶
Constructs model metadata item from dictionary.
- to_yaml(self)¶
Serializes model metadata item to YAML.
- to_json(self) JSON ¶
Serializes metadata item into a JSON.
- to_json_file(self, file_path: str, storage_options: dict = None) None ¶
Saves the metadata item value to a local file or object storage.
- reset() None [source]¶
Resets model metadata item.
Resets value to None.
- Returns:
Nothing.
- Return type:
None
- update(value: str, has_artifact: bool = False) None [source]¶
Updates metadata item value.
- Parameters:
value (str) – The value of model metadata item.
- Returns:
Nothing.
- Return type:
None
- validate() bool [source]¶
Validates metadata item.
- Returns:
True if validation passed.
- Return type:
- Raises:
ValueError – If invalid UseCaseType provided. If invalid Framework provided.
- class ads.model.model_metadata.UseCaseType[source]¶
Bases:
ExtendedEnum
- ANOMALY_DETECTION = 'anomaly_detection'¶
- BINARY_CLASSIFICATION = 'binary_classification'¶
- CLUSTERING = 'clustering'¶
- DIMENSIONALITY_REDUCTION = 'dimensionality_reduction/representation'¶
- IMAGE_CLASSIFICATION = 'image_classification'¶
- MULTINOMIAL_CLASSIFICATION = 'multinomial_classification'¶
- NER = 'ner'¶
- OBJECT_LOCALIZATION = 'object_localization'¶
- OTHER = 'other'¶
- RECOMMENDER = 'recommender'¶
- REGRESSION = 'regression'¶
- SENTIMENT_ANALYSIS = 'sentiment_analysis'¶
- TIME_SERIES_FORECASTING = 'time_series_forecasting'¶
- TOPIC_MODELING = 'topic_modeling'¶
ads.model.model_metadata_mixin module¶
- class ads.model.model_metadata_mixin.MetadataMixin[source]¶
Bases:
object
MetadataMixin class which populates the custom metadata, taxonomy metadata, input/output schema and provenance metadata.
- populate_metadata(use_case_type: str | None = None, data_sample: ADSData | None = None, X_sample: list | tuple | DataFrame | Series | ndarray | None = None, y_sample: list | tuple | DataFrame | Series | ndarray | None = None, training_script_path: str | None = None, training_id: str | None = None, ignore_pending_changes: bool = True, max_col_num: int = 2000, ignore_conda_error: bool = False, **kwargs)[source]¶
Populates the input schema and output schema. If a schema exceeds the limit of 32 KB, it is saved as a JSON file in the artifact directory.
- Parameters:
use_case_type ((str, optional). Defaults to None.) – The use case type of the model.
data_sample ((ADSData, optional). Defaults to None.) – A sample of the data that will be used to generate input_schema and output_schema.
X_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of input data that will be used to generate input schema.
y_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of output data that will be used to generate output schema.
training_script_path (str. Defaults to None.) – Training script path.
training_id ((str, optional). Defaults to None.) – The training model OCID.
ignore_pending_changes (bool. Defaults to True.) – Ignore the pending changes in git.
max_col_num ((int, optional). Defaults to utils.DATA_SCHEMA_MAX_COL_NUM.) – The maximum number of columns allowed in auto generated schema.
- Returns:
Nothing.
- Return type:
None
- populate_schema(data_sample: ADSData | None = None, X_sample: List | Tuple | DataFrame | Series | ndarray | None = None, y_sample: List | Tuple | DataFrame | Series | ndarray | None = None, max_col_num: int = 2000, **kwargs)[source]¶
Populates the input and output schemas. If a schema exceeds the limit of 32 KB, it is saved as a JSON file in the artifact directory.
- Parameters:
data_sample (ADSData) – A sample of the data that will be used to generate input_schema and output_schema.
X_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]) – A sample of input data that will be used to generate the input schema.
y_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]) – A sample of output data that will be used to generate the output schema.
max_col_num ((int, optional). Defaults to utils.DATA_SCHEMA_MAX_COL_NUM.) – The maximum number of columns allowed in auto generated schema.
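The auto-generated schema idea can be sketched with plain Python. This is a hypothetical simplification — ADS derives richer feature types from pandas/numpy samples — and `infer_schema` with its list-of-dicts input is made up for illustration.

```python
def infer_schema(rows, max_col_num=2000):
    """Derive a minimal column schema from sample rows (a list of dicts):
    one entry per column with its name and the Python type of the sample
    value. Refuses to generate a schema past the column limit."""
    if not rows:
        return []
    columns = list(rows[0])
    if len(columns) > max_col_num:
        raise ValueError(f"Too many columns: {len(columns)} > {max_col_num}")
    return [
        {"name": col, "dtype": type(rows[0][col]).__name__}
        for col in columns
    ]
```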
ads.model.model_properties module¶
- class ads.model.model_properties.ModelProperties(inference_conda_env: str | None = None, inference_python_version: str | None = None, training_conda_env: str | None = None, training_python_version: str | None = None, training_resource_id: str | None = None, training_script_path: str | None = None, training_id: str | None = None, compartment_id: str | None = None, project_id: str | None = None, bucket_uri: str | None = None, remove_existing_artifact: bool | None = None, overwrite_existing_artifact: bool | None = None, deployment_instance_shape: str | None = None, deployment_instance_subnet_id: str | None = None, deployment_instance_private_endpoint_id: str | None = None, deployment_instance_count: int | None = None, deployment_bandwidth_mbps: int | None = None, deployment_log_group_id: str | None = None, deployment_access_log_id: str | None = None, deployment_predict_log_id: str | None = None, deployment_memory_in_gbs: float | int | None = None, deployment_ocpus: float | int | None = None, deployment_image: str | None = None)[source]¶
Bases:
BaseProperties
Represents properties required to save and deploy model.
ads.model.model_version_set module¶
- class ads.model.model_version_set.ModelVersionSet(spec: Dict | None = None, **kwargs)[source]¶
Bases:
Builder
Represents Model Version Set.
- delete(self, delete_model: bool | None = False) "ModelVersionSet": [source]¶
Removes a model version set.
- from_dict(cls, config: dict) 'ModelVersionSet' [source]¶
Load a model version set instance from a dictionary of configurations.
Examples
>>> mvs = (ModelVersionSet()
...        .with_compartment_id(os.environ["PROJECT_COMPARTMENT_OCID"])
...        .with_project_id(os.environ["PROJECT_OCID"])
...        .with_name("test_experiment")
...        .with_description("Experiment number one"))
>>> mvs.create()
>>> mvs.model_add(model_ocid, version_label="Version label 1")
>>> mvs.model_list()
>>> mvs.details_link
... https://console.<region>.oraclecloud.com/data-science/model-version-sets/<ocid>
>>> mvs.delete()
Initializes a model version set.
- Parameters:
spec ((Dict, optional). Defaults to None.) – Object specification.
kwargs (Dict) –
Specification as keyword arguments. If ‘spec’ contains the same key as the one in kwargs, the value from kwargs will be used.
project_id: str
compartment_id: str
name: str
description: str
defined_tags: Dict[str, Dict[str, object]]
freeform_tags: Dict[str, str]
- CONST_COMPARTMENT_ID = 'compartmentId'¶
- CONST_DEFINED_TAG = 'definedTags'¶
- CONST_DESCRIPTION = 'description'¶
- CONST_FREEFORM_TAG = 'freeformTags'¶
- CONST_ID = 'id'¶
- CONST_NAME = 'name'¶
- CONST_PROJECT_ID = 'projectId'¶
- LIFECYCLE_STATE_ACTIVE = 'ACTIVE'¶
- LIFECYCLE_STATE_DELETED = 'DELETED'¶
- LIFECYCLE_STATE_DELETING = 'DELETING'¶
- LIFECYCLE_STATE_FAILED = 'FAILED'¶
- attribute_map = {'compartmentId': 'compartment_id', 'definedTags': 'defined_tags', 'description': 'description', 'freeformTags': 'freeform_tags', 'id': 'id', 'name': 'name', 'projectId': 'project_id'}¶
- create(**kwargs) ModelVersionSet [source]¶
Creates a model version set.
- Parameters:
kwargs – Additional keyword arguments.
- Returns:
The ModelVersionSet instance (self)
- Return type:
- delete(delete_model: bool | None = False) ModelVersionSet [source]¶
Removes a model version set.
- Parameters:
delete_model ((bool, optional). Defaults to False.) – A model version set can only be deleted if all the models associated with it are already in the DELETED state. You can optionally set this parameter to True (the deleteRelatedModels boolean query parameter) to delete all associated models for you.
- Returns:
The ModelVersionSet instance (self).
- Return type:
- property details_link: str¶
Link to details page in OCI console.
- Returns:
Link to details page in OCI console.
- Return type:
- classmethod from_dict(config: dict) ModelVersionSet [source]¶
Load a model version set instance from a dictionary of configurations.
- Parameters:
config (dict) – A dictionary of configurations.
- Returns:
The model version set instance.
- Return type:
- classmethod from_dsc_model_version_set(dsc_model_version_set: DataScienceModelVersionSet) ModelVersionSet [source]¶
Initialize a ModelVersionSet instance from a DataScienceModelVersionSet.
- Parameters:
dsc_model_version_set (DataScienceModelVersionSet) – An instance of DataScienceModelVersionSet.
- Returns:
An instance of ModelVersionSet.
- Return type:
- classmethod from_id(id: str) ModelVersionSet [source]¶
Gets an existing model version set by OCID.
- Parameters:
id (str) – The model version set OCID.
- Returns:
An instance of ModelVersionSet.
- Return type:
- classmethod from_name(name: str, compartment_id: str | None = None) ModelVersionSet [source]¶
Gets an existing model version set by name.
- Parameters:
- Returns:
An instance of ModelVersionSet.
- Return type:
- classmethod from_ocid(ocid: str) ModelVersionSet [source]¶
Gets an existing model version set by OCID.
- Parameters:
id (str) – The model version set OCID.
- Returns:
An instance of ModelVersionSet.
- Return type:
- property kind: str¶
The kind of the object as showing in YAML.
- Returns:
“modelVersionSet”
- Return type:
- classmethod list(compartment_id: str | None = None, category: str = 'USER', **kwargs) List[ModelVersionSet] [source]¶
List model version sets in a given compartment.
- Parameters:
compartment_id (str) – The OCID of compartment.
category ((str, optional). Defaults to USER.) – The category of Model. Allowed values are: “USER”, “SERVICE”
kwargs – Additional keyword arguments for filtering model version sets.
- Returns:
The list of model version sets.
- Return type:
List[ModelVersionSet]
- model_add(model_id: str, version_label: str | None = None, **kwargs) None [source]¶
Adds a new model to the model version set.
- Parameters:
- Returns:
Nothing.
- Return type:
None
- Raises:
ModelVersionSetNotSaved – If the model version set has not been saved yet.
- models(**kwargs) List[DataScienceModel] [source]¶
Gets list of models associated with a model version set.
- Parameters:
kwargs –
- project_id: str
Project OCID.
- lifecycle_state: str
Filter results by the specified lifecycle state. Must be a valid state for the resource type. Allowed values are: “ACTIVE”, “DELETED”, “FAILED”, “INACTIVE”
Can be any attribute that oci.data_science.data_science_client.DataScienceClient.list_models accepts.
- Returns:
List of models associated with the model version set.
- Return type:
List[DataScienceModel]
- Raises:
ModelVersionSetNotSaved – If the model version set has not been saved yet.
- property status: str | None¶
Status of the model version set.
- Returns:
Status of the model version set.
- Return type:
- to_dict() dict [source]¶
Serializes model version set to a dictionary.
- Returns:
The model version set serialized as a dictionary.
- Return type:
- update() ModelVersionSet [source]¶
Updates a model version set.
- Returns:
The ModelVersionSet instance (self).
- Return type:
- with_compartment_id(compartment_id: str) ModelVersionSet [source]¶
Sets the compartment OCID.
- Parameters:
compartment_id (str) – The compartment OCID.
- Returns:
The ModelVersionSet instance (self)
- Return type:
- with_defined_tags(**kwargs: Dict[str, Dict[str, object]]) ModelVersionSet [source]¶
Sets defined tags.
- Returns:
The ModelVersionSet instance (self)
- Return type:
- with_description(description: str) ModelVersionSet [source]¶
Sets the description.
- Parameters:
description (str) – The description of the model version set.
- Returns:
The ModelVersionSet instance (self)
- Return type:
- with_freeform_tags(**kwargs: Dict[str, str]) ModelVersionSet [source]¶
Sets freeform tags.
- Returns:
The ModelVersionSet instance (self)
- Return type:
- with_name(name: str) ModelVersionSet [source]¶
Sets the name of the model version set.
- Parameters:
name (str) – The name of the model version set.
- Returns:
The ModelVersionSet instance (self)
- Return type:
- with_project_id(project_id: str) ModelVersionSet [source]¶
Sets the project OCID.
- Parameters:
project_id (str) – The project OCID.
- Returns:
The ModelVersionSet instance (self)
- Return type:
- ads.model.model_version_set.experiment(name: str, create_if_not_exists: bool | None = True, **kwargs: Dict)[source]¶
Context manager helping to operate with model version set.
- Parameters:
name (str) – The name of the model version set.
create_if_not_exists ((bool, optional). Defaults to True.) – Creates model version set if not exists.
kwargs ((Dict, optional).) –
- compartment_id: (str, optional). Defaults to value from the environment variables.
The compartment OCID.
- project_id: (str, optional). Defaults to value from the environment variables.
The project OCID.
- description: (str, optional). Defaults to None.
The description of the model version set.
- Yields:
ModelVersionSet – The model version set object.
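The context-manager pattern above can be sketched as follows (the experiment name is a placeholder; OCI credentials and a Data Science project are assumed, with compartment and project OCIDs taken from environment variables by default):

```python
# Sketch only: requires OCI credentials and a Data Science project.
from ads.model import experiment

with experiment(name="my-experiment", create_if_not_exists=True) as mvs:
    # Models saved inside this block are associated with the
    # "my-experiment" model version set.
    print(mvs.name)
```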
Module contents¶
- class ads.model.AutoMLModel(estimator: Callable, artifact_dir: str, properties: ModelProperties | None = None, auth: Dict = None, model_save_serializer: SERDE | None = None, model_input_serializer: SERDE | None = None, **kwargs)[source]¶
Bases:
FrameworkSpecificModel
AutoMLModel class for estimators from AutoML framework.
- auth¶
Default authentication is set using the ads.set_auth API. To override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create an authentication signer to instantiate an IdentityClient object.
- Type:
Dict
- estimator¶
A trained automl estimator/model using oracle automl.
- Type:
Callable
- metadata_custom¶
The model custom metadata.
- Type:
- metadata_provenance¶
The model provenance metadata.
- Type:
- metadata_taxonomy¶
The model taxonomy metadata.
- Type:
- model_artifact¶
This is built by calling prepare.
- Type:
- model_deployment¶
A ModelDeployment instance.
- Type:
- properties¶
ModelProperties object required to save and deploy model. For more details, check https://accelerated-data-science.readthedocs.io/en/latest/ads.model.html#module-ads.model.model_properties.
- Type:
- runtime_info¶
A RuntimeInfo instance.
- Type:
- serialize¶
Whether to serialize the model to pkl file by default. If False, you need to serialize the model manually, save it under artifact_dir and update the score.py manually.
- Type:
- delete_deployment(...)¶
Deletes the current model deployment.
- deploy(..., \*\*kwargs)¶
Deploys a model.
- from_model_artifact(uri, model_file_name, artifact_dir, ..., \*\*kwargs)¶
Loads model from the specified folder, or zip/tar archive.
- from_model_catalog(model_id, model_file_name, artifact_dir, ..., \*\*kwargs)¶
Loads model from model catalog.
- introspect(...)¶
Runs model introspection.
- predict(data, ...)¶
Returns prediction of input data run against the model deployment endpoint.
- prepare(..., \*\*kwargs)¶
Prepare and save the score.py, serialized model and runtime.yaml file.
- reload(...)¶
Reloads the model artifact files: score.py and the runtime.yaml.
- save(..., \*\*kwargs)¶
Saves model artifacts to the model catalog.
- summary_status(...)¶
Gets a summary table of the current status.
- verify(data, ...)¶
Tests if deployment works in local environment.
Examples
>>> import tempfile
>>> import logging
>>> import warnings
>>> from ads.automl.driver import AutoML
>>> from ads.automl.provider import OracleAutoMLProvider
>>> from ads.dataset.dataset_browser import DatasetBrowser
>>> from ads.model.framework.automl_model import AutoMLModel
>>> from ads.model.model_metadata import UseCaseType
>>> ds = DatasetBrowser.sklearn().open("wine").set_target("target")
>>> train, test = ds.train_test_split(test_size=0.1, random_state=42)
>>> ml_engine = OracleAutoMLProvider(n_jobs=-1, loglevel=logging.ERROR)
>>> oracle_automl = AutoML(train, provider=ml_engine)
>>> model, baseline = oracle_automl.train(
...     model_list=['LogisticRegression', 'DecisionTreeClassifier'],
...     random_state=42,
...     time_budget=500,
... )
>>> automl_model = AutoMLModel(estimator=model, artifact_dir=tempfile.mkdtemp())
>>> automl_model.prepare(inference_conda_env=inference_conda_env, force_overwrite=True)
>>> automl_model.verify(...)
>>> automl_model.save()
>>> model_deployment = automl_model.deploy(wait_for_completion=False)
Initializes an AutoMLModel instance.
- Parameters:
estimator (Callable) – Any model object generated by automl framework.
artifact_dir (str) – Directory for generated artifacts.
properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.
auth ((Dict, optional). Defaults to None.) – The default authentication is set using the ads.set_auth API. If you need to override the default, use ads.common.auth.api_keys or ads.common.auth.resource_principal to create an appropriate authentication signer and the kwargs required to instantiate an IdentityClient object.
model_save_serializer ((SERDE or str, optional). Defaults to None.) – Instance of ads.model.SERDE. Used for serialize/deserialize model.
model_input_serializer ((SERDE, optional). Defaults to None.) – Instance of ads.model.SERDE. Used for serialize/deserialize data.
- Returns:
AutoMLModel instance.
- Return type:
- Raises:
TypeError – If the input model is not an AutoML model.
- class ads.model.DataScienceModel(spec: Dict | None = None, **kwargs)[source]¶
Bases:
Builder
Represents a Data Science Model.
- input_schema¶
Model input schema.
- Type:
ads.feature_engineering.Schema
- output_schema¶
Model output schema.
- Type:
ads.feature_engineering.Schema, Dict
- defined_metadata_list¶
Model defined metadata.
- Type:
- custom_metadata_list¶
Model custom metadata.
- Type:
- provenance_metadata¶
Model provenance metadata.
- Type:
- artifact¶
The artifact location. Can be either path to folder with artifacts or path to zip archive.
- Type:
- backup_setting¶
The value to assign to the backup_setting property of this CreateModelDetails.
- Type:
ModelBackupSetting
- retention_setting¶
The value to assign to the retention_setting property of this CreateModelDetails.
- Type:
ModelRetentionSetting
- retention_operation_details¶
The value to assign to the retention_operation_details property for the Model.
- Type:
ModelRetentionOperationDetails
- backup_operation_details¶
The value to assign to the backup_operation_details property for the Model.
- Type:
ModelBackupOperationDetails
- delete(self, delete_associated_model_deployment: bool | None = False) 'DataScienceModel' [source]¶
Removes model.
- from_dict(cls, config: dict) 'DataScienceModel' [source]¶
Loads model instance from a dictionary of configurations.
- list(cls, compartment_id: str = None, \*\*kwargs) List['DataScienceModel'] [source]¶
Lists datascience models in a given compartment.
- sync(self):
Syncs up a datascience model with the OCI datascience model resource.
- with_compartment_id(self, compartment_id: str) 'DataScienceModel' [source]¶
Sets the compartment ID.
- with_freeform_tags(self, \*\*kwargs: Dict[str, str]) 'DataScienceModel' [source]¶
Sets freeform tags.
- with_defined_tags(self, \*\*kwargs: Dict[str, Dict[str, object]]) 'DataScienceModel' [source]¶
Sets defined tags.
- with_input_schema(self, schema: Schema | Dict) 'DataScienceModel' [source]¶
Sets the model input schema.
- with_output_schema(self, schema: Schema | Dict) 'DataScienceModel' [source]¶
Sets the model output schema.
- with_defined_metadata_list(self, metadata: ModelTaxonomyMetadata | Dict) 'DataScienceModel' [source]¶
Sets model taxonomy (defined) metadata.
- with_custom_metadata_list(self, metadata: ModelCustomMetadata | Dict) 'DataScienceModel' [source]¶
Sets model custom metadata.
- with_provenance_metadata(self, metadata: ModelProvenanceMetadata | Dict) 'DataScienceModel' [source]¶
Sets model provenance metadata.
- with_artifact(self, \*uri: str)[source]¶
Sets the artifact location. Can be a path to a local folder of artifacts or to a zip archive. For models created by reference, uri can take a single argument or multiple arguments in the case of a fine-tuned or multi-model setting.
- with_model_version_set_id(self, model_version_set_id: str):
Sets the model version set ID.
- with_version_label(self, version_label: str):
Sets the model version label.
- with_version_id(self, version_id: str):
Sets the model version id.
- with_model_file_description: dict
Sets path details for models created by reference. Input can be a dict, a string, or a JSON file; the schema is dictated by model_file_description_schema.json.
Examples
>>> ds_model = (DataScienceModel()
...             .with_compartment_id(os.environ["NB_SESSION_COMPARTMENT_OCID"])
...             .with_project_id(os.environ["PROJECT_OCID"])
...             .with_display_name("TestModel")
...             .with_description("Testing the test model")
...             .with_freeform_tags(tag1="val1", tag2="val2")
...             .with_artifact("/path/to/the/model/artifacts/"))
>>> ds_model.create()
>>> ds_model.status()
>>> ds_model.with_description("new description").update()
>>> ds_model.download_artifact("/path/to/dst/folder/")
>>> ds_model.delete()
>>> DataScienceModel.list()
Initializes datascience model.
- Parameters:
spec ((Dict, optional). Defaults to None.) – Object specification.
kwargs (Dict) –
Specification as keyword arguments. If ‘spec’ contains the same key as the one in kwargs, the value from kwargs will be used.
project_id: str
compartment_id: str
name: str
description: str
defined_tags: Dict[str, Dict[str, object]]
freeform_tags: Dict[str, str]
input_schema: Union[ads.feature_engineering.Schema, Dict]
output_schema: Union[ads.feature_engineering.Schema, Dict]
defined_metadata_list: Union[ModelTaxonomyMetadata, Dict]
custom_metadata_list: Union[ModelCustomMetadata, Dict]
provenance_metadata: Union[ModelProvenanceMetadata, Dict]
artifact: str
- CONST_ARTIFACT = 'artifact'¶
- CONST_BACKUP_OPERATION_DETAILS = 'backupOperationDetails'¶
- CONST_BACKUP_SETTING = 'backupSetting'¶
- CONST_COMPARTMENT_ID = 'compartmentId'¶
- CONST_CUSTOM_METADATA = 'customMetadataList'¶
- CONST_DEFINED_METADATA = 'definedMetadataList'¶
- CONST_DEFINED_TAG = 'definedTags'¶
- CONST_DESCRIPTION = 'description'¶
- CONST_DISPLAY_NAME = 'displayName'¶
- CONST_FREEFORM_TAG = 'freeformTags'¶
- CONST_ID = 'id'¶
- CONST_INPUT_SCHEMA = 'inputSchema'¶
- CONST_LIFECYCLE_DETAILS = 'lifecycleDetails'¶
- CONST_LIFECYCLE_STATE = 'lifecycleState'¶
- CONST_MODEL_FILE_DESCRIPTION = 'modelDescription'¶
- CONST_MODEL_VERSION_ID = 'versionId'¶
- CONST_MODEL_VERSION_LABEL = 'versionLabel'¶
- CONST_MODEL_VERSION_SET_ID = 'modelVersionSetId'¶
- CONST_MODEL_VERSION_SET_NAME = 'modelVersionSetName'¶
- CONST_OUTPUT_SCHEMA = 'outputSchema'¶
- CONST_PROJECT_ID = 'projectId'¶
- CONST_PROVENANCE_METADATA = 'provenanceMetadata'¶
- CONST_RETENTION_OPERATION_DETAILS = 'retentionOperationDetails'¶
- CONST_RETENTION_SETTING = 'retentionSetting'¶
- CONST_TIME_CREATED = 'timeCreated'¶
- add_artifact(uri: str | None = None, namespace: str | None = None, bucket: str | None = None, prefix: str | None = None, files: List[str] | None = None)[source]¶
Adds information about objects in a specified bucket to the model description JSON.
- Parameters:
uri (str, optional) – The URI representing the location of the artifact in OCI object storage.
namespace (str, optional) – The namespace of the bucket containing the objects. Required if uri is not provided.
bucket (str, optional) – The name of the bucket containing the objects. Required if uri is not provided.
prefix (str, optional) – The prefix of the objects to add. Defaults to None. Cannot be provided if files is provided.
files (list of str, optional) – A list of file names to include in the model description. If provided, only objects with matching file names will be included. Cannot be provided if prefix is provided.
- Return type:
None
- Raises:
ValueError – If both uri and (namespace and bucket) are provided; if neither uri nor both namespace and bucket are provided; if both prefix and files are provided; or if no files are found to add to the model description.
Note
If files is not provided, it retrieves information about all objects in the bucket.
If files is provided, it only retrieves information about objects with matching file names.
If no objects are found to add to the model description, a ValueError is raised.
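A sketch of the two calling conventions described above (bucket, namespace, and file names are placeholders):

```python
# Sketch only: registers by-reference artifact locations in the model
# description JSON; requires OCI credentials to read object metadata.
from ads.model import DataScienceModel

ds_model = DataScienceModel()

# Option 1: a full Object Storage URI (cannot be combined with
# namespace/bucket).
ds_model.add_artifact(uri="oci://<bucket_name>@<namespace>/models/model1/")

# Option 2: namespace and bucket with an explicit file list (prefix and
# files are mutually exclusive).
ds_model.add_artifact(
    namespace="<namespace>",
    bucket="<bucket_name>",
    files=["model.pkl", "score.py"],
)
```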
- attribute_map = {'artifact': 'artifact', 'backupOperationDetails': 'backup_operation_details', 'backupSetting': 'backup_setting', 'compartmentId': 'compartment_id', 'customMetadataList': 'custom_metadata_list', 'definedMetadataList': 'defined_metadata_list', 'definedTags': 'defined_tags', 'description': 'description', 'displayName': 'display_name', 'freeformTags': 'freeform_tags', 'id': 'id', 'inputSchema': 'input_schema', 'lifecycleDetails': 'lifecycle_details', 'lifecycleState': 'lifecycle_state', 'modelDescription': 'model_description', 'modelVersionSetId': 'model_version_set_id', 'modelVersionSetName': 'model_version_set_name', 'outputSchema': 'output_schema', 'projectId': 'project_id', 'provenanceMetadata': 'provenance_metadata', 'retentionOperationDetails': 'retention_operation_details', 'retentionSetting': 'retention_setting', 'timeCreated': 'time_created', 'versionId': 'version_id', 'versionLabel': 'version_label'}¶
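The attribute_map above pairs each camelCase spec key with the snake_case field name used by oci.data_science.models.Model. A minimal, self-contained sketch of that translation, using only a small subset of the map:

```python
# Illustration only: translate camelCase spec keys to the snake_case
# names used by the OCI SDK, using a subset of attribute_map.
attribute_map = {
    "compartmentId": "compartment_id",
    "displayName": "display_name",
    "freeformTags": "freeform_tags",
}

spec = {
    "compartmentId": "ocid1.compartment.oc1..example",
    "displayName": "TestModel",
}
sdk_kwargs = {attribute_map[key]: value for key, value in spec.items()}
print(sdk_kwargs)
# {'compartment_id': 'ocid1.compartment.oc1..example', 'display_name': 'TestModel'}
```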
- property backup_operation_details: ModelBackupOperationDetails¶
Gets the backup_operation_details of this Model using the spec constant.
- Returns:
The backup_operation_details of this Model.
- Return type:
ModelBackupOperationDetails
- property backup_setting: ModelBackupSetting¶
Gets the backup_setting of this model.
- Returns:
The backup_setting of this model.
- Return type:
ModelBackupSetting
- create(**kwargs) DataScienceModel [source]¶
Creates datascience model.
- Parameters:
kwargs –
Additional kwargs arguments. Can be any attribute that oci.data_science.models.Model accepts.
In addition, the attributes listed below can also be provided.
- bucket_uri: (str, optional). Defaults to None.
The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for uploading large artifacts whose size exceeds 2 GB. Example: oci://<bucket_name>@<namespace>/prefix/.
Added in version 2.8.10: If artifact is provided as an object storage path to a zip archive, bucket_uri will be ignored.
- overwrite_existing_artifact: (bool, optional). Defaults to True.
Overwrite target bucket artifact if exists.
- remove_existing_artifact: (bool, optional). Defaults to True.
Whether artifacts uploaded to the Object Storage bucket need to be removed or not.
- region: (str, optional). Defaults to None.
The destination Object Storage bucket region. By default the value will be extracted from the OCI_REGION_METADATA environment variable.
- auth: (Dict, optional). Defaults to None.
The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.
- timeout: (int, optional). Defaults to 10 seconds.
The connection timeout in seconds for the client.
- parallel_process_count: (int, optional).
The number of worker processes to use in parallel for uploading individual parts of a multipart upload.
- model_by_reference: (bool, optional)
Whether model artifact is made available to Model Store by reference. Requires artifact location to be provided using with_artifact method.
- Returns:
The DataScienceModel instance (self)
- Return type:
- Raises:
ValueError – If compartment id not provided. If project id not provided.
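A sketch of creating a model whose artifact is staged through Object Storage (all OCIDs, paths, and the bucket URI are placeholders):

```python
# Sketch only: requires OCI credentials; bucket_uri is only needed for
# artifacts larger than 2 GB.
from ads.model import DataScienceModel

ds_model = (
    DataScienceModel()
    .with_compartment_id("ocid1.compartment.oc1..<unique_id>")
    .with_project_id("ocid1.datascienceproject.oc1..<unique_id>")
    .with_display_name("LargeModel")
    .with_artifact("/path/to/large/artifacts/")
)
ds_model.create(bucket_uri="oci://<bucket_name>@<namespace>/prefix/")
```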
- create_custom_metadata_artifact(metadata_key_name: str, artifact_path_or_content: str | bytes, path_type: MetadataArtifactPathType = 'local') ModelMetadataArtifactDetails [source]¶
Creates model custom metadata artifact for specified model.
- Parameters:
metadata_key_name (str) – The name of the model custom metadata key
artifact_path_or_content (Union[str, bytes]) – The path of the model custom metadata artifact to be uploaded, or the actual content of the artifact. The type is str when it represents a local or Object Storage (OSS) path, and bytes when it is the content itself.
path_type (MetadataArtifactPathType) – Specifies how the metadata artifact is provided. Can be MetadataArtifactPathType.LOCAL, MetadataArtifactPathType.OSS, or MetadataArtifactPathType.CONTENT.
Example
>>> ds_model = DataScienceModel.from_id("ocid1.datasciencemodel.iad.xxyxz...")
>>> ds_model.create_custom_metadata_artifact(
...     "README",
...     artifact_path_or_content="/Users/<username>/Downloads/README.md",
...     path_type=MetadataArtifactPathType.LOCAL,
... )
- Returns:
The model custom metadata artifact creation info. Example: {'Date': 'Mon, 02 Dec 2024 06:38:24 GMT', 'opc-request-id': 'E4F7', 'ETag': '77156317-8bb9-4c4a-882b-0d85f8140d93', 'X-Content-Type-Options': 'nosniff', 'Content-Length': '4029958', 'Vary': 'Origin', 'Strict-Transport-Security': 'max-age=31536000; includeSubDomains', 'status': 204}
- Return type:
- create_defined_metadata_artifact(metadata_key_name: str, artifact_path_or_content: str | bytes, path_type: MetadataArtifactPathType = 'local') ModelMetadataArtifactDetails [source]¶
Creates model defined metadata artifact for specified model.
- Parameters:
metadata_key_name (str) – The name of the model defined metadata key
artifact_path_or_content (Union[str, bytes]) – The path of the model defined metadata artifact to be uploaded, or the actual content of the artifact. The type is str when it represents a local or Object Storage (OSS) path, and bytes when it is the content itself.
path_type (MetadataArtifactPathType) – Specifies how the metadata artifact is provided. Can be MetadataArtifactPathType.LOCAL, MetadataArtifactPathType.OSS, or MetadataArtifactPathType.CONTENT.
Example
>>> ds_model = DataScienceModel.from_id("ocid1.datasciencemodel.iad.xxyxz...")
>>> ds_model.create_defined_metadata_artifact(
...     "README",
...     artifact_path_or_content="oci://path/to/bucket/README.md",
...     path_type=MetadataArtifactPathType.OSS,
... )
- Returns:
The model defined metadata artifact creation info. Example: {'Date': 'Mon, 02 Dec 2024 06:38:24 GMT', 'opc-request-id': 'E4F7', 'ETag': '77156317-8bb9-4c4a-882b-0d85f8140d93', 'X-Content-Type-Options': 'nosniff', 'Content-Length': '4029958', 'Vary': 'Origin', 'Strict-Transport-Security': 'max-age=31536000; includeSubDomains', 'status': 204}
- Return type:
- property custom_metadata_list: ModelCustomMetadata¶
Returns model custom metadata.
- property defined_metadata_list: ModelTaxonomyMetadata¶
Returns model taxonomy (defined) metadata.
- delete(delete_associated_model_deployment: bool | None = False) DataScienceModel [source]¶
Removes model from the model catalog.
- Parameters:
delete_associated_model_deployment ((bool, optional). Defaults to False.) – Whether associated model deployments need to be deleted or not.
- Returns:
The DataScienceModel instance (self).
- Return type:
- delete_custom_metadata_artifact(metadata_key_name: str) ModelMetadataArtifactDetails [source]¶
Deletes model custom metadata artifact for specified model metadata key.
- Parameters:
metadata_key_name (str) – The name of the model custom metadata key.
- Returns:
The model custom metadata artifact delete call info. Example: {'Date': 'Mon, 02 Dec 2024 06:38:24 GMT', 'opc-request-id': 'E4F7', 'X-Content-Type-Options': 'nosniff', 'Vary': 'Origin', 'Strict-Transport-Security': 'max-age=31536000; includeSubDomains', 'status': 204}
- Return type:
- delete_defined_metadata_artifact(metadata_key_name: str) ModelMetadataArtifactDetails [source]¶
Deletes model defined metadata artifact for specified model metadata key.
- Parameters:
metadata_key_name (str) – The name of the model defined metadata key.
- Returns:
The model defined metadata artifact delete call info. Example: {'Date': 'Mon, 02 Dec 2024 06:38:24 GMT', 'opc-request-id': 'E4F7', 'X-Content-Type-Options': 'nosniff', 'Vary': 'Origin', 'Strict-Transport-Security': 'max-age=31536000; includeSubDomains', 'status': 204}
- Return type:
- download_artifact(target_dir: str, auth: Dict | None = None, force_overwrite: bool | None = False, bucket_uri: str | None = None, region: str | None = None, overwrite_existing_artifact: bool | None = True, remove_existing_artifact: bool | None = True, timeout: int | None = None)[source]¶
Downloads model artifacts from the model catalog.
- Parameters:
target_dir (str) – The target location of model artifacts.
auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.
force_overwrite ((bool, optional). Defaults to False.) – Overwrite target directory if exists.
bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for downloading large artifacts whose size exceeds 2 GB. Example: oci://<bucket_name>@<namespace>/prefix/.
region ((str, optional). Defaults to None.) – The destination Object Storage bucket region. By default the value will be extracted from the OCI_REGION_METADATA environment variable.
overwrite_existing_artifact ((bool, optional). Defaults to True.) – Overwrite target bucket artifact if exists.
remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to the Object Storage bucket need to be removed or not.
timeout ((int, optional). Defaults to 10 seconds.) – The connection timeout in seconds for the client.
- Raises:
ModelArtifactSizeError – If the model artifact size exceeds 2 GB and a temporary Object Storage bucket URI is not provided.
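A download sketch mirroring the parameters above (OCID, paths, and bucket URI are placeholders; bucket_uri is only needed when the artifact exceeds 2 GB):

```python
# Sketch only: requires OCI credentials and an existing model OCID.
from ads.model import DataScienceModel

ds_model = DataScienceModel.from_id("ocid1.datasciencemodel.oc1..<unique_id>")
ds_model.download_artifact(
    target_dir="/tmp/model_artifacts/",
    force_overwrite=True,
    bucket_uri="oci://<bucket_name>@<namespace>/staging/",
)
```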
- classmethod from_dict(config: Dict) DataScienceModel [source]¶
Loads model instance from a dictionary of configurations.
- Parameters:
config (Dict) – A dictionary of configurations.
- Returns:
The model instance.
- Return type:
- classmethod from_id(id: str) DataScienceModel [source]¶
Gets an existing model by OCID.
- Parameters:
id (str) – The model OCID.
- Returns:
An instance of DataScienceModel.
- Return type:
- get_custom_metadata_artifact(metadata_key_name: str, target_dir: str, override: bool = False) bytes [source]¶
Downloads model custom metadata artifact content for specified model metadata key.
- Parameters:
metadata_key_name (str) – The name of the custom metadata key of the model
target_dir (str) – The local file path where downloaded model custom metadata artifact will be saved.
override (bool) – A boolean flag that controls overwriting of the downloaded metadata artifact file. If True, overwrites the file if it already exists; if False (default), raises a FileExistsError if the file exists.
- Returns:
File content of the custom metadata artifact
- Return type:
- get_defined_metadata_artifact(metadata_key_name: str, target_dir: str, override: bool = False) bytes [source]¶
Downloads model defined metadata artifact content for specified model metadata key.
- Parameters:
metadata_key_name (str) – The name of the model defined metadata key.
target_dir (str) – The local file path where downloaded model defined metadata artifact will be saved.
override (bool) – A boolean flag that controls overwriting of the downloaded metadata artifact file. If True, overwrites the file if it already exists; if False (default), raises a FileExistsError if the file exists.
- Returns:
File content of the defined metadata artifact
- Return type:
- if_model_custom_metadata_artifact_exist(metadata_key_name: str, **kwargs) bool [source]¶
Checks if the custom metadata artifact exists for the model.
- property input_schema: Schema | Dict¶
Returns model input schema.
- Returns:
Model input schema.
- Return type:
ads.feature_engineering.Schema
- property lifecycle_details: str¶
Gets the lifecycle_details of this DataScienceModel. Details about the lifecycle state of the model.
- Returns:
The lifecycle_details of this DataScienceModel.
- Return type:
- property lifecycle_state: str | None¶
Status of the model.
- Returns:
Status of the model.
- Return type:
- classmethod list(compartment_id: str | None = None, project_id: str | None = None, category: str = 'USER', **kwargs) List[DataScienceModel] [source]¶
Lists datascience models in a given compartment.
- Parameters:
compartment_id ((str, optional). Defaults to None.) – The compartment OCID.
project_id ((str, optional). Defaults to None.) – The project OCID.
category ((str, optional). Defaults to USER.) – The category of Model. Allowed values are: “USER”, “SERVICE”
kwargs – Additional keyword arguments for filtering models.
- Returns:
The list of the datascience models.
- Return type:
List[DataScienceModel]
- classmethod list_df(compartment_id: str | None = None, project_id: str | None = None, category: str = 'USER', **kwargs) DataFrame [source]¶
Lists datascience models in a given compartment.
- Parameters:
compartment_id ((str, optional). Defaults to None.) – The compartment OCID.
project_id ((str, optional). Defaults to None.) – The project OCID.
category ((str, optional). Defaults to USER.) – The category of Model. Allowed values are: “USER”, “SERVICE”
kwargs – Additional keyword arguments for filtering models.
- Returns:
The list of the datascience models in a pandas dataframe format.
- Return type:
pandas.DataFrame
- property output_schema: Schema | Dict¶
Returns model output schema.
- Returns:
Model output schema.
- Return type:
ads.feature_engineering.Schema
- property provenance_metadata: ModelProvenanceMetadata¶
Returns model provenance metadata.
- remove_artifact(uri: str | None = None, namespace: str | None = None, bucket: str | None = None, prefix: str | None = None)[source]¶
Removes information about objects in a specified bucket or using a specified URI from the model description JSON.
- Parameters:
uri (str, optional) – The URI representing the location of the artifact in OCI object storage.
namespace (str, optional) – The namespace of the bucket containing the objects. Required if uri is not provided.
bucket (str, optional) – The name of the bucket containing the objects. Required if uri is not provided.
prefix (str, optional) – The prefix of the objects to remove. Defaults to None.
- Return type:
None
- Raises:
ValueError – If both uri and (namespace and bucket) are provided; if neither uri nor both namespace and bucket are provided; or if the model description JSON is None.
- restore_model(restore_model_for_hours_specified: int | None = None) None [source]¶
Restore archived model artifact.
- Parameters:
restore_model_for_hours_specified (Optional[int]) – Duration in hours for which the archived model is available for access.
- Return type:
None
- Raises:
ValueError – If the model ID is invalid or if any parameters are incorrect.
- property retention_operation_details: ModelRetentionOperationDetails¶
Gets the retention_operation_details of this Model using the spec constant.
- Returns:
The retention_operation_details of this Model.
- Return type:
ModelRetentionOperationDetails
- property retention_setting: ModelRetentionSetting¶
Gets the retention_setting of this model.
- Returns:
The retention_setting of this model.
- Return type:
ModelRetentionSetting
- to_dict() Dict [source]¶
Serializes model to a dictionary.
- Returns:
The model serialized as a dictionary.
- Return type:
- update(**kwargs) DataScienceModel [source]¶
Updates datascience model in model catalog.
- Parameters:
kwargs – Additional kwargs arguments. Can be any attribute that oci.data_science.models.Model accepts.
- Returns:
The DataScienceModel instance (self).
- Return type:
- update_custom_metadata_artifact(metadata_key_name: str, artifact_path_or_content: str | bytes, path_type: MetadataArtifactPathType = 'local') ModelMetadataArtifactDetails [source]¶
Update model custom metadata artifact for specified model.
- Parameters:
metadata_key_name (str) – The name of the model custom metadata key
artifact_path_or_content (Union[str, bytes]) – The path of the model custom metadata artifact to be uploaded, or the actual content of the artifact. The type is str when it represents a local or Object Storage (OSS) path, and bytes when it is the content itself.
path_type (MetadataArtifactPathType) – Specifies how the metadata artifact is provided. Can be MetadataArtifactPathType.LOCAL, MetadataArtifactPathType.OSS, or MetadataArtifactPathType.CONTENT.
- Returns:
The model custom metadata artifact update info. Example: {'Date': 'Mon, 02 Dec 2024 06:38:24 GMT', 'opc-request-id': 'E4F7', 'ETag': '77156317-8bb9-4c4a-882b-0d85f8140d93', 'X-Content-Type-Options': 'nosniff', 'Content-Length': '4029958', 'Vary': 'Origin', 'Strict-Transport-Security': 'max-age=31536000; includeSubDomains', 'status': 204}
- Return type:
- update_defined_metadata_artifact(metadata_key_name: str, artifact_path_or_content: str | bytes, path_type: MetadataArtifactPathType = 'local') ModelMetadataArtifactDetails [source]¶
Update model defined metadata artifact for specified model.
- Parameters:
metadata_key_name (str) – The name of the model defined metadata key
artifact_path_or_content (Union[str, bytes]) – The path of the model defined metadata artifact to be uploaded, or the actual content of the artifact. The type is str when it represents a local or Object Storage (OSS) path, and bytes when it is the content itself.
path_type (MetadataArtifactPathType) – Specifies how the metadata artifact is provided. Can be MetadataArtifactPathType.LOCAL, MetadataArtifactPathType.OSS, or MetadataArtifactPathType.CONTENT.
- Returns:
The model defined metadata artifact update info. Example: {'Date': 'Mon, 02 Dec 2024 06:38:24 GMT', 'opc-request-id': 'E4F7', 'ETag': '77156317-8bb9-4c4a-882b-0d85f8140d93', 'X-Content-Type-Options': 'nosniff', 'Content-Length': '4029958', 'Vary': 'Origin', 'Strict-Transport-Security': 'max-age=31536000; includeSubDomains', 'status': 204}
- Return type:
- upload_artifact(bucket_uri: str | None = None, auth: Dict | None = None, region: str | None = None, overwrite_existing_artifact: bool | None = True, remove_existing_artifact: bool | None = True, timeout: int | None = None, parallel_process_count: int = 9, model_by_reference: bool | None = False) None [source]¶
Uploads model artifacts to the model catalog.
- Parameters:
bucket_uri ((str, optional). Defaults to None.) –
The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for uploading large artifacts whose size exceeds 2 GB. Example: oci://<bucket_name>@<namespace>/prefix/.
Added in version 2.8.10: If artifact is provided as an object storage path to a zip archive, bucket_uri will be ignored.
auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.
region ((str, optional). Defaults to None.) – The destination Object Storage bucket region. By default the value will be extracted from the OCI_REGION_METADATA environment variables.
overwrite_existing_artifact ((bool, optional). Defaults to True.) – Overwrite target bucket artifact if exists.
remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to the Object Storage bucket need to be removed or not.
timeout ((int, optional). Defaults to 10 seconds.) – The connection timeout in seconds for the client.
parallel_process_count ((int, optional)) – The number of worker processes to use in parallel for uploading individual parts of a multipart upload.
model_by_reference ((bool, optional)) – Whether model artifact is made available to Model Store by reference.
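For artifacts larger than 2 GB, a staging bucket_uri is required. The call pattern can be sketched as follows; the helper name is hypothetical, the bucket URI is a placeholder, and model is assumed to be a DataScienceModel with its artifact already set:

```python
def upload_large_artifact(model, bucket_uri="oci://<bucket_name>@<namespace>/prefix/"):
    """Upload a large (>2 GB) artifact through an Object Storage staging bucket.

    The default bucket URI is a placeholder and must point to a bucket
    the caller can write to.
    """
    model.upload_artifact(
        bucket_uri=bucket_uri,
        overwrite_existing_artifact=True,  # replace a stale copy in the bucket
        remove_existing_artifact=True,     # clean up the staging copy afterwards
        parallel_process_count=9,          # workers for the multipart upload
        timeout=600,                       # client connection timeout in seconds
    )
```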
- with_artifact(uri: str, *args)[source]¶
Sets the artifact location. Can be a local path or an Object Storage path.
- Parameters:
uri (str) – Path to artifact directory or to the ZIP archive. It could contain a serialized model(required) as well as any files needed for deployment. The content of the source folder will be zipped and uploaded to the model catalog. For models created by reference, uri can take in single arg or multiple args in case of a fine-tuned or multimodel setting.
Examples
>>> .with_artifact(uri="./model1/")
>>> .with_artifact(uri="./model1.zip")
>>> .with_artifact("./model1", "./model2")
- with_backup_setting(backup_setting: Dict | ModelBackupSetting) DataScienceModel [source]¶
Sets the model’s backup setting details.
- Parameters:
backup_setting (Union[Dict, BackupSetting]) – The backup setting details for the model. This can be passed as either a dictionary or an instance of the BackupSetting class.
- Returns:
The DataScienceModel instance (self) for method chaining.
- Return type:
- with_compartment_id(compartment_id: str) DataScienceModel [source]¶
Sets the compartment ID.
- Parameters:
compartment_id (str) – The compartment ID.
- Returns:
The DataScienceModel instance (self)
- Return type:
- with_custom_metadata_list(metadata: ModelCustomMetadata | Dict) DataScienceModel [source]¶
Sets model custom metadata.
- Parameters:
metadata (Union[ModelCustomMetadata, Dict]) – The custom metadata.
- Returns:
The DataScienceModel instance (self)
- Return type:
- with_defined_metadata_list(metadata: ModelTaxonomyMetadata | Dict) DataScienceModel [source]¶
Sets model taxonomy (defined) metadata.
- Parameters:
metadata (Union[ModelTaxonomyMetadata, Dict]) – The defined metadata.
- Returns:
The DataScienceModel instance (self)
- Return type:
- with_defined_tags(**kwargs: Dict[str, Dict[str, object]]) DataScienceModel [source]¶
Sets defined tags.
- Returns:
The DataScienceModel instance (self)
- Return type:
- with_description(description: str) DataScienceModel [source]¶
Sets the description.
- Parameters:
description (str) – The description of the model.
- Returns:
The DataScienceModel instance (self)
- Return type:
- with_display_name(name: str) DataScienceModel [source]¶
Sets the name.
- Parameters:
name (str) – The name.
- Returns:
The DataScienceModel instance (self)
- Return type:
- with_freeform_tags(**kwargs: Dict[str, str]) DataScienceModel [source]¶
Sets freeform tags.
- Returns:
The DataScienceModel instance (self)
- Return type:
- with_input_schema(schema: Schema | Dict) DataScienceModel [source]¶
Sets the model input schema.
- Parameters:
schema (Union[ads.feature_engineering.Schema, Dict]) – The model input schema.
- Returns:
The DataScienceModel instance (self)
- Return type:
- with_model_file_description(json_dict: dict | None = None, json_string: str | None = None, json_uri: str | None = None)[source]¶
Sets the JSON file description for a model passed by reference.
- Parameters:
json_dict ((dict, optional). Defaults to None.) – JSON dict.
json_string ((str, optional). Defaults to None.) – JSON string.
json_uri ((str, optional). Defaults to None.) – URI location of a file containing the JSON.
Examples
>>> DataScienceModel().with_model_file_description(json_string="<json_string>")
>>> DataScienceModel().with_model_file_description(json_dict=dict())
>>> DataScienceModel().with_model_file_description(json_uri="./model_description.json")
- with_model_version_set_id(model_version_set_id: str)[source]¶
Sets the model version set ID.
- Parameters:
model_version_set_id (str) – The model version set OCID.
- with_output_schema(schema: Schema | Dict) DataScienceModel [source]¶
Sets the model output schema.
- Parameters:
schema (Union[ads.feature_engineering.Schema, Dict]) – The model output schema.
- Returns:
The DataScienceModel instance (self)
- Return type:
- with_project_id(project_id: str) DataScienceModel [source]¶
Sets the project ID.
- Parameters:
project_id (str) – The project ID.
- Returns:
The DataScienceModel instance (self)
- Return type:
- with_provenance_metadata(metadata: ModelProvenanceMetadata | Dict) DataScienceModel [source]¶
Sets model provenance metadata.
- Parameters:
metadata (Union[ModelProvenanceMetadata, Dict]) – The provenance metadata.
- Returns:
The DataScienceModel instance (self)
- Return type:
- with_retention_setting(retention_setting: Dict | ModelRetentionSetting) DataScienceModel [source]¶
Sets the retention setting details for the model.
- Parameters:
retention_setting (Union[Dict, RetentionSetting]) – The retention setting details for the model. Can be provided as either a dictionary or an instance of the RetentionSetting class.
- Returns:
The DataScienceModel instance (self) for method chaining.
- Return type:
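The with_* setters above return the DataScienceModel instance itself, so a model can be assembled as one fluent chain. A minimal sketch, assuming the oracle-ads package is installed and with placeholder OCIDs and paths (the import is deferred so the snippet stays self-contained):

```python
def build_model():
    """Assemble a DataScienceModel via the fluent with_* API (values are placeholders)."""
    from ads.model import DataScienceModel  # deferred: requires the oracle-ads package

    return (
        DataScienceModel()
        .with_display_name("my-model")
        .with_description("Built with the fluent with_* setters")
        .with_compartment_id("ocid1.compartment.oc1..<unique_id>")
        .with_project_id("ocid1.datascienceproject.oc1..<unique_id>")
        .with_artifact("./model_artifacts/")
        .with_freeform_tags(team="data-science")
    )
```

The returned instance is fully configured and ready to be registered in the model catalog.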
- class ads.model.EmbeddingONNXModel(artifact_dir: str | None = None, model_file_name: str | None = None, config_json: str | None = None, tokenizer_dir: str | None = None, auth: Dict | None = None, serialize: bool = False, **kwargs: dict)[source]¶
Bases:
FrameworkSpecificModel
EmbeddingONNXModel class for embedding onnx model.
- auth¶
Default authentication is set using the ads.set_auth API. To override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create an authentication signer to instantiate an IdentityClient object.
- Type:
Dict
- metadata_custom¶
The model custom metadata.
- Type:
- metadata_provenance¶
The model provenance metadata.
- Type:
- metadata_taxonomy¶
The model taxonomy metadata.
- Type:
- model_artifact¶
This is built by calling prepare.
- Type:
- model_deployment¶
A ModelDeployment instance.
- Type:
- properties¶
ModelProperties object required to save and deploy model.
- Type:
- runtime_info¶
A RuntimeInfo instance.
- Type:
- serialize¶
Whether to serialize the model to pkl file by default. If False, you need to serialize the model manually, save it under artifact_dir and update the score.py manually.
- Type:
- delete_deployment(...)¶
Deletes the current model deployment.
- deploy(..., \*\*kwargs)¶
Deploys a model.
- from_model_artifact(uri, ..., \*\*kwargs)¶
Loads model from the specified folder, or zip/tar archive.
- from_model_catalog(model_id, ..., \*\*kwargs)¶
Loads model from model catalog.
- from_model_deployment(model_deployment_id, ..., \*\*kwargs)¶
Loads model from model deployment.
- update_deployment(model_deployment_id, ..., \*\*kwargs)¶
Updates a model deployment.
- from_id(ocid, ..., \*\*kwargs)¶
Loads model from model OCID or model deployment OCID.
- introspect(...)¶
Runs model introspection.
- predict(data, ...)[source]¶
Returns prediction of input data run against the model deployment endpoint.
- prepare(..., \*\*kwargs)¶
Prepare and save the score.py, serialized model and runtime.yaml file.
- prepare_save_deploy(..., \*\*kwargs)¶
Shortcut for prepare, save and deploy steps.
- reload(...)¶
Reloads the model artifact files: score.py and the runtime.yaml.
- restart_deployment(...)¶
Restarts the model deployment.
- save(..., \*\*kwargs)¶
Saves model artifacts to the model catalog.
- set_model_input_serializer(serde)¶
Registers serializer used for serializing data passed in verify/predict.
- summary_status(...)¶
Gets a summary table of the current status.
- upload_artifact(...)¶
Uploads model artifacts to the provided uri.
- download_artifact(...)¶
Downloads model artifacts from the model catalog.
- update_summary_status(...)¶
Update the status in the summary table.
- update_summary_action(...)¶
Update the actions needed from the user in the summary table.
Examples
>>> import tempfile
>>> import os
>>> import shutil
>>> from ads.model import EmbeddingONNXModel
>>> from huggingface_hub import snapshot_download
>>> local_dir = tempfile.mkdtemp()
>>> allow_patterns = [
...     "onnx/model.onnx",
...     "config.json",
...     "special_tokens_map.json",
...     "tokenizer_config.json",
...     "tokenizer.json",
...     "vocab.txt"
... ]
>>> # download files needed for this demonstration to a local folder
>>> snapshot_download(
...     repo_id="sentence-transformers/all-MiniLM-L6-v2",
...     local_dir=local_dir,
...     allow_patterns=allow_patterns
... )
>>> artifact_dir = tempfile.mkdtemp()
>>> # copy all downloaded files to the artifact folder
>>> for file in allow_patterns:
...     shutil.copy(local_dir + "/" + file, artifact_dir)
>>> model = EmbeddingONNXModel(artifact_dir=artifact_dir)
>>> model.summary_status()
>>> model.prepare(
...     inference_conda_env="onnxruntime_p311_gpu_x86_64",
...     inference_python_version="3.11",
...     model_file_name="model.onnx",
...     force_overwrite=True
... )
>>> model.summary_status()
>>> model.verify(
...     {
...         "input": ["What are activation functions?", "What is Deep Learning?"],
...         "model": "sentence-transformers/all-MiniLM-L6-v2"
...     },
... )
>>> model.summary_status()
>>> model.save(display_name="sentence-transformers/all-MiniLM-L6-v2")
>>> model.summary_status()
>>> model.deploy(
...     display_name="all-MiniLM-L6-v2 Embedding deployment",
...     deployment_instance_shape="VM.Standard.E4.Flex",
...     deployment_ocpus=20,
...     deployment_memory_in_gbs=256,
... )
>>> model.predict(
...     {
...         "input": ["What are activation functions?", "What is Deep Learning?"],
...         "model": "sentence-transformers/all-MiniLM-L6-v2"
...     },
... )
>>> # Uncomment the line below to delete the model and the associated model deployment
>>> # model.delete(delete_associated_model_deployment=True)
Initializes an EmbeddingONNXModel instance.
- Parameters:
artifact_dir ((str, optional). Defaults to None.) – Directory for generated artifacts.
model_file_name ((str, optional). Defaults to None.) – Path to the model artifact.
config_json ((str, optional). Defaults to None.) – Path to the config.json file.
tokenizer_dir ((str, optional). Defaults to None.) – Path to the tokenizer directory.
auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.
serialize (bool) – Whether to serialize the model to a pkl file by default. Must be False for the embedding ONNX model.
- Returns:
EmbeddingONNXModel instance.
- Return type:
- predict(data=None, auto_serialize_data=False, **kwargs)[source]¶
Returns prediction of input data run against the embedding onnx model deployment endpoint.
Examples
>>> data = {
...     "input": ["What are activation functions?", "What is Deep Learning?"],
...     "model": "sentence-transformers/all-MiniLM-L6-v2"
... }
>>> prediction = model.predict(data)
- Parameters:
data (Any) – Data for the prediction for model deployment.
auto_serialize_data (bool) – Whether to auto-serialize input data. Must be False for the embedding ONNX model; input data must be JSON serializable.
kwargs –
- content_type: str
Used to indicate the media type of the resource.
- image: PIL.Image object or URI for the image
A valid string path for an image file can be a local path, http(s), oci, s3, or gs.
- storage_options: dict
Passed to fsspec.open for a particular storage connection. Please see fsspec (https://filesystem-spec.readthedocs.io/en/latest/api.html#fsspec.open) for more details.
- Returns:
Dictionary with the predicted values.
- Return type:
Dict[str, Any]
- verify(data=None, reload_artifacts=True, auto_serialize_data=False, **kwargs)[source]¶
Test if embedding onnx model deployment works in local environment.
Examples
>>> data = {
...     "input": ["What are activation functions?", "What is Deep Learning?"],
...     "model": "sentence-transformers/all-MiniLM-L6-v2"
... }
>>> prediction = model.verify(data)
- Parameters:
data (Any) – Data used to test if deployment works in local environment.
reload_artifacts (bool. Defaults to True.) – Whether to reload artifacts or not.
auto_serialize_data (bool) – Whether to auto-serialize input data. Must be False for the embedding ONNX model; input data must be JSON serializable.
kwargs –
- content_type: str
Used to indicate the media type of the resource.
- image: PIL.Image object or URI for the image
A valid string path for an image file can be a local path, http(s), oci, s3, or gs.
- storage_options: dict
Passed to fsspec.open for a particular storage connection. Please see fsspec (https://filesystem-spec.readthedocs.io/en/latest/api.html#fsspec.open) for more details.
- Returns:
A dictionary which contains prediction results.
- Return type:
Dict
- class ads.model.GenericModel(estimator: Callable | None = None, artifact_dir: str | None = None, properties: ModelProperties | None = None, auth: Dict | None = None, serialize: bool = True, model_save_serializer: SERDE | None = None, model_input_serializer: SERDE | None = None, **kwargs: dict)[source]¶
Bases:
MetadataMixin
,Introspectable
,EvaluatorMixin
Generic model class that serves as the base class for all frameworks, including unsupported ones.
- auth¶
Default authentication is set using the ads.set_auth API. To override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create an authentication signer to instantiate an IdentityClient object.
- Type:
Dict
- estimator¶
Any trained model object, e.g. one generated by the sklearn framework.
- Type:
Callable
- metadata_custom¶
The model custom metadata.
- Type:
- metadata_provenance¶
The model provenance metadata.
- Type:
- metadata_taxonomy¶
The model taxonomy metadata.
- Type:
- model_artifact¶
This is built by calling prepare.
- Type:
- model_deployment¶
A ModelDeployment instance.
- Type:
- model_input_serializer¶
Instance of ads.model.SERDE. Used for serialize/deserialize data.
- Type:
- properties¶
ModelProperties object required to save and deploy model.
- Type:
- runtime_info¶
A RuntimeInfo instance.
- Type:
- serialize¶
Whether to serialize the model to pkl file by default. If False, you need to serialize the model manually, save it under artifact_dir and update the score.py manually.
- Type:
- from_model_artifact(uri, ..., \*\*kwargs)[source]¶
Loads model from the specified folder, or zip/tar archive.
- from_model_deployment(model_deployment_id, ..., \*\*kwargs)[source]¶
Loads model from model deployment.
- predict(data, ...)[source]¶
Returns prediction of input data run against the model deployment endpoint.
- prepare(..., \*\*kwargs)[source]¶
Prepare and save the score.py, serialized model and runtime.yaml file.
- set_model_input_serializer(serde)[source]¶
Registers serializer used for serializing data passed in verify/predict.
Examples
>>> import tempfile
>>> from ads.model.generic_model import GenericModel
>>> class Toy:
...     def predict(self, x):
...         return x ** 2
>>> estimator = Toy()
>>> model = GenericModel(estimator=estimator, artifact_dir=tempfile.mkdtemp())
>>> model.summary_status()
>>> model.prepare(
...     inference_conda_env="dbexp_p38_cpu_v1",
...     inference_python_version="3.8",
...     model_file_name="toy_model.pkl",
...     training_id=None,
...     force_overwrite=True
... )
>>> model.verify(2)
>>> model.save()
>>> model.deploy()
>>> # Update access log id, freeform tags and description for the model deployment
>>> model.update_deployment(
...     access_log={"log_id": "<log_ocid>"},
...     description="Description for Custom Model",
...     freeform_tags={"key": "value"},
... )
>>> model.predict(2)
>>> # Uncomment the line below to delete the model and the associated model deployment
>>> # model.delete(delete_associated_model_deployment=True)
GenericModel Constructor.
- Parameters:
estimator ((Callable).) – Trained model.
artifact_dir ((str, optional). Defaults to None.) – Artifact directory to store the files needed for deployment.
properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.
auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.
serialize ((bool, optional). Defaults to True.) – Whether to serialize the model to pkl file by default. If False, you need to serialize the model manually, save it under artifact_dir and update the score.py manually.
model_save_serializer ((SERDE or str, optional). Defaults to None.) – Instance of ads.model.SERDE. Used for serialize/deserialize model.
model_input_serializer ((SERDE or str, optional). Defaults to None.) – Instance of ads.model.SERDE. Used for serialize/deserialize model input.
- classmethod delete(model_id: str | None = None, delete_associated_model_deployment: bool | None = False, delete_model_artifact: bool | None = False, artifact_dir: str | None = None, **kwargs: Dict) None [source]¶
Deletes a model from Model Catalog.
- Parameters:
model_id ((str, optional). Defaults to None.) – The model OCID to be deleted. If the method called on instance level, then self.model_id will be used.
delete_associated_model_deployment ((bool, optional). Defaults to False.) – Whether associated model deployments need to be deleted or not.
delete_model_artifact ((bool, optional). Defaults to False.) – Whether associated model artifacts need to be deleted or not.
artifact_dir ((str, optional). Defaults to None) – The local path to the model artifacts folder. If the method called on instance level, the self.artifact_dir will be used by default.
- Return type:
None
- Raises:
ValueError – If model_id not provided.
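Because delete() is a classmethod, it can be called without instantiating the model as long as a model_id is supplied. A sketch with placeholder values (the helper name and the artifact path are hypothetical):

```python
def purge_model(model_id: str) -> None:
    """Delete a catalog model together with its deployments and local artifacts."""
    from ads.model import GenericModel  # deferred: requires the oracle-ads package

    GenericModel.delete(
        model_id=model_id,                        # e.g. "ocid1.datasciencemodel.oc1..<unique_id>"
        delete_associated_model_deployment=True,  # also remove active deployments
        delete_model_artifact=True,               # remove the local artifact folder
        artifact_dir="./model_artifacts/",        # placeholder local path
    )
```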
- delete_deployment(wait_for_completion: bool = True) None [source]¶
Deletes the current deployment.
- Parameters:
wait_for_completion ((bool, optional). Defaults to True.) – Whether to wait till completion.
- Return type:
None
- Raises:
ValueError – If there is no deployment attached yet.
- deploy(wait_for_completion: bool | None = True, display_name: str | None = None, description: str | None = None, deployment_instance_shape: str | None = None, deployment_instance_subnet_id: str | None = None, deployment_instance_private_endpoint_id: str | None = None, deployment_instance_count: int | None = None, deployment_bandwidth_mbps: int | None = None, deployment_log_group_id: str | None = None, deployment_access_log_id: str | None = None, deployment_predict_log_id: str | None = None, deployment_memory_in_gbs: float | None = None, deployment_ocpus: float | None = None, deployment_image: str | None = None, **kwargs: Dict) ModelDeployment [source]¶
Deploys a model. The model needs to be saved to the model catalog at first. You can deploy the model on either conda or container runtime. The customized runtime allows you to bring your own service container. To deploy model on container runtime, make sure to build the container and push it to OCIR. For more information, see https://docs.oracle.com/en-us/iaas/data-science/using/mod-dep-byoc.htm.
Example
>>> # This is an example to deploy model on container runtime
>>> model = GenericModel(estimator=estimator, artifact_dir=tempfile.mkdtemp())
>>> model.summary_status()
>>> model.prepare(
...     model_file_name="toy_model.pkl",
...     ignore_conda_error=True,  # set ignore_conda_error=True for container runtime
...     force_overwrite=True
... )
>>> model.verify()
>>> model.save()
>>> model.deploy(
...     deployment_image="iad.ocir.io/<namespace>/<image>:<tag>",
...     entrypoint=["python", "/opt/ds/model/deployed_model/api.py"],
...     server_port=5000,
...     health_check_port=5000,
...     environment_variables={"key": "value"}
... )
- Parameters:
wait_for_completion ((bool, optional). Defaults to True.) – Flag set for whether to wait for deployment to complete before proceeding.
display_name ((str, optional). Defaults to None.) – The name of the model. If a display_name is not provided in kwargs, a randomly generated easy to remember name with timestamp will be generated, like ‘strange-spider-2022-08-17-23:55.02’.
description ((str, optional). Defaults to None.) – The description of the model.
deployment_instance_shape ((str, optional). Default to VM.Standard2.1.) – The shape of the instance used for deployment.
deployment_instance_subnet_id ((str, optional). Default to None.) – The subnet id of the instance used for deployment.
deployment_instance_private_endpoint_id ((str, optional). Default to None.) – The private endpoint id of instance used for deployment.
deployment_instance_count ((int, optional). Defaults to 1.) – The number of instances used for deployment.
deployment_bandwidth_mbps ((int, optional). Defaults to 10.) – The bandwidth limit on the load balancer in Mbps.
deployment_memory_in_gbs ((float, optional). Defaults to None.) – Specifies the size of the memory of the model deployment instance in GBs.
deployment_ocpus ((float, optional). Defaults to None.) – Specifies the ocpus count of the model deployment instance.
deployment_log_group_id ((str, optional). Defaults to None.) – The oci logging group id. The access log and predict log share the same log group.
deployment_access_log_id ((str, optional). Defaults to None.) – The access log OCID for the access logs. https://docs.oracle.com/en-us/iaas/data-science/using/model_dep_using_logging.htm
deployment_predict_log_id ((str, optional). Defaults to None.) – The predict log OCID for the predict logs. https://docs.oracle.com/en-us/iaas/data-science/using/model_dep_using_logging.htm
deployment_image ((str, optional). Defaults to None.) – The OCIR path of docker container image. Required for deploying model on container runtime.
kwargs –
- project_id: (str, optional).
Project OCID. If not specified, the value will be taken from the environment variables.
- compartment_id(str, optional).
Compartment OCID. If not specified, the value will be taken from the environment variables.
- max_wait_time(int, optional). Defaults to 1200 seconds.
Maximum amount of time to wait in seconds. Negative implies infinite wait time.
- poll_interval(int, optional). Defaults to 10 seconds.
Poll interval in seconds.
- freeform_tags: (Dict[str, str], optional). Defaults to None.
Freeform tags of the model deployment.
- defined_tags: (Dict[str, dict[str, object]], optional). Defaults to None.
Defined tags of the model deployment.
- image_digest: (str, optional). Defaults to None.
The digest of docker container image.
- cmd: (List, optional). Defaults to empty.
The command line arguments for running docker container image.
- entrypoint: (List, optional). Defaults to empty.
The entrypoint for running docker container image.
- server_port: (int, optional). Defaults to 8080.
The server port for docker container image.
- health_check_port: (int, optional). Defaults to 8080.
The health check port for docker container image.
- deployment_mode: (str, optional). Defaults to HTTPS_ONLY.
The deployment mode. Allowed values are: HTTPS_ONLY and STREAM_ONLY.
- input_stream_ids: (List, optional). Defaults to empty.
The input stream ids. Required for STREAM_ONLY mode.
- output_stream_ids: (List, optional). Defaults to empty.
The output stream ids. Required for STREAM_ONLY mode.
- environment_variables: (Dict, optional). Defaults to empty.
The environment variables for model deployment.
Also can be any keyword argument for initializing the ads.model.deployment.ModelDeploymentProperties. See ads.model.deployment.ModelDeploymentProperties() for details.
- Returns:
The ModelDeployment instance.
- Return type:
- Raises:
ValueError – If model_id is not specified.
- download_artifact(artifact_dir: str | None = None, auth: Dict | None = None, force_overwrite: bool | None = False, bucket_uri: str | None = None, remove_existing_artifact: bool | None = True, **kwargs) GenericModel [source]¶
Downloads model artifacts from the model catalog.
- Parameters:
artifact_dir ((str, optional). Defaults to None.) – The artifact directory to store the files needed for deployment. Will be created if not exists.
auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.
force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files or not.
bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for downloading large artifacts whose size is greater than 2 GB. Example: oci://<bucket_name>@<namespace>/prefix/.
remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to object storage bucket need to be removed or not.
- Returns:
An instance of GenericModel class.
- Return type:
- Raises:
ValueError – If model_id is not available in the GenericModel object.
- classmethod from_id(ocid: str, model_file_name: str | None = None, artifact_dir: str | None = None, auth: Dict | None = None, force_overwrite: bool | None = False, properties: ModelProperties | Dict | None = None, bucket_uri: str | None = None, remove_existing_artifact: bool | None = True, ignore_conda_error: bool | None = False, download_artifact: bool | None = True, **kwargs) Self [source]¶
Loads model from model OCID or model deployment OCID.
- Parameters:
ocid (str) – The model OCID or model deployment OCID.
model_file_name ((str, optional). Defaults to None.) – The name of the serialized model.
artifact_dir ((str, optional). Defaults to None.) – The artifact directory to store the files needed for deployment. Will be created if not exists.
auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.
force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files or not.
properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.
bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for downloading large artifacts whose size is greater than 2 GB. Example: oci://<bucket_name>@<namespace>/prefix/.
remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to the Object Storage bucket need to be removed or not.
ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore error when collecting conda information.
download_artifact ((bool, optional). Defaults to True.) – Whether to download the model pickle or checkpoints
kwargs –
- compartment_id(str, optional)
Compartment OCID. If not specified, the value will be taken from the environment variables.
- timeout(int, optional). Defaults to 10 seconds.
The connection timeout in seconds for the client.
- Returns:
An instance of GenericModel class.
- Return type:
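Since from_id() accepts either a model OCID or a model deployment OCID, one entry point covers both cases. A minimal sketch with placeholder values (the helper name is hypothetical; the bucket_uri is only relevant for large artifacts):

```python
def load_by_ocid(ocid: str):
    """Load a model from a model OCID or a model deployment OCID (values are placeholders)."""
    from ads.model import GenericModel  # deferred: requires the oracle-ads package

    return GenericModel.from_id(
        ocid,
        artifact_dir="./downloaded_artifacts/",  # created if it does not exist
        force_overwrite=True,
        # Only needed when the artifact is larger than 2 GB:
        bucket_uri="oci://<bucket_name>@<namespace>/prefix/",
    )
```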
- classmethod from_model_artifact(uri: str, model_file_name: str | None = None, artifact_dir: str | None = None, auth: Dict | None = None, force_overwrite: bool | None = False, properties: ModelProperties | None = None, ignore_conda_error: bool | None = False, **kwargs: dict) Self [source]¶
Loads model from a folder, or zip/tar archive.
- Parameters:
uri (str) – The folder path, ZIP file path, or TAR file path. It could contain a serialized model (required) as well as any files needed for deployment, including runtime.yaml, score.py, etc. The content of the folder will be copied to the artifact_dir folder.
model_file_name ((str, optional). Defaults to None.) – The serialized model file name. Will be extracted from artifacts if not provided.
artifact_dir ((str, optional). Defaults to None.) – The artifact directory to store the files needed for deployment. Will be created if not exists.
auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.
force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files or not.
properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.
ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore error when collecting conda information.
- Returns:
An instance of GenericModel class.
- Return type:
- Raises:
ValueError – If model_file_name not provided.
- classmethod from_model_catalog(model_id: str, model_file_name: str | None = None, artifact_dir: str | None = None, auth: Dict | None = None, force_overwrite: bool | None = False, properties: ModelProperties | Dict | None = None, bucket_uri: str | None = None, remove_existing_artifact: bool | None = True, ignore_conda_error: bool | None = False, download_artifact: bool | None = True, **kwargs) Self [source]¶
Loads model from model catalog.
- Parameters:
model_id (str) – The model OCID.
model_file_name ((str, optional). Defaults to None.) – The name of the serialized model.
artifact_dir ((str, optional). Defaults to None.) – The artifact directory to store the files needed for deployment. Will be created if it does not exist.
auth ((Dict, optional). Defaults to None.) – The default authentication is set using the ads.set_auth API. If you need to override the default, use ads.common.auth.api_keys or ads.common.auth.resource_principal to create an appropriate authentication signer and the kwargs required to instantiate an IdentityClient object.
force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files or not.
properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.
bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for downloading large artifacts whose size is greater than 2 GB. Example: oci://<bucket_name>@<namespace>/prefix/.
remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to the object storage bucket need to be removed or not.
ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore error when collecting conda information.
download_artifact ((bool, optional). Defaults to True.) – Whether to download the model pickle or checkpoints.
kwargs –
- compartment_id(str, optional)
Compartment OCID. If not specified, the value will be taken from the environment variables.
- timeout(int, optional). Defaults to 10 seconds.
The connection timeout in seconds for the client.
- region: (str, optional). Defaults to None.
The destination Object Storage bucket region. By default the value will be extracted from the OCI_REGION_METADATA environment variable.
- Returns:
An instance of GenericModel class.
- Return type:
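A sketch of loading a saved model back from the model catalog, assuming the ads SDK is installed and ads.set_auth() has been configured. The OCID, directory, and bucket values are placeholders, and the call is wrapped in a function so nothing runs at import time:

```python
def load_from_catalog(model_id):
    # model_id: an "ocid1.datasciencemodel..." OCID from a previous save().
    from ads.model.generic_model import GenericModel

    return GenericModel.from_model_catalog(
        model_id=model_id,
        artifact_dir="./artifact_dir",   # created if it does not exist
        force_overwrite=True,
        # bucket_uri is only needed when the artifact is larger than 2 GB,
        # e.g. bucket_uri="oci://<bucket_name>@<namespace>/prefix/",
    )
```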
- classmethod from_model_deployment(model_deployment_id: str, model_file_name: str | None = None, artifact_dir: str | None = None, auth: Dict | None = None, force_overwrite: bool | None = False, properties: ModelProperties | Dict | None = None, bucket_uri: str | None = None, remove_existing_artifact: bool | None = True, ignore_conda_error: bool | None = False, download_artifact: bool | None = True, **kwargs) Self [source]¶
Loads model from model deployment.
- Parameters:
model_deployment_id (str) – The model deployment OCID.
model_file_name ((str, optional). Defaults to None.) – The name of the serialized model.
artifact_dir ((str, optional). Defaults to None.) – The artifact directory to store the files needed for deployment. Will be created if it does not exist.
auth ((Dict, optional). Defaults to None.) – The default authentication is set using the ads.set_auth API. If you need to override the default, use ads.common.auth.api_keys or ads.common.auth.resource_principal to create an appropriate authentication signer and the kwargs required to instantiate an IdentityClient object.
force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files or not.
properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.
bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for downloading large artifacts whose size is greater than 2 GB. Example: oci://<bucket_name>@<namespace>/prefix/.
remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to the object storage bucket need to be removed or not.
ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore error when collecting conda information.
download_artifact ((bool, optional). Defaults to True.) – Whether to download the model pickle or checkpoints.
kwargs –
- compartment_id(str, optional)
Compartment OCID. If not specified, the value will be taken from the environment variables.
- timeout(int, optional). Defaults to 10 seconds.
The connection timeout in seconds for the client.
- region: (str, optional). Defaults to None.
The destination Object Storage bucket region. By default the value will be extracted from the OCI_REGION_METADATA environment variable.
- Returns:
An instance of GenericModel class.
- Return type:
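The analogous sketch for recreating a GenericModel from an existing deployment, under the same assumptions (ads installed, auth configured, placeholder values); wrapped in a function so the snippet stays import-safe:

```python
def load_from_deployment(model_deployment_id):
    # model_deployment_id: an "ocid1.datasciencemodeldeployment..." OCID.
    from ads.model.generic_model import GenericModel

    return GenericModel.from_model_deployment(
        model_deployment_id=model_deployment_id,
        artifact_dir="./artifact_dir",  # created if it does not exist
        force_overwrite=True,
    )
```

The returned instance exposes the deployment endpoint, so model.predict(...) can be called directly afterwards.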
- get_data_serializer()[source]¶
Gets data serializer.
- Returns:
object
- Return type:
ads.model.Serializer object.
- introspect() DataFrame [source]¶
Conducts introspection.
- Returns:
A pandas DataFrame which contains the introspection results.
- Return type:
pandas.DataFrame
- property metadata_custom¶
- property metadata_provenance¶
- property metadata_taxonomy¶
- property model_deployment_id¶
- property model_id¶
- model_input_serializer_type¶
alias of
ModelInputSerializerType
- model_save_serializer_type¶
alias of
ModelSerializerType
- predict(data: Any | None = None, auto_serialize_data: bool = False, local: bool = False, **kwargs) Dict[str, Any] [source]¶
Returns prediction of input data run against the model deployment endpoint.
Examples
>>> uri = "https://github.com/pytorch/hub/raw/master/images/dog.jpg"
>>> prediction = model.predict(image=uri)['prediction']
>>> # examples on storage options
>>> prediction = model.predict(
...     image="oci://<bucket>@<tenancy>/myimage.png",
...     storage_options=ads.auth.default_signer()
... )['prediction']
- Parameters:
data (Any) – Data for the prediction for onnx models; for the local serialization method, data can be any of the data types that the framework supports.
auto_serialize_data (bool.) – Whether to auto serialize input data. Defaults to False for GenericModel, and True for other frameworks. data must be JSON serializable if auto_serialize_data=False. If auto_serialize_data is set to True, data will be serialized before being sent to the model deployment endpoint.
local (bool.) – Whether to invoke the prediction locally. Defaults to False.
kwargs –
- content_type: str
Used to indicate the media type of the resource.
- image: PIL.Image object or uri for the image.
A valid string path for an image file can be a local path, http(s), oci, s3, or gs.
- storage_options: dict
Passed to fsspec.open for a particular storage connection. Please see fsspec (https://filesystem-spec.readthedocs.io/en/latest/api.html#fsspec.open) for more details.
- Returns:
Dictionary with the predicted values.
- Return type:
Dict[str, Any]
- Raises:
NotActiveDeploymentError – If model deployment process was not started or not finished yet.
ValueError – If model is not deployed yet or the endpoint information is not available.
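Because auto_serialize_data defaults to False for GenericModel, the payload you pass to predict() must already be JSON serializable. The helper below is hypothetical (not part of ads); it only illustrates the check you can run on your data before calling the endpoint:

```python
import json

def is_json_serializable(data) -> bool:
    """Return True when `data` would survive json.dumps, i.e. it meets the
    precondition predict() places on payloads when auto_serialize_data=False."""
    try:
        json.dumps(data)
        return True
    except (TypeError, ValueError):
        return False

print(is_json_serializable({"input": [[1.0, 2.0], [3.0, 4.0]]}))  # True
print(is_json_serializable({1, 2, 3}))  # False: sets are not JSON types
```

For payloads that fail this check (numpy arrays, PIL images, and so on), either set auto_serialize_data=True or convert the data yourself first.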
- prepare(inference_conda_env: str | None = None, inference_python_version: str | None = None, training_conda_env: str | None = None, training_python_version: str | None = None, model_file_name: str | None = None, as_onnx: bool = False, initial_types: List[Tuple] | None = None, force_overwrite: bool = False, namespace: str = 'id19sfcrra6z', use_case_type: str | None = None, X_sample: list | tuple | DataFrame | Series | ndarray | None = None, y_sample: list | tuple | DataFrame | Series | ndarray | None = None, training_script_path: str | None = None, training_id: str | None = None, ignore_pending_changes: bool = True, max_col_num: int = 2000, ignore_conda_error: bool = False, score_py_uri: str | None = None, **kwargs: Dict) GenericModel [source]¶
Prepare and save the score.py, serialized model and runtime.yaml file.
- Parameters:
inference_conda_env ((str, optional). Defaults to None.) – Can be either slug or object storage path of the conda pack. You can only pass in slugs if the conda pack is a service pack.
inference_python_version ((str, optional). Defaults to None.) – Python version which will be used in deployment.
training_conda_env ((str, optional). Defaults to None.) – Can be either slug or object storage path of the conda pack. You can only pass in slugs if the conda pack is a service pack. If training_conda_env is not provided, it defaults to the value of inference_conda_env.
training_python_version ((str, optional). Defaults to None.) – Python version used during training.
model_file_name ((str, optional). Defaults to None.) – Name of the serialized model. Will be auto generated if not provided.
as_onnx ((bool, optional). Defaults to False.) – Whether to serialize as onnx model.
initial_types ((list[Tuple], optional).) – Defaults to None. Only used for SklearnModel, LightGBMModel and XGBoostModel. Each element is a tuple of a variable name and a type. Check this link http://onnx.ai/sklearn-onnx/api_summary.html#id2 for more explanation and examples for initial_types.
force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files.
namespace ((str, optional).) – Namespace of region. This is used for identifying which region the service pack is from when you pass a slug to inference_conda_env and training_conda_env.
use_case_type (str) – The use case type of the model. Use it through the UseCaseType class or a string provided in UseCaseType. For example, use_case_type=UseCaseType.BINARY_CLASSIFICATION or use_case_type=”binary_classification”. Check the UseCaseType class to see all supported types.
X_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of input data that will be used to generate input schema.
y_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of output data that will be used to generate output schema.
training_script_path (str. Defaults to None.) – Training script path.
training_id ((str, optional). Defaults to value from environment variables.) – The training OCID for model. Can be notebook session or job OCID.
ignore_pending_changes (bool. Defaults to True.) – Whether to ignore the pending changes in git.
max_col_num ((int, optional). Defaults to utils.DATA_SCHEMA_MAX_COL_NUM.) – Do not generate the input schema if the input has more than this number of features (columns).
ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore error when collecting conda information.
score_py_uri ((str, optional). Defaults to None.) – The uri of the customized score.py, which can be a local path or an OCI object storage URI. When this attribute is provided, the score.py will not be auto generated, and the provided score.py will be added into artifact_dir.
kwargs –
- impute_values: (dict, optional).
The dictionary where the key is the column index (or column name, for a pandas DataFrame) and the value is the impute value for the corresponding column.
- Raises:
FileExistsError – If files already exist but force_overwrite is False.
ValueError – If inference_python_version is not provided and also cannot be found through the manifest file.
- Returns:
An instance of GenericModel class.
- Return type:
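A sketch of a typical prepare() call, assuming ads is installed. The conda pack slug, file name, and samples are illustrative placeholders rather than values mandated by the API, and the call is wrapped in a function so the snippet runs without an ads environment:

```python
def prepare_model(model, X_sample, y_sample):
    # model: an instance of GenericModel (or a framework subclass).
    # X_sample / y_sample: small slices of training data used to
    # generate the input and output schemas in the model metadata.
    return model.prepare(
        inference_conda_env="generalml_p38_cpu_v1",  # service pack slug (placeholder)
        inference_python_version="3.8",
        model_file_name="model.pkl",                 # auto generated if omitted
        force_overwrite=True,                        # overwrite existing artifacts
        X_sample=X_sample,
        y_sample=y_sample,
    )
```

After prepare(), the artifact directory holds score.py, runtime.yaml, and the serialized model, ready for save().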
- prepare_save_deploy(inference_conda_env: str | None = None, inference_python_version: str | None = None, training_conda_env: str | None = None, training_python_version: str | None = None, model_file_name: str | None = None, as_onnx: bool = False, initial_types: List[Tuple] | None = None, force_overwrite: bool = False, namespace: str = 'id19sfcrra6z', use_case_type: str | None = None, X_sample: list | tuple | DataFrame | Series | ndarray | None = None, y_sample: list | tuple | DataFrame | Series | ndarray | None = None, training_script_path: str | None = None, training_id: str | None = None, ignore_pending_changes: bool = True, max_col_num: int = 2000, ignore_conda_error: bool = False, model_display_name: str | None = None, model_description: str | None = None, model_freeform_tags: dict | None = None, model_defined_tags: dict | None = None, ignore_introspection: bool | None = False, wait_for_completion: bool | None = True, deployment_display_name: str | None = None, deployment_description: str | None = None, deployment_instance_shape: str | None = None, deployment_instance_subnet_id: str | None = None, deployment_instance_private_endpoint_id: str | None = None, deployment_instance_count: int | None = None, deployment_bandwidth_mbps: int | None = None, deployment_log_group_id: str | None = None, deployment_access_log_id: str | None = None, deployment_predict_log_id: str | None = None, deployment_memory_in_gbs: float | None = None, deployment_ocpus: float | None = None, deployment_image: str | None = None, bucket_uri: str | None = None, overwrite_existing_artifact: bool | None = True, remove_existing_artifact: bool | None = True, model_version_set: str | ModelVersionSet | None = None, version_label: str | None = None, model_by_reference: bool | None = False, **kwargs: Dict) ModelDeployment [source]¶
Shortcut for prepare, save and deploy steps.
- Parameters:
inference_conda_env ((str, optional). Defaults to None.) – Can be either slug or object storage path of the conda pack. You can only pass in slugs if the conda pack is a service pack.
inference_python_version ((str, optional). Defaults to None.) – Python version which will be used in deployment.
training_conda_env ((str, optional). Defaults to None.) – Can be either slug or object storage path of the conda pack. You can only pass in slugs if the conda pack is a service pack. If training_conda_env is not provided, it defaults to the value of inference_conda_env.
training_python_version ((str, optional). Defaults to None.) – Python version used during training.
model_file_name ((str, optional). Defaults to None.) – Name of the serialized model.
as_onnx ((bool, optional). Defaults to False.) – Whether to serialize as onnx model.
initial_types ((list[Tuple], optional).) – Defaults to None. Only used for SklearnModel, LightGBMModel and XGBoostModel. Each element is a tuple of a variable name and a type. Check this link http://onnx.ai/sklearn-onnx/api_summary.html#id2 for more explanation and examples for initial_types.
force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files.
namespace ((str, optional).) – Namespace of region. This is used for identifying which region the service pack is from when you pass a slug to inference_conda_env and training_conda_env.
use_case_type (str) – The use case type of the model. Use it through the UseCaseType class or a string provided in UseCaseType. For example, use_case_type=UseCaseType.BINARY_CLASSIFICATION or use_case_type=”binary_classification”. Check the UseCaseType class to see all supported types.
X_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of input data that will be used to generate input schema.
y_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of output data that will be used to generate output schema.
training_script_path (str. Defaults to None.) – Training script path.
training_id ((str, optional). Defaults to value from environment variables.) – The training OCID for model. Can be notebook session or job OCID.
ignore_pending_changes (bool. Defaults to True.) – Whether to ignore the pending changes in git.
max_col_num ((int, optional). Defaults to utils.DATA_SCHEMA_MAX_COL_NUM.) – Do not generate the input schema if the input has more than this number of features (columns).
ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore error when collecting conda information.
model_display_name ((str, optional). Defaults to None.) – The name of the model. If a model_display_name is not provided in kwargs, a randomly generated, easy-to-remember name with a timestamp will be generated, e.g. ‘strange-spider-2022-08-17-23:55.02’.
model_description ((str, optional). Defaults to None.) – The description of the model.
model_freeform_tags (Dict(str, str), Defaults to None.) – Freeform tags for the model.
model_defined_tags ((Dict(str, dict(str, object)), optional). Defaults to None.) – Defined tags for the model.
ignore_introspection ((bool, optional). Defaults to False.) – Determines whether to ignore the result of model introspection. If set to True, the save will ignore all model introspection errors.
wait_for_completion ((bool, optional). Defaults to True.) – Flag set for whether to wait for deployment to complete before proceeding.
deployment_display_name ((str, optional). Defaults to None.) – The name of the model deployment. If a deployment_display_name is not provided in kwargs, a randomly generated, easy-to-remember name with a timestamp will be generated, e.g. ‘strange-spider-2022-08-17-23:55.02’.
deployment_description ((str, optional). Defaults to None.) – The description of the model deployment.
deployment_instance_shape ((str, optional). Defaults to VM.Standard2.1.) – The shape of the instance used for deployment.
deployment_instance_subnet_id ((str, optional). Defaults to None.) – The subnet id of the instance used for deployment.
deployment_instance_private_endpoint_id ((str, optional). Defaults to None.) – The private endpoint id of the instance used for deployment.
deployment_instance_count ((int, optional). Defaults to 1.) – The number of instances used for deployment.
deployment_bandwidth_mbps ((int, optional). Defaults to 10.) – The bandwidth limit on the load balancer in Mbps.
deployment_log_group_id ((str, optional). Defaults to None.) – The oci logging group id. The access log and predict log share the same log group.
deployment_access_log_id ((str, optional). Defaults to None.) – The access log OCID for the access logs. https://docs.oracle.com/en-us/iaas/data-science/using/model_dep_using_logging.htm
deployment_predict_log_id ((str, optional). Defaults to None.) – The predict log OCID for the predict logs. https://docs.oracle.com/en-us/iaas/data-science/using/model_dep_using_logging.htm
deployment_memory_in_gbs ((float, optional). Defaults to None.) – Specifies the size of the memory of the model deployment instance in GBs.
deployment_ocpus ((float, optional). Defaults to None.) – Specifies the ocpus count of the model deployment instance.
deployment_image ((str, optional). Defaults to None.) – The OCIR path of docker container image. Required for deploying model on container runtime.
bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for downloading large artifacts whose size is greater than 2 GB. Example: oci://<bucket_name>@<namespace>/prefix/.
overwrite_existing_artifact ((bool, optional). Defaults to True.) – Overwrite target bucket artifact if exists.
remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to the object storage bucket need to be removed or not.
model_version_set ((Union[str, ModelVersionSet], optional). Defaults to None.) – The Model version set OCID, or name, or ModelVersionSet instance.
version_label ((str, optional). Defaults to None.) – The model version label.
model_by_reference ((bool, optional)) – Whether the model artifact is made available to the Model Store by reference.
kwargs –
- impute_values: (dict, optional).
The dictionary where the key is the column index (or column name, for a pandas DataFrame) and the value is the impute value for the corresponding column.
- project_id: (str, optional).
Project OCID. If not specified, the value will be taken either from the environment variables or model properties.
- compartment_id(str, optional).
Compartment OCID. If not specified, the value will be taken either from the environment variables or model properties.
- image_digest: (str, optional). Defaults to None.
The digest of docker container image.
- cmd: (List, optional). Defaults to empty.
The command line arguments for running docker container image.
- entrypoint: (List, optional). Defaults to empty.
The entrypoint for running docker container image.
- server_port: (int, optional). Defaults to 8080.
The server port for docker container image.
- health_check_port: (int, optional). Defaults to 8080.
The health check port for docker container image.
- deployment_mode: (str, optional). Defaults to HTTPS_ONLY.
The deployment mode. Allowed values are: HTTPS_ONLY and STREAM_ONLY.
- input_stream_ids: (List, optional). Defaults to empty.
The input stream ids. Required for STREAM_ONLY mode.
- output_stream_ids: (List, optional). Defaults to empty.
The output stream ids. Required for STREAM_ONLY mode.
- environment_variables: (Dict, optional). Defaults to empty.
The environment variables for model deployment.
- timeout: (int, optional). Defaults to 10 seconds.
The connection timeout in seconds for the client.
- max_wait_time(int, optional). Defaults to 1200 seconds.
Maximum amount of time to wait in seconds. Negative implies infinite wait time.
- poll_interval(int, optional). Defaults to 10 seconds.
Poll interval in seconds.
- freeform_tags: (Dict[str, str], optional). Defaults to None.
Freeform tags of the model deployment.
- defined_tags: (Dict[str, dict[str, object]], optional). Defaults to None.
Defined tags of the model deployment.
- region: (str, optional). Defaults to None.
The destination Object Storage bucket region. By default the value will be extracted from the OCI_REGION_METADATA environment variable.
Also can be any keyword argument for initializing the ads.model.deployment.ModelDeploymentProperties. See ads.model.deployment.ModelDeploymentProperties() for details.
- Returns:
The ModelDeployment instance.
- Return type:
- Raises:
FileExistsError – If files already exist but force_overwrite is False.
ValueError – If inference_python_version is not provided and also cannot be found through the manifest file.
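The whole prepare/save/deploy pipeline collapses into this one call. A minimal sketch, assuming ads is installed and OCI auth, networking, and logging are already set up; the shape, slug, and display names are placeholders, and the call is wrapped so nothing contacts OCI at import time:

```python
def ship(model):
    # model: a prepared-or-unprepared GenericModel instance; this one call
    # runs prepare, save, and deploy, returning a ModelDeployment.
    return model.prepare_save_deploy(
        inference_conda_env="generalml_p38_cpu_v1",        # placeholder slug
        model_display_name="my-model",
        deployment_display_name="my-model-deployment",
        deployment_instance_shape="VM.Standard.E4.Flex",   # placeholder shape
        deployment_ocpus=1,
        deployment_memory_in_gbs=16,
        deployment_instance_count=1,
        wait_for_completion=True,  # block until the deployment is ACTIVE
    )
```

Once ship(model) returns, model.predict(...) targets the new endpoint.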
- reload() GenericModel [source]¶
Reloads the model artifact files: score.py and the runtime.yaml.
- Returns:
An instance of GenericModel class.
- Return type:
- reload_runtime_info() None [source]¶
Reloads the model artifact file: runtime.yaml.
- Returns:
Nothing.
- Return type:
None
- restart_deployment(max_wait_time: int = 1200, poll_interval: int = 10) ModelDeployment [source]¶
Restarts the current deployment.
- Parameters:
max_wait_time ((int, optional). Defaults to 1200 seconds.) – Maximum amount of time to wait for activate or deactivate in seconds. The total amount of time to wait for the restart is twice this value. Negative implies infinite wait time.
poll_interval ((int, optional). Defaults to 10 seconds.) – Poll interval in seconds.
- Returns:
The ModelDeployment instance.
- Return type:
- save(bucket_uri: str | None = None, defined_tags: dict | None = None, description: str | None = None, display_name: str | None = None, featurestore_dataset=None, freeform_tags: dict | None = None, ignore_introspection: bool | None = False, model_version_set: str | ModelVersionSet | None = None, overwrite_existing_artifact: bool | None = True, parallel_process_count: int = 9, remove_existing_artifact: bool | None = True, reload: bool | None = True, version_label: str | None = None, model_by_reference: bool | None = False, **kwargs) str [source]¶
Saves model artifacts to the model catalog.
- Parameters:
display_name ((str, optional). Defaults to None.) – The name of the model. If a display_name is not provided in kwargs, a randomly generated, easy-to-remember name with a timestamp will be generated, e.g. ‘strange-spider-2022-08-17-23:55.02’.
description ((str, optional). Defaults to None.) – The description of the model.
freeform_tags (Dict(str, str), Defaults to None.) – Freeform tags for the model.
defined_tags ((Dict(str, dict(str, object)), optional). Defaults to None.) – Defined tags for the model.
ignore_introspection ((bool, optional). Defaults to False.) – Determines whether to ignore the result of model introspection. If set to True, the save will ignore all model introspection errors.
bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for uploading large artifacts whose size is greater than 2 GB. Example: oci://<bucket_name>@<namespace>/prefix/.
overwrite_existing_artifact ((bool, optional). Defaults to True.) – Overwrite target bucket artifact if exists.
remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to object storage bucket need to be removed or not.
model_version_set ((Union[str, ModelVersionSet], optional). Defaults to None.) – The model version set OCID, or model version set name, or ModelVersionSet instance.
version_label ((str, optional). Defaults to None.) – The model version label.
featurestore_dataset ((Dataset, optional).) – The feature store dataset.
parallel_process_count ((int, optional)) – The number of worker processes to use in parallel for uploading individual parts of a multipart upload.
reload ((bool, optional)) – Whether to reload to check if load_model() works in score.py. Defaults to True.
model_by_reference ((bool, optional)) – Whether the model artifact is made available to the Model Store by reference.
kwargs –
- project_id: (str, optional).
Project OCID. If not specified, the value will be taken either from the environment variables or model properties.
- compartment_id(str, optional).
Compartment OCID. If not specified, the value will be taken either from the environment variables or model properties.
- region: (str, optional). Defaults to None.
The destination Object Storage bucket region. By default the value will be extracted from the OCI_REGION_METADATA environment variable.
- timeout: (int, optional). Defaults to 10 seconds.
The connection timeout in seconds for the client.
Also can be any attribute that oci.data_science.models.Model accepts.
- Raises:
RuntimeInfoInconsistencyError – When .runtime_info is not synced with the runtime.yaml file.
- Returns:
The model id.
- Return type:
Examples
Example for saving large model artifacts (>2GB):
>>> model.save(
...     bucket_uri="oci://my-bucket@my-tenancy/",
...     overwrite_existing_artifact=True,
...     remove_existing_artifact=True,
...     parallel_process_count=9,
... )
- property schema_input¶
- property schema_output¶
- serialize_model(as_onnx: bool = False, initial_types: List[Tuple] | None = None, force_overwrite: bool = False, X_sample: any | None = None, **kwargs)[source]¶
Serialize and save model using ONNX or model specific method.
- Parameters:
as_onnx ((boolean, optional)) – If set as True, convert into ONNX model.
initial_types ((List[Tuple], optional)) – a python list. Each element is a tuple of a variable name and a data type.
force_overwrite ((boolean, optional)) – If set as True, overwrite serialized model if exists.
X_sample ((any, optional). Defaults to None.) – Contains model inputs such that model(X_sample) is a valid invocation of the model; used to validate the model input type.
- Returns:
Nothing
- Return type:
None
- set_model_input_serializer(model_input_serializer: str | SERDE)[source]¶
Registers serializer used for serializing data passed in verify/predict.
Examples
>>> generic_model.set_model_input_serializer(GenericModel.model_input_serializer_type.CLOUDPICKLE)
>>> # Register serializer by passing its name.
>>> generic_model.set_model_input_serializer("cloudpickle")
>>> # Example of creating a customized model input serializer and registering it.
>>> from ads.model import SERDE
>>> from ads.model.generic_model import GenericModel
>>> class MySERDE(SERDE):
...     def __init__(self):
...         super().__init__()
...     def serialize(self, data):
...         serialized_data = 1
...         return serialized_data
...     def deserialize(self, data):
...         deserialized_data = 2
...         return deserialized_data
>>> class Toy:
...     def predict(self, x):
...         return x ** 2
>>> generic_model = GenericModel(
...     estimator=Toy(),
...     artifact_dir=tempfile.mkdtemp(),
...     model_input_serializer=MySERDE()
... )
>>> # Or register the serializer after creating the model instance.
>>> generic_model.set_model_input_serializer(MySERDE())
- Parameters:
model_input_serializer ((str, or ads.model.SERDE)) – name of the serializer, or instance of SERDE.
- set_model_save_serializer(model_save_serializer: str | SERDE)[source]¶
Registers serializer used for saving model.
Examples
>>> generic_model.set_model_save_serializer(GenericModel.model_save_serializer_type.CLOUDPICKLE)
>>> # Register serializer by passing its name.
>>> generic_model.set_model_save_serializer("cloudpickle")
>>> # Example of creating a customized model save serializer and registering it.
>>> from ads.model import SERDE
>>> from ads.model.generic_model import GenericModel
>>> class MySERDE(SERDE):
...     def __init__(self):
...         super().__init__()
...     def serialize(self, data):
...         serialized_data = 1
...         return serialized_data
...     def deserialize(self, data):
...         deserialized_data = 2
...         return deserialized_data
>>> class Toy:
...     def predict(self, x):
...         return x ** 2
>>> generic_model = GenericModel(
...     estimator=Toy(),
...     artifact_dir=tempfile.mkdtemp(),
...     model_save_serializer=MySERDE()
... )
>>> # Or register the serializer after creating the model instance.
>>> generic_model.set_model_save_serializer(MySERDE())
- Parameters:
model_save_serializer ((ads.model.SERDE or str)) – name of the serializer or instance of SERDE.
- summary_status() DataFrame [source]¶
A summary table of the current status.
- Returns:
The summary table of the current status.
- Return type:
pd.DataFrame
- update(**kwargs) GenericModel [source]¶
Updates model metadata in the Model Catalog. Updates only metadata information. The model artifacts are immutable and cannot be updated.
- Parameters:
kwargs –
- display_name: (str, optional). Defaults to None.
The name of the model.
- description: (str, optional). Defaults to None.
The description of the model.
- freeform_tags: Dict(str, str). Defaults to None.
Freeform tags for the model.
- defined_tags: (Dict(str, dict(str, object)), optional). Defaults to None.
Defined tags for the model.
- version_label: (str, optional). Defaults to None.
The model version label.
Additional kwargs arguments. Can be any attribute that oci.data_science.models.Model accepts.
- Returns:
An instance of GenericModel (self).
- Return type:
- Raises:
ValueError – If the model has not been saved to the Model Catalog.
- classmethod update_deployment(model_deployment_id: str | None = None, properties: ModelDeploymentProperties | dict | None = None, wait_for_completion: bool = True, max_wait_time: int = 1200, poll_interval: int = 10, **kwargs) ModelDeployment [source]¶
Updates a model deployment.
You can update model_deployment_configuration_details and change instance_shape and model_id when the model deployment is in the ACTIVE lifecycle state. The bandwidth_mbps or instance_count can only be updated while the model deployment is in the INACTIVE state. Changes to the bandwidth_mbps or instance_count will take effect the next time the ActivateModelDeployment action is invoked on the model deployment resource.
Examples
>>> # Update access log id, freeform tags and description for the model deployment
>>> model.update_deployment(
...     access_log={"log_id": <log_ocid>},
...     description="Description for Custom Model",
...     freeform_tags={"key": "value"},
... )
- Parameters:
model_deployment_id (str.) – The model deployment OCID. Defaults to None. If the method is called at the instance level, self.model_deployment.model_deployment_id will be used.
properties (ModelDeploymentProperties or dict) – The properties for updating the deployment.
wait_for_completion (bool) – Flag set for whether to wait for deployment to complete before proceeding. Defaults to True.
max_wait_time (int) – Maximum amount of time to wait in seconds (Defaults to 1200). Negative implies infinite wait time.
poll_interval (int) – Poll interval in seconds (Defaults to 10).
kwargs –
- auth: (Dict, optional). Defaults to None.
The default authentication is set using the ads.set_auth API. If you need to override the default, use ads.common.auth.api_keys or ads.common.auth.resource_principal to create an appropriate authentication signer and the kwargs required to instantiate an IdentityClient object.
- display_name: (str)
Model deployment display name
- description: (str)
Model deployment description
- freeform_tags: (dict)
Model deployment freeform tags
- defined_tags: (dict)
Model deployment defined tags
Additional kwargs arguments. Can be any attribute that ads.model.deployment.ModelDeploymentCondaRuntime, ads.model.deployment.ModelDeploymentContainerRuntime and ads.model.deployment.ModelDeploymentInfrastructure accepts.
- Returns:
An instance of ModelDeployment class.
- Return type:
ModelDeployment
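Because bandwidth_mbps and instance_count can only change while the deployment is INACTIVE, a typical resize deactivates first and reactivates afterwards. A minimal sketch, assuming an existing deployment; the OCID is a placeholder, and the in-place builder mutation may vary by ADS version:
>>> from ads.model.deployment import ModelDeployment
>>> deployment = ModelDeployment.from_id("<model_deployment_ocid>")
>>> deployment.deactivate()  # bandwidth/instance count can only change while INACTIVE
>>> deployment.infrastructure.with_bandwidth_mbps(20).with_replica(2)
>>> deployment.update()
>>> deployment.activate()    # the new bandwidth/replica count takes effect on activation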
- update_summary_action(detail: str, action: str)[source]¶
Update the actions needed from the user in the summary table.
- upload_artifact(uri: str, auth: Dict | None = None, force_overwrite: bool | None = False, parallel_process_count: int = 9) None [source]¶
Uploads model artifacts to the provided uri. The artifacts will be zipped before uploading.
- Parameters:
uri (str) –
The destination location for the model artifacts, which can be a local path or OCI object storage URI. Examples:
>>> upload_artifact(uri="/some/local/folder/")
>>> upload_artifact(uri="oci://bucket@namespace/prefix/")
auth ((Dict, optional). Defaults to None.) – The default authentication is set using the ads.set_auth API. If you need to override the default, use ads.common.auth.api_keys or ads.common.auth.resource_principal to create an appropriate authentication signer and the kwargs required to instantiate an IdentityClient object.
force_overwrite (bool) – Overwrite the target directory if it exists.
parallel_process_count ((int, optional)) – The number of worker processes to use in parallel for uploading individual parts of a multipart upload.
- verify(data: Any | None = None, reload_artifacts: bool = True, auto_serialize_data: bool = False, **kwargs) Dict[str, Any] [source]¶
Test if deployment works in local environment.
Examples
>>> uri = "https://github.com/pytorch/hub/raw/master/images/dog.jpg"
>>> prediction = model.verify(image=uri)['prediction']
>>> # examples on storage options
>>> prediction = model.verify(
...     image="oci://<bucket>@<tenancy>/myimage.png",
...     storage_options=ads.auth.default_signer()
... )['prediction']
- Parameters:
data (Any) – Data used to test if deployment works in local environment.
reload_artifacts (bool. Defaults to True.) – Whether to reload artifacts or not.
is_json_payload (bool) – Defaults to False. Indicates whether to send data with an application/json MIME type.
auto_serialize_data (bool) – Whether to auto serialize input data. Defaults to False for GenericModel, and True for other frameworks. data must be JSON serializable if auto_serialize_data=False. If auto_serialize_data is set to True, data will be serialized before being sent to the model deployment endpoint.
kwargs –
- content_type: str
Used to indicate the media type of the resource.
- image: PIL.Image object or uri for the image.
A valid string path for an image file can be a local path, http(s), oci, s3, or gs.
- storage_options: dict
Passed to fsspec.open for a particular storage connection. Please see fsspec (https://filesystem-spec.readthedocs.io/en/latest/api.html#fsspec.open) for more details.
- Returns:
A dictionary which contains prediction results.
- Return type:
Dict
- class ads.model.HuggingFacePipelineModel(estimator: Callable, artifact_dir: str | None = None, properties: ModelProperties | None = None, auth: Dict = None, model_save_serializer: SERDE | None = 'huggingface', model_input_serializer: SERDE | None = 'cloudpickle', **kwargs)[source]¶
Bases:
FrameworkSpecificModel
HuggingFacePipelineModel class for estimators from HuggingFace framework.
- auth¶
Default authentication is set using the ads.set_auth API. To override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create an authentication signer to instantiate an IdentityClient object.
- Type:
Dict
- estimator¶
A trained HuggingFace Pipeline using transformers.
- Type:
Callable
- metadata_custom¶
The model custom metadata.
- Type:
- metadata_provenance¶
The model provenance metadata.
- Type:
- metadata_taxonomy¶
The model taxonomy metadata.
- Type:
- model_artifact¶
This is built by calling prepare.
- Type:
- model_deployment¶
A ModelDeployment instance.
- Type:
- properties¶
ModelProperties object required to save and deploy model.
- Type:
- runtime_info¶
A RuntimeInfo instance.
- Type:
- serialize¶
Whether to serialize the model to pkl file by default. If False, you need to serialize the model manually, save it under artifact_dir and update the score.py manually.
- Type:
- delete_deployment(...)¶
Deletes the current model deployment.
- deploy(..., \*\*kwargs)¶
Deploys a model.
- from_model_artifact(uri, model_file_name, artifact_dir, ..., \*\*kwargs)¶
Loads model from the specified folder, or zip/tar archive.
- from_model_catalog(model_id, model_file_name, artifact_dir, ..., \*\*kwargs)¶
Loads model from model catalog.
- introspect(...)¶
Runs model introspection.
- predict(data, ...)¶
Returns prediction of input data run against the model deployment endpoint.
- prepare(..., \*\*kwargs)¶
Prepare and save the score.py, serialized model and runtime.yaml file.
- reload(...)¶
Reloads the model artifact files: score.py and the runtime.yaml.
- save(..., \*\*kwargs)¶
Saves model artifacts to the model catalog.
- summary_status(...)¶
Gets a summary table of the current status.
- verify(data, ...)¶
Tests if deployment works in local environment.
Examples
>>> # Image Classification
>>> from transformers import pipeline
>>> import tempfile
>>> import PIL.Image
>>> import ads
>>> import requests
>>> import cloudpickle
>>> ## Download image data
>>> image_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
>>> image = PIL.Image.open(requests.get(image_url, stream=True).raw)
>>> image_bytes = cloudpickle.dumps(image)  # convert image to bytes
>>> ## Download a pretrained model
>>> vision_classifier = pipeline(model="google/vit-base-patch16-224")
>>> preds = vision_classifier(images=image)
>>> ## Initiate a HuggingFacePipelineModel instance
>>> vision_model = HuggingFacePipelineModel(vision_classifier, artifact_dir=tempfile.mkdtemp())
>>> ## Prepare
>>> vision_model.prepare(inference_conda_env="pytorch110_p38_cpu_v1", force_overwrite=True)
>>> ## Verify
>>> vision_model.verify(image)
>>> vision_model.verify(image_bytes)
>>> ## Save
>>> vision_model.save()
>>> ## Deploy
>>> log_group_id = "<log_group_id>"
>>> log_id = "<log_id>"
>>> vision_model.deploy(
...     deployment_bandwidth_mbps=1000,
...     wait_for_completion=False,
...     deployment_log_group_id=log_group_id,
...     deployment_access_log_id=log_id,
...     deployment_predict_log_id=log_id,
... )
>>> ## Predict from endpoint
>>> vision_model.predict(image)
>>> vision_model.predict(image_bytes)
>>> ## Invoke the model
>>> auth = ads.common.auth.default_signer()['signer']
>>> endpoint = vision_model.model_deployment.url + "/predict"
>>> headers = {"Content-Type": "application/octet-stream"}
>>> requests.post(endpoint, data=image_bytes, auth=auth, headers=headers).json()
Examples
>>> # Image Segmentation
>>> from transformers import pipeline
>>> import tempfile
>>> import PIL.Image
>>> import ads
>>> import requests
>>> import cloudpickle
>>> ## Download image data
>>> image_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
>>> image = PIL.Image.open(requests.get(image_url, stream=True).raw)
>>> image_bytes = cloudpickle.dumps(image)  # convert image to bytes
>>> ## Download pretrained model
>>> segmenter = pipeline(task="image-segmentation")
>>> preds = segmenter(image)
>>> ## Initiate a HuggingFacePipelineModel instance
>>> segmentation_model = HuggingFacePipelineModel(segmenter, artifact_dir=tempfile.mkdtemp())
>>> ## Prepare
>>> conda = "oci://bucket@namespace/path/to/conda/pack"
>>> python_version = "3.8"
>>> segmentation_model.prepare(inference_conda_env=conda, inference_python_version=python_version, force_overwrite=True)
>>> ## Verify
>>> segmentation_model.verify(data=image)
>>> segmentation_model.verify(data=image_bytes)
>>> ## Save
>>> segmentation_model.save()
>>> ## Deploy
>>> log_group_id = "<log_group_id>"
>>> log_id = "<log_id>"
>>> segmentation_model.deploy(
...     deployment_bandwidth_mbps=1000,
...     wait_for_completion=False,
...     deployment_log_group_id=log_group_id,
...     deployment_access_log_id=log_id,
...     deployment_predict_log_id=log_id,
... )
>>> ## Predict from endpoint
>>> segmentation_model.predict(image)
>>> segmentation_model.predict(image_bytes)
>>> ## Invoke the model
>>> auth = ads.common.auth.default_signer()['signer']
>>> endpoint = segmentation_model.model_deployment.url + "/predict"
>>> headers = {"Content-Type": "application/octet-stream"}
>>> requests.post(endpoint, data=image_bytes, auth=auth, headers=headers).json()
Examples
>>> # Zero Shot Image Classification
>>> from transformers import pipeline
>>> import tempfile
>>> import PIL.Image
>>> import ads
>>> import requests
>>> import cloudpickle
>>> ## Download the image data
>>> image_url = "https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png"
>>> image = PIL.Image.open(requests.get(image_url, stream=True).raw)
>>> image_bytes = cloudpickle.dumps(image)
>>> ## Download a pretrained model
>>> classifier = pipeline(model="openai/clip-vit-large-patch14")
>>> classifier(
...     images=image,
...     candidate_labels=["animals", "humans", "landscape"],
... )
>>> ## Initiate a HuggingFacePipelineModel instance
>>> zero_shot_image_classification_model = HuggingFacePipelineModel(classifier, artifact_dir=tempfile.mkdtemp())
>>> conda = "oci://bucket@namespace/path/to/conda/pack"
>>> python_version = "3.8"
>>> ## Prepare
>>> zero_shot_image_classification_model.prepare(inference_conda_env=conda, inference_python_version=python_version, force_overwrite=True)
>>> data = {"images": image, "candidate_labels": ["animals", "humans", "landscape"]}
>>> body = cloudpickle.dumps(data)  # convert the payload to bytes
>>> ## Verify
>>> zero_shot_image_classification_model.verify(data=data)
>>> zero_shot_image_classification_model.verify(data=body)
>>> ## Save
>>> zero_shot_image_classification_model.save()
>>> ## Deploy
>>> log_group_id = "<log_group_id>"
>>> log_id = "<log_id>"
>>> zero_shot_image_classification_model.deploy(
...     deployment_bandwidth_mbps=1000,
...     wait_for_completion=False,
...     deployment_log_group_id=log_group_id,
...     deployment_access_log_id=log_id,
...     deployment_predict_log_id=log_id,
... )
>>> ## Predict from endpoint
>>> zero_shot_image_classification_model.predict(image)
>>> zero_shot_image_classification_model.predict(body)
>>> ## Invoke the model
>>> auth = ads.common.auth.default_signer()['signer']
>>> endpoint = zero_shot_image_classification_model.model_deployment.url + "/predict"
>>> headers = {"Content-Type": "application/octet-stream"}
>>> requests.post(endpoint, data=body, auth=auth, headers=headers).json()
Initializes a HuggingFacePipelineModel instance.
- Parameters:
estimator (Callable) – HuggingFacePipeline Model
artifact_dir (str) – Directory for generated artifact.
properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model. For more details, check https://accelerated-data-science.readthedocs.io/en/latest/ads.model.html#module-ads.model.model_properties.
auth ((Dict, optional). Defaults to None.) – The default authentication is set using the ads.set_auth API. If you need to override the default, use ads.common.auth.api_keys or ads.common.auth.resource_principal to create an appropriate authentication signer and the kwargs required to instantiate an IdentityClient object.
model_save_serializer ((SERDE or str, optional). Defaults to "huggingface".) – Instance of ads.model.SERDE. Used for serialize/deserialize model.
model_input_serializer ((SERDE, optional). Defaults to "cloudpickle".) – Instance of ads.model.SERDE. Used for serialize/deserialize data.
- Returns:
HuggingFacePipelineModel instance.
- Return type:
HuggingFacePipelineModel
Examples
>>> from transformers import pipeline
>>> import tempfile
>>> import PIL.Image
>>> import ads
>>> import requests
>>> import cloudpickle
>>> ## download the image
>>> image_url = "https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png"
>>> image = PIL.Image.open(requests.get(image_url, stream=True).raw)
>>> image_bytes = cloudpickle.dumps(image)
>>> ## download the pretrained model
>>> classifier = pipeline(model="openai/clip-vit-large-patch14")
>>> classifier(
...     images=image,
...     candidate_labels=["animals", "humans", "landscape"],
... )
>>> ## Initiate a HuggingFacePipelineModel instance
>>> zero_shot_image_classification_model = HuggingFacePipelineModel(classifier, artifact_dir=tempfile.mkdtemp())
>>> ## Prepare a model artifact
>>> conda = "oci://bucket@namespace/path/to/conda/pack"
>>> python_version = "3.8"
>>> zero_shot_image_classification_model.prepare(inference_conda_env=conda, inference_python_version=python_version, force_overwrite=True)
>>> ## Test data
>>> data = {"images": image, "candidate_labels": ["animals", "humans", "landscape"]}
>>> body = cloudpickle.dumps(data)  # convert the payload to bytes
>>> ## Verify
>>> zero_shot_image_classification_model.verify(data=data)
>>> zero_shot_image_classification_model.verify(data=body)
>>> ## Save
>>> zero_shot_image_classification_model.save()
>>> ## Deploy
>>> log_group_id = "<log_group_id>"
>>> log_id = "<log_id>"
>>> zero_shot_image_classification_model.deploy(
...     deployment_bandwidth_mbps=100,
...     wait_for_completion=False,
...     deployment_log_group_id=log_group_id,
...     deployment_access_log_id=log_id,
...     deployment_predict_log_id=log_id,
... )
>>> zero_shot_image_classification_model.predict(image)
>>> zero_shot_image_classification_model.predict(body)
>>> ## Invoke the model by sending bytes
>>> auth = ads.common.auth.default_signer()['signer']
>>> endpoint = zero_shot_image_classification_model.model_deployment.url + "/predict"
>>> headers = {"Content-Type": "application/octet-stream"}
>>> requests.post(endpoint, data=body, auth=auth, headers=headers).json()
- model_save_serializer_type¶
alias of
HuggingFaceSerializerType
- serialize_model(as_onnx: bool = False, force_overwrite: bool = False, X_sample: Dict | str | List | Image | None = None, **kwargs) None [source]¶
Serialize and save HuggingFace model using model specific method.
- Parameters:
as_onnx ((bool, optional). Defaults to False.) – If set as True, convert into ONNX model.
force_overwrite ((bool, optional). Defaults to False.) – If set as True, overwrite serialized model if exists.
X_sample (Union[Dict, str, List, PIL.Image.Image]. Defaults to None.) – A sample of input data that will be used to generate input schema and detect onnx_args.
- Returns:
Nothing.
- Return type:
None
- class ads.model.LightGBMModel(estimator: Callable, artifact_dir: str | None = None, properties: ModelProperties | None = None, auth: Dict | None = None, model_save_serializer: SERDE | None = None, model_input_serializer: SERDE | None = None, **kwargs)[source]¶
Bases:
FrameworkSpecificModel
LightGBMModel class for estimators from Lightgbm framework.
- auth¶
Default authentication is set using the ads.set_auth API. To override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create an authentication signer to instantiate an IdentityClient object.
- Type:
Dict
- estimator¶
A trained lightgbm estimator/model using Lightgbm.
- Type:
Callable
- metadata_custom¶
The model custom metadata.
- Type:
- metadata_provenance¶
The model provenance metadata.
- Type:
- metadata_taxonomy¶
The model taxonomy metadata.
- Type:
- model_artifact¶
This is built by calling prepare.
- Type:
- model_deployment¶
A ModelDeployment instance.
- Type:
- properties¶
ModelProperties object required to save and deploy model. For more details, check https://accelerated-data-science.readthedocs.io/en/latest/ads.model.html#module-ads.model.model_properties.
- Type:
- runtime_info¶
A RuntimeInfo instance.
- Type:
- serialize¶
Whether to serialize the model to pkl file by default. If False, you need to serialize the model manually, save it under artifact_dir and update the score.py manually.
- Type:
- delete_deployment(...)¶
Deletes the current model deployment.
- deploy(..., \*\*kwargs)¶
Deploys a model.
- from_model_artifact(uri, model_file_name, artifact_dir, ..., \*\*kwargs)¶
Loads model from the specified folder, or zip/tar archive.
- from_model_catalog(model_id, model_file_name, artifact_dir, ..., \*\*kwargs)¶
Loads model from model catalog.
- introspect(...)¶
Runs model introspection.
- predict(data, ...)¶
Returns prediction of input data run against the model deployment endpoint.
- prepare(..., \*\*kwargs)¶
Prepare and save the score.py, serialized model and runtime.yaml file.
- reload(...)¶
Reloads the model artifact files: score.py and the runtime.yaml.
- save(..., \*\*kwargs)¶
Saves model artifacts to the model catalog.
- summary_status(...)¶
Gets a summary table of the current status.
- verify(data, ...)¶
Tests if deployment works in local environment.
Examples
>>> import lightgbm as lgb
>>> import tempfile
>>> from sklearn.model_selection import train_test_split
>>> from sklearn.datasets import load_iris
>>> from ads.model.framework.lightgbm_model import LightGBMModel
>>> iris = load_iris()
>>> X, y = iris.data, iris.target
>>> X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)
>>> train = lgb.Dataset(X_train, label=y_train)
>>> param = {'objective': 'multiclass', 'num_class': 3}
>>> lightgbm_estimator = lgb.train(param, train)
>>> lightgbm_model = LightGBMModel(estimator=lightgbm_estimator,
...     artifact_dir=tempfile.mkdtemp())
>>> lightgbm_model.prepare(inference_conda_env="generalml_p37_cpu_v1", force_overwrite=True)
>>> lightgbm_model.reload()
>>> lightgbm_model.verify(X_test)
>>> lightgbm_model.save()
>>> model_deployment = lightgbm_model.deploy(wait_for_completion=False)
>>> lightgbm_model.predict(X_test)
Initializes a LightGBMModel instance. This class wraps the LightGBM model as an estimator. Its primary purpose is to hold the trained model and handle serialization.
- Parameters:
estimator – Any model object generated by the LightGBM framework.
artifact_dir (str) – Directory for generated artifact.
properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.
auth ((Dict, optional). Defaults to None.) – The default authentication is set using the ads.set_auth API. If you need to override the default, use ads.common.auth.api_keys or ads.common.auth.resource_principal to create an appropriate authentication signer and the kwargs required to instantiate an IdentityClient object.
model_save_serializer ((SERDE or str, optional). Defaults to None.) – Instance of ads.model.SERDE. Used for serialize/deserialize model.
model_input_serializer ((SERDE, optional). Defaults to None.) – Instance of ads.model.SERDE. Used for serialize/deserialize data.
- Returns:
LightGBMModel instance.
- Return type:
LightGBMModel
- Raises:
TypeError – If the input model is not a LightGBM model or is not supported for serialization.
Examples
>>> import lightgbm as lgb
>>> import tempfile
>>> from sklearn.model_selection import train_test_split
>>> from sklearn.datasets import load_iris
>>> from ads.model.framework.lightgbm_model import LightGBMModel
>>> iris = load_iris()
>>> X, y = iris.data, iris.target
>>> X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)
>>> train = lgb.Dataset(X_train, label=y_train)
>>> param = {'objective': 'multiclass', 'num_class': 3}
>>> lightgbm_estimator = lgb.train(param, train)
>>> lightgbm_model = LightGBMModel(estimator=lightgbm_estimator, artifact_dir=tempfile.mkdtemp())
>>> lightgbm_model.prepare(inference_conda_env="generalml_p37_cpu_v1")
>>> lightgbm_model.verify(X_test)
>>> lightgbm_model.save()
>>> model_deployment = lightgbm_model.deploy()
>>> lightgbm_model.predict(X_test)
>>> lightgbm_model.delete_deployment()
- model_save_serializer_type¶
alias of
LightGBMModelSerializerType
- serialize_model(as_onnx: bool = False, initial_types: List[Tuple] | None = None, force_overwrite: bool = False, X_sample: Dict | str | List | Tuple | ndarray | Series | DataFrame | None = None, **kwargs: Dict)[source]¶
Serialize and save Lightgbm model.
- Parameters:
as_onnx ((boolean, optional). Defaults to False.) – If set as True, provide initial_types or X_sample to convert into ONNX.
initial_types ((List[Tuple], optional). Defaults to None.) – Each element is a tuple of a variable name and a type.
force_overwrite ((boolean, optional). Defaults to False.) – If set as True, overwrite serialized model if exists.
X_sample (Union[Dict, str, List, np.ndarray, pd.core.series.Series, pd.core.frame.DataFrame]. Defaults to None.) – Contains model inputs such that model(X_sample) is a valid invocation of the model. Used to generate initial_types.
- Returns:
Nothing.
- Return type:
None
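When converting to ONNX, either pass X_sample and let ADS derive initial_types, or spell initial_types out by hand. A brief sketch continuing the iris example above; the FloatTensorType import follows skl2onnx conventions and is an assumption about the installed conversion toolchain:
>>> # Let ADS infer initial_types from a sample of the test data
>>> lightgbm_model.serialize_model(as_onnx=True, X_sample=X_test, force_overwrite=True)
>>> # Or declare initial_types explicitly: one (name, type) tuple per model input
>>> from skl2onnx.common.data_types import FloatTensorType
>>> initial_types = [("input", FloatTensorType([None, 4]))]
>>> lightgbm_model.serialize_model(as_onnx=True, initial_types=initial_types, force_overwrite=True)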
- class ads.model.ModelDeployer(config: dict | None = None, ds_client: DataScienceClient | None = None)[source]¶
Bases:
object
ModelDeployer is the class responsible for deploying a ModelDeployment.
- ds_client¶
data science client
- Type:
DataScienceClient
- ds_composite_client¶
composite data science client
- Type:
DataScienceCompositeClient
- deploy(model_deployment_details, \*\*kwargs)[source]¶
Deploy the model specified by model_deployment_details.
- get_model_deployment(model_deployment_id: str)[source]¶
Get the ModelDeployment specified by model_deployment_id.
- get_model_deployment_state(model_deployment_id)[source]¶
Get the state of the current deployment specified by id.
- delete(model_deployment_id, \*\*kwargs)[source]¶
Removes the model deployment specified by the OCID or ModelDeployment object.
- list_deployments(status)[source]¶
Lists the model deployments associated with the current compartment and data science client.
Initializes model deployer.
- Parameters:
config (dict, optional) – ADS auth dictionary for OCI authentication. This can be generated by calling ads.common.auth.api_keys() or ads.common.auth.resource_principal(). If this is None, ads.common.default_signer(client_kwargs) will be used.
ds_client (oci.data_science.data_science_client.DataScienceClient) – The Oracle DataScience client.
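Putting the pieces together, a typical session constructs one deployer with the ambient authentication and reuses it for lookups and cleanup; the OCID below is a placeholder:
>>> import ads
>>> from ads.model.deployment import ModelDeployer
>>> ads.set_auth("resource_principal")
>>> deployer = ModelDeployer()  # config=None falls back to the default signer
>>> deployment = deployer.get_model_deployment("<model_deployment_ocid>")
>>> deployer.get_model_deployment_state("<model_deployment_ocid>")
>>> deployer.delete("<model_deployment_ocid>", wait_for_completion=True)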
- delete(model_deployment_id, wait_for_completion: bool = True, max_wait_time: int = 1200, poll_interval: int = 10) ModelDeployment [source]¶
Deletes the model deployment specified by OCID.
- Parameters:
model_deployment_id (str) – Model deployment OCID.
wait_for_completion (bool) – Wait for deletion to complete. Defaults to True.
max_wait_time (int) – Maximum amount of time to wait in seconds (Defaults to 1200). Negative implies infinite wait time.
poll_interval (int) – Poll interval in seconds (Defaults to 10).
- Return type:
The ModelDeployment instance that was deleted.
- deploy(properties: ModelDeploymentProperties | Dict | None = None, wait_for_completion: bool = True, max_wait_time: int = 1200, poll_interval: int = 10, **kwargs) ModelDeployment [source]¶
Deploys a model.
- Parameters:
properties (ModelDeploymentProperties or dict) – Properties to deploy the model. Properties can be None when kwargs are used for specifying properties.
wait_for_completion (bool) – Flag set for whether to wait for deployment to complete before proceeding. Optional, defaults to True.
max_wait_time (int) – Maximum amount of time to wait in seconds. Optional, defaults to 1200. Negative value implies infinite wait time.
poll_interval (int) – Poll interval in seconds. Optional, defaults to 10.
kwargs – Keyword arguments for initializing ModelDeploymentProperties. See ModelDeploymentProperties() for details.
- Returns:
A ModelDeployment instance.
- Return type:
ModelDeployment
- deploy_from_model_uri(model_uri: str, properties: ModelDeploymentProperties | Dict | None = None, wait_for_completion: bool = True, max_wait_time: int = 1200, poll_interval: int = 10, **kwargs) ModelDeployment [source]¶
Deploys a model.
- Parameters:
model_uri (str) – URI to model files; can be local or in cloud storage.
properties (ModelDeploymentProperties or dict) – Properties to deploy the model. Properties can be None when kwargs are used for specifying properties.
wait_for_completion (bool) – Flag set for whether to wait for deployment to complete before proceeding. Defaults to True
max_wait_time (int) – Maximum amount of time to wait in seconds (Defaults to 1200). Negative implies infinite wait time.
poll_interval (int) – Poll interval in seconds (Defaults to 10).
kwargs – Keyword arguments for initializing ModelDeploymentProperties
- Returns:
A ModelDeployment instance
- Return type:
ModelDeployment
- get_model_deployment(model_deployment_id: str) ModelDeployment [source]¶
Gets a ModelDeployment by OCID.
- Parameters:
model_deployment_id (str) – Model deployment OCID
- Returns:
A ModelDeployment instance
- Return type:
ModelDeployment
- get_model_deployment_state(model_deployment_id: str) State [source]¶
Gets the state of a deployment specified by OCID
- list_deployments(status=None, compartment_id=None, **kwargs) list [source]¶
Lists the model deployments associated with current compartment and data science client
- Parameters:
status (str) – Status of deployment. Defaults to None.
compartment_id (str) – Target compartment to list deployments from. Defaults to the compartment set in the environment variable “NB_SESSION_COMPARTMENT_OCID”. If “NB_SESSION_COMPARTMENT_OCID” is not set, the root compartment ID will be used. A ValueError will be raised if the root compartment ID cannot be determined.
kwargs – The values are passed to oci.data_science.DataScienceClient.list_model_deployments.
- Returns:
A list of ModelDeployment objects.
- Return type:
list
- Raises:
ValueError – If compartment_id is not specified and cannot be determined from the environment.
- show_deployments(status=None, compartment_id=None) DataFrame [source]¶
Returns the model deployments associated with the current compartment and data science client as a DataFrame that can be easily visualized.
- Parameters:
status (str) – Status of deployment. Defaults to None.
compartment_id (str) – Target compartment to list deployments from. Defaults to the compartment set in the environment variable “NB_SESSION_COMPARTMENT_OCID”. If “NB_SESSION_COMPARTMENT_OCID” is not set, the root compartment ID will be used. A ValueError will be raised if the root compartment ID cannot be determined.
- Returns:
pandas Dataframe containing information about the ModelDeployments
- Return type:
DataFrame
- Raises:
ValueError – If compartment_id is not specified and cannot be determined from the environment.
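For a quick inventory, list_deployments() returns ModelDeployment objects for programmatic use, while show_deployments() renders the same information as a pandas DataFrame; a sketch with a placeholder compartment OCID:
>>> from ads.model.deployment import ModelDeployer
>>> deployer = ModelDeployer()
>>> # Filter by lifecycle state within an explicit compartment
>>> active = deployer.list_deployments(status="ACTIVE", compartment_id="<compartment_ocid>")
>>> len(active)
>>> # Or render the same data as a DataFrame for inspection
>>> df = deployer.show_deployments(status="ACTIVE", compartment_id="<compartment_ocid>")
>>> df.head()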
- update(model_deployment_id: str, properties: ModelDeploymentProperties | None = None, wait_for_completion: bool = True, max_wait_time: int = 1200, poll_interval: int = 10, **kwargs) ModelDeployment [source]¶
Updates an existing model deployment.
- Parameters:
model_deployment_id (str) – Model deployment OCID.
properties (ModelDeploymentProperties) – An instance of ModelDeploymentProperties or dict to initialize the ModelDeploymentProperties. Defaults to None.
wait_for_completion (bool) – Flag set for whether to wait for deployment to complete before proceeding. Defaults to True.
max_wait_time (int) – Maximum amount of time to wait in seconds (Defaults to 1200).
poll_interval (int) – Poll interval in seconds (Defaults to 10).
kwargs – Keyword arguments for initializing ModelDeploymentProperties.
- Returns:
A ModelDeployment instance
- Return type:
ModelDeployment
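Because kwargs are forwarded to ModelDeploymentProperties, simple metadata edits do not require building a properties object first. A hedged sketch, assuming display_name and freeform_tags are among the keywords ModelDeploymentProperties accepts; the OCID is a placeholder:
>>> from ads.model.deployment import ModelDeployer
>>> deployer = ModelDeployer()
>>> deployer.update(
...     model_deployment_id="<model_deployment_ocid>",
...     display_name="Renamed deployment",
...     freeform_tags={"stage": "staging"},
... )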
- class ads.model.ModelDeployment(properties: ModelDeploymentProperties | Dict | None = None, config: Dict | None = None, model_deployment_id: str | None = None, model_deployment_url: str = '', spec: Dict | None = None, **kwargs)[source]¶
Bases:
Builder
A class used to represent a Model Deployment.
- properties¶
ModelDeploymentProperties object
- Type:
- dsc_model_deployment¶
The OCIDataScienceModelDeployment instance.
- Type:
(OCIDataScienceModelDeployment)
- time_created¶
The time when the model deployment was created.
- Type:
(datetime)
- runtime¶
Model deployment runtime
- Type:
(ModelDeploymentRuntime)
- infrastructure¶
Model deployment infrastructure
- Type:
(ModelDeploymentInfrastructure)
- deactivate(wait_for_completion, max_wait_time, poll_interval)[source]¶
Deactivates a model deployment
- list(status, compartment_id, project_id, \*\*kwargs)[source]¶
List model deployment within given compartment and project.
Examples
>>> # Build model deployment from builder apis:
>>> ds_model_deployment = (
...     ModelDeployment()
...     .with_display_name("TestModelDeployment")
...     .with_description("Testing the test model deployment")
...     .with_freeform_tags(tag1="val1", tag2="val2")
...     .with_infrastructure(
...         ModelDeploymentInfrastructure()
...         .with_project_id(<project_id>)
...         .with_compartment_id(<compartment_id>)
...         .with_shape_name("VM.Standard.E4.Flex")
...         .with_shape_config_details(ocpus=1, memory_in_gbs=16)
...         .with_replica(1)
...         .with_bandwidth_mbps(10)
...         .with_web_concurrency(10)
...         .with_access_log(log_group_id=<log_group_id>, log_id=<log_id>)
...         .with_predict_log(log_group_id=<log_group_id>, log_id=<log_id>)
...     )
...     .with_runtime(
...         ModelDeploymentContainerRuntime()
...         .with_image(<image>)
...         .with_image_digest(<image_digest>)
...         .with_entrypoint(<entrypoint>)
...         .with_server_port(<server_port>)
...         .with_health_check_port(<health_check_port>)
...         .with_env({"key": "value"})
...         .with_deployment_mode("HTTPS_ONLY")
...         .with_model_uri(<model_uri>)
...         .with_bucket_uri(<bucket_uri>)
...         .with_auth(<auth>)
...         .with_timeout(<time_out>)
...     )
... )
>>> ds_model_deployment.deploy()
>>> ds_model_deployment.status
>>> ds_model_deployment.with_display_name("new name").update()
>>> ds_model_deployment.deactivate()
>>> ds_model_deployment.sync()
>>> ds_model_deployment.list(status="ACTIVE")
>>> ds_model_deployment.delete()
>>> # Build model deployment from yaml
>>> ds_model_deployment = ModelDeployment.from_yaml(uri=<path_to_yaml>)
Initializes a ModelDeployment object.
- Parameters:
properties ((Union[ModelDeploymentProperties, Dict], optional). Defaults to None.) – Object containing deployment properties. The properties can be None when kwargs are used for specifying properties.
config ((Dict, optional). Defaults to None.) – ADS auth dictionary for OCI authentication. This can be generated by calling ads.common.auth.api_keys() or ads.common.auth.resource_principal(). If this is None then the ads.common.default_signer(client_kwargs) will be used.
model_deployment_id ((str, optional). Defaults to None.) – Model deployment OCID.
model_deployment_url ((str, optional). Defaults to empty string.) – Model deployment url.
spec ((dict, optional). Defaults to None.) – Model deployment spec.
kwargs – Keyword arguments for initializing ModelDeploymentProperties or ModelDeployment.
- CONST_CREATED_BY = 'createdBy'¶
- CONST_DEFINED_TAG = 'definedTags'¶
- CONST_DESCRIPTION = 'description'¶
- CONST_DISPLAY_NAME = 'displayName'¶
- CONST_FREEFORM_TAG = 'freeformTags'¶
- CONST_ID = 'id'¶
- CONST_INFRASTRUCTURE = 'infrastructure'¶
- CONST_LIFECYCLE_DETAILS = 'lifecycleDetails'¶
- CONST_LIFECYCLE_STATE = 'lifecycleState'¶
- CONST_MODEL_DEPLOYMENT_URL = 'modelDeploymentUrl'¶
- CONST_RUNTIME = 'runtime'¶
- CONST_TIME_CREATED = 'timeCreated'¶
- property access_log: OCILog¶
Gets the model deployment access logs object.
- Returns:
The OCILog object containing the access logs.
- Return type:
OCILog
- activate(wait_for_completion: bool = True, max_wait_time: int = 1200, poll_interval: int = 10) ModelDeployment [source]¶
Activates a model deployment
- Parameters:
wait_for_completion (bool) – Flag set for whether to wait for deployment to be activated before proceeding. Defaults to True.
max_wait_time (int) – Maximum amount of time to wait in seconds (Defaults to 1200). Negative implies infinite wait time.
poll_interval (int) – Poll interval in seconds (Defaults to 10).
- Returns:
The instance of ModelDeployment.
- Return type:
ModelDeployment
- attribute_map = {'createdBy': 'created_by', 'definedTags': 'defined_tags', 'description': 'description', 'displayName': 'display_name', 'freeformTags': 'freeform_tags', 'id': 'id', 'infrastructure': 'infrastructure', 'lifecycleDetails': 'lifecycle_details', 'lifecycleState': 'lifecycle_state', 'modelDeploymentUrl': 'model_deployment_url', 'runtime': 'runtime', 'timeCreated': 'time_created'}¶
- build() ModelDeployment [source]¶
Loads default values from the environment for the model deployment infrastructure.
- property created_by: str¶
The user that creates the model deployment.
- Returns:
The user that creates the model deployment.
- Return type:
- deactivate(wait_for_completion: bool = True, max_wait_time: int = 1200, poll_interval: int = 10) ModelDeployment [source]¶
Deactivates a model deployment
- Parameters:
wait_for_completion (bool) – Flag set for whether to wait for deployment to be deactivated before proceeding. Defaults to True.
max_wait_time (int) – Maximum amount of time to wait in seconds (Defaults to 1200). Negative implies infinite wait time.
poll_interval (int) – Poll interval in seconds (Defaults to 10).
- Returns:
The instance of ModelDeployment.
- Return type:
- property defined_tags: Dict¶
Model deployment defined tags.
- Returns:
Model deployment defined tags.
- Return type:
Dict
- delete(wait_for_completion: bool = True, max_wait_time: int = 1200, poll_interval: int = 10)[source]¶
Deletes the ModelDeployment
- Parameters:
wait_for_completion (bool) – Flag set for whether to wait for deployment to be deleted before proceeding. Defaults to True.
max_wait_time (int) – Maximum amount of time to wait in seconds (Defaults to 1200). Negative implies infinite wait time.
poll_interval (int) – Poll interval in seconds (Defaults to 10).
- Returns:
The instance of ModelDeployment.
- Return type:
- deploy(wait_for_completion: bool = True, max_wait_time: int = 1200, poll_interval: int = 10)[source]¶
Deploys the current ModelDeployment object
- Parameters:
wait_for_completion (bool) – Flag set for whether to wait for deployment to be deployed before proceeding. Defaults to True.
max_wait_time (int) – Maximum amount of time to wait in seconds (Defaults to 1200). Negative implies infinite wait time.
poll_interval (int) – Poll interval in seconds (Defaults to 10).
- Returns:
The instance of ModelDeployment.
- Return type:
- property description: str¶
Model deployment description.
- Returns:
Model deployment description.
- Return type:
- property display_name: str¶
Model deployment display name.
- Returns:
Model deployment display name.
- Return type:
- property freeform_tags: Dict¶
Model deployment freeform tags.
- Returns:
Model deployment freeform tags.
- Return type:
Dict
- classmethod from_dict(obj_dict: Dict) ModelDeployment [source]¶
Loads model deployment instance from a dictionary of configurations.
- Parameters:
obj_dict (Dict) – A dictionary of configurations.
- Returns:
The model deployment instance.
- Return type:
- classmethod from_id(id: str) ModelDeployment [source]¶
Loads the model deployment instance from ocid.
- Parameters:
id (str) – The ocid of model deployment.
- Returns:
The ModelDeployment instance (self).
- Return type:
- property infrastructure: ModelDeploymentInfrastructure¶
Model deployment infrastructure.
- Returns:
Model deployment infrastructure.
- Return type:
ModelDeploymentInfrastructure
- initialize_spec_attributes = ['display_name', 'description', 'freeform_tags', 'defined_tags', 'infrastructure', 'runtime']¶
- property lifecycle_details: str¶
Model deployment lifecycle details.
- Returns:
Model deployment lifecycle details.
- Return type:
- property lifecycle_state: str¶
Model deployment lifecycle state.
- Returns:
Model deployment lifecycle state.
- Return type:
- classmethod list(status: str | None = None, compartment_id: str | None = None, project_id: str | None = None, **kwargs) List[ModelDeployment] [source]¶
Lists the model deployments associated with current compartment id and status
- Parameters:
status (str) – Status of deployment. Defaults to None. Allowed values: ACTIVE, CREATING, DELETED, DELETING, FAILED, INACTIVE and UPDATING.
compartment_id (str) – Target compartment to list deployments from. Defaults to the compartment set in the environment variable “NB_SESSION_COMPARTMENT_OCID”. If “NB_SESSION_COMPARTMENT_OCID” is not set, the root compartment ID will be used. A ValueError will be raised if the root compartment ID cannot be determined.
project_id (str) – Target project to list deployments from. Defaults to the project id in the environment variable “PROJECT_OCID”.
kwargs – The values are passed to oci.data_science.DataScienceClient.list_model_deployments.
- Returns:
A list of ModelDeployment objects.
- Return type:
- classmethod list_df(status: str | None = None, compartment_id: str | None = None, project_id: str | None = None) DataFrame [source]¶
- Returns the model deployments associated with the current compartment and status as a DataFrame that can be easily visualized.
- Parameters:
status (str) – Status of deployment. Defaults to None. Allowed values: ACTIVE, CREATING, DELETED, DELETING, FAILED, INACTIVE and UPDATING.
compartment_id (str) – Target compartment to list deployments from. Defaults to the compartment set in the environment variable “NB_SESSION_COMPARTMENT_OCID”. If “NB_SESSION_COMPARTMENT_OCID” is not set, the root compartment ID will be used. A ValueError will be raised if the root compartment ID cannot be determined.
project_id (str) – Target project to list deployments from. Defaults to the project id in the environment variable “PROJECT_OCID”.
- Returns:
pandas DataFrame containing information about the ModelDeployments
- Return type:
DataFrame
- logs(log_type: str | None = None) ConsolidatedLog [source]¶
Gets the access or predict logs.
- Parameters:
log_type ((str, optional). Defaults to None.) – The log type. Can be “access”, “predict” or None.
- Returns:
The ConsolidatedLog object containing the logs.
- Return type:
- property model_deployment_id: str¶
The model deployment ocid.
- Returns:
The model deployment ocid.
- Return type:
- model_input_serializer = <ads.model.serde.model_input.JsonModelInputSERDE object>¶
- predict(json_input=None, data: typing.Any = None, serializer: ads.model.ModelInputSerializer = <ads.model.serde.model_input.JsonModelInputSERDE object>, auto_serialize_data: bool = False, model_name: str = None, model_version: str = None, **kwargs) dict [source]¶
Returns prediction of input data run against the model deployment endpoint.
Examples
>>> import numpy as np
>>> from ads.model import ModelInputSerializer
>>> class MySerializer(ModelInputSerializer):
...     def serialize(self, data):
...         serialized_data = 1
...         return serialized_data
>>> model_deployment = ModelDeployment.from_id(<model_deployment_id>)
>>> prediction = model_deployment.predict(
...     data=np.array([1, 2, 3]),
...     serializer=MySerializer(),
...     auto_serialize_data=True,
... )['prediction']
- Parameters:
json_input (Json serializable) – JSON payload for the prediction.
data (Any) – Data for the prediction.
serializer (ads.model.ModelInputSerializer) – Defaults to ads.model.JsonModelInputSerializer.
auto_serialize_data (bool) – Defaults to False. Indicates whether to automatically serialize the input data using serializer. If auto_serialize_data=False, data must be bytes or JSON serializable and json_input must be JSON serializable. If auto_serialize_data is set to True, data will be serialized before being sent to the model deployment endpoint.
model_name (str) – Defaults to None. When the inference_server=”triton”, the name of the model to invoke.
model_version (str) – Defaults to None. When the inference_server=”triton”, the version of the model to invoke.
kwargs –
- content_type: str
Used to indicate the media type of the resource. By default, it will be application/octet-stream for bytes input and application/json otherwise. The content-type header will be set to this value when calling the model deployment endpoint.
- Returns:
Prediction results.
- Return type:
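The content_type rule above can be sketched as a tiny helper. This is an assumption-level illustration of the documented default, not the ADS implementation (default_content_type is a hypothetical name):

```python
def default_content_type(data) -> str:
    """Pick the default Content-Type header per the rule described above:
    bytes payloads get application/octet-stream, everything else JSON."""
    return "application/octet-stream" if isinstance(data, bytes) else "application/json"

print(default_content_type(b"\x00\x01"))           # application/octet-stream
print(default_content_type({"input": [1, 2, 3]}))  # application/json
```

Passing content_type explicitly in kwargs overrides this default.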
- property predict_log: OCILog¶
Gets the model deployment predict logs object.
- Returns:
The OCILog object containing the predict logs.
- Return type:
- property runtime: ModelDeploymentRuntime¶
Model deployment runtime.
- Returns:
Model deployment runtime.
- Return type:
ModelDeploymentRuntime
- show_logs(time_start: datetime | None = None, time_end: datetime | None = None, limit: int = 100, log_type: str | None = None)[source]¶
Shows deployment logs as a pandas dataframe.
- Parameters:
time_start ((datetime.datetime, optional). Defaults to None.) – Starting date and time in RFC3339 format for retrieving logs. Defaults to None, in which case logs from the last 14 days are retrieved.
time_end ((datetime.datetime, optional). Defaults to None.) – Ending date and time in RFC3339 format for retrieving logs. Defaults to None. Logs will be retrieved until now.
limit ((int, optional). Defaults to 100.) – The maximum number of items to return.
log_type ((str, optional). Defaults to None.) – The log type. Can be “access”, “predict” or None.
- Return type:
A pandas DataFrame containing logs.
- sync() ModelDeployment [source]¶
Updates the model deployment instance from backend.
- Returns:
The ModelDeployment instance (self).
- Return type:
- property time_created: datetime.datetime¶
The time when the model deployment is created.
- Returns:
The time when the model deployment is created.
- Return type:
datetime
- to_dict(**kwargs) Dict [source]¶
Serializes model deployment to a dictionary.
- Returns:
The model deployment serialized as a dictionary.
- Return type:
- update(properties: ModelDeploymentProperties | dict | None = None, wait_for_completion: bool = True, max_wait_time: int = 1200, poll_interval: int = 10, **kwargs)[source]¶
Updates a model deployment
You can update model_deployment_configuration_details and change instance_shape and model_id when the model deployment is in the ACTIVE lifecycle state. The bandwidth_mbps or instance_count can only be updated while the model deployment is in the INACTIVE state. Changes to the bandwidth_mbps or instance_count will take effect the next time the ActivateModelDeployment action is invoked on the model deployment resource.
- Parameters:
properties (ModelDeploymentProperties or dict) – The properties for updating the deployment.
wait_for_completion (bool) – Flag set for whether to wait for deployment to be updated before proceeding. Defaults to True.
max_wait_time (int) – Maximum amount of time to wait in seconds (Defaults to 1200). Negative implies infinite wait time.
poll_interval (int) – Poll interval in seconds (Defaults to 10).
kwargs – dict
- Returns:
The instance of ModelDeployment.
- Return type:
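The state-dependent update rules above amount to a small lookup. The sketch below is illustrative only (UPDATABLE and can_update are hypothetical names summarizing the documented rules, not part of ADS):

```python
# Which fields may be updated in which lifecycle state, per the text above:
# shape/model in ACTIVE; bandwidth/instance count only in INACTIVE.
UPDATABLE = {
    "ACTIVE": {"instance_shape", "model_id"},
    "INACTIVE": {"bandwidth_mbps", "instance_count"},
}

def can_update(field: str, lifecycle_state: str) -> bool:
    """Return True if `field` may be changed while in `lifecycle_state`."""
    return field in UPDATABLE.get(lifecycle_state, set())

print(can_update("model_id", "ACTIVE"))        # True
print(can_update("instance_count", "ACTIVE"))  # False
```

Changes made in the INACTIVE state take effect the next time the deployment is activated.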
- watch(log_type: str = None, time_start: datetime.datetime = None, interval: int = 3, log_filter: str = None) ModelDeployment [source]¶
Streams the access and/or predict logs of model deployment.
- Parameters:
log_type (str, optional) – The log type. Can be access, predict or None. Defaults to None.
time_start (datetime.datetime, optional) – Starting time for the log query. Defaults to None.
interval (int, optional) – The time interval between sending each request to pull logs from OCI logging service. Defaults to 3.
log_filter (str, optional) – Expression for filtering the logs. This will be the WHERE clause of the query. Defaults to None.
- Returns:
The instance of ModelDeployment.
- Return type:
- with_defined_tags(**kwargs) ModelDeployment [source]¶
Sets the defined tags of model deployment.
- Parameters:
kwargs – The defined tags of model deployment.
- Returns:
The ModelDeployment instance (self).
- Return type:
- with_description(description: str) ModelDeployment [source]¶
Sets the description of model deployment.
- Parameters:
description (str) – The description of model deployment.
- Returns:
The ModelDeployment instance (self).
- Return type:
- with_display_name(display_name: str) ModelDeployment [source]¶
Sets the name of model deployment.
- Parameters:
display_name (str) – The name of model deployment.
- Returns:
The ModelDeployment instance (self).
- Return type:
- with_freeform_tags(**kwargs) ModelDeployment [source]¶
Sets the freeform tags of model deployment.
- Parameters:
kwargs – The freeform tags of model deployment.
- Returns:
The ModelDeployment instance (self).
- Return type:
- with_infrastructure(infrastructure: ModelDeploymentInfrastructure) ModelDeployment [source]¶
Sets the infrastructure of model deployment.
- Parameters:
infrastructure (ModelDeploymentInfrastructure) – The infrastructure of model deployment.
- Returns:
The ModelDeployment instance (self).
- Return type:
- with_runtime(runtime: ModelDeploymentRuntime) ModelDeployment [source]¶
Sets the runtime of model deployment.
- Parameters:
runtime (ModelDeploymentRuntime) – The runtime of model deployment.
- Returns:
The ModelDeployment instance (self).
- Return type:
- class ads.model.ModelDeploymentProperties(model_id: str | None = None, model_uri: str | None = None, oci_model_deployment: ModelDeployment | CreateModelDeploymentDetails | UpdateModelDeploymentDetails | Dict | None = None, config: dict | None = None, **kwargs)[source]¶
Bases:
OCIDataScienceMixin
,ModelDeployment
Represents the details for a model deployment
- swagger_types¶
The property names and the corresponding types of OCI ModelDeployment model.
- Type:
- with_prop(property_name, value)[source]¶
Set the model deployment details property_name attribute to value
Initialize a ModelDeploymentProperties object by specifying one of the following:
- Parameters:
model_id ((str, optional). Defaults to None.) – Model Artifact OCID. The model_id must be specified either explicitly or as an attribute of the OCI object.
model_uri ((str, optional). Defaults to None.) – URI of the model files; can be local or in cloud storage.
oci_model_deployment ((Union[ModelDeployment, CreateModelDeploymentDetails, UpdateModelDeploymentDetails, Dict], optional). Defaults to None.) – An OCI model or Dict containing model deployment details. The OCI model can be an instance of either ModelDeployment, CreateModelDeploymentDetails or UpdateModelDeploymentDetails.
config ((Dict, optional). Defaults to None.) – ADS auth dictionary for OCI authentication. This can be generated by calling ads.common.auth.api_keys() or ads.common.auth.resource_principal(). If this is None, ads.common.default_signer(client_kwargs) will be used.
kwargs –
Users can also initialize the object by using keyword arguments. The following keyword arguments are supported by oci.data_science.models.data_science_models.ModelDeployment:
display_name,
description,
project_id,
compartment_id,
model_deployment_configuration_details,
category_log_details,
freeform_tags,
defined_tags.
If display_name is not specified, a randomly generated, easy-to-remember name will be generated, like ‘strange-spider-2022-08-17-23:55.02’.
ModelDeploymentProperties also supports the following additional keyword arguments:
instance_shape,
instance_count,
bandwidth_mbps,
access_log_group_id,
access_log_id,
predict_log_group_id,
predict_log_id,
memory_in_gbs,
ocpus.
These additional arguments will be saved into appropriate properties in the OCI model.
- Raises:
ValueError – model_id is None AND not specified in oci_model_deployment.model_deployment_configuration_details.model_configuration_details.
- build() CreateModelDeploymentDetails [source]¶
Converts the deployment properties to OCI CreateModelDeploymentDetails object. Converts a model URI into a model OCID if user passed in a URI.
- Returns:
A CreateModelDeploymentDetails instance ready for OCI API.
- Return type:
CreateModelDeploymentDetails
- sub_properties = ['instance_shape', 'instance_count', 'bandwidth_mbps', 'access_log_group_id', 'access_log_id', 'predict_log_group_id', 'predict_log_id', 'memory_in_gbs', 'ocpus']¶
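sub_properties lists the shorthand keyword arguments that are folded into nested configuration details rather than set as top-level fields of the OCI model. A minimal sketch of that partitioning, assuming kwargs are split by membership in this list (split_kwargs is a hypothetical helper, not the ADS implementation):

```python
# Shorthand keyword arguments, copied from the sub_properties list above.
SUB_PROPERTIES = [
    "instance_shape", "instance_count", "bandwidth_mbps",
    "access_log_group_id", "access_log_id",
    "predict_log_group_id", "predict_log_id",
    "memory_in_gbs", "ocpus",
]

def split_kwargs(kwargs: dict):
    """Separate shorthand sub-properties from top-level OCI model fields."""
    sub = {k: v for k, v in kwargs.items() if k in SUB_PROPERTIES}
    top = {k: v for k, v in kwargs.items() if k not in SUB_PROPERTIES}
    return top, sub

top, sub = split_kwargs({"display_name": "demo", "instance_count": 2, "ocpus": 1.0})
print(top)  # {'display_name': 'demo'}
print(sub)  # {'instance_count': 2, 'ocpus': 1.0}
```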
- to_oci_model(oci_model)[source]¶
Convert properties into an OCI data model
- Parameters:
oci_model (class) – The class of OCI data model, e.g., oci.data_science_models.CreateModelDeploymentDetails
- to_update_deployment() UpdateModelDeploymentDetails [source]¶
Converts the deployment properties to OCI UpdateModelDeploymentDetails object.
- Returns:
An UpdateModelDeploymentDetails instance ready for OCI API.
- Return type:
UpdateModelDeploymentDetails
- with_access_log(log_group_id: str, log_id: str)[source]¶
Adds access log config
- Parameters:
- Returns:
self
- Return type:
- with_category_log(log_type: str, group_id: str, log_id: str)[source]¶
Adds category log configuration
- Parameters:
- Returns:
self
- Return type:
- Raises:
ValueError – When log_type is invalid
- with_instance_configuration(config)[source]¶
with_instance_configuration creates a ModelDeploymentDetails object with a specific config
- Parameters:
config (dict) –
dictionary containing instance configuration about the deployment. The following keys are supported:
instance_shape: str,
instance_count: int,
bandwidth_mbps: int,
memory_in_gbs: float,
ocpus: float
The instance_shape and instance_count are required when creating a new deployment. They are optional when updating an existing deployment.
- Returns:
self
- Return type:
- with_logging_configuration(access_log_group_id: str, access_log_id: str, predict_log_group_id: str | None = None, predict_log_id: str | None = None)[source]¶
Adds OCI logging configurations for OCI logging service
- Parameters:
- Returns:
self
- Return type:
- class ads.model.ModelInputSerializer[source]¶
Bases:
Serializer
Abstract base class for creation of new data serializers.
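In practice you subclass this base and implement serialize, as in the predict example earlier. The standalone sketch below mimics the pattern without importing ads (the Serializer base here is a stand-in for the real ads.model.ModelInputSerializer):

```python
import json
from abc import ABC, abstractmethod

class Serializer(ABC):
    """Stand-in for the ADS serializer base class."""
    @abstractmethod
    def serialize(self, data):
        ...

class MyJsonSerializer(Serializer):
    """Serialize prediction input to a JSON string payload."""
    def serialize(self, data):
        return json.dumps({"data": list(data)})

print(MyJsonSerializer().serialize([1, 2, 3]))  # {"data": [1, 2, 3]}
```

An instance of such a class would be passed as the serializer argument to predict.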
- class ads.model.ModelProperties(inference_conda_env: str | None = None, inference_python_version: str | None = None, training_conda_env: str | None = None, training_python_version: str | None = None, training_resource_id: str | None = None, training_script_path: str | None = None, training_id: str | None = None, compartment_id: str | None = None, project_id: str | None = None, bucket_uri: str | None = None, remove_existing_artifact: bool | None = None, overwrite_existing_artifact: bool | None = None, deployment_instance_shape: str | None = None, deployment_instance_subnet_id: str | None = None, deployment_instance_private_endpoint_id: str | None = None, deployment_instance_count: int | None = None, deployment_bandwidth_mbps: int | None = None, deployment_log_group_id: str | None = None, deployment_access_log_id: str | None = None, deployment_predict_log_id: str | None = None, deployment_memory_in_gbs: float | int | None = None, deployment_ocpus: float | int | None = None, deployment_image: str | None = None)[source]¶
Bases:
BaseProperties
Represents properties required to save and deploy model.
- class ads.model.ModelState(value)[source]¶
Bases:
Enum
An enumeration.
- AVAILABLE = 'Available'¶
- DONE = 'Done'¶
- NEEDSACTION = 'Needs Action'¶
- NOTAPPLICABLE = 'Not Applicable'¶
- NOTAVAILABLE = 'Not Available'¶
- class ads.model.ModelVersionSet(spec: Dict | None = None, **kwargs)[source]¶
Bases:
Builder
Represents Model Version Set.
- delete(self, delete_model: bool | None = False) ModelVersionSet [source]¶
Removes a model version set.
- from_dict(cls, config: dict) ModelVersionSet [source]¶
Load a model version set instance from a dictionary of configurations.
Examples
>>> mvs = (ModelVersionSet()
...     .with_compartment_id(os.environ["PROJECT_COMPARTMENT_OCID"])
...     .with_project_id(os.environ["PROJECT_OCID"])
...     .with_name("test_experiment")
...     .with_description("Experiment number one"))
>>> mvs.create()
>>> mvs.model_add(model_ocid, version_label="Version label 1")
>>> mvs.model_list()
>>> mvs.details_link
... https://console.<region>.oraclecloud.com/data-science/model-version-sets/<ocid>
>>> mvs.delete()
Initializes a model version set.
- Parameters:
spec ((Dict, optional). Defaults to None.) – Object specification.
kwargs (Dict) –
Specification as keyword arguments. If ‘spec’ contains the same key as the one in kwargs, the value from kwargs will be used.
project_id: str
compartment_id: str
name: str
description: str
defined_tags: Dict[str, Dict[str, object]]
freeform_tags: Dict[str, str]
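Per the note above, when the same key appears in both spec and the keyword arguments, the keyword argument wins. A one-line sketch of that merge rule (merge_spec is a hypothetical helper, not the Builder implementation):

```python
def merge_spec(spec=None, **kwargs) -> dict:
    """Merge a spec dict with keyword arguments; kwargs win on conflicts."""
    return {**(spec or {}), **kwargs}

print(merge_spec({"name": "from-spec", "description": "exp"}, name="from-kwargs"))
# {'name': 'from-kwargs', 'description': 'exp'}
```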
- CONST_COMPARTMENT_ID = 'compartmentId'¶
- CONST_DEFINED_TAG = 'definedTags'¶
- CONST_DESCRIPTION = 'description'¶
- CONST_FREEFORM_TAG = 'freeformTags'¶
- CONST_ID = 'id'¶
- CONST_NAME = 'name'¶
- CONST_PROJECT_ID = 'projectId'¶
- LIFECYCLE_STATE_ACTIVE = 'ACTIVE'¶
- LIFECYCLE_STATE_DELETED = 'DELETED'¶
- LIFECYCLE_STATE_DELETING = 'DELETING'¶
- LIFECYCLE_STATE_FAILED = 'FAILED'¶
- attribute_map = {'compartmentId': 'compartment_id', 'definedTags': 'defined_tags', 'description': 'description', 'freeformTags': 'freeform_tags', 'id': 'id', 'name': 'name', 'projectId': 'project_id'}¶
- create(**kwargs) ModelVersionSet [source]¶
Creates a model version set.
- Parameters:
kwargs – Additional keyword arguments.
- Returns:
The ModelVersionSet instance (self)
- Return type:
- delete(delete_model: bool | None = False) ModelVersionSet [source]¶
Removes a model version set.
- Parameters:
delete_model ((bool, optional). Defaults to False.) – A model version set can only be deleted if all the models associated with it are already in the DELETED state. You can optionally set delete_model to True, which sets the deleteRelatedModels boolean query parameter and deletes all associated models for you.
- Returns:
The ModelVersionSet instance (self).
- Return type:
- property details_link: str¶
Link to details page in OCI console.
- Returns:
Link to details page in OCI console.
- Return type:
- classmethod from_dict(config: dict) ModelVersionSet [source]¶
Load a model version set instance from a dictionary of configurations.
- Parameters:
config (dict) – A dictionary of configurations.
- Returns:
The model version set instance.
- Return type:
- classmethod from_dsc_model_version_set(dsc_model_version_set: DataScienceModelVersionSet) ModelVersionSet [source]¶
Initialize a ModelVersionSet instance from a DataScienceModelVersionSet.
- Parameters:
dsc_model_version_set (DataScienceModelVersionSet) – An instance of DataScienceModelVersionSet.
- Returns:
An instance of ModelVersionSet.
- Return type:
- classmethod from_id(id: str) ModelVersionSet [source]¶
Gets an existing model version set by OCID.
- Parameters:
id (str) – The model version set OCID.
- Returns:
An instance of ModelVersionSet.
- Return type:
- classmethod from_name(name: str, compartment_id: str | None = None) ModelVersionSet [source]¶
Gets an existing model version set by name.
- Parameters:
- Returns:
An instance of ModelVersionSet.
- Return type:
- classmethod from_ocid(ocid: str) ModelVersionSet [source]¶
Gets an existing model version set by OCID.
- Parameters:
id (str) – The model version set OCID.
- Returns:
An instance of ModelVersionSet.
- Return type:
- property kind: str¶
The kind of the object as showing in YAML.
- Returns:
“modelVersionSet”
- Return type:
- classmethod list(compartment_id: str | None = None, category: str = 'USER', **kwargs) List[ModelVersionSet] [source]¶
List model version sets in a given compartment.
- Parameters:
compartment_id (str) – The OCID of compartment.
category ((str, optional). Defaults to USER.) – The category of Model. Allowed values are: “USER”, “SERVICE”
kwargs – Additional keyword arguments for filtering model version sets.
- Returns:
The list of model version sets.
- Return type:
List[ModelVersionSet]
- model_add(model_id: str, version_label: str | None = None, **kwargs) None [source]¶
Adds new model to model version set.
- Parameters:
- Returns:
Nothing.
- Return type:
None
- Raises:
ModelVersionSetNotSaved – If the model version set has not been saved yet.
- models(**kwargs) List[DataScienceModel] [source]¶
Gets list of models associated with a model version set.
- Parameters:
kwargs –
- project_id: str
Project OCID.
- lifecycle_state: str
Filter results by the specified lifecycle state. Must be a valid state for the resource type. Allowed values are: “ACTIVE”, “DELETED”, “FAILED”, “INACTIVE”
Can be any attribute that oci.data_science.data_science_client.DataScienceClient.list_models accepts.
- Returns:
List of models associated with the model version set.
- Return type:
List[DataScienceModel]
- Raises:
ModelVersionSetNotSaved – If the model version set has not been saved yet.
- property status: str | None¶
Status of the model version set.
- Returns:
Status of the model version set.
- Return type:
- to_dict() dict [source]¶
Serializes model version set to a dictionary.
- Returns:
The model version set serialized as a dictionary.
- Return type:
- update() ModelVersionSet [source]¶
Updates a model version set.
- Returns:
The ModelVersionSet instance (self).
- Return type:
- with_compartment_id(compartment_id: str) ModelVersionSet [source]¶
Sets the compartment OCID.
- Parameters:
compartment_id (str) – The compartment OCID.
- Returns:
The ModelVersionSet instance (self)
- Return type:
- with_defined_tags(**kwargs: Dict[str, Dict[str, object]]) ModelVersionSet [source]¶
Sets defined tags.
- Returns:
The ModelVersionSet instance (self)
- Return type:
- with_description(description: str) ModelVersionSet [source]¶
Sets the description.
- Parameters:
description (str) – The description of the model version set.
- Returns:
The ModelVersionSet instance (self)
- Return type:
- with_freeform_tags(**kwargs: Dict[str, str]) ModelVersionSet [source]¶
Sets freeform tags.
- Returns:
The ModelVersionSet instance (self)
- Return type:
- with_name(name: str) ModelVersionSet [source]¶
Sets the name of the model version set.
- Parameters:
name (str) – The name of the model version set.
- Returns:
The ModelVersionSet instance (self)
- Return type:
- with_project_id(project_id: str) ModelVersionSet [source]¶
Sets the project OCID.
- Parameters:
project_id (str) – The project OCID.
- Returns:
The ModelVersionSet instance (self)
- Return type:
- class ads.model.PyTorchModel(estimator: callable, artifact_dir: str | None = None, properties: ModelProperties | None = None, auth: Dict = None, model_save_serializer: SERDE | None = 'torch', model_input_serializer: SERDE | None = None, **kwargs)[source]¶
Bases:
FrameworkSpecificModel
PyTorchModel class for estimators from Pytorch framework.
- auth¶
Default authentication is set using the ads.set_auth API. To override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create an authentication signer to instantiate an IdentityClient object.
- Type:
Dict
- estimator¶
A trained pytorch estimator/model using Pytorch.
- Type:
Callable
- metadata_custom¶
The model custom metadata.
- Type:
- metadata_provenance¶
The model provenance metadata.
- Type:
- metadata_taxonomy¶
The model taxonomy metadata.
- Type:
- model_artifact¶
This is built by calling prepare.
- Type:
- model_deployment¶
A ModelDeployment instance.
- Type:
- properties¶
ModelProperties object required to save and deploy model. For more details, check https://accelerated-data-science.readthedocs.io/en/latest/ads.model.html#module-ads.model.model_properties.
- Type:
- runtime_info¶
A RuntimeInfo instance.
- Type:
- serialize¶
Whether to serialize the model to a pkl file by default. If False, you need to serialize the model manually, save it under artifact_dir, and update score.py manually.
- Type:
- delete_deployment(...)¶
Deletes the current model deployment.
- deploy(..., \*\*kwargs)¶
Deploys a model.
- from_model_artifact(uri, model_file_name, artifact_dir, ..., \*\*kwargs)¶
Loads model from the specified folder, or zip/tar archive.
- from_model_catalog(model_id, model_file_name, artifact_dir, ..., \*\*kwargs)¶
Loads model from model catalog.
- introspect(...)¶
Runs model introspection.
- predict(data, ...)¶
Returns prediction of input data run against the model deployment endpoint.
- prepare(..., \*\*kwargs)¶
Prepare and save the score.py, serialized model and runtime.yaml file.
- reload(...)¶
Reloads the model artifact files: score.py and the runtime.yaml.
- save(..., \*\*kwargs)¶
Saves model artifacts to the model catalog.
- summary_status(...)¶
Gets a summary table of the current status.
- verify(data, ...)¶
Tests if deployment works in local environment.
Examples
>>> torch_model = PyTorchModel(estimator=torch_estimator,
...     artifact_dir=tmp_model_dir)
>>> inference_conda_env = "generalml_p37_cpu_v1"
>>> torch_model.prepare(inference_conda_env=inference_conda_env, force_overwrite=True)
>>> torch_model.reload()
>>> torch_model.verify(...)
>>> torch_model.save()
>>> model_deployment = torch_model.deploy(wait_for_completion=False)
>>> torch_model.predict(...)
Initializes a PyTorchModel instance.
- Parameters:
estimator (callable) – Any model object generated by pytorch framework
artifact_dir (str) – artifact directory to store the files needed for deployment.
properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.
auth ((Dict, optional). Defaults to None.) – The default authentication is set using the ads.set_auth API. If you need to override the default, use ads.common.auth.api_keys or ads.common.auth.resource_principal to create an appropriate authentication signer and the kwargs required to instantiate an IdentityClient object.
model_save_serializer ((SERDE or str, optional). Defaults to 'torch'.) – Instance of ads.model.SERDE. Used to serialize/deserialize the model.
model_input_serializer ((SERDE, optional). Defaults to None.) – Instance of ads.model.SERDE. Used to serialize/deserialize data.
- Returns:
PyTorchModel instance.
- Return type:
- model_save_serializer_type¶
alias of
PyTorchModelSerializerType
- serialize_model(as_onnx: bool = False, force_overwrite: bool = False, X_sample: Dict | str | List | Tuple | ndarray | Series | DataFrame | None = None, use_torch_script: bool | None = None, **kwargs) None [source]¶
Serialize and save Pytorch model using ONNX or model specific method.
- Parameters:
as_onnx ((bool, optional). Defaults to False.) – If set as True, convert into ONNX model.
force_overwrite ((bool, optional). Defaults to False.) – If set as True, overwrite serialized model if exists.
X_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of input data that will be used to generate input schema and detect onnx_args.
use_torch_script ((bool, optional). Defaults to None (If the default value has not been changed, it will be set as False).) – If set as True, the model will be serialized as a TorchScript program. Check https://pytorch.org/tutorials/beginner/saving_loading_models.html#export-load-model-in-torchscript-format for more details. If set as False, it will only save the trained model’s learned parameters, and the score.py need to be modified to construct the model class instance first. Check https://pytorch.org/tutorials/beginner/saving_loading_models.html#save-load-state-dict-recommended for more details.
**kwargs – Optional parameters used to serialize the pytorch model to ONNX, including the following:
onnx_args: (tuple or torch.Tensor), defaults to None. Contains model inputs such that model(onnx_args) is a valid invocation of the model. Can be structured either as: 1) only a tuple of arguments; 2) a tensor; 3) a tuple of arguments ending with a dictionary of named arguments.
input_names: (List[str], optional). Names to assign to the input nodes of the graph, in order.
output_names: (List[str], optional). Names to assign to the output nodes of the graph, in order.
dynamic_axes: (dict, optional), defaults to None. Specify axes of tensors as dynamic (i.e. known only at run-time).
- Returns:
Nothing.
- Return type:
None
- class ads.model.SERDE[source]¶
Bases:
Serializer
,Deserializer
A layer containing two groups that interact with each other to serialize and deserialize supported data structures using supported data formats.
- name = ''¶
- class ads.model.SklearnModel(estimator: Callable, artifact_dir: str | None = None, properties: ModelProperties | None = None, auth: Dict | None = None, model_save_serializer: SERDE | None = 'joblib', model_input_serializer: SERDE | None = None, **kwargs)[source]¶
Bases:
FrameworkSpecificModel
SklearnModel class for estimators from sklearn framework.
- auth¶
Default authentication is set using the ads.set_auth API. To override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create an authentication signer to instantiate an IdentityClient object.
- Type:
Dict
- estimator¶
A trained sklearn estimator/model using scikit-learn.
- Type:
Callable
- metadata_custom¶
The model custom metadata.
- Type:
- metadata_provenance¶
The model provenance metadata.
- Type:
- metadata_taxonomy¶
The model taxonomy metadata.
- Type:
- model_artifact¶
This is built by calling prepare.
- Type:
- model_deployment¶
A ModelDeployment instance.
- Type:
- properties¶
ModelProperties object required to save and deploy model. For more details, check https://accelerated-data-science.readthedocs.io/en/latest/ads.model.html#module-ads.model.model_properties.
- Type:
- runtime_info¶
A RuntimeInfo instance.
- Type:
- serialize¶
Whether to serialize the model to pkl file by default. If False, you need to serialize the model manually, save it under artifact_dir and update the score.py manually.
- Type:
- delete_deployment(...)¶
Deletes the current model deployment.
- deploy(..., **kwargs)¶
Deploys a model.
- from_model_artifact(uri, model_file_name, artifact_dir, ..., **kwargs)¶
Loads model from the specified folder, or zip/tar archive.
- from_model_catalog(model_id, model_file_name, artifact_dir, ..., **kwargs)¶
Loads model from model catalog.
- introspect(...)¶
Runs model introspection.
- predict(data, ...)¶
Returns prediction of input data run against the model deployment endpoint.
- prepare(..., **kwargs)¶
Prepare and save the score.py, serialized model and runtime.yaml file.
- reload(...)¶
Reloads the model artifact files: score.py and the runtime.yaml.
- save(..., **kwargs)¶
Saves model artifacts to the model catalog.
- summary_status(...)¶
Gets a summary table of the current status.
- verify(data, ...)¶
Tests if deployment works in local environment.
Examples
>>> import tempfile
>>> from sklearn.model_selection import train_test_split
>>> from ads.model.framework.sklearn_model import SklearnModel
>>> from sklearn.linear_model import LogisticRegression
>>> from sklearn.datasets import load_iris
>>> iris = load_iris()
>>> X, y = iris.data, iris.target
>>> X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)
>>> sklearn_estimator = LogisticRegression()
>>> sklearn_estimator.fit(X_train, y_train)
>>> sklearn_model = SklearnModel(estimator=sklearn_estimator,
...                              artifact_dir=tempfile.mkdtemp())
>>> sklearn_model.prepare(inference_conda_env="generalml_p37_cpu_v1", force_overwrite=True)
>>> sklearn_model.reload()
>>> sklearn_model.verify(X_test)
>>> sklearn_model.save()
>>> model_deployment = sklearn_model.deploy(wait_for_completion=False)
>>> sklearn_model.predict(X_test)
Initializes a SklearnModel instance.
- Parameters:
estimator (Callable) – Sklearn model/estimator.
artifact_dir (str) – Directory for the generated artifact.
properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.
auth ((Dict, optional). Defaults to None.) – The default authentication is set using the ads.set_auth API. If you need to override the default, use ads.common.auth.api_keys or ads.common.auth.resource_principal to create an appropriate authentication signer and the kwargs required to instantiate an IdentityClient object.
model_save_serializer ((SERDE or str, optional). Defaults to "joblib".) – Instance of ads.model.SERDE. Used to serialize/deserialize the model.
model_input_serializer ((SERDE, optional). Defaults to None.) – Instance of ads.model.SERDE. Used to serialize/deserialize data.
- Returns:
SklearnModel instance.
- Return type:
Examples
>>> import tempfile
>>> from sklearn.model_selection import train_test_split
>>> from ads.model.framework.sklearn_model import SklearnModel
>>> from sklearn.linear_model import LogisticRegression
>>> from sklearn.datasets import load_iris
>>> iris = load_iris()
>>> X, y = iris.data, iris.target
>>> X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)
>>> sklearn_estimator = LogisticRegression()
>>> sklearn_estimator.fit(X_train, y_train)
>>> sklearn_model = SklearnModel(estimator=sklearn_estimator, artifact_dir=tempfile.mkdtemp())
>>> sklearn_model.prepare(inference_conda_env="dataexpl_p37_cpu_v3")
>>> sklearn_model.verify(X_test)
>>> sklearn_model.save()
>>> model_deployment = sklearn_model.deploy()
>>> sklearn_model.predict(X_test)
>>> sklearn_model.delete_deployment()
- model_save_serializer_type¶
alias of
SklearnModelSerializerType
- serialize_model(as_onnx: bool | None = False, initial_types: List[Tuple] | None = None, force_overwrite: bool | None = False, X_sample: Dict | str | List | Tuple | ndarray | Series | DataFrame | None = None, **kwargs: Dict)[source]¶
Serialize and save the scikit-learn model using ONNX or a model-specific method.
- Parameters:
as_onnx ((bool, optional). Defaults to False.) – If set as True, provide initial_types or X_sample to convert into ONNX.
initial_types ((List[Tuple], optional). Defaults to None.) – Each element is a tuple of a variable name and a type.
force_overwrite ((bool, optional). Defaults to False.) – If set as True, overwrite serialized model if exists.
X_sample (Union[Dict, str, List, np.ndarray, pd.core.series.Series, pd.core.frame.DataFrame,]. Defaults to None.) – Contains model inputs such that model(X_sample) is a valid invocation of the model. Used to generate initial_types.
- Returns:
Nothing.
- Return type:
None
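initial_types pairs each model input name with an skl2onnx tensor type. A hedged sketch, assuming a 4-feature float input like the iris example above; FloatTensorType comes from the skl2onnx package (which the ONNX conversion path relies on), and the name "input" is illustrative:

```python
# Hedged sketch: building initial_types for serialize_model(as_onnx=True, ...).
X_sample = [[5.1, 3.5, 1.4, 0.2]]      # one iris-like row of float features
n_features = len(X_sample[0])
# The None dimension leaves the batch size dynamic:
# from skl2onnx.common.data_types import FloatTensorType
# initial_types = [("input", FloatTensorType([None, n_features]))]
# sklearn_model.serialize_model(as_onnx=True, initial_types=initial_types)
```

Alternatively, passing X_sample alone lets serialize_model derive initial_types itself.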
- class ads.model.SparkPipelineModel(estimator: Callable, artifact_dir: str | None = None, properties: ModelProperties | None = None, auth: Dict = None, model_save_serializer: SERDE | None = 'spark', model_input_serializer: SERDE | None = 'spark', **kwargs)[source]¶
Bases:
FrameworkSpecificModel
SparkPipelineModel class for estimators from the pyspark framework.
- auth¶
Default authentication is set using the ads.set_auth API. To override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create an authentication signer to instantiate an IdentityClient object.
- Type:
Dict
- estimator¶
A trained pyspark estimator/model using pyspark.
- Type:
Callable
- metadata_custom¶
The model custom metadata.
- Type:
- metadata_provenance¶
The model provenance metadata.
- Type:
- metadata_taxonomy¶
The model taxonomy metadata.
- Type:
- model_artifact¶
This is built by calling prepare.
- Type:
- model_deployment¶
A ModelDeployment instance.
- Type:
- properties¶
ModelProperties object required to save and deploy model. For more details, check https://accelerated-data-science.readthedocs.io/en/latest/ads.model.html#module-ads.model.model_properties.
- Type:
- runtime_info¶
A RuntimeInfo instance.
- Type:
- serialize¶
Whether to serialize the model to pkl file by default. If False, you need to serialize the model manually, save it under artifact_dir and update the score.py manually.
- Type:
- delete_deployment(...)¶
Deletes the current model deployment.
- deploy(..., **kwargs)¶
Deploys a model.
- from_model_artifact(uri, model_file_name, artifact_dir, ..., **kwargs)¶
Loads model from the specified folder, or zip/tar archive.
- from_model_catalog(model_id, model_file_name, artifact_dir, ..., **kwargs)¶
Loads model from model catalog.
- introspect(...)¶
Runs model introspection.
- predict(data, ...)¶
Returns prediction of input data run against the model deployment endpoint.
- prepare(..., **kwargs)¶
Prepare and save the score.py, serialized model and runtime.yaml file.
- reload(...)¶
Reloads the model artifact files: score.py and the runtime.yaml.
- save(..., **kwargs)¶
Saves model artifacts to the model catalog.
- summary_status(...)¶
Gets a summary table of the current status.
- verify(data, ...)¶
Tests if deployment works in local environment.
Examples
>>> import tempfile
>>> from ads.model.framework.spark_model import SparkPipelineModel
>>> from pyspark.ml import Pipeline
>>> from pyspark.ml.linalg import Vectors
>>> from pyspark.ml.classification import LogisticRegression
>>> from pyspark.sql import SparkSession
>>> spark = SparkSession.builder.getOrCreate()
>>> training = spark.createDataFrame([
...     (1.0, Vectors.dense([0.0, 1.1, 0.1])),
...     (0.0, Vectors.dense([2.0, 1.0, -1.0])),
...     (0.0, Vectors.dense([2.0, 1.3, 1.0])),
...     (1.0, Vectors.dense([0.0, 1.2, -0.5]))], ["label", "features"])
>>> lr_estimator = LogisticRegression(maxIter=10, regParam=0.001)
>>> pipeline = Pipeline(stages=[lr_estimator])
>>> pipeline_model = pipeline.fit(training)
>>> spark_model = SparkPipelineModel(estimator=pipeline_model, artifact_dir=tempfile.mkdtemp())
>>> spark_model.prepare(inference_conda_env="dataexpl_p37_cpu_v3")
>>> spark_model.verify(training)
>>> spark_model.save()
>>> model_deployment = spark_model.deploy()
>>> spark_model.predict(training)
>>> spark_model.delete_deployment()
Initializes a SparkPipelineModel instance.
- Parameters:
estimator (Callable) – SparkPipelineModel
artifact_dir (str) – The URI for the generated artifact, which can be a local path or an OCI object storage URI.
properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.
auth ((Dict, optional). Defaults to None.) – The default authentication is set using the ads.set_auth API. If you need to override the default, use ads.common.auth.api_keys or ads.common.auth.resource_principal to create an appropriate authentication signer and the kwargs required to instantiate an IdentityClient object.
model_save_serializer ((SERDE or str, optional). Defaults to "spark".) – Instance of ads.model.SERDE. Used to serialize/deserialize the model.
model_input_serializer ((SERDE, optional). Defaults to ads.model.serde.model_input.SparkModelInputSERDE.) – Instance of ads.model.SERDE. Used to serialize/deserialize data.
- Returns:
SparkPipelineModel instance.
- Return type:
Examples
>>> import tempfile
>>> from ads.model.framework.spark_model import SparkPipelineModel
>>> from pyspark.ml.linalg import Vectors
>>> from pyspark.ml.classification import LogisticRegression
>>> from pyspark.ml import Pipeline
>>> from pyspark.sql import SparkSession
>>> spark = SparkSession.builder.getOrCreate()
>>> training = spark.createDataFrame([
...     (1.0, Vectors.dense([0.0, 1.1, 0.1])),
...     (0.0, Vectors.dense([2.0, 1.0, -1.0])),
...     (0.0, Vectors.dense([2.0, 1.3, 1.0])),
...     (1.0, Vectors.dense([0.0, 1.2, -0.5]))], ["label", "features"])
>>> lr_estimator = LogisticRegression(maxIter=10, regParam=0.001)
>>> pipeline = Pipeline(stages=[lr_estimator])
>>> pipeline_model = pipeline.fit(training)
>>> spark_model = SparkPipelineModel(estimator=pipeline_model, artifact_dir=tempfile.mkdtemp())
>>> spark_model.prepare(inference_conda_env="pyspark30_p37_cpu_v5")
>>> spark_model.verify(training)
>>> spark_model.save()
>>> model_deployment = spark_model.deploy()
>>> spark_model.predict(training)
>>> spark_model.delete_deployment()
- model_input_serializer_type¶
alias of
SparkModelInputSerializerType
- model_save_serializer_type¶
alias of
SparkModelSerializerType
- serialize_model(as_onnx: bool = False, X_sample: Dict | str | List | ndarray | Series | DataFrame | pyspark.sql.DataFrame | pyspark.pandas.DataFrame | None = None, force_overwrite: bool = False, **kwargs) → None[source]¶
Serialize and save the pyspark model using Spark serialization.
- Parameters:
force_overwrite ((bool, optional). Defaults to False.) – If set as True, overwrite serialized model if exists.
- Return type:
None
- class ads.model.TensorFlowModel(estimator: callable, artifact_dir: str | None = None, properties: ModelProperties | None = None, auth: Dict = None, model_save_serializer: SERDE | None = 'tf', model_input_serializer: SERDE | None = None, **kwargs)[source]¶
Bases:
FrameworkSpecificModel
TensorFlowModel class for estimators from the TensorFlow framework.
- auth¶
Default authentication is set using the ads.set_auth API. To override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create an authentication signer to instantiate an IdentityClient object.
- Type:
Dict
- estimator¶
A trained tensorflow estimator/model using Tensorflow.
- Type:
Callable
- metadata_custom¶
The model custom metadata.
- Type:
- metadata_provenance¶
The model provenance metadata.
- Type:
- metadata_taxonomy¶
The model taxonomy metadata.
- Type:
- model_artifact¶
This is built by calling prepare.
- Type:
- model_deployment¶
A ModelDeployment instance.
- Type:
- properties¶
ModelProperties object required to save and deploy model. For more details, check https://accelerated-data-science.readthedocs.io/en/latest/ads.model.html#module-ads.model.model_properties.
- Type:
- runtime_info¶
A RuntimeInfo instance.
- Type:
- serialize¶
Whether to serialize the model to pkl file by default. If False, you need to serialize the model manually, save it under artifact_dir and update the score.py manually.
- Type:
- delete_deployment(...)¶
Deletes the current model deployment.
- deploy(..., **kwargs)¶
Deploys a model.
- from_model_artifact(uri, model_file_name, artifact_dir, ..., **kwargs)¶
Loads model from the specified folder, or zip/tar archive.
- from_model_catalog(model_id, model_file_name, artifact_dir, ..., **kwargs)¶
Loads model from model catalog.
- introspect(...)¶
Runs model introspection.
- predict(data, ...)¶
Returns prediction of input data run against the model deployment endpoint.
- prepare(..., **kwargs)¶
Prepare and save the score.py, serialized model and runtime.yaml file.
- reload(...)¶
Reloads the model artifact files: score.py and the runtime.yaml.
- save(..., **kwargs)¶
Saves model artifacts to the model catalog.
- summary_status(...)¶
Gets a summary table of the current status.
- verify(data, ...)¶
Tests if deployment works in local environment.
Examples
>>> import tempfile
>>> import tensorflow as tf
>>> from ads.model.framework.tensorflow_model import TensorFlowModel
>>> mnist = tf.keras.datasets.mnist
>>> (x_train, y_train), (x_test, y_test) = mnist.load_data()
>>> x_train, x_test = x_train / 255.0, x_test / 255.0
>>> tf_estimator = tf.keras.models.Sequential(
...     [
...         tf.keras.layers.Flatten(input_shape=(28, 28)),
...         tf.keras.layers.Dense(128, activation="relu"),
...         tf.keras.layers.Dropout(0.2),
...         tf.keras.layers.Dense(10),
...     ]
... )
>>> loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
>>> tf_estimator.compile(optimizer="adam", loss=loss_fn, metrics=["accuracy"])
>>> tf_estimator.fit(x_train, y_train, epochs=1)
>>> tf_model = TensorFlowModel(estimator=tf_estimator,
...                            artifact_dir=tempfile.mkdtemp())
>>> tf_model.prepare(inference_conda_env="generalml_p37_cpu_v1", force_overwrite=True)
>>> tf_model.verify(x_test[:1])
>>> tf_model.save()
>>> model_deployment = tf_model.deploy(wait_for_completion=False)
>>> tf_model.predict(x_test[:1])
Initializes a TensorFlowModel instance.
- Parameters:
estimator (callable) – Any model object generated by the TensorFlow framework.
artifact_dir (str) – Directory for the generated artifact.
properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.
auth ((Dict, optional). Defaults to None.) – The default authentication is set using the ads.set_auth API. If you need to override the default, use ads.common.auth.api_keys or ads.common.auth.resource_principal to create an appropriate authentication signer and the kwargs required to instantiate an IdentityClient object.
model_save_serializer ((SERDE or str, optional). Defaults to "tf".) – Instance of ads.model.SERDE. Used to serialize/deserialize the model.
model_input_serializer ((SERDE, optional). Defaults to None.) – Instance of ads.model.SERDE. Used to serialize/deserialize data.
- Returns:
TensorFlowModel instance.
- Return type:
- model_save_serializer_type¶
alias of
TensorflowModelSerializerType
- serialize_model(as_onnx: bool = False, X_sample: Dict | str | List | Tuple | ndarray | Series | DataFrame | None = None, force_overwrite: bool = False, **kwargs) → None[source]¶
Serialize and save the TensorFlow model using ONNX or a model-specific method.
- Parameters:
as_onnx ((bool, optional). Defaults to False.) – If set as True, convert into ONNX model.
X_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of input data that will be used to generate input schema and detect input_signature.
force_overwrite ((bool, optional). Defaults to False.) – If set as True, overwrite serialized model if exists.
**kwargs ((Dict, optional)) – Optional parameters used to serialize the TensorFlow model to ONNX, including the following: input_signature: a tuple or a list of tf.TensorSpec objects. Defaults to None. Defines the shape/dtype of the input so that model(input_signature) is a valid invocation of the model. opset_version: (int, optional). Defaults to None. The ONNX opset version used for the exported model.
- Returns:
Nothing.
- Return type:
None
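The input_signature kwarg tells the ONNX exporter the shape and dtype of each input. A hedged sketch matching the 28x28 MNIST example above (tf.TensorSpec is the standard TensorFlow API; the dynamic batch dimension is expressed as None, and the opset choice is illustrative):

```python
# Hedged sketch: an input_signature for serialize_model(as_onnx=True, ...).
input_shape = (None, 28, 28)   # None = dynamic batch dimension
# import tensorflow as tf
# tf_model.serialize_model(
#     as_onnx=True,
#     input_signature=[tf.TensorSpec(shape=input_shape, dtype=tf.float32)],
#     opset_version=13,   # illustrative opset choice, not a documented default
# )
```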
- class ads.model.XGBoostModel(estimator: callable, artifact_dir: str | None = None, properties: ModelProperties | None = None, auth: Dict = None, model_save_serializer: SERDE | None = 'xgboost', model_input_serializer: SERDE | None = None, **kwargs)[source]¶
Bases:
FrameworkSpecificModel
XGBoostModel class for estimators from the XGBoost framework.
- auth¶
Default authentication is set using the ads.set_auth API. To override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create an authentication signer to instantiate an IdentityClient object.
- Type:
Dict
- estimator¶
A trained xgboost estimator/model using Xgboost.
- Type:
Callable
- metadata_custom¶
The model custom metadata.
- Type:
- metadata_provenance¶
The model provenance metadata.
- Type:
- metadata_taxonomy¶
The model taxonomy metadata.
- Type:
- model_artifact¶
This is built by calling prepare.
- Type:
- model_deployment¶
A ModelDeployment instance.
- Type:
- properties¶
ModelProperties object required to save and deploy model. For more details, check https://accelerated-data-science.readthedocs.io/en/latest/ads.model.html#module-ads.model.model_properties.
- Type:
- runtime_info¶
A RuntimeInfo instance.
- Type:
- serialize¶
Whether to serialize the model to pkl file by default. If False, you need to serialize the model manually, save it under artifact_dir and update the score.py manually.
- Type:
- delete_deployment(...)¶
Deletes the current model deployment.
- deploy(..., **kwargs)¶
Deploys a model.
- from_model_artifact(uri, model_file_name, artifact_dir, ..., **kwargs)¶
Loads model from the specified folder, or zip/tar archive.
- from_model_catalog(model_id, model_file_name, artifact_dir, ..., **kwargs)¶
Loads model from model catalog.
- introspect(...)¶
Runs model introspection.
- predict(data, ...)¶
Returns prediction of input data run against the model deployment endpoint.
- prepare(..., **kwargs)¶
Prepare and save the score.py, serialized model and runtime.yaml file.
- reload(...)¶
Reloads the model artifact files: score.py and the runtime.yaml.
- save(..., **kwargs)¶
Saves model artifacts to the model catalog.
- summary_status(...)¶
Gets a summary table of the current status.
- verify(data, ...)¶
Tests if deployment works in local environment.
Examples
>>> import tempfile
>>> import xgboost as xgb
>>> from sklearn.datasets import load_iris
>>> from sklearn.model_selection import train_test_split
>>> from ads.model.framework.xgboost_model import XGBoostModel
>>> iris = load_iris()
>>> X, y = iris.data, iris.target
>>> X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)
>>> xgboost_estimator = xgb.XGBClassifier()
>>> xgboost_estimator.fit(X_train, y_train)
>>> xgboost_model = XGBoostModel(estimator=xgboost_estimator, artifact_dir=tempfile.mkdtemp())
>>> xgboost_model.prepare(inference_conda_env="generalml_p37_cpu_v1", force_overwrite=True)
>>> xgboost_model.reload()
>>> xgboost_model.verify(X_test)
>>> xgboost_model.save()
>>> model_deployment = xgboost_model.deploy(wait_for_completion=False)
>>> xgboost_model.predict(X_test)
Initializes an XGBoostModel instance. This class wraps the XGBoost model as the estimator. Its primary purpose is to hold the trained model and handle serialization.
- Parameters:
estimator (Callable) – XGBoostModel
artifact_dir (str) – Artifact directory to store the files needed for deployment.
properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.
auth ((Dict, optional). Defaults to None.) – The default authentication is set using the ads.set_auth API. If you need to override the default, use ads.common.auth.api_keys or ads.common.auth.resource_principal to create an appropriate authentication signer and the kwargs required to instantiate an IdentityClient object.
model_save_serializer ((SERDE or str, optional). Defaults to "xgboost".) – Instance of ads.model.SERDE. Used to serialize/deserialize the model.
model_input_serializer ((SERDE, optional). Defaults to None.) – Instance of ads.model.SERDE. Used to serialize/deserialize data.
- Returns:
XGBoostModel instance.
- Return type:
Examples
>>> import tempfile
>>> import xgboost as xgb
>>> from sklearn.datasets import load_iris
>>> from sklearn.model_selection import train_test_split
>>> from ads.model.framework.xgboost_model import XGBoostModel
>>> iris = load_iris()
>>> X, y = iris.data, iris.target
>>> X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)
>>> train = xgb.DMatrix(X_train, y_train)
>>> test = xgb.DMatrix(X_test, y_test)
>>> xgboost_estimator = xgb.XGBClassifier()
>>> xgboost_estimator.fit(X_train, y_train)
>>> xgboost_model = XGBoostModel(estimator=xgboost_estimator, artifact_dir=tempfile.mkdtemp())
>>> xgboost_model.prepare(inference_conda_env="generalml_p37_cpu_v1")
>>> xgboost_model.verify(X_test)
>>> xgboost_model.save()
>>> model_deployment = xgboost_model.deploy()
>>> xgboost_model.predict(X_test)
>>> xgboost_model.delete_deployment()
- model_save_serializer_type¶
alias of
XgboostModelSerializerType
- serialize_model(as_onnx: bool = False, initial_types: List[Tuple] = None, force_overwrite: bool = False, X_sample: Dict | str | List | Tuple | ndarray | Series | DataFrame | None = None, **kwargs)[source]¶
Serialize and save the XGBoost model using ONNX or a model-specific method.
- Parameters:
as_onnx ((boolean, optional). Defaults to False.) – If set as True, provide initial_types or X_sample to convert into ONNX.
initial_types ((List[Tuple], optional). Defaults to None.) – Each element is a tuple of a variable name and a type.
force_overwrite ((boolean, optional). Defaults to False.) – If set as True, overwrite serialized model if exists.
X_sample (Union[Dict, str, List, np.ndarray, pd.core.series.Series, pd.core.frame.DataFrame,]. Defaults to None.) – Contains model inputs such that model(X_sample) is a valid invocation of the model. Used to generate initial_types.
- Returns:
Nothing.
- Return type:
None
- ads.model.experiment(name: str, create_if_not_exists: bool | None = True, **kwargs: Dict)[source]¶
Context manager that helps operate on a model version set.
- Parameters:
name (str) – The name of the model version set.
create_if_not_exists ((bool, optional). Defaults to True.) – Creates model version set if not exists.
kwargs ((Dict, optional).) –
- compartment_id: (str, optional). Defaults to value from the environment variables.
The compartment OCID.
- project_id: (str, optional). Defaults to value from the environment variables.
The project OCID.
- description: (str, optional). Defaults to None.
The description of the model version set.
- Yields:
ModelVersionSet – The model version set object.
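The context manager is typically used to group several saved models under one model version set. A hedged sketch of the optional kwargs and a possible with-block; both OCIDs are placeholders, and passing the yielded set to save via model_version_set is an assumption about the save API, not documented above:

```python
# Hedged sketch: optional kwargs accepted by ads.model.experiment.
# Both OCIDs below are placeholders, not real resources.
experiment_kwargs = {
    "compartment_id": "ocid1.compartment.oc1..<unique_id>",
    "project_id": "ocid1.datascienceproject.oc1..<unique_id>",
    "description": "Iris classifier experiments",
}
# import ads.model
# with ads.model.experiment("iris-experiment", **experiment_kwargs) as mvs:
#     sklearn_model.save(model_version_set=mvs)   # hypothetical usage
```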