ads.model package¶
Subpackages¶
- ads.model.common package
- ads.model.deployment package
- Subpackages
- Submodules
- ads.model.deployment.model_deployer module
ModelDeployer
ModelDeployer.config
ModelDeployer.ds_client
ModelDeployer.ds_composite_client
ModelDeployer.deploy()
ModelDeployer.get_model_deployment()
ModelDeployer.get_model_deployment_state()
ModelDeployer.delete()
ModelDeployer.list_deployments()
ModelDeployer.show_deployments()
ModelDeployer.delete()
ModelDeployer.deploy()
ModelDeployer.deploy_from_model_uri()
ModelDeployer.get_model_deployment()
ModelDeployer.get_model_deployment_state()
ModelDeployer.list_deployments()
ModelDeployer.show_deployments()
ModelDeployer.update()
- ads.model.deployment.model_deployment module
LogNotConfiguredError
ModelDeployment
ModelDeployment.config
ModelDeployment.properties
ModelDeployment.workflow_state_progress
ModelDeployment.workflow_steps
ModelDeployment.dsc_model_deployment
ModelDeployment.state
ModelDeployment.created_by
ModelDeployment.lifecycle_state
ModelDeployment.lifecycle_details
ModelDeployment.time_created
ModelDeployment.display_name
ModelDeployment.description
ModelDeployment.freeform_tags
ModelDeployment.defined_tags
ModelDeployment.runtime
ModelDeployment.infrastructure
ModelDeployment.deploy()
ModelDeployment.delete()
ModelDeployment.update()
ModelDeployment.activate()
ModelDeployment.deactivate()
ModelDeployment.list()
ModelDeployment.with_display_name()
ModelDeployment.with_description()
ModelDeployment.with_freeform_tags()
ModelDeployment.with_defined_tags()
ModelDeployment.with_runtime()
ModelDeployment.with_infrastructure()
ModelDeployment.from_dict()
ModelDeployment.from_id()
ModelDeployment.sync()
ModelDeployment.CONST_CREATED_BY
ModelDeployment.CONST_DEFINED_TAG
ModelDeployment.CONST_DESCRIPTION
ModelDeployment.CONST_DISPLAY_NAME
ModelDeployment.CONST_FREEFORM_TAG
ModelDeployment.CONST_ID
ModelDeployment.CONST_INFRASTRUCTURE
ModelDeployment.CONST_LIFECYCLE_DETAILS
ModelDeployment.CONST_LIFECYCLE_STATE
ModelDeployment.CONST_MODEL_DEPLOYMENT_URL
ModelDeployment.CONST_RUNTIME
ModelDeployment.CONST_TIME_CREATED
ModelDeployment.access_log
ModelDeployment.activate()
ModelDeployment.attribute_map
ModelDeployment.build()
ModelDeployment.created_by
ModelDeployment.deactivate()
ModelDeployment.defined_tags
ModelDeployment.delete()
ModelDeployment.deploy()
ModelDeployment.description
ModelDeployment.display_name
ModelDeployment.freeform_tags
ModelDeployment.from_dict()
ModelDeployment.from_id()
ModelDeployment.id
ModelDeployment.infrastructure
ModelDeployment.initialize_spec_attributes
ModelDeployment.kind
ModelDeployment.lifecycle_details
ModelDeployment.lifecycle_state
ModelDeployment.list()
ModelDeployment.list_df()
ModelDeployment.logs()
ModelDeployment.model_deployment_id
ModelDeployment.model_input_serializer
ModelDeployment.predict()
ModelDeployment.predict_log
ModelDeployment.runtime
ModelDeployment.show_logs()
ModelDeployment.state
ModelDeployment.status
ModelDeployment.sync()
ModelDeployment.time_created
ModelDeployment.to_dict()
ModelDeployment.type
ModelDeployment.update()
ModelDeployment.url
ModelDeployment.watch()
ModelDeployment.with_defined_tags()
ModelDeployment.with_description()
ModelDeployment.with_display_name()
ModelDeployment.with_freeform_tags()
ModelDeployment.with_infrastructure()
ModelDeployment.with_runtime()
ModelDeploymentLogType
ModelDeploymentPredictError
- ads.model.deployment.model_deployment_properties module
ModelDeploymentProperties
ModelDeploymentProperties.swagger_types
ModelDeploymentProperties.model_id
ModelDeploymentProperties.model_uri
ModelDeploymentProperties.with_prop()
ModelDeploymentProperties.with_instance_configuration()
ModelDeploymentProperties.with_access_log()
ModelDeploymentProperties.with_predict_log()
ModelDeploymentProperties.build()
ModelDeploymentProperties.build()
ModelDeploymentProperties.sub_properties
ModelDeploymentProperties.to_oci_model()
ModelDeploymentProperties.to_update_deployment()
ModelDeploymentProperties.with_access_log()
ModelDeploymentProperties.with_category_log()
ModelDeploymentProperties.with_instance_configuration()
ModelDeploymentProperties.with_logging_configuration()
ModelDeploymentProperties.with_predict_log()
ModelDeploymentProperties.with_prop()
- Module contents
- ads.model.extractor package
- Submodules
- ads.model.extractor.keras_extractor module
- ads.model.extractor.lightgbm_extractor module
- ads.model.extractor.model_info_extractor module
ModelInfoExtractor
ModelInfoExtractor.framework()
ModelInfoExtractor.algorithm()
ModelInfoExtractor.version()
ModelInfoExtractor.hyperparameter()
ModelInfoExtractor.info()
ModelInfoExtractor.algorithm()
ModelInfoExtractor.framework()
ModelInfoExtractor.hyperparameter()
ModelInfoExtractor.info()
ModelInfoExtractor.version()
normalize_hyperparameter()
- ads.model.extractor.model_info_extractor_factory module
- ads.model.extractor.pytorch_extractor module
- ads.model.extractor.sklearn_extractor module
- ads.model.extractor.spark_extractor module
- ads.model.extractor.tensorflow_extractor module
TensorflowExtractor
TensorflowExtractor.model
TensorflowExtractor.estimator
TensorflowExtractor.framework()
TensorflowExtractor.algorithm()
TensorflowExtractor.version()
TensorflowExtractor.hyperparameter()
TensorflowExtractor.algorithm
TensorflowExtractor.framework
TensorflowExtractor.hyperparameter
TensorflowExtractor.version
- ads.model.extractor.xgboost_extractor module
- Module contents
- ads.model.framework package
- Submodules
- ads.model.framework.huggingface_model module
HuggingFacePipelineModel
HuggingFacePipelineModel.algorithm
HuggingFacePipelineModel.artifact_dir
HuggingFacePipelineModel.auth
HuggingFacePipelineModel.estimator
HuggingFacePipelineModel.framework
HuggingFacePipelineModel.hyperparameter
HuggingFacePipelineModel.metadata_custom
HuggingFacePipelineModel.metadata_provenance
HuggingFacePipelineModel.metadata_taxonomy
HuggingFacePipelineModel.model_artifact
HuggingFacePipelineModel.model_deployment
HuggingFacePipelineModel.model_file_name
HuggingFacePipelineModel.model_id
HuggingFacePipelineModel.properties
HuggingFacePipelineModel.runtime_info
HuggingFacePipelineModel.schema_input
HuggingFacePipelineModel.schema_output
HuggingFacePipelineModel.serialize
HuggingFacePipelineModel.version
HuggingFacePipelineModel.delete_deployment()
HuggingFacePipelineModel.deploy()
HuggingFacePipelineModel.from_model_artifact()
HuggingFacePipelineModel.from_model_catalog()
HuggingFacePipelineModel.introspect()
HuggingFacePipelineModel.predict()
HuggingFacePipelineModel.prepare()
HuggingFacePipelineModel.reload()
HuggingFacePipelineModel.save()
HuggingFacePipelineModel.summary_status()
HuggingFacePipelineModel.verify()
HuggingFacePipelineModel.delete()
HuggingFacePipelineModel.delete_deployment()
HuggingFacePipelineModel.deploy()
HuggingFacePipelineModel.download_artifact()
HuggingFacePipelineModel.evaluate()
HuggingFacePipelineModel.from_id()
HuggingFacePipelineModel.from_model_artifact()
HuggingFacePipelineModel.from_model_catalog()
HuggingFacePipelineModel.from_model_deployment()
HuggingFacePipelineModel.get_data_serializer()
HuggingFacePipelineModel.get_model_serializer()
HuggingFacePipelineModel.introspect()
HuggingFacePipelineModel.metadata_custom
HuggingFacePipelineModel.metadata_provenance
HuggingFacePipelineModel.metadata_taxonomy
HuggingFacePipelineModel.model_deployment_id
HuggingFacePipelineModel.model_id
HuggingFacePipelineModel.model_input_serializer_type
HuggingFacePipelineModel.model_save_serializer_type
HuggingFacePipelineModel.populate_metadata()
HuggingFacePipelineModel.populate_schema()
HuggingFacePipelineModel.predict()
HuggingFacePipelineModel.prepare()
HuggingFacePipelineModel.prepare_save_deploy()
HuggingFacePipelineModel.reload()
HuggingFacePipelineModel.reload_runtime_info()
HuggingFacePipelineModel.restart_deployment()
HuggingFacePipelineModel.save()
HuggingFacePipelineModel.schema_input
HuggingFacePipelineModel.schema_output
HuggingFacePipelineModel.serialize_model()
HuggingFacePipelineModel.set_model_input_serializer()
HuggingFacePipelineModel.set_model_save_serializer()
HuggingFacePipelineModel.summary_status()
HuggingFacePipelineModel.update()
HuggingFacePipelineModel.update_deployment()
HuggingFacePipelineModel.update_summary_action()
HuggingFacePipelineModel.update_summary_status()
HuggingFacePipelineModel.upload_artifact()
HuggingFacePipelineModel.verify()
- ads.model.framework.lightgbm_model module
LightGBMModel
LightGBMModel.algorithm
LightGBMModel.artifact_dir
LightGBMModel.auth
LightGBMModel.estimator
LightGBMModel.framework
LightGBMModel.hyperparameter
LightGBMModel.metadata_custom
LightGBMModel.metadata_provenance
LightGBMModel.metadata_taxonomy
LightGBMModel.model_artifact
LightGBMModel.model_deployment
LightGBMModel.model_file_name
LightGBMModel.model_id
LightGBMModel.properties
LightGBMModel.runtime_info
LightGBMModel.schema_input
LightGBMModel.schema_output
LightGBMModel.serialize
LightGBMModel.version
LightGBMModel.delete_deployment()
LightGBMModel.deploy()
LightGBMModel.from_model_artifact()
LightGBMModel.from_model_catalog()
LightGBMModel.introspect()
LightGBMModel.predict()
LightGBMModel.prepare()
LightGBMModel.reload()
LightGBMModel.save()
LightGBMModel.summary_status()
LightGBMModel.verify()
LightGBMModel.delete()
LightGBMModel.delete_deployment()
LightGBMModel.deploy()
LightGBMModel.download_artifact()
LightGBMModel.evaluate()
LightGBMModel.from_id()
LightGBMModel.from_model_artifact()
LightGBMModel.from_model_catalog()
LightGBMModel.from_model_deployment()
LightGBMModel.get_data_serializer()
LightGBMModel.get_model_serializer()
LightGBMModel.introspect()
LightGBMModel.metadata_custom
LightGBMModel.metadata_provenance
LightGBMModel.metadata_taxonomy
LightGBMModel.model_deployment_id
LightGBMModel.model_id
LightGBMModel.model_input_serializer_type
LightGBMModel.model_save_serializer_type
LightGBMModel.populate_metadata()
LightGBMModel.populate_schema()
LightGBMModel.predict()
LightGBMModel.prepare()
LightGBMModel.prepare_save_deploy()
LightGBMModel.reload()
LightGBMModel.reload_runtime_info()
LightGBMModel.restart_deployment()
LightGBMModel.save()
LightGBMModel.schema_input
LightGBMModel.schema_output
LightGBMModel.serialize_model()
LightGBMModel.set_model_input_serializer()
LightGBMModel.set_model_save_serializer()
LightGBMModel.summary_status()
LightGBMModel.update()
LightGBMModel.update_deployment()
LightGBMModel.update_summary_action()
LightGBMModel.update_summary_status()
LightGBMModel.upload_artifact()
LightGBMModel.verify()
- ads.model.framework.pytorch_model module
PyTorchModel
PyTorchModel.algorithm
PyTorchModel.artifact_dir
PyTorchModel.auth
PyTorchModel.estimator
PyTorchModel.framework
PyTorchModel.hyperparameter
PyTorchModel.metadata_custom
PyTorchModel.metadata_provenance
PyTorchModel.metadata_taxonomy
PyTorchModel.model_artifact
PyTorchModel.model_deployment
PyTorchModel.model_file_name
PyTorchModel.model_id
PyTorchModel.properties
PyTorchModel.runtime_info
PyTorchModel.schema_input
PyTorchModel.schema_output
PyTorchModel.serialize
PyTorchModel.version
PyTorchModel.delete_deployment()
PyTorchModel.deploy()
PyTorchModel.from_model_artifact()
PyTorchModel.from_model_catalog()
PyTorchModel.introspect()
PyTorchModel.predict()
PyTorchModel.prepare()
PyTorchModel.reload()
PyTorchModel.save()
PyTorchModel.summary_status()
PyTorchModel.verify()
PyTorchModel.delete()
PyTorchModel.delete_deployment()
PyTorchModel.deploy()
PyTorchModel.download_artifact()
PyTorchModel.evaluate()
PyTorchModel.from_id()
PyTorchModel.from_model_artifact()
PyTorchModel.from_model_catalog()
PyTorchModel.from_model_deployment()
PyTorchModel.get_data_serializer()
PyTorchModel.get_model_serializer()
PyTorchModel.introspect()
PyTorchModel.metadata_custom
PyTorchModel.metadata_provenance
PyTorchModel.metadata_taxonomy
PyTorchModel.model_deployment_id
PyTorchModel.model_id
PyTorchModel.model_input_serializer_type
PyTorchModel.model_save_serializer_type
PyTorchModel.populate_metadata()
PyTorchModel.populate_schema()
PyTorchModel.predict()
PyTorchModel.prepare()
PyTorchModel.prepare_save_deploy()
PyTorchModel.reload()
PyTorchModel.reload_runtime_info()
PyTorchModel.restart_deployment()
PyTorchModel.save()
PyTorchModel.schema_input
PyTorchModel.schema_output
PyTorchModel.serialize_model()
PyTorchModel.set_model_input_serializer()
PyTorchModel.set_model_save_serializer()
PyTorchModel.summary_status()
PyTorchModel.update()
PyTorchModel.update_deployment()
PyTorchModel.update_summary_action()
PyTorchModel.update_summary_status()
PyTorchModel.upload_artifact()
PyTorchModel.verify()
- ads.model.framework.sklearn_model module
SklearnModel
SklearnModel.algorithm
SklearnModel.artifact_dir
SklearnModel.auth
SklearnModel.estimator
SklearnModel.framework
SklearnModel.hyperparameter
SklearnModel.metadata_custom
SklearnModel.metadata_provenance
SklearnModel.metadata_taxonomy
SklearnModel.model_artifact
SklearnModel.model_deployment
SklearnModel.model_file_name
SklearnModel.model_id
SklearnModel.properties
SklearnModel.runtime_info
SklearnModel.schema_input
SklearnModel.schema_output
SklearnModel.serialize
SklearnModel.version
SklearnModel.delete_deployment()
SklearnModel.deploy()
SklearnModel.from_model_artifact()
SklearnModel.from_model_catalog()
SklearnModel.introspect()
SklearnModel.predict()
SklearnModel.prepare()
SklearnModel.reload()
SklearnModel.save()
SklearnModel.summary_status()
SklearnModel.verify()
SklearnModel.delete()
SklearnModel.delete_deployment()
SklearnModel.deploy()
SklearnModel.download_artifact()
SklearnModel.evaluate()
SklearnModel.from_id()
SklearnModel.from_model_artifact()
SklearnModel.from_model_catalog()
SklearnModel.from_model_deployment()
SklearnModel.get_data_serializer()
SklearnModel.get_model_serializer()
SklearnModel.introspect()
SklearnModel.metadata_custom
SklearnModel.metadata_provenance
SklearnModel.metadata_taxonomy
SklearnModel.model_deployment_id
SklearnModel.model_id
SklearnModel.model_input_serializer_type
SklearnModel.model_save_serializer_type
SklearnModel.populate_metadata()
SklearnModel.populate_schema()
SklearnModel.predict()
SklearnModel.prepare()
SklearnModel.prepare_save_deploy()
SklearnModel.reload()
SklearnModel.reload_runtime_info()
SklearnModel.restart_deployment()
SklearnModel.save()
SklearnModel.schema_input
SklearnModel.schema_output
SklearnModel.serialize_model()
SklearnModel.set_model_input_serializer()
SklearnModel.set_model_save_serializer()
SklearnModel.summary_status()
SklearnModel.update()
SklearnModel.update_deployment()
SklearnModel.update_summary_action()
SklearnModel.update_summary_status()
SklearnModel.upload_artifact()
SklearnModel.verify()
- ads.model.framework.spark_model module
SparkPipelineModel
SparkPipelineModel.algorithm
SparkPipelineModel.artifact_dir
SparkPipelineModel.auth
SparkPipelineModel.estimator
SparkPipelineModel.framework
SparkPipelineModel.hyperparameter
SparkPipelineModel.metadata_custom
SparkPipelineModel.metadata_provenance
SparkPipelineModel.metadata_taxonomy
SparkPipelineModel.model_artifact
SparkPipelineModel.model_file_name
SparkPipelineModel.model_id
SparkPipelineModel.properties
SparkPipelineModel.runtime_info
SparkPipelineModel.schema_input
SparkPipelineModel.schema_output
SparkPipelineModel.serialize
SparkPipelineModel.version
SparkPipelineModel.delete_deployment()
SparkPipelineModel.deploy()
SparkPipelineModel.from_model_artifact()
SparkPipelineModel.from_model_catalog()
SparkPipelineModel.introspect()
SparkPipelineModel.predict()
SparkPipelineModel.prepare()
SparkPipelineModel.reload()
SparkPipelineModel.save()
SparkPipelineModel.summary_status()
SparkPipelineModel.verify()
SparkPipelineModel.delete()
SparkPipelineModel.delete_deployment()
SparkPipelineModel.deploy()
SparkPipelineModel.download_artifact()
SparkPipelineModel.evaluate()
SparkPipelineModel.from_id()
SparkPipelineModel.from_model_artifact()
SparkPipelineModel.from_model_catalog()
SparkPipelineModel.from_model_deployment()
SparkPipelineModel.get_data_serializer()
SparkPipelineModel.get_model_serializer()
SparkPipelineModel.introspect()
SparkPipelineModel.metadata_custom
SparkPipelineModel.metadata_provenance
SparkPipelineModel.metadata_taxonomy
SparkPipelineModel.model_deployment_id
SparkPipelineModel.model_id
SparkPipelineModel.model_input_serializer_type
SparkPipelineModel.model_save_serializer_type
SparkPipelineModel.populate_metadata()
SparkPipelineModel.populate_schema()
SparkPipelineModel.predict()
SparkPipelineModel.prepare()
SparkPipelineModel.prepare_save_deploy()
SparkPipelineModel.reload()
SparkPipelineModel.reload_runtime_info()
SparkPipelineModel.restart_deployment()
SparkPipelineModel.save()
SparkPipelineModel.schema_input
SparkPipelineModel.schema_output
SparkPipelineModel.serialize_model()
SparkPipelineModel.set_model_input_serializer()
SparkPipelineModel.set_model_save_serializer()
SparkPipelineModel.summary_status()
SparkPipelineModel.update()
SparkPipelineModel.update_deployment()
SparkPipelineModel.update_summary_action()
SparkPipelineModel.update_summary_status()
SparkPipelineModel.upload_artifact()
SparkPipelineModel.verify()
- ads.model.framework.tensorflow_model module
TensorFlowModel
TensorFlowModel.algorithm
TensorFlowModel.artifact_dir
TensorFlowModel.auth
TensorFlowModel.estimator
TensorFlowModel.framework
TensorFlowModel.hyperparameter
TensorFlowModel.metadata_custom
TensorFlowModel.metadata_provenance
TensorFlowModel.metadata_taxonomy
TensorFlowModel.model_artifact
TensorFlowModel.model_deployment
TensorFlowModel.model_file_name
TensorFlowModel.model_id
TensorFlowModel.properties
TensorFlowModel.runtime_info
TensorFlowModel.schema_input
TensorFlowModel.schema_output
TensorFlowModel.serialize
TensorFlowModel.version
TensorFlowModel.delete_deployment()
TensorFlowModel.deploy()
TensorFlowModel.from_model_artifact()
TensorFlowModel.from_model_catalog()
TensorFlowModel.introspect()
TensorFlowModel.predict()
TensorFlowModel.prepare()
TensorFlowModel.reload()
TensorFlowModel.save()
TensorFlowModel.summary_status()
TensorFlowModel.verify()
TensorFlowModel.delete()
TensorFlowModel.delete_deployment()
TensorFlowModel.deploy()
TensorFlowModel.download_artifact()
TensorFlowModel.evaluate()
TensorFlowModel.from_id()
TensorFlowModel.from_model_artifact()
TensorFlowModel.from_model_catalog()
TensorFlowModel.from_model_deployment()
TensorFlowModel.get_data_serializer()
TensorFlowModel.get_model_serializer()
TensorFlowModel.introspect()
TensorFlowModel.metadata_custom
TensorFlowModel.metadata_provenance
TensorFlowModel.metadata_taxonomy
TensorFlowModel.model_deployment_id
TensorFlowModel.model_id
TensorFlowModel.model_input_serializer_type
TensorFlowModel.model_save_serializer_type
TensorFlowModel.populate_metadata()
TensorFlowModel.populate_schema()
TensorFlowModel.predict()
TensorFlowModel.prepare()
TensorFlowModel.prepare_save_deploy()
TensorFlowModel.reload()
TensorFlowModel.reload_runtime_info()
TensorFlowModel.restart_deployment()
TensorFlowModel.save()
TensorFlowModel.schema_input
TensorFlowModel.schema_output
TensorFlowModel.serialize_model()
TensorFlowModel.set_model_input_serializer()
TensorFlowModel.set_model_save_serializer()
TensorFlowModel.summary_status()
TensorFlowModel.update()
TensorFlowModel.update_deployment()
TensorFlowModel.update_summary_action()
TensorFlowModel.update_summary_status()
TensorFlowModel.upload_artifact()
TensorFlowModel.verify()
- ads.model.framework.xgboost_model module
XGBoostModel
XGBoostModel.algorithm
XGBoostModel.artifact_dir
XGBoostModel.auth
XGBoostModel.estimator
XGBoostModel.framework
XGBoostModel.hyperparameter
XGBoostModel.metadata_custom
XGBoostModel.metadata_provenance
XGBoostModel.metadata_taxonomy
XGBoostModel.model_artifact
XGBoostModel.model_deployment
XGBoostModel.model_file_name
XGBoostModel.model_id
XGBoostModel.properties
XGBoostModel.runtime_info
XGBoostModel.schema_input
XGBoostModel.schema_output
XGBoostModel.serialize
XGBoostModel.version
XGBoostModel.delete_deployment()
XGBoostModel.deploy()
XGBoostModel.from_model_artifact()
XGBoostModel.from_model_catalog()
XGBoostModel.introspect()
XGBoostModel.predict()
XGBoostModel.prepare()
XGBoostModel.reload()
XGBoostModel.save()
XGBoostModel.summary_status()
XGBoostModel.verify()
XGBoostModel.delete()
XGBoostModel.delete_deployment()
XGBoostModel.deploy()
XGBoostModel.download_artifact()
XGBoostModel.evaluate()
XGBoostModel.from_id()
XGBoostModel.from_model_artifact()
XGBoostModel.from_model_catalog()
XGBoostModel.from_model_deployment()
XGBoostModel.get_data_serializer()
XGBoostModel.get_model_serializer()
XGBoostModel.introspect()
XGBoostModel.metadata_custom
XGBoostModel.metadata_provenance
XGBoostModel.metadata_taxonomy
XGBoostModel.model_deployment_id
XGBoostModel.model_id
XGBoostModel.model_input_serializer_type
XGBoostModel.model_save_serializer_type
XGBoostModel.populate_metadata()
XGBoostModel.populate_schema()
XGBoostModel.predict()
XGBoostModel.prepare()
XGBoostModel.prepare_save_deploy()
XGBoostModel.reload()
XGBoostModel.reload_runtime_info()
XGBoostModel.restart_deployment()
XGBoostModel.save()
XGBoostModel.schema_input
XGBoostModel.schema_output
XGBoostModel.serialize_model()
XGBoostModel.set_model_input_serializer()
XGBoostModel.set_model_save_serializer()
XGBoostModel.summary_status()
XGBoostModel.update()
XGBoostModel.update_deployment()
XGBoostModel.update_summary_action()
XGBoostModel.update_summary_status()
XGBoostModel.upload_artifact()
XGBoostModel.verify()
- Module contents
- ads.model.model_artifact_boilerplate package
- ads.model.runtime package
- Submodules
- ads.model.runtime.env_info module
- ads.model.runtime.model_deployment_details module
- ads.model.runtime.model_provenance_details module
ModelProvenanceDetails
ModelProvenanceDetails.project_ocid
ModelProvenanceDetails.tenancy_ocid
ModelProvenanceDetails.training_code
ModelProvenanceDetails.training_compartment_ocid
ModelProvenanceDetails.training_conda_env
ModelProvenanceDetails.training_region
ModelProvenanceDetails.training_resource_ocid
ModelProvenanceDetails.user_ocid
ModelProvenanceDetails.vm_image_internal_id
TrainingCode
- ads.model.runtime.runtime_info module
- ads.model.runtime.utils module
- Module contents
- ads.model.service package
- Submodules
- ads.model.service.oci_datascience_model module
ModelArtifactNotFoundError
ModelNotSavedError
ModelProvenanceNotFoundError
ModelWithActiveDeploymentError
OCIDataScienceModel
OCIDataScienceModel.create()
OCIDataScienceModel.create_model_provenance()
OCIDataScienceModel.get_model_provenance()
OCIDataScienceModel.get_artifact_info()
OCIDataScienceModel.create_model_artifact()
OCIDataScienceModel.import_model_artifact()
OCIDataScienceModel.update()
OCIDataScienceModel.delete()
OCIDataScienceModel.model_deployment()
OCIDataScienceModel.from_id()
OCIDataScienceModel.create()
OCIDataScienceModel.create_model_artifact()
OCIDataScienceModel.create_model_provenance()
OCIDataScienceModel.delete()
OCIDataScienceModel.export_model_artifact()
OCIDataScienceModel.from_id()
OCIDataScienceModel.get_artifact_info()
OCIDataScienceModel.get_model_artifact_content()
OCIDataScienceModel.get_model_provenance()
OCIDataScienceModel.import_model_artifact()
OCIDataScienceModel.is_model_by_reference()
OCIDataScienceModel.model_deployment()
OCIDataScienceModel.update()
OCIDataScienceModel.update_model_provenance()
check_for_model_id()
- ads.model.service.oci_datascience_model_version_set module
- Module contents
- ads.model.transformer package
Submodules¶
ads.model.artifact module¶
- exception ads.model.artifact.AritfactFolderStructureError(required_files: Tuple[str])[source]¶
Bases:
Exception
- exception ads.model.artifact.ArtifactRequiredFilesError(required_files: Tuple[str])[source]¶
Bases:
Exception
- class ads.model.artifact.ModelArtifact(artifact_dir: str, model_file_name: str | None = None, reload: bool | None = False, ignore_conda_error: bool | None = False, local_copy_dir: str | None = None, auth: dict | None = None)[source]¶
Bases:
object
The class that represents model artifacts. It is designed to help generate and manage model artifacts.
Initializes a ModelArtifact instance.
- Parameters:
artifact_dir (str) – The artifact folder to store the files needed for deployment.
model_file_name ((str, optional). Defaults to None.) – The file name of the serialized model.
reload ((bool, optional). Defaults to False.) – Whether to reload the model into the environment.
ignore_conda_error ((bool, optional). Defaults to False.) – Whether to ignore errors when collecting conda information.
local_copy_dir ((str, optional). Defaults to None.) – The local backup directory of the model artifacts.
auth ((Dict, optional). Defaults to None.) – The default authentication is set using the ads.set_auth API. If you need to override the default, use ads.common.auth.api_keys or ads.common.auth.resource_principal to create an appropriate authentication signer and the kwargs required to instantiate an IdentityClient object.
- Returns:
A ModelArtifact instance.
- Return type:
- Raises:
ValueError – If artifact_dir is not provided.
- classmethod from_uri(uri: str, artifact_dir: str, model_file_name: str | None = None, force_overwrite: bool | None = False, auth: Dict | None = None, ignore_conda_error: bool | None = False, reload: bool | None = False)[source]¶
Constructs a ModelArtifact object from the existing model artifacts.
- Parameters:
uri (str) – The URI of the source artifact folder or archive. Can be a local path or an OCI Object Storage URI.
artifact_dir (str) – The local artifact folder to store the files needed for deployment.
auth ((Dict, optional). Defaults to None.) – The default authentication is set using the ads.set_auth API. If you need to override the default, use ads.common.auth.api_keys or ads.common.auth.resource_principal to create an appropriate authentication signer and the kwargs required to instantiate an IdentityClient object.
force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files or not.
ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore error when collecting conda information.
model_file_name ((str, optional). Defaults to None) – The file name of the serialized model.
reload ((bool, optional). Defaults to False.) – Whether to reload the Model into the environment.
- Returns:
A ModelArtifact instance
- Return type:
- Raises:
ValueError – If uri equals artifact_dir but the path does not exist, or if artifact_dir is not provided.
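The uri argument accepts either a local path or an OCI Object Storage URI. A minimal sketch of the two forms (the helper below is hypothetical, written for illustration only; it is not part of ads):

```python
def is_object_storage_uri(uri: str) -> bool:
    """Return True for an OCI Object Storage URI of the form
    oci://<bucket_name>@<namespace>/prefix, False for a local path."""
    return uri.startswith("oci://")

# Both forms are valid inputs for ModelArtifact.from_uri:
print(is_object_storage_uri("oci://my-bucket@my-namespace/models/mymodel.zip"))  # True
print(is_object_storage_uri("./mymodel.zip"))                                    # False

# With ads installed and authentication configured, the call itself might
# look like this (hypothetical bucket and directory names, not executed here):
#
# from ads.model.artifact import ModelArtifact
# artifact = ModelArtifact.from_uri(
#     uri="oci://my-bucket@my-namespace/models/mymodel.zip",
#     artifact_dir="./downloaded_artifacts",
#     force_overwrite=True,
# )
```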
- prepare_runtime_yaml(inference_conda_env: str, inference_python_version: str | None = None, training_conda_env: str | None = None, training_python_version: str | None = None, force_overwrite: bool = False, namespace: str = 'id19sfcrra6z', bucketname: str = 'service-conda-packs', auth: dict | None = None, ignore_conda_error: bool = False) None [source]¶
Generate a runtime yaml file and save it to the artifact directory.
- Parameters:
inference_conda_env ((str, optional). Defaults to None.) – The object storage path of the conda pack to be used in deployment. Can be either the slug or the object storage path of the conda pack. You can only pass in a slug if the conda pack is a service pack.
inference_python_version ((str, optional). Defaults to None.) – The Python version to be used in deployment.
training_conda_env ((str, optional). Defaults to None.) – The object storage path of the conda pack used during training. Can be either the slug or the object storage path of the conda pack. You can only pass in a slug if the conda pack is a service pack.
training_python_version ((str, optional). Defaults to None.) – The Python version used during training.
force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files.
namespace ((str, optional)) – The Object Storage namespace of the region. Defaults to the environment variable CONDA_BUCKET_NS.
bucketname ((str, optional)) – The bucket name of the service pack. Defaults to the environment variable CONDA_BUCKET_NAME.
auth ((Dict, optional). Defaults to None.) – The default authentication is set using the ads.set_auth API. If you need to override the default, use ads.common.auth.api_keys or ads.common.auth.resource_principal to create an appropriate authentication signer and the kwargs required to instantiate an IdentityClient object.
- Raises:
ValueError – If neither slug nor conda_env_uri is provided.
- Returns:
A RuntimeInfo instance.
- Return type:
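To make the relationship between these parameters concrete, here is a sketch of the kind of structure a generated runtime file records. The field names below are assumptions made purely for demonstration; the actual runtime.yaml schema is defined by ads:

```python
from typing import Optional

def build_runtime_info(
    inference_conda_env: str,
    inference_python_version: str,
    training_conda_env: Optional[str] = None,
    training_python_version: Optional[str] = None,
) -> dict:
    """Assemble a dict mirroring the kind of information that
    prepare_runtime_yaml() writes to the artifact directory.
    Field names are illustrative, not the exact ads schema."""
    info = {
        "MODEL_DEPLOYMENT": {
            "INFERENCE_CONDA_ENV": {
                "INFERENCE_ENV_PATH": inference_conda_env,
                "INFERENCE_PYTHON_VERSION": inference_python_version,
            }
        }
    }
    # Training environment details are optional and recorded only if given.
    if training_conda_env:
        info["MODEL_PROVENANCE"] = {
            "TRAINING_CONDA_ENV": {
                "TRAINING_ENV_PATH": training_conda_env,
                "TRAINING_PYTHON_VERSION": training_python_version,
            }
        }
    return info

# The bucket/namespace defaults come from the method signature above;
# the conda pack path itself is hypothetical.
runtime = build_runtime_info(
    inference_conda_env="oci://service-conda-packs@id19sfcrra6z/service_pack/cpu/General/1.0/generalml_p38",
    inference_python_version="3.8",
)
```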
- prepare_score_py(jinja_template_filename: str, model_file_name: str | None = None, **kwargs)[source]¶
Prepares the score.py file.
- Parameters:
jinja_template_filename (str) – The Jinja template file name.
model_file_name ((str, optional). Defaults to None.) – The file name of the serialized model.
**kwargs ((dict)) – use_torch_script (bool), data_deserializer (str).
- Return type:
None
- Raises:
ValueError – If model_file_name is not provided.
ads.model.artifact_downloader module¶
- class ads.model.artifact_downloader.ArtifactDownloader(dsc_model: OCIDataScienceModel, target_dir: str, force_overwrite: bool | None = False)[source]¶
Bases:
ABC
The abstract class to download model artifacts.
Initializes ArtifactDownloader instance.
- Parameters:
dsc_model (OCIDataScienceModel) – The data science model instance.
target_dir (str) – The target location of the model after download.
force_overwrite (bool) – Overwrite target_dir if it exists.
- PROGRESS_STEPS_COUNT = 1¶
- download()[source]¶
Downloads model artifacts.
- Return type:
None
- Raises:
ValueError – If target directory does not exist.
- class ads.model.artifact_downloader.LargeArtifactDownloader(dsc_model: OCIDataScienceModel, target_dir: str, auth: Dict | None = None, force_overwrite: bool | None = False, region: str | None = None, bucket_uri: str | None = None, overwrite_existing_artifact: bool | None = True, remove_existing_artifact: bool | None = True, model_file_description: dict | None = None)[source]¶
Bases:
ArtifactDownloader
Initializes LargeArtifactDownloader instance.
- Parameters:
dsc_model (OCIDataScienceModel) – The data science model instance.
target_dir (str) – The target location of the model after download.
auth ((Dict, optional). Defaults to None.) – The default authentication is set using the ads.set_auth API. If you need to override the default, use ads.common.auth.api_keys or ads.common.auth.resource_principal to create an appropriate authentication signer and the kwargs required to instantiate an IdentityClient object.
force_overwrite ((bool, optional). Defaults to False.) – Overwrite target_dir if it exists.
region ((str, optional). Defaults to None.) – The destination Object Storage bucket region. By default the value will be extracted from the OCI_REGION_METADATA environment variable.
bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for uploading large artifacts whose size is greater than 2 GB. Example: oci://<bucket_name>@<namespace>/prefix/.
overwrite_existing_artifact ((bool, optional). Defaults to True.) – Overwrite the target bucket artifact if it exists.
remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to the Object Storage bucket should be removed.
model_file_description ((dict, optional). Defaults to None.) – Contains object path details for models created by reference.
- PROGRESS_STEPS_COUNT = 4¶
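The bucket_uri parameter follows the oci://<bucket_name>@<namespace>/prefix/ form. A small hypothetical helper (not part of ads) illustrating how the pieces of such a URI fit together:

```python
from urllib.parse import urlparse

def parse_bucket_uri(bucket_uri: str) -> dict:
    """Split an oci://<bucket_name>@<namespace>/prefix/ URI into its parts.
    Illustrative only; ads performs its own handling internally."""
    parsed = urlparse(bucket_uri)
    if parsed.scheme != "oci" or "@" not in parsed.netloc:
        raise ValueError(f"Not an OCI Object Storage URI: {bucket_uri}")
    bucket, namespace = parsed.netloc.split("@", 1)
    return {
        "bucket": bucket,
        "namespace": namespace,
        "prefix": parsed.path.lstrip("/"),
    }

info = parse_bucket_uri("oci://my-bucket@my-namespace/models/large/")
print(info)  # {'bucket': 'my-bucket', 'namespace': 'my-namespace', 'prefix': 'models/large/'}
```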
- class ads.model.artifact_downloader.SmallArtifactDownloader(dsc_model: OCIDataScienceModel, target_dir: str, force_overwrite: bool | None = False)[source]¶
Bases:
ArtifactDownloader
Initializes ArtifactDownloader instance.
- Parameters:
dsc_model (OCIDataScienceModel) – The data science model instance.
target_dir (str) – The target location of the model after download.
force_overwrite (bool) – Overwrite target_dir if it exists.
- PROGRESS_STEPS_COUNT = 3¶
ads.model.artifact_uploader module¶
- class ads.model.artifact_uploader.ArtifactUploader(dsc_model: OCIDataScienceModel, artifact_path: str)[source]¶
Bases:
ABC
The abstract class to upload model artifacts.
Initializes ArtifactUploader instance.
- Parameters:
dsc_model (OCIDataScienceModel) – The data science model instance.
artifact_path (str) – The model artifact location.
- PROGRESS_STEPS_COUNT = 3¶
- class ads.model.artifact_uploader.LargeArtifactUploader(dsc_model: OCIDataScienceModel, artifact_path: str, bucket_uri: str | None = None, auth: Dict | None = None, region: str | None = None, overwrite_existing_artifact: bool | None = True, remove_existing_artifact: bool | None = True, parallel_process_count: int = 9)[source]¶
Bases:
ArtifactUploader
The class helper to upload large model artifacts.
- artifact_path¶
- The model artifact location. Possible values are:
object storage path to zip archive. Example: oci://<bucket_name>@<namespace>/prefix/mymodel.zip.
local path to zip archive. Example: ./mymodel.zip.
local path to folder with artifacts. Example: ./mymodel.
- Type:
- auth¶
The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.
- Type:
- bucket_uri¶
The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for uploading large artifacts whose size is greater than 2GB. Example: oci://<bucket_name>@<namespace>/prefix/.
Added in version 2.8.10: If artifact_path is object storage path to a zip archive, bucket_uri will be ignored.
- Type:
- dsc_model¶
The data science model instance.
- Type:
- progress¶
An instance of the TqdmProgressBar.
- Type:
- region¶
The destination Object Storage bucket region. By default the value will be extracted from the OCI_REGION_METADATA environment variable.
- Type:
- remove_existing_artifact¶
Whether artifacts uploaded to object storage bucket need to be removed or not.
- Type:
- upload_manager¶
The UploadManager simplifies interaction with the Object Storage service.
- Type:
UploadManager
Initializes LargeArtifactUploader instance.
- Parameters:
dsc_model (OCIDataScienceModel) – The data science model instance.
artifact_path (str) –
- The model artifact location. Possible values are:
object storage path to zip archive. Example: oci://<bucket_name>@<namespace>/prefix/mymodel.zip.
local path to zip archive. Example: ./mymodel.zip.
local path to folder with artifacts. Example: ./mymodel.
bucket_uri ((str, optional). Defaults to None.) –
The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for uploading large artifacts from local storage whose size is greater than 2GB. Example: oci://<bucket_name>@<namespace>/prefix/.
Added in version 2.8.10: If artifact_path is object storage path to a zip archive, bucket_uri will be ignored.
auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.
region ((str, optional). Defaults to None.) – The destination Object Storage bucket region. By default the value will be extracted from the OCI_REGION_METADATA environment variable.
overwrite_existing_artifact ((bool, optional). Defaults to True.) – Overwrite target bucket artifact if exists.
remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to object storage bucket need to be removed or not.
parallel_process_count ((int, optional).) – The number of worker processes to use in parallel for uploading individual parts of a multipart upload.
- PROGRESS_STEPS_COUNT = 4¶
- class ads.model.artifact_uploader.SmallArtifactUploader(dsc_model: OCIDataScienceModel, artifact_path: str)[source]¶
Bases:
ArtifactUploader
The class helper to upload small model artifacts.
Initializes ArtifactUploader instance.
- Parameters:
dsc_model (OCIDataScienceModel) – The data science model instance.
artifact_path (str) – The model artifact location.
- PROGRESS_STEPS_COUNT = 1¶
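As a usage sketch of the two uploader classes above, a caller might pick between SmallArtifactUploader and LargeArtifactUploader based on the 2GB threshold mentioned in the bucket_uri description. The ads imports are deferred so the sketch stays importable; the helper name and the threshold constant are illustrative, not part of the ads API.

```python
import os

# Illustrative threshold from the docs above: artifacts over 2GB need
# LargeArtifactUploader with a bucket_uri staging location.
LARGE_ARTIFACT_THRESHOLD_BYTES = 2 * 1024**3

def pick_uploader(dsc_model, artifact_path: str, bucket_uri: str = None):
    """Return the uploader appropriate for the artifact size (hypothetical helper)."""
    # Deferred import so the sketch does not require ads at module load time.
    from ads.model.artifact_uploader import (
        LargeArtifactUploader,
        SmallArtifactUploader,
    )

    if os.path.getsize(artifact_path) > LARGE_ARTIFACT_THRESHOLD_BYTES:
        return LargeArtifactUploader(
            dsc_model=dsc_model,
            artifact_path=artifact_path,
            bucket_uri=bucket_uri,  # e.g. "oci://<bucket_name>@<namespace>/prefix/"
        )
    return SmallArtifactUploader(dsc_model=dsc_model, artifact_path=artifact_path)
```

Either uploader then performs the upload with the progress-step count advertised by its PROGRESS_STEPS_COUNT attribute.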
ads.model.base_properties module¶
- class ads.model.base_properties.BaseProperties[source]¶
Bases:
Serializable
Represents base properties class.
- with_prop(name: str, value: Any) BaseProperties [source]¶
Sets property value.
- with_dict(obj_dict: Dict) BaseProperties [source]¶
Populates properties values from dict.
- with_env() BaseProperties [source]¶
Populates properties values from environment variables.
- with_config(config: ads.config.ConfigSection) BaseProperties [source]¶
Sets properties values from the config profile.
- from_dict(obj_dict: Dict[str, Any]) BaseProperties [source]¶
Creates an instance of the properties class from a dictionary.
- from_config(uri: str, profile: str, auth: Dict | None = None) BaseProperties [source]¶
Loads properties from the config file.
- to_config(uri: str, profile: str, force_overwrite: bool | None = False, auth: Dict | None = None) None [source]¶
Saves properties to the config file.
- classmethod from_config(uri: str, profile: str, auth: Dict | None = None) BaseProperties [source]¶
Loads properties from the config file.
- Parameters:
uri (str) – The URI of the config file. Can be local path or OCI object storage URI.
profile (str) – The config profile name.
auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.
- Returns:
Instance of the BaseProperties.
- Return type:
- classmethod from_dict(obj_dict: Dict[str, Any]) BaseProperties [source]¶
Creates an instance of the properties class from a dictionary.
- Parameters:
obj_dict (Dict[str, Any]) – List of properties and values in dictionary format.
- Returns:
Instance of the BaseProperties.
- Return type:
- to_config(uri: str, profile: str, force_overwrite: bool | None = False, auth: Dict | None = None) None [source]¶
Saves properties to the config file.
- Parameters:
uri (str) – The URI of the config file. Can be local path or OCI object storage URI.
profile (str) – The config profile name.
force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files or not.
auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.
- Returns:
Nothing
- Return type:
None
- to_dict(**kwargs)[source]¶
Serializes instance of class into a dictionary.
- Returns:
A dictionary.
- Return type:
Dict
- with_config(config: ConfigSection) BaseProperties [source]¶
Sets properties values from the config profile.
- Returns:
Instance of the BaseProperties.
- Return type:
- with_dict(obj_dict: Dict[str, Any]) BaseProperties [source]¶
Sets properties from a dict.
- with_env() BaseProperties [source]¶
Sets properties values from environment variables.
- Returns:
Instance of the BaseProperties.
- Return type:
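The chainable with_* builders above can be illustrated with a small stdlib-only stand-in. The Props class below is a hypothetical mimic of the BaseProperties pattern (populate from a dict, then from environment variables, each call returning self), not the real ads implementation:

```python
import os

class Props:
    """Minimal stand-in for the BaseProperties builder pattern (illustrative)."""

    _fields = ("compartment_id", "project_id")

    def with_dict(self, obj_dict):
        # Populate known fields from a dict; chainable like BaseProperties.
        for name in self._fields:
            if name in obj_dict:
                setattr(self, name, obj_dict[name])
        return self

    def with_env(self):
        # Populate fields from upper-cased environment variables, if present.
        for name in self._fields:
            value = os.environ.get(name.upper())
            if value is not None:
                setattr(self, name, value)
        return self

os.environ["PROJECT_ID"] = "ocid1.datascienceproject.oc1..xxx"
p = Props().with_dict({"compartment_id": "ocid1.compartment.oc1..yyy"}).with_env()
```

The real class additionally round-trips through config files (from_config/to_config) and serializes via to_dict.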
- exception ads.model.generic_model.ArtifactsNotAvailableError(msg='Model artifacts are either not generated or not available locally.')[source]¶
Bases:
Exception
- class ads.model.generic_model.DataScienceModelType[source]¶
Bases:
str
- MODEL = 'datasciencemodel'¶
- MODEL_DEPLOYMENT = 'datasciencemodeldeployment'¶
- class ads.model.generic_model.FrameworkSpecificModel(estimator: Callable | None = None, artifact_dir: str | None = None, properties: ModelProperties | None = None, auth: Dict | None = None, serialize: bool = True, model_save_serializer: SERDE | None = None, model_input_serializer: SERDE | None = None, **kwargs: dict)[source]¶
Bases:
GenericModel
GenericModel Constructor.
- Parameters:
estimator ((Callable).) – Trained model.
artifact_dir ((str, optional). Defaults to None.) – Artifact directory to store the files needed for deployment.
properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.
auth ((Dict, optional). Defaults to None.) – The default authetication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.
serialize ((bool, optional). Defaults to True.) – Whether to serialize the model to pkl file by default. If False, you need to serialize the model manually, save it under artifact_dir and update the score.py manually.
model_save_serializer ((SERDE or str, optional). Defaults to None.) – Instance of ads.model.SERDE. Used for serialize/deserialize model.
model_input_serializer ((SERDE or str, optional). Defaults to None.) – Instance of ads.model.SERDE. Used for serialize/deserialize model input.
- predict(data: Any | None = None, auto_serialize_data: bool = True, **kwargs) Dict[str, Any] [source]¶
Returns prediction of input data run against the model deployment endpoint.
Examples
>>> uri = "https://github.com/pytorch/hub/raw/master/images/dog.jpg"
>>> prediction = model.predict(image=uri)['prediction']
>>> # examples on storage options
>>> prediction = model.predict(
...     image="oci://<bucket>@<tenancy>/myimage.png",
...     storage_options=ads.auth.default_signer()
... )['prediction']
- Parameters:
data (Any) – Data for the prediction. For ONNX models and the local serialization method, data can be any of the data types that each framework supports.
auto_serialize_data (bool.) – Whether to auto serialize input data. Defaults to False for GenericModel, and True for other frameworks. data is required to be JSON serializable if auto_serialize_data=False. If auto_serialize_data is set to True, data will be serialized before sending to the model deployment endpoint.
kwargs –
- content_type: str
Used to indicate the media type of the resource.
- image: PIL.Image object or uri for the image.
A valid string path for an image file can be a local path, http(s), oci, s3, or gs.
- storage_options: dict
Passed to fsspec.open for a particular storage connection. Please see fsspec (https://filesystem-spec.readthedocs.io/en/latest/api.html#fsspec.open) for more details.
- Returns:
Dictionary with the predicted values.
- Return type:
Dict[str, Any]
- Raises:
NotActiveDeploymentError – If model deployment process was not started or not finished yet.
ValueError – If data is empty or not JSON serializable.
- verify(data: Any | None = None, reload_artifacts: bool = True, auto_serialize_data: bool = True, **kwargs) Dict[str, Any] [source]¶
Test if deployment works in local environment.
Examples
>>> uri = "https://github.com/pytorch/hub/raw/master/images/dog.jpg"
>>> prediction = model.verify(image=uri)['prediction']
>>> # examples on storage options
>>> prediction = model.verify(
...     image="oci://<bucket>@<tenancy>/myimage.png",
...     storage_options=ads.auth.default_signer()
... )['prediction']
- Parameters:
data (Any) – Data used to test if deployment works in local environment.
reload_artifacts (bool. Defaults to True.) – Whether to reload artifacts or not.
auto_serialize_data (bool.) – Whether to auto serialize input data. Defaults to False for GenericModel, and True for other frameworks. data is required to be JSON serializable if auto_serialize_data=False. If auto_serialize_data is set to True, data will be serialized before sending to the model deployment endpoint.
kwargs –
- content_type: str
Used to indicate the media type of the resource.
- image: PIL.Image object or uri for the image.
A valid string path for an image file can be a local path, http(s), oci, s3, or gs.
- storage_options: dict
Passed to fsspec.open for a particular storage connection. Please see fsspec (https://filesystem-spec.readthedocs.io/en/latest/api.html#fsspec.open) for more details.
- Returns:
A dictionary which contains prediction results.
- Return type:
Dict
- class ads.model.generic_model.GenericModel(estimator: Callable | None = None, artifact_dir: str | None = None, properties: ModelProperties | None = None, auth: Dict | None = None, serialize: bool = True, model_save_serializer: SERDE | None = None, model_input_serializer: SERDE | None = None, **kwargs: dict)[source]¶
Bases:
MetadataMixin
,Introspectable
,EvaluatorMixin
Generic Model class which is the base class for all the frameworks including the unsupported frameworks.
- auth¶
Default authentication is set using the ads.set_auth API. To override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create an authentication signer to instantiate an IdentityClient object.
- Type:
Dict
- estimator¶
Any model object generated by sklearn framework
- Type:
Callable
- metadata_custom¶
The model custom metadata.
- Type:
- metadata_provenance¶
The model provenance metadata.
- Type:
- metadata_taxonomy¶
The model taxonomy metadata.
- Type:
- model_artifact¶
This is built by calling prepare.
- Type:
- model_deployment¶
A ModelDeployment instance.
- Type:
- model_input_serializer¶
Instance of ads.model.SERDE. Used for serialize/deserialize data.
- Type:
- properties¶
ModelProperties object required to save and deploy model.
- Type:
- runtime_info¶
A RuntimeInfo instance.
- Type:
- serialize¶
Whether to serialize the model to pkl file by default. If False, you need to serialize the model manually, save it under artifact_dir and update the score.py manually.
- Type:
- from_model_artifact(uri, ..., \*\*kwargs)[source]¶
Loads model from the specified folder, or zip/tar archive.
- from_model_deployment(model_deployment_id, ..., \*\*kwargs)[source]¶
Loads model from model deployment.
- predict(data, ...)[source]¶
Returns prediction of input data run against the model deployment endpoint.
- prepare(..., \*\*kwargs)[source]¶
Prepare and save the score.py, serialized model and runtime.yaml file.
- set_model_input_serializer(serde)[source]¶
Registers serializer used for serializing data passed in verify/predict.
Examples
>>> import tempfile
>>> from ads.model.generic_model import GenericModel
>>> class Toy:
...     def predict(self, x):
...         return x ** 2
>>> estimator = Toy()
>>> model = GenericModel(estimator=estimator, artifact_dir=tempfile.mkdtemp())
>>> model.summary_status()
>>> model.prepare(
...     inference_conda_env="dbexp_p38_cpu_v1",
...     inference_python_version="3.8",
...     model_file_name="toy_model.pkl",
...     training_id=None,
...     force_overwrite=True
... )
>>> model.verify(2)
>>> model.save()
>>> model.deploy()
>>> # Update access log id, freeform tags and description for the model deployment
>>> model.update_deployment(
...     access_log={"log_id": "<log_ocid>"},
...     description="Description for Custom Model",
...     freeform_tags={"key": "value"},
... )
>>> model.predict(2)
>>> # Uncomment the line below to delete the model and the associated model deployment
>>> # model.delete(delete_associated_model_deployment = True)
GenericModel Constructor.
- Parameters:
estimator ((Callable).) – Trained model.
artifact_dir ((str, optional). Defaults to None.) – Artifact directory to store the files needed for deployment.
properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.
auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.
serialize ((bool, optional). Defaults to True.) – Whether to serialize the model to pkl file by default. If False, you need to serialize the model manually, save it under artifact_dir and update the score.py manually.
model_save_serializer ((SERDE or str, optional). Defaults to None.) – Instance of ads.model.SERDE. Used for serialize/deserialize model.
model_input_serializer ((SERDE or str, optional). Defaults to None.) – Instance of ads.model.SERDE. Used for serialize/deserialize model input.
- classmethod delete(model_id: str | None = None, delete_associated_model_deployment: bool | None = False, delete_model_artifact: bool | None = False, artifact_dir: str | None = None, **kwargs: Dict) None [source]¶
Deletes a model from Model Catalog.
- Parameters:
model_id ((str, optional). Defaults to None.) – The model OCID to be deleted. If the method called on instance level, then self.model_id will be used.
delete_associated_model_deployment ((bool, optional). Defaults to False.) – Whether associated model deployments need to be deleted or not.
delete_model_artifact ((bool, optional). Defaults to False.) – Whether associated model artifacts need to be deleted or not.
artifact_dir ((str, optional). Defaults to None) – The local path to the model artifacts folder. If the method called on instance level, the self.artifact_dir will be used by default.
- Return type:
None
- Raises:
ValueError – If model_id not provided.
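A class-level call to delete() can be sketched as below; the wrapper name is hypothetical, and the ads import is deferred so the sketch stays importable without the library installed.

```python
def delete_model(model_id: str, artifact_dir: str = None) -> None:
    """Sketch: delete a catalog model, its deployments, and optionally local artifacts."""
    # Deferred import; calling this requires ads installed and OCI credentials.
    from ads.model.generic_model import GenericModel

    GenericModel.delete(
        model_id=model_id,
        delete_associated_model_deployment=True,
        # Only attempt artifact cleanup when a local artifact folder is known.
        delete_model_artifact=artifact_dir is not None,
        artifact_dir=artifact_dir,
    )
```

On an instance, the same method can be called without model_id, in which case self.model_id is used.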
- delete_deployment(wait_for_completion: bool = True) None [source]¶
Deletes the current deployment.
- Parameters:
wait_for_completion ((bool, optional). Defaults to True.) – Whether to wait till completion.
- Return type:
None
- Raises:
ValueError – If there is no deployment attached yet.
- deploy(wait_for_completion: bool | None = True, display_name: str | None = None, description: str | None = None, deployment_instance_shape: str | None = None, deployment_instance_subnet_id: str | None = None, deployment_instance_count: int | None = None, deployment_bandwidth_mbps: int | None = None, deployment_log_group_id: str | None = None, deployment_access_log_id: str | None = None, deployment_predict_log_id: str | None = None, deployment_memory_in_gbs: float | None = None, deployment_ocpus: float | None = None, deployment_image: str | None = None, **kwargs: Dict) ModelDeployment [source]¶
Deploys a model. The model needs to be saved to the model catalog at first. You can deploy the model on either conda or container runtime. The customized runtime allows you to bring your own service container. To deploy model on container runtime, make sure to build the container and push it to OCIR. For more information, see https://docs.oracle.com/en-us/iaas/data-science/using/mod-dep-byoc.htm.
Example
>>> # This is an example to deploy model on container runtime
>>> model = GenericModel(estimator=estimator, artifact_dir=tempfile.mkdtemp())
>>> model.summary_status()
>>> model.prepare(
...     model_file_name="toy_model.pkl",
...     ignore_conda_error=True,  # set ignore_conda_error=True for container runtime
...     force_overwrite=True
... )
>>> model.verify()
>>> model.save()
>>> model.deploy(
...     deployment_image="iad.ocir.io/<namespace>/<image>:<tag>",
...     entrypoint=["python", "/opt/ds/model/deployed_model/api.py"],
...     server_port=5000,
...     health_check_port=5000,
...     environment_variables={"key": "value"}
... )
- Parameters:
wait_for_completion ((bool, optional). Defaults to True.) – Flag set for whether to wait for deployment to complete before proceeding.
display_name ((str, optional). Defaults to None.) – The name of the model deployment. If a display_name is not provided in kwargs, a randomly generated, easy-to-remember name with a timestamp will be used, like ‘strange-spider-2022-08-17-23:55.02’.
description ((str, optional). Defaults to None.) – The description of the model.
deployment_instance_shape ((str, optional). Default to VM.Standard2.1.) – The shape of the instance used for deployment.
deployment_instance_subnet_id ((str, optional). Default to None.) – The subnet id of the instance used for deployment.
deployment_instance_count ((int, optional). Defaults to 1.) – The number of instances used for deployment.
deployment_bandwidth_mbps ((int, optional). Defaults to 10.) – The bandwidth limit on the load balancer in Mbps.
deployment_memory_in_gbs ((float, optional). Defaults to None.) – Specifies the size of the memory of the model deployment instance in GBs.
deployment_ocpus ((float, optional). Defaults to None.) – Specifies the ocpus count of the model deployment instance.
deployment_log_group_id ((str, optional). Defaults to None.) – The oci logging group id. The access log and predict log share the same log group.
deployment_access_log_id ((str, optional). Defaults to None.) – The access log OCID for the access logs. https://docs.oracle.com/en-us/iaas/data-science/using/model_dep_using_logging.htm
deployment_predict_log_id ((str, optional). Defaults to None.) – The predict log OCID for the predict logs. https://docs.oracle.com/en-us/iaas/data-science/using/model_dep_using_logging.htm
deployment_image ((str, optional). Defaults to None.) – The OCIR path of docker container image. Required for deploying model on container runtime.
kwargs –
- project_id: (str, optional).
Project OCID. If not specified, the value will be taken from the environment variables.
- compartment_id: (str, optional).
Compartment OCID. If not specified, the value will be taken from the environment variables.
- max_wait_time: (int, optional). Defaults to 1200 seconds.
Maximum amount of time to wait in seconds. Negative implies infinite wait time.
- poll_interval: (int, optional). Defaults to 10 seconds.
Poll interval in seconds.
- freeform_tags: (Dict[str, str], optional). Defaults to None.
Freeform tags of the model deployment.
- defined_tags: (Dict[str, dict[str, object]], optional). Defaults to None.
Defined tags of the model deployment.
- image_digest: (str, optional). Defaults to None.
The digest of docker container image.
- cmd: (List, optional). Defaults to empty.
The command line arguments for running docker container image.
- entrypoint: (List, optional). Defaults to empty.
The entrypoint for running docker container image.
- server_port: (int, optional). Defaults to 8080.
The server port for docker container image.
- health_check_port: (int, optional). Defaults to 8080.
The health check port for docker container image.
- deployment_mode: (str, optional). Defaults to HTTPS_ONLY.
The deployment mode. Allowed values are: HTTPS_ONLY and STREAM_ONLY.
- input_stream_ids: (List, optional). Defaults to empty.
The input stream ids. Required for STREAM_ONLY mode.
- output_stream_ids: (List, optional). Defaults to empty.
The output stream ids. Required for STREAM_ONLY mode.
- environment_variables: (Dict, optional). Defaults to empty.
The environment variables for model deployment.
Also can be any keyword argument for initializing the ads.model.deployment.ModelDeploymentProperties. See ads.model.deployment.ModelDeploymentProperties() for details.
- Returns:
The ModelDeployment instance.
- Return type:
- Raises:
ValueError – If model_id is not specified.
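The container-runtime keyword arguments listed above can be assembled ahead of the deploy() call; the helper below is a hypothetical sketch that mirrors the docstring's example values (image path, entrypoint, and port are placeholders):

```python
def container_deploy_args(image: str, port: int = 5000, env: dict = None) -> dict:
    """Assemble keyword arguments for deploy() on container runtime (sketch)."""
    return {
        "deployment_image": image,  # OCIR path, required for container runtime
        "entrypoint": ["python", "/opt/ds/model/deployed_model/api.py"],
        # Server and health-check ports typically match the serving process.
        "server_port": port,
        "health_check_port": port,
        "environment_variables": env or {},
    }

args = container_deploy_args("iad.ocir.io/<namespace>/<image>:<tag>")
# model.deploy(**args)  # requires a saved model and OCI credentials
```

Remember that the container must already be built and pushed to OCIR, and the model prepared with ignore_conda_error=True, before deploy() is invoked.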
- download_artifact(artifact_dir: str | None = None, auth: Dict | None = None, force_overwrite: bool | None = False, bucket_uri: str | None = None, remove_existing_artifact: bool | None = True, **kwargs) GenericModel [source]¶
Downloads model artifacts from the model catalog.
- Parameters:
artifact_dir ((str, optional). Defaults to None.) – The artifact directory to store the files needed for deployment. Will be created if not exists.
auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.
force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files or not.
bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for downloading large artifacts whose size is greater than 2GB. Example: oci://<bucket_name>@<namespace>/prefix/.
remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to object storage bucket need to be removed or not.
- Returns:
An instance of GenericModel class.
- Return type:
- Raises:
ValueError – If model_id is not available in the GenericModel object.
- classmethod from_id(ocid: str, model_file_name: str | None = None, artifact_dir: str | None = None, auth: Dict | None = None, force_overwrite: bool | None = False, properties: ModelProperties | Dict | None = None, bucket_uri: str | None = None, remove_existing_artifact: bool | None = True, ignore_conda_error: bool | None = False, download_artifact: bool | None = True, **kwargs) Self [source]¶
Loads model from model OCID or model deployment OCID.
- Parameters:
ocid (str) – The model OCID or model deployment OCID.
model_file_name ((str, optional). Defaults to None.) – The name of the serialized model.
artifact_dir ((str, optional). Defaults to None.) – The artifact directory to store the files needed for deployment. Will be created if not exists.
auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.
force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files or not.
properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.
bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for downloading large artifacts whose size is greater than 2GB. Example: oci://<bucket_name>@<namespace>/prefix/.
remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to object storage bucket need to be removed or not.
ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore error when collecting conda information.
download_artifact ((bool, optional). Defaults to True.) – Whether to download the model pickle or checkpoints.
kwargs –
- compartment_id: (str, optional)
Compartment OCID. If not specified, the value will be taken from the environment variables.
- timeout: (int, optional). Defaults to 10 seconds.
The connection timeout in seconds for the client.
- Returns:
An instance of GenericModel class.
- Return type:
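Since from_id() accepts either a model OCID or a model deployment OCID, a single hedged wrapper covers both restore paths; the helper name is hypothetical, and the ads import is deferred so the sketch stays importable:

```python
import tempfile

def load_from_ocid(ocid: str, bucket_uri: str = None):
    """Sketch: restore a model from a model OCID or a model deployment OCID."""
    # Deferred import; calling this requires ads installed and OCI credentials.
    from ads.model.generic_model import GenericModel

    return GenericModel.from_id(
        ocid=ocid,
        artifact_dir=tempfile.mkdtemp(),
        force_overwrite=True,
        bucket_uri=bucket_uri,  # only needed for artifacts larger than 2GB
    )
```

from_model_catalog() and from_model_deployment() below take the same shape but require the specific OCID kind.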
- classmethod from_model_artifact(uri: str, model_file_name: str | None = None, artifact_dir: str | None = None, auth: Dict | None = None, force_overwrite: bool | None = False, properties: ModelProperties | None = None, ignore_conda_error: bool | None = False, **kwargs: dict) Self [source]¶
Loads model from a folder, or zip/tar archive.
- Parameters:
uri (str) – The folder path, ZIP file path, or TAR file path. It could contain a serialized model (required) as well as any files needed for deployment, including the serialized model, runtime.yaml, score.py, etc. The content of the folder will be copied to the artifact_dir folder.
model_file_name ((str, optional). Defaults to None.) – The serialized model file name. Will be extracted from artifacts if not provided.
artifact_dir ((str, optional). Defaults to None.) – The artifact directory to store the files needed for deployment. Will be created if not exists.
auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.
force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files or not.
properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.
ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore error when collecting conda information.
- Returns:
An instance of GenericModel class.
- Return type:
- Raises:
ValueError – If model_file_name not provided.
- classmethod from_model_catalog(model_id: str, model_file_name: str | None = None, artifact_dir: str | None = None, auth: Dict | None = None, force_overwrite: bool | None = False, properties: ModelProperties | Dict | None = None, bucket_uri: str | None = None, remove_existing_artifact: bool | None = True, ignore_conda_error: bool | None = False, download_artifact: bool | None = True, **kwargs) Self [source]¶
Loads model from model catalog.
- Parameters:
model_id (str) – The model OCID.
model_file_name ((str, optional). Defaults to None.) – The name of the serialized model.
artifact_dir ((str, optional). Defaults to None.) – The artifact directory to store the files needed for deployment. Will be created if not exists.
auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.
force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files or not.
properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.
bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for downloading large artifacts whose size is greater than 2GB. Example: oci://<bucket_name>@<namespace>/prefix/.
remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to object storage bucket need to be removed or not.
ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore error when collecting conda information.
download_artifact ((bool, optional). Defaults to True.) – Whether to download the model pickle or checkpoints.
kwargs –
- compartment_id: (str, optional)
Compartment OCID. If not specified, the value will be taken from the environment variables.
- timeout: (int, optional). Defaults to 10 seconds.
The connection timeout in seconds for the client.
- region: (str, optional). Defaults to None.
The destination Object Storage bucket region. By default the value will be extracted from the OCI_REGION_METADATA environment variable.
- Returns:
An instance of GenericModel class.
- Return type:
- classmethod from_model_deployment(model_deployment_id: str, model_file_name: str | None = None, artifact_dir: str | None = None, auth: Dict | None = None, force_overwrite: bool | None = False, properties: ModelProperties | Dict | None = None, bucket_uri: str | None = None, remove_existing_artifact: bool | None = True, ignore_conda_error: bool | None = False, download_artifact: bool | None = True, **kwargs) Self [source]¶
Loads model from model deployment.
- Parameters:
model_deployment_id (str) – The model deployment OCID.
model_file_name ((str, optional). Defaults to None.) – The name of the serialized model.
artifact_dir ((str, optional). Defaults to None.) – The artifact directory to store the files needed for deployment. Will be created if not exists.
auth ((Dict, optional). Defaults to None.) – The default authentication is set using ads.set_auth API. If you need to override the default, use the ads.common.auth.api_keys or ads.common.auth.resource_principal to create appropriate authentication signer and kwargs required to instantiate IdentityClient object.
force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files or not.
properties ((ModelProperties, optional). Defaults to None.) – ModelProperties object required to save and deploy model.
bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for downloading large artifacts whose size is greater than 2GB. Example: oci://<bucket_name>@<namespace>/prefix/.
remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to object storage bucket need to be removed or not.
ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore error when collecting conda information.
download_artifact ((bool, optional). Defaults to True.) – Whether to download the model pickle or checkpoints.
kwargs –
- compartment_id(str, optional)
Compartment OCID. If not specified, the value will be taken from the environment variables.
- timeout(int, optional). Defaults to 10 seconds.
The connection timeout in seconds for the client.
- region: (str, optional). Defaults to None.
The destination Object Storage bucket region. By default the value will be extracted from the OCI_REGION_METADATA environment variables.
- Returns:
An instance of GenericModel class.
- Return type:
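As a sketch of how `from_model_deployment` might be used, the following hedged example restores a model from an existing deployment. It requires the oracle-ads package, valid OCI credentials, and a live deployment; every OCID, path, and bucket value below is a placeholder, not a value from this document:

```python
# Sketch only: requires oracle-ads, OCI credentials, and an existing
# model deployment. All OCIDs, paths, and bucket names are placeholders.
import ads
from ads.model.generic_model import GenericModel

ads.set_auth("resource_principal")  # or "api_key", depending on your setup

model = GenericModel.from_model_deployment(
    model_deployment_id="ocid1.datasciencemodeldeployment.oc1..<unique_id>",
    artifact_dir="./downloaded_artifacts",   # created if it does not exist
    force_overwrite=True,
    # bucket_uri is only needed when the artifact is larger than 2GB:
    bucket_uri="oci://<bucket_name>@<namespace>/prefix/",
)
```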
- get_data_serializer()[source]¶
Gets data serializer.
- Returns:
object
- Return type:
ads.model.Serializer object.
- introspect() DataFrame [source]¶
Conducts introspection.
- Returns:
A pandas DataFrame which contains the introspection results.
- Return type:
pandas.DataFrame
- property metadata_custom¶
- property metadata_provenance¶
- property metadata_taxonomy¶
- property model_deployment_id¶
- property model_id¶
- model_input_serializer_type¶
alias of
ModelInputSerializerType
- model_save_serializer_type¶
alias of
ModelSerializerType
- predict(data: Any | None = None, auto_serialize_data: bool = False, local: bool = False, **kwargs) Dict[str, Any] [source]¶
Returns prediction of input data run against the model deployment endpoint.
Examples
>>> uri = "https://github.com/pytorch/hub/raw/master/images/dog.jpg"
>>> prediction = model.predict(image=uri)['prediction']
>>> # examples on storage options
>>> prediction = model.predict(
...     image="oci://<bucket>@<tenancy>/myimage.png",
...     storage_options=ads.auth.default_signer()
... )['prediction']
- Parameters:
data (Any) – Data for the prediction. For ONNX models and for the local serialization method, data can be any of the data types that each framework supports.
auto_serialize_data (bool.) – Whether to auto serialize input data. Defaults to False for GenericModel, and True for other frameworks. Data is required to be JSON serializable if auto_serialize_data=False. If auto_serialize_data is set to True, data will be serialized before being sent to the model deployment endpoint.
local (bool.) – Whether to invoke the prediction locally. Defaults to False.
kwargs –
- content_type: str
Used to indicate the media type of the resource.
- image: PIL.Image object or uri for the image.
A valid string path for an image file can be a local path, http(s), oci, s3, or gs.
- storage_options: dict
Passed to fsspec.open for a particular storage connection. Please see fsspec (https://filesystem-spec.readthedocs.io/en/latest/api.html#fsspec.open) for more details.
- Returns:
Dictionary with the predicted values.
- Return type:
Dict[str, Any]
- Raises:
NotActiveDeploymentError – If model deployment process was not started or not finished yet.
ValueError – If model is not deployed yet or the endpoint information is not available.
- prepare(inference_conda_env: str | None = None, inference_python_version: str | None = None, training_conda_env: str | None = None, training_python_version: str | None = None, model_file_name: str | None = None, as_onnx: bool = False, initial_types: List[Tuple] | None = None, force_overwrite: bool = False, namespace: str = 'id19sfcrra6z', use_case_type: str | None = None, X_sample: list | tuple | DataFrame | Series | ndarray | None = None, y_sample: list | tuple | DataFrame | Series | ndarray | None = None, training_script_path: str | None = None, training_id: str | None = None, ignore_pending_changes: bool = True, max_col_num: int = 2000, ignore_conda_error: bool = False, score_py_uri: str | None = None, **kwargs: Dict) GenericModel [source]¶
Prepare and save the score.py, serialized model and runtime.yaml file.
- Parameters:
inference_conda_env ((str, optional). Defaults to None.) – Can be either slug or object storage path of the conda pack. You can only pass in slugs if the conda pack is a service pack.
inference_python_version ((str, optional). Defaults to None.) – Python version which will be used in deployment.
training_conda_env ((str, optional). Defaults to None.) – Can be either slug or object storage path of the conda pack. You can only pass in slugs if the conda pack is a service pack. If training_conda_env is not provided, it will default to the value of inference_conda_env.
training_python_version ((str, optional). Defaults to None.) – Python version used during training.
model_file_name ((str, optional). Defaults to None.) – Name of the serialized model. Will be auto generated if not provided.
as_onnx ((bool, optional). Defaults to False.) – Whether to serialize as onnx model.
initial_types ((list[Tuple], optional).) – Defaults to None. Only used for SklearnModel, LightGBMModel and XGBoostModel. Each element is a tuple of a variable name and a type. Check this link http://onnx.ai/sklearn-onnx/api_summary.html#id2 for more explanation and examples for initial_types.
force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files.
namespace ((str, optional).) – Namespace of region. This is used for identifying which region the service pack is from when you pass a slug to inference_conda_env and training_conda_env.
use_case_type (str) – The use case type of the model. Use it through the UseCaseType class or a string provided in UseCaseType. For example, use_case_type=UseCaseType.BINARY_CLASSIFICATION or use_case_type="binary_classification". Check the UseCaseType class to see all supported types.
X_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of input data that will be used to generate input schema.
y_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of output data that will be used to generate output schema.
training_script_path (str. Defaults to None.) – Training script path.
training_id ((str, optional). Defaults to value from environment variables.) – The training OCID for model. Can be notebook session or job OCID.
ignore_pending_changes (bool. Defaults to True.) – Whether to ignore the pending changes in git.
max_col_num ((int, optional). Defaults to utils.DATA_SCHEMA_MAX_COL_NUM.) – Do not generate the input schema if the input has more than this number of features(columns).
ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore error when collecting conda information.
score_py_uri ((str, optional). Defaults to None.) – The uri of the customized score.py, which can be a local path or an OCI object storage URI. When provided with this attribute, score.py will not be auto-generated, and the provided score.py will be added to artifact_dir.
kwargs –
- impute_values: (dict, optional).
The dictionary where the key is the column index(or names is accepted for pandas dataframe) and the value is the impute value for the corresponding column.
- Raises:
FileExistsError – If files already exist but force_overwrite is False.
ValueError – If inference_python_version is not provided, but also cannot be found through manifest file.
- Returns:
An instance of GenericModel class.
- Return type:
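A minimal sketch of a prepare() call, assuming a trained scikit-learn estimator `clf` and sample data `X_test` (both hypothetical, defined elsewhere in your session); the conda slug is illustrative, not prescriptive:

```python
# Sketch only: `clf` and `X_test` are assumed to exist in your session.
# Requires the oracle-ads package; the conda slug is an example value.
import tempfile
from ads.model.generic_model import GenericModel

model = GenericModel(estimator=clf, artifact_dir=tempfile.mkdtemp())
model.prepare(
    inference_conda_env="generalml_p38_cpu_v1",  # service pack slug (example)
    inference_python_version="3.8",
    model_file_name="model.pkl",
    force_overwrite=True,
    X_sample=X_test[:5],  # used to generate the input schema
)
```

After this call, the artifact directory contains score.py, the serialized model, and runtime.yaml, ready for save() and deploy().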
- prepare_save_deploy(inference_conda_env: str | None = None, inference_python_version: str | None = None, training_conda_env: str | None = None, training_python_version: str | None = None, model_file_name: str | None = None, as_onnx: bool = False, initial_types: List[Tuple] | None = None, force_overwrite: bool = False, namespace: str = 'id19sfcrra6z', use_case_type: str | None = None, X_sample: list | tuple | DataFrame | Series | ndarray | None = None, y_sample: list | tuple | DataFrame | Series | ndarray | None = None, training_script_path: str | None = None, training_id: str | None = None, ignore_pending_changes: bool = True, max_col_num: int = 2000, ignore_conda_error: bool = False, model_display_name: str | None = None, model_description: str | None = None, model_freeform_tags: dict | None = None, model_defined_tags: dict | None = None, ignore_introspection: bool | None = False, wait_for_completion: bool | None = True, deployment_display_name: str | None = None, deployment_description: str | None = None, deployment_instance_shape: str | None = None, deployment_instance_subnet_id: str | None = None, deployment_instance_count: int | None = None, deployment_bandwidth_mbps: int | None = None, deployment_log_group_id: str | None = None, deployment_access_log_id: str | None = None, deployment_predict_log_id: str | None = None, deployment_memory_in_gbs: float | None = None, deployment_ocpus: float | None = None, deployment_image: str | None = None, bucket_uri: str | None = None, overwrite_existing_artifact: bool | None = True, remove_existing_artifact: bool | None = True, model_version_set: str | ModelVersionSet | None = None, version_label: str | None = None, model_by_reference: bool | None = False, **kwargs: Dict) ModelDeployment [source]¶
Shortcut for prepare, save and deploy steps.
- Parameters:
inference_conda_env ((str, optional). Defaults to None.) – Can be either slug or object storage path of the conda pack. You can only pass in slugs if the conda pack is a service pack.
inference_python_version ((str, optional). Defaults to None.) – Python version which will be used in deployment.
training_conda_env ((str, optional). Defaults to None.) – Can be either slug or object storage path of the conda pack. You can only pass in slugs if the conda pack is a service pack. If training_conda_env is not provided, it will default to the value of inference_conda_env.
training_python_version ((str, optional). Defaults to None.) – Python version used during training.
model_file_name ((str, optional). Defaults to None.) – Name of the serialized model.
as_onnx ((bool, optional). Defaults to False.) – Whether to serialize as onnx model.
initial_types ((list[Tuple], optional).) – Defaults to None. Only used for SklearnModel, LightGBMModel and XGBoostModel. Each element is a tuple of a variable name and a type. Check this link http://onnx.ai/sklearn-onnx/api_summary.html#id2 for more explanation and examples for initial_types.
force_overwrite ((bool, optional). Defaults to False.) – Whether to overwrite existing files.
namespace ((str, optional).) – Namespace of region. This is used for identifying which region the service pack is from when you pass a slug to inference_conda_env and training_conda_env.
use_case_type (str) – The use case type of the model. Use it through the UseCaseType class or a string provided in UseCaseType. For example, use_case_type=UseCaseType.BINARY_CLASSIFICATION or use_case_type="binary_classification". Check the UseCaseType class to see all supported types.
X_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of input data that will be used to generate input schema.
y_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of output data that will be used to generate output schema.
training_script_path (str. Defaults to None.) – Training script path.
training_id ((str, optional). Defaults to value from environment variables.) – The training OCID for model. Can be notebook session or job OCID.
ignore_pending_changes (bool. Defaults to True.) – Whether to ignore the pending changes in git.
max_col_num ((int, optional). Defaults to utils.DATA_SCHEMA_MAX_COL_NUM.) – Do not generate the input schema if the input has more than this number of features(columns).
ignore_conda_error ((bool, optional). Defaults to False.) – Parameter to ignore error when collecting conda information.
model_display_name ((str, optional). Defaults to None.) – The name of the model. If a model_display_name is not provided in kwargs, a randomly generated easy to remember name with timestamp will be generated, like ‘strange-spider-2022-08-17-23:55.02’.
model_description ((str, optional). Defaults to None.) – The description of the model.
model_freeform_tags (Dict(str, str), Defaults to None.) – Freeform tags for the model.
model_defined_tags ((Dict(str, dict(str, object)), optional). Defaults to None.) – Defined tags for the model.
ignore_introspection ((bool, optional). Defaults to False.) – Determines whether to ignore the result of model introspection or not. If set to True, the save will ignore all model introspection errors.
wait_for_completion ((bool, optional). Defaults to True.) – Flag set for whether to wait for deployment to complete before proceeding.
deployment_display_name ((str, optional). Defaults to None.) – The name of the model deployment. If a deployment_display_name is not provided in kwargs, a randomly generated easy to remember name with timestamp will be generated, like ‘strange-spider-2022-08-17-23:55.02’.
deployment_description ((str, optional). Defaults to None.) – The description of the model deployment.
deployment_instance_shape ((str, optional). Default to VM.Standard2.1.) – The shape of the instance used for deployment.
deployment_instance_subnet_id ((str, optional). Default to None.) – The subnet id of the instance used for deployment.
deployment_instance_count ((int, optional). Defaults to 1.) – The number of instance used for deployment.
deployment_bandwidth_mbps ((int, optional). Defaults to 10.) – The bandwidth limit on the load balancer in Mbps.
deployment_log_group_id ((str, optional). Defaults to None.) – The oci logging group id. The access log and predict log share the same log group.
deployment_access_log_id ((str, optional). Defaults to None.) – The access log OCID for the access logs. https://docs.oracle.com/en-us/iaas/data-science/using/model_dep_using_logging.htm
deployment_predict_log_id ((str, optional). Defaults to None.) – The predict log OCID for the predict logs. https://docs.oracle.com/en-us/iaas/data-science/using/model_dep_using_logging.htm
deployment_memory_in_gbs ((float, optional). Defaults to None.) – Specifies the size of the memory of the model deployment instance in GBs.
deployment_ocpus ((float, optional). Defaults to None.) – Specifies the ocpus count of the model deployment instance.
deployment_image ((str, optional). Defaults to None.) – The OCIR path of docker container image. Required for deploying model on container runtime.
bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for downloading large artifacts whose size is greater than 2GB. Example: oci://<bucket_name>@<namespace>/prefix/.
overwrite_existing_artifact ((bool, optional). Defaults to True.) – Overwrite target bucket artifact if exists.
remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to the object storage bucket need to be removed or not.
model_version_set ((Union[str, ModelVersionSet], optional). Defaults to None.) – The Model version set OCID, or name, or ModelVersionSet instance.
version_label ((str, optional). Defaults to None.) – The model version label.
model_by_reference ((bool, optional)) – Whether model artifact is made available to Model Store by reference.
kwargs –
- impute_values: (dict, optional).
The dictionary where the key is the column index(or names is accepted for pandas dataframe) and the value is the impute value for the corresponding column.
- project_id: (str, optional).
Project OCID. If not specified, the value will be taken either from the environment variables or model properties.
- compartment_id(str, optional).
Compartment OCID. If not specified, the value will be taken either from the environment variables or model properties.
- image_digest: (str, optional). Defaults to None.
The digest of docker container image.
- cmd: (List, optional). Defaults to empty.
The command line arguments for running docker container image.
- entrypoint: (List, optional). Defaults to empty.
The entrypoint for running docker container image.
- server_port: (int, optional). Defaults to 8080.
The server port for docker container image.
- health_check_port: (int, optional). Defaults to 8080.
The health check port for docker container image.
- deployment_mode: (str, optional). Defaults to HTTPS_ONLY.
The deployment mode. Allowed values are: HTTPS_ONLY and STREAM_ONLY.
- input_stream_ids: (List, optional). Defaults to empty.
The input stream ids. Required for STREAM_ONLY mode.
- output_stream_ids: (List, optional). Defaults to empty.
The output stream ids. Required for STREAM_ONLY mode.
- environment_variables: (Dict, optional). Defaults to empty.
The environment variables for model deployment.
- timeout: (int, optional). Defaults to 10 seconds.
The connection timeout in seconds for the client.
- max_wait_time(int, optional). Defaults to 1200 seconds.
Maximum amount of time to wait in seconds. Negative implies infinite wait time.
- poll_interval(int, optional). Defaults to 10 seconds.
Poll interval in seconds.
- freeform_tags: (Dict[str, str], optional). Defaults to None.
Freeform tags of the model deployment.
- defined_tags: (Dict[str, dict[str, object]], optional). Defaults to None.
Defined tags of the model deployment.
- region: (str, optional). Defaults to None.
The destination Object Storage bucket region. By default the value will be extracted from the OCI_REGION_METADATA environment variables.
Also can be any keyword argument for initializing the ads.model.deployment.ModelDeploymentProperties. See ads.model.deployment.ModelDeploymentProperties() for details.
- Returns:
The ModelDeployment instance.
- Return type:
- Raises:
FileExistsError – If files already exist but force_overwrite is False.
ValueError – If inference_python_version is not provided, but also cannot be found through manifest file.
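The shortcut above can be sketched as follows, again assuming a trained estimator `clf` (hypothetical) and oracle-ads with valid OCI credentials; the shape, conda slug, and log-group OCID are placeholders:

```python
# Sketch only: chains prepare, save and deploy in one call. `clf` is an
# assumed, already-trained estimator; shape, slug and OCIDs are placeholders.
import tempfile
from ads.model.generic_model import GenericModel

model = GenericModel(estimator=clf, artifact_dir=tempfile.mkdtemp())
deployment = model.prepare_save_deploy(
    inference_conda_env="generalml_p38_cpu_v1",      # example service slug
    model_display_name="my-model",
    deployment_display_name="my-model-deployment",
    deployment_instance_shape="VM.Standard2.1",
    deployment_instance_count=1,
    deployment_log_group_id="ocid1.loggroup.oc1..<unique_id>",
    wait_for_completion=True,
)
print(model.model_deployment_id)  # OCID of the new deployment
```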
- reload() GenericModel [source]¶
Reloads the model artifact files: score.py and the runtime.yaml.
- Returns:
An instance of GenericModel class.
- Return type:
- reload_runtime_info() None [source]¶
Reloads the model artifact file: runtime.yaml.
- Returns:
Nothing.
- Return type:
None
- restart_deployment(max_wait_time: int = 1200, poll_interval: int = 10) ModelDeployment [source]¶
Restarts the current deployment.
- Parameters:
max_wait_time ((int, optional). Defaults to 1200 seconds.) – Maximum amount of time to wait for activate or deactivate in seconds. The total wait time for restarting the deployment is twice this value. Negative implies infinite wait time.
poll_interval ((int, optional). Defaults to 10 seconds.) – Poll interval in seconds.
- Returns:
The ModelDeployment instance.
- Return type:
- save(bucket_uri: str | None = None, defined_tags: dict | None = None, description: str | None = None, display_name: str | None = None, featurestore_dataset=None, freeform_tags: dict | None = None, ignore_introspection: bool | None = False, model_version_set: str | ModelVersionSet | None = None, overwrite_existing_artifact: bool | None = True, parallel_process_count: int = 9, remove_existing_artifact: bool | None = True, reload: bool | None = True, version_label: str | None = None, model_by_reference: bool | None = False, **kwargs) str [source]¶
Saves model artifacts to the model catalog.
- Parameters:
display_name ((str, optional). Defaults to None.) – The name of the model. If a display_name is not provided in kwargs, randomly generated easy to remember name with timestamp will be generated, like ‘strange-spider-2022-08-17-23:55.02’.
description ((str, optional). Defaults to None.) – The description of the model.
freeform_tags (Dict(str, str), Defaults to None.) – Freeform tags for the model.
defined_tags ((Dict(str, dict(str, object)), optional). Defaults to None.) – Defined tags for the model.
ignore_introspection ((bool, optional). Defaults to False.) – Determines whether to ignore the result of model introspection or not. If set to True, the save will ignore all model introspection errors.
bucket_uri ((str, optional). Defaults to None.) – The OCI Object Storage URI where model artifacts will be copied to. The bucket_uri is only necessary for uploading large artifacts whose size is greater than 2GB. Example: oci://<bucket_name>@<namespace>/prefix/.
overwrite_existing_artifact ((bool, optional). Defaults to True.) – Overwrite target bucket artifact if exists.
remove_existing_artifact ((bool, optional). Defaults to True.) – Whether artifacts uploaded to object storage bucket need to be removed or not.
model_version_set ((Union[str, ModelVersionSet], optional). Defaults to None.) – The model version set OCID, or model version set name, or ModelVersionSet instance.
version_label ((str, optional). Defaults to None.) – The model version label.
featurestore_dataset ((Dataset, optional).) – The feature store dataset.
parallel_process_count ((int, optional)) – The number of worker processes to use in parallel for uploading individual parts of a multipart upload.
reload ((bool, optional)) – Whether to reload to check if load_model() works in score.py. Defaults to True.
model_by_reference ((bool, optional)) – Whether model artifact is made available to Model Store by reference.
kwargs –
- project_id: (str, optional).
Project OCID. If not specified, the value will be taken either from the environment variables or model properties.
- compartment_id(str, optional).
Compartment OCID. If not specified, the value will be taken either from the environment variables or model properties.
- region: (str, optional). Defaults to None.
The destination Object Storage bucket region. By default the value will be extracted from the OCI_REGION_METADATA environment variables.
- timeout: (int, optional). Defaults to 10 seconds.
The connection timeout in seconds for the client.
Also can be any attribute that oci.data_science.models.Model accepts.
- Raises:
RuntimeInfoInconsistencyError – When .runtime_info is not synced with the runtime.yaml file.
- Returns:
The model id.
- Return type:
Examples
Example for saving large model artifacts (>2GB):
>>> model.save(
...     bucket_uri="oci://my-bucket@my-tenancy/",
...     overwrite_existing_artifact=True,
...     remove_existing_artifact=True,
...     parallel_process_count=9,
... )
- property schema_input¶
- property schema_output¶
- serialize_model(as_onnx: bool = False, initial_types: List[Tuple] | None = None, force_overwrite: bool = False, X_sample: any | None = None, **kwargs)[source]¶
Serialize and save model using ONNX or model specific method.
- Parameters:
as_onnx ((boolean, optional)) – If set as True, convert into ONNX model.
initial_types ((List[Tuple], optional)) – a python list. Each element is a tuple of a variable name and a data type.
force_overwrite ((boolean, optional)) – If set as True, overwrite serialized model if exists.
X_sample ((any, optional). Defaults to None.) – Contains model inputs such that model(X_sample) is a valid invocation of the model, used to validate the model input type.
- Returns:
Nothing
- Return type:
None
- set_model_input_serializer(model_input_serializer: str | SERDE)[source]¶
Registers serializer used for serializing data passed in verify/predict.
Examples
>>> generic_model.set_model_input_serializer(GenericModel.model_input_serializer_type.CLOUDPICKLE)
>>> # Register serializer by passing the name of it.
>>> generic_model.set_model_input_serializer("cloudpickle")
>>> # Example of creating a customized model input serializer and registering it.
>>> from ads.model import SERDE
>>> from ads.model.generic_model import GenericModel
>>> class MySERDE(SERDE):
...     def __init__(self):
...         super().__init__()
...     def serialize(self, data):
...         serialized_data = 1
...         return serialized_data
...     def deserialize(self, data):
...         deserialized_data = 2
...         return deserialized_data
>>> class Toy:
...     def predict(self, x):
...         return x ** 2
>>> generic_model = GenericModel(
...     estimator=Toy(),
...     artifact_dir=tempfile.mkdtemp(),
...     model_input_serializer=MySERDE()
... )
>>> # Or register the serializer after creating the model instance.
>>> generic_model.set_model_input_serializer(MySERDE())
- Parameters:
model_input_serializer ((str, or ads.model.SERDE)) – name of the serializer, or instance of SERDE.
- set_model_save_serializer(model_save_serializer: str | SERDE)[source]¶
Registers serializer used for saving model.
Examples
>>> generic_model.set_model_save_serializer(GenericModel.model_save_serializer_type.CLOUDPICKLE)
>>> # Register serializer by passing the name of it.
>>> generic_model.set_model_save_serializer("cloudpickle")
>>> # Example of creating a customized model save serializer and registering it.
>>> from ads.model import SERDE
>>> from ads.model.generic_model import GenericModel
>>> class MySERDE(SERDE):
...     def __init__(self):
...         super().__init__()
...     def serialize(self, data):
...         serialized_data = 1
...         return serialized_data
...     def deserialize(self, data):
...         deserialized_data = 2
...         return deserialized_data
>>> class Toy:
...     def predict(self, x):
...         return x ** 2
>>> generic_model = GenericModel(
...     estimator=Toy(),
...     artifact_dir=tempfile.mkdtemp(),
...     model_save_serializer=MySERDE()
... )
>>> # Or register the serializer after creating the model instance.
>>> generic_model.set_model_save_serializer(MySERDE())
- Parameters:
model_save_serializer ((ads.model.SERDE or str)) – name of the serializer or instance of SERDE.
- summary_status() DataFrame [source]¶
A summary table of the current status.
- Returns:
The summary table of the current status.
- Return type:
pd.DataFrame
- update(**kwargs) GenericModel [source]¶
Updates model metadata in the Model Catalog. Updates only metadata information. The model artifacts are immutable and cannot be updated.
- Parameters:
kwargs –
- display_name: (str, optional). Defaults to None.
The name of the model.
- description: (str, optional). Defaults to None.
The description of the model.
- freeform_tags: (Dict(str, str), optional). Defaults to None.
Freeform tags for the model.
- defined_tags: (Dict(str, dict(str, object)), optional). Defaults to None.
Defined tags for the model.
- version_label: (str, optional). Defaults to None.
The model version label.
Additional kwargs arguments. Can be any attribute that oci.data_science.models.Model accepts.
- Returns:
An instance of GenericModel (self).
- Return type:
- Raises:
ValueError – if model not saved to the Model Catalog.
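A minimal sketch of a metadata update, assuming `model` is a GenericModel instance that has already been saved to the Model Catalog; every value shown is illustrative:

```python
# Sketch only: `model` is an assumed, already-saved GenericModel instance.
# Only metadata is changed; the artifact itself is immutable.
model.update(
    display_name="my-model-v2",
    description="Retrained on the March dataset",
    freeform_tags={"stage": "staging"},
    version_label="v2",
)
```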
- classmethod update_deployment(model_deployment_id: str | None = None, properties: ModelDeploymentProperties | dict | None = None, wait_for_completion: bool = True, max_wait_time: int = 1200, poll_interval: int = 10, **kwargs) ModelDeployment [source]¶
Updates a model deployment.
You can update model_deployment_configuration_details and change instance_shape and model_id when the model deployment is in the ACTIVE lifecycle state. The bandwidth_mbps or instance_count can only be updated while the model deployment is in the INACTIVE state. Changes to the bandwidth_mbps or instance_count will take effect the next time the ActivateModelDeployment action is invoked on the model deployment resource.
Examples
>>> # Update access log id, freeform tags and description for the model deployment
>>> model.update_deployment(
...     access_log={"log_id": "<log_ocid>"},
...     description="Description for Custom Model",
...     freeform_tags={"key": "value"},
... )
- Parameters:
model_deployment_id (str.) – The model deployment OCID. Defaults to None. If the method called on instance level, then self.model_deployment.model_deployment_id will be used.
properties (ModelDeploymentProperties or dict) – The properties for updating the deployment.
wait_for_completion (bool) – Flag set for whether to wait for deployment to complete before proceeding. Defaults to True.
max_wait_time (int) – Maximum amount of time to wait in seconds (Defaults to 1200). Negative implies infinite wait time.
poll_interval (int) – Poll interval in seconds (Defaults to 10).
kwargs –
- auth: (Dict, optional). Defaults to None.
The default authentication is set using the ads.set_auth API. If you need to override the default, use ads.common.auth.api_keys or ads.common.auth.resource_principal to create an appropriate authentication signer and the kwargs required to instantiate the IdentityClient object.
- display_name: (str)
Model deployment display name
- description: (str)
Model deployment description
- freeform_tags: (dict)
Model deployment freeform tags
- defined_tags: (dict)
Model deployment defined tags
Additional kwargs arguments. Can be any attribute that ads.model.deployment.ModelDeploymentCondaRuntime, ads.model.deployment.ModelDeploymentContainerRuntime and ads.model.deployment.ModelDeploymentInfrastructure accepts.
- Returns:
An instance of ModelDeployment class.
- Return type:
- update_summary_action(detail: str, action: str)[source]¶
Update the actions needed from the user in the summary table.
- upload_artifact(uri: str, auth: Dict | None = None, force_overwrite: bool | None = False, parallel_process_count: int = 9) None [source]¶
Uploads model artifacts to the provided uri. The artifacts will be zipped before uploading.
- Parameters:
uri (str) –
The destination location for the model artifacts, which can be a local path or OCI object storage URI. Examples:
>>> upload_artifact(uri="/some/local/folder/")
>>> upload_artifact(uri="oci://bucket@namespace/prefix/")
auth ((Dict, optional). Defaults to None.) – The default authentication is set using the ads.set_auth API. If you need to override the default, use ads.common.auth.api_keys or ads.common.auth.resource_principal to create an appropriate authentication signer and the kwargs required to instantiate the IdentityClient object.
force_overwrite (bool) – Overwrite target_dir if exists.
parallel_process_count ((int, optional)) – The number of worker processes to use in parallel for uploading individual parts of a multipart upload.
- verify(data: Any | None = None, reload_artifacts: bool = True, auto_serialize_data: bool = False, **kwargs) Dict[str, Any] [source]¶
Test if deployment works in local environment.
Examples
>>> uri = "https://github.com/pytorch/hub/raw/master/images/dog.jpg"
>>> prediction = model.verify(image=uri)['prediction']
>>> # examples on storage options
>>> prediction = model.verify(
...     image="oci://<bucket>@<tenancy>/myimage.png",
...     storage_options=ads.auth.default_signer()
... )['prediction']
- Parameters:
data (Any) – Data used to test if deployment works in local environment.
reload_artifacts (bool. Defaults to True.) – Whether to reload artifacts or not.
is_json_payload (bool) – Defaults to False. Indicates whether to send data with an application/json MIME type.
auto_serialize_data (bool.) – Whether to auto serialize input data. Defaults to False for GenericModel, and True for other frameworks. Data is required to be JSON serializable if auto_serialize_data=False. If auto_serialize_data is set to True, data will be serialized before being sent to the model deployment endpoint.
kwargs –
- content_type: str
Used to indicate the media type of the resource.
- image: PIL.Image object or uri for the image.
A valid string path for an image file can be a local path, http(s), oci, s3, or gs.
- storage_options: dict
Passed to fsspec.open for a particular storage connection. Please see fsspec (https://filesystem-spec.readthedocs.io/en/latest/api.html#fsspec.open) for more details.
- Returns:
A dictionary which contains prediction results.
- Return type:
Dict
- class ads.model.generic_model.ModelDeploymentRuntimeType[source]¶
Bases:
object
- CONDA = 'conda'¶
- CONTAINER = 'container'¶
- class ads.model.generic_model.ModelState(value)[source]¶
Bases:
Enum
An enumeration.
- AVAILABLE = 'Available'¶
- DONE = 'Done'¶
- NEEDSACTION = 'Needs Action'¶
- NOTAPPLICABLE = 'Not Applicable'¶
- NOTAVAILABLE = 'Not Available'¶
- exception ads.model.generic_model.SerializeInputNotImplementedError[source]¶
Bases:
NotImplementedError
- exception ads.model.generic_model.SerializeModelNotImplementedError[source]¶
Bases:
NotImplementedError
- class ads.model.generic_model.SummaryStatus[source]¶
Bases:
object
SummaryStatus class, which tracks the status of the model frameworks.
- update_action(detail: str, action: str) None [source]¶
Updates the action of the summary status table of the corresponding detail.
ads.model.model_introspect module¶
The module that helps minimize the number of errors in the model post-deployment process. It provides a simple testing harness to ensure that model artifacts are thoroughly tested before being saved to the model catalog.
Classes¶
- ModelIntrospect
Class to introspect model artifacts.
Examples
>>> model_introspect = ModelIntrospect(artifact=model_artifact)
>>> model_introspect()
... Test key Test name Result Message
... ----------------------------------------------------------------------------
... test_key_1 test_name_1 Passed test passed
... test_key_2 test_name_2 Not passed some error occurred
>>> model_introspect.status
... Passed
- class ads.model.model_introspect.Introspectable[source]¶
Bases:
ABC
Base class that represents an introspectable object.
- exception ads.model.model_introspect.IntrospectionNotPassed[source]¶
Bases:
ValueError
- class ads.model.model_introspect.ModelIntrospect(artifact: Introspectable)[source]¶
Bases:
object
Class to introspect model artifacts.
- Parameters:
Examples
>>> model_introspect = ModelIntrospect(artifact=model_artifact)
>>> result = model_introspect()
...     Test key             Test name            Result        Message
...     ----------------------------------------------------------------------------
...     test_key_1           test_name_1          Passed        test passed
...     test_key_2           test_name_2          Not passed    some error occurred
Initializes the Model Introspect.
- Parameters:
artifact (Introspectable) – The instance of ModelArtifact object.
- Raises:
ValueError – If the model artifact object is not provided.
TypeError – If the provided input parameter is not a ModelArtifact instance.
- property failures: int¶
Calculates the number of failures.
- Returns:
The number of failures.
- Return type:
int
ads.model.model_metadata module¶
- class ads.model.model_metadata.Framework[source]¶
Bases:
str
- BERT = 'bert'¶
- CUML = 'cuml'¶
- EMCEE = 'emcee'¶
- ENSEMBLE = 'ensemble'¶
- FLAIR = 'flair'¶
- GENSIM = 'gensim'¶
- H20 = 'h2o'¶
- KERAS = 'keras'¶
- LIGHT_GBM = 'lightgbm'¶
- MXNET = 'mxnet'¶
- NLTK = 'nltk'¶
- ORACLE_AUTOML = 'oracle_automl'¶
- OTHER = 'other'¶
- PROPHET = 'prophet'¶
- PYMC3 = 'pymc3'¶
- PYOD = 'pyod'¶
- PYSTAN = 'pystan'¶
- PYTORCH = 'pytorch'¶
- SCIKIT_LEARN = 'scikit-learn'¶
- SKTIME = 'sktime'¶
- SPACY = 'spacy'¶
- SPARK = 'pyspark'¶
- STATSMODELS = 'statsmodels'¶
- TENSORFLOW = 'tensorflow'¶
- TRANSFORMERS = 'transformers'¶
- WORD2VEC = 'word2vec'¶
- XGBOOST = 'xgboost'¶
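The framework constants above follow a common pattern: a str subclass whose class attributes are plain string literals, so each constant compares and serializes exactly like an ordinary string. A minimal stdlib sketch of the pattern (mimicking it for illustration rather than importing ads):

```python
import json

# Minimal sketch of the str-constants pattern used by Framework (and by
# UseCaseType below): class attributes are plain string literals, so the
# constants behave exactly like ordinary strings.
class Framework(str):
    PYTORCH = "pytorch"
    SCIKIT_LEARN = "scikit-learn"
    XGBOOST = "xgboost"

# Constants compare equal to plain strings and serialize cleanly to JSON,
# which is what model metadata payloads rely on.
assert Framework.PYTORCH == "pytorch"
print(json.dumps({"Framework": Framework.XGBOOST}))  # {"Framework": "xgboost"}
```

Because the values are plain strings, they can be passed anywhere a framework name string is expected, without any unwrapping.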
- class ads.model.model_metadata.MetadataCustomCategory[source]¶
Bases:
str
- OTHER = 'Other'¶
- PERFORMANCE = 'Performance'¶
- TRAINING_AND_VALIDATION_DATASETS = 'Training and Validation Datasets'¶
- TRAINING_ENV = 'Training Environment'¶
- TRAINING_PROFILE = 'Training Profile'¶
- class ads.model.model_metadata.MetadataCustomKeys[source]¶
Bases:
str
- CLIENT_LIBRARY = 'ClientLibrary'¶
- CONDA_ENVIRONMENT = 'CondaEnvironment'¶
- CONDA_ENVIRONMENT_PATH = 'CondaEnvironmentPath'¶
- ENVIRONMENT_TYPE = 'EnvironmentType'¶
- MODEL_ARTIFACTS = 'ModelArtifacts'¶
- MODEL_FILE_NAME = 'ModelFileName'¶
- MODEL_SERIALIZATION_FORMAT = 'ModelSerializationFormat'¶
- SLUG_NAME = 'SlugName'¶
- TRAINING_DATASET = 'TrainingDataset'¶
- TRAINING_DATASET_NUMBER_OF_COLS = 'TrainingDatasetNumberOfCols'¶
- TRAINING_DATASET_NUMBER_OF_ROWS = 'TrainingDatasetNumberOfRows'¶
- TRAINING_DATASET_SIZE = 'TrainingDatasetSize'¶
- VALIDATION_DATASET = 'ValidationDataset'¶
- VALIDATION_DATASET_NUMBER_OF_COLS = 'ValidationDataSetNumberOfCols'¶
- VALIDATION_DATASET_NUMBER_OF_ROWS = 'ValidationDatasetNumberOfRows'¶
- VALIDATION_DATASET_SIZE = 'ValidationDatasetSize'¶
- class ads.model.model_metadata.MetadataCustomPrintColumns[source]¶
Bases:
str
- CATEGORY = 'Category'¶
- DESCRIPTION = 'Description'¶
- KEY = 'Key'¶
- VALUE = 'Value'¶
- exception ads.model.model_metadata.MetadataDescriptionTooLong(key: str, length: int)[source]¶
Bases:
ValueError
Maximum allowed length of metadata description has been exceeded. See https://docs.oracle.com/en-us/iaas/data-science/using/models_saving_catalog.htm for more details.
- exception ads.model.model_metadata.MetadataSizeTooLarge(size: int)[source]¶
Bases:
ValueError
Maximum allowed size for model metadata has been exceeded. See https://docs.oracle.com/en-us/iaas/data-science/using/models_saving_catalog.htm for more details.
- class ads.model.model_metadata.MetadataTaxonomyKeys[source]¶
Bases:
str
- ALGORITHM = 'Algorithm'¶
- ARTIFACT_TEST_RESULT = 'ArtifactTestResults'¶
- FRAMEWORK = 'Framework'¶
- FRAMEWORK_VERSION = 'FrameworkVersion'¶
- HYPERPARAMETERS = 'Hyperparameters'¶
- USE_CASE_TYPE = 'UseCaseType'¶
- class ads.model.model_metadata.MetadataTaxonomyPrintColumns[source]¶
Bases:
str
- KEY = 'Key'¶
- VALUE = 'Value'¶
- exception ads.model.model_metadata.MetadataValueTooLong(key: str, length: int)[source]¶
Bases:
ValueError
Maximum allowed length of metadata value has been exceeded. See https://docs.oracle.com/en-us/iaas/data-science/using/models_saving_catalog.htm for more details.
- class ads.model.model_metadata.ModelCustomMetadata[source]¶
Bases:
ModelMetadata
Class that represents Model Custom Metadata.
- get(self, key: str) ModelCustomMetadataItem ¶
Returns the model metadata item by provided key.
- to_dict(self)¶
Serializes model metadata into a dictionary.
- from_dict(cls) ModelCustomMetadata [source]¶
Constructs model metadata from dictionary.
- to_yaml(self)¶
Serializes model metadata into a YAML.
- add(self, key: str, value: str, description: str = '', category: str = MetadataCustomCategory.OTHER, replace: bool = False) None [source]¶
Adds a new model metadata item. Replaces existing one if replace flag is True.
- to_json(self)¶
Serializes model metadata into a JSON.
- to_json_file(self, file_path: str, storage_options: dict = None) None ¶
Saves the metadata to a local file or object storage.
Examples
>>> metadata_custom = ModelCustomMetadata()
>>> metadata_custom.add(key="format", value="pickle")
>>> metadata_custom.add(key="note", value="important note", description="some description")
>>> metadata_custom["format"].description = "some description"
>>> metadata_custom.to_dataframe()
        Key             Value       Description        Category
----------------------------------------------------------------------------
0    format            pickle  some description    user defined
1      note    important note  some description    user defined
>>> metadata_custom
metadata:
- category: user defined
  description: some description
  key: format
  value: pickle
- category: user defined
  description: some description
  key: note
  value: important note
>>> metadata_custom.remove("format")
>>> metadata_custom
metadata:
- category: user defined
  description: some description
  key: note
  value: important note
>>> metadata_custom.to_dict()
{'metadata': [{'key': 'note', 'value': 'important note', 'category': 'user defined', 'description': 'some description'}]}
>>> metadata_custom.reset()
>>> metadata_custom
metadata:
- category: None
  description: None
  key: note
  value: None
>>> metadata_custom.clear()
>>> metadata_custom.to_dataframe()
        Key             Value       Description        Category
----------------------------------------------------------------------------
Initializes custom model metadata.
- add(key: str, value: str, description: str = '', category: str = 'Other', replace: bool = False) None [source]¶
Adds a new model metadata item. Overrides the existing one if replace flag is True.
- Parameters:
- Returns:
Nothing.
- Return type:
None
- Raises:
TypeError – If the provided key is not a string. If the provided description is not a string.
ValueError – If the provided key is empty. If the provided value is empty. If the provided value cannot be serialized to JSON. If an item with the provided key is already registered and the replace flag is False. If the provided category is not supported.
MetadataValueTooLong – If the length of the provided value exceeds 255 characters.
MetadataDescriptionTooLong – If the length of the provided description exceeds 255 characters.
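The two length errors above enforce a 255-character cap on both value and description. An illustrative, stdlib-only sketch of that check (the class and helper here are simplified stand-ins, not the ads implementation):

```python
_METADATA_MAX_LEN = 255  # cap documented for metadata values and descriptions

class MetadataValueTooLong(ValueError):
    """Simplified stand-in for ads.model.model_metadata.MetadataValueTooLong."""
    def __init__(self, key: str, length: int):
        super().__init__(
            f"The value of metadata item `{key}` is {length} characters long; "
            f"the maximum allowed is {_METADATA_MAX_LEN}."
        )

def check_length(key: str, value: str) -> None:
    # Raise once the documented 255-character limit is exceeded.
    if len(value) > _METADATA_MAX_LEN:
        raise MetadataValueTooLong(key, len(value))

check_length("format", "pickle")  # within the limit, no error
try:
    check_length("note", "x" * 300)
except MetadataValueTooLong as exc:
    print(exc)
```

The same check applied to the description field would raise MetadataDescriptionTooLong instead.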
- classmethod from_dict(data: Dict) ModelCustomMetadata [source]¶
Constructs an instance of ModelCustomMetadata from a dictionary.
- Parameters:
data (Dict) – Model metadata in a dictionary format.
- Returns:
An instance of model custom metadata.
- Return type:
- Raises:
ValueError – In case of the wrong input data format.
- isempty() bool [source]¶
Checks if metadata is empty.
- Returns:
True if metadata is empty, False otherwise.
- Return type:
bool
- remove(key: str) None [source]¶
Removes a model metadata item.
- Parameters:
key (str) – The key of the metadata item that should be removed.
- Returns:
Nothing.
- Return type:
None
- set_training_data(path: str, data_size: str | None = None)[source]¶
Adds training_data path and data size information into model custom metadata.
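A hedged sketch of what set_training_data records, using the MetadataCustomKeys names listed earlier (the helper below is a plain-dict stand-in for illustration, not the ads method):

```python
# Plain-dict stand-in for ModelCustomMetadata.set_training_data: record the
# training data path and, when given, its size under the MetadataCustomKeys
# names (TrainingDataset, TrainingDatasetSize) listed above.
TRAINING_DATASET = "TrainingDataset"
TRAINING_DATASET_SIZE = "TrainingDatasetSize"

def set_training_data(metadata: dict, path: str, data_size=None) -> dict:
    metadata[TRAINING_DATASET] = path
    if data_size is not None:
        metadata[TRAINING_DATASET_SIZE] = data_size
    return metadata

md = set_training_data({}, "oci://bucket@namespace/train.csv", data_size="1.5MB")
print(md)
```

In the real class the entries become ModelCustomMetadataItem objects rather than raw dict values, but the recorded keys are the same.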
- class ads.model.model_metadata.ModelCustomMetadataItem(key: str, value: str | None = None, description: str | None = None, category: str | None = None)[source]¶
Bases:
ModelTaxonomyMetadataItem
Class that represents model custom metadata item.
- from_dict(cls) ModelCustomMetadataItem ¶
Constructs model metadata item from dictionary.
- to_yaml(self)¶
Serializes model metadata item to YAML.
- update(self, value: str = '', description: str = '', category: str = '') None [source]¶
Updates metadata item information.
- to_json(self) JSON ¶
Serializes metadata item into a JSON.
- to_json_file(self, file_path: str, storage_options: dict = None) None ¶
Saves the metadata item value to a local file or object storage.
- reset() None [source]¶
Resets model metadata item.
Resets value, description and category to None.
- Returns:
Nothing.
- Return type:
None
- validate() bool [source]¶
Validates metadata item.
- Returns:
True if validation passed.
- Return type:
bool
- Raises:
ValueError – If invalid category provided.
MetadataValueTooLong – If value exceeds the length limit.
- class ads.model.model_metadata.ModelMetadata[source]¶
Bases:
ABC
The base abstract class representing model metadata.
- get(self, key: str) ModelMetadataItem [source]¶
Returns the model metadata item by provided key.
- from_dict(cls) ModelMetadata [source]¶
Constructs model metadata from dictionary.
- to_json_file(self, file_path: str, storage_options: dict = None) None [source]¶
Saves the metadata to a local file or object storage.
Initializes Model Metadata.
- abstract from_dict(data: Dict) ModelMetadata [source]¶
Constructs an instance of ModelMetadata from a dictionary.
- Parameters:
data (Dict) – Model metadata in a dictionary format.
- Returns:
An instance of model metadata.
- Return type:
- get(key: str, value: ~typing.Any | None = <object object>) ModelMetadataItem | Any [source]¶
Returns the model metadata item by provided key.
- Parameters:
- Returns:
The model metadata item.
- Return type:
- Raises:
ValueError – If provided key is empty or metadata item not found.
- property keys: Tuple[str]¶
Returns all registered metadata keys.
- Returns:
The list of metadata keys.
- Return type:
Tuple[str]
- reset() None [source]¶
Resets all model metadata items to empty values.
Resets value, description and category to None for every metadata item.
- size() int [source]¶
Returns the size of the model metadata in bytes.
- Returns:
The size of model metadata in bytes.
- Return type:
int
- abstract to_dataframe() DataFrame [source]¶
Returns the model metadata list in a data frame format.
- Returns:
The model metadata in a dataframe format.
- Return type:
pandas.DataFrame
- to_dict()[source]¶
Serializes model metadata into a dictionary.
- Returns:
The model metadata in a dictionary representation.
- Return type:
Dict
- to_json()[source]¶
Serializes model metadata into a JSON.
- Returns:
The model metadata in a JSON representation.
- Return type:
JSON
- to_json_file(file_path: str, storage_options: dict | None = None) None [source]¶
Saves the metadata to a local file or object storage.
- Parameters:
file_path (str) – The file path to store the data. “oci://bucket_name@namespace/folder_name/” “oci://bucket_name@namespace/folder_name/metadata.json” “path/to/local/folder” “path/to/local/folder/metadata.json”
storage_options (dict. Default None) – Parameters passed on to the backend filesystem class. Defaults to options set using DatasetFactory.set_default_storage().
- Returns:
Nothing.
- Return type:
None
- Raises:
ValueError – When file path is empty.:
TypeError – When file path not a string.:
Examples
>>> metadata = ModelTaxonomyMetadataItem()
>>> storage_options = {"config": oci.config.from_file(os.path.join("~/.oci", "config"))}
>>> storage_options
{'log_requests': False,
 'additional_user_agent': '',
 'pass_phrase': None,
 'user': '<user-id>',
 'fingerprint': '05:15:2b:b1:46:8a:32:ec:e2:69:5b:32:01:**:**:**',
 'tenancy': '<tenancy-id>',
 'region': 'us-ashburn-1',
 'key_file': '/home/datascience/.oci/oci_api_key.pem'}
>>> metadata.to_json_file(file_path='oci://bucket_name@namespace/folder_name/metadata_taxonomy.json', storage_options=storage_options)
>>> metadata.to_json_file("path/to/local/folder/metadata_taxonomy.json")
- to_yaml()[source]¶
Serializes model metadata into a YAML.
- Returns:
The model metadata in a YAML representation.
- Return type:
Yaml
- validate() bool [source]¶
Validates model metadata.
- Returns:
True if metadata is valid.
- Return type:
bool
- validate_size() bool [source]¶
Validates model metadata size.
Validates the size of metadata. Throws an error if the size of the metadata exceeds expected value.
- Returns:
True if metadata size is valid.
- Return type:
bool
- Raises:
MetadataSizeTooLarge – If the size of the metadata exceeds expected value.
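The size()/validate_size() pair can be pictured as a serialize-and-measure check: convert the metadata to JSON and compare its byte length to a cap. A stdlib sketch under stated assumptions (the cap value and the exception class here are illustrative stand-ins, not the ads source):

```python
import json

METADATA_SIZE_LIMIT = 32 * 1024  # assumed cap, for illustration only

class MetadataSizeTooLarge(ValueError):
    """Simplified stand-in for ads.model.model_metadata.MetadataSizeTooLarge."""

def metadata_size(metadata: dict) -> int:
    # Size in bytes of the metadata's JSON representation.
    return len(json.dumps(metadata).encode("utf-8"))

def validate_size(metadata: dict) -> bool:
    if metadata_size(metadata) > METADATA_SIZE_LIMIT:
        raise MetadataSizeTooLarge("Maximum allowed size for model metadata exceeded.")
    return True

small = {"metadata": [{"key": "note", "value": "important note"}]}
print(validate_size(small))  # True
```

Measuring the UTF-8 encoded JSON, rather than counting items, matches the "size in bytes" contract that size() documents.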
- class ads.model.model_metadata.ModelMetadataItem[source]¶
Bases:
ABC
The base abstract class representing model metadata item.
- from_dict(cls, data: Dict) ModelMetadataItem [source]¶
Constructs an instance of ModelMetadataItem from a dictionary.
- to_json_file(self, file_path: str, storage_options: dict = None) None [source]¶
Saves the metadata item value to a local file or object storage.
- classmethod from_dict(data: Dict) ModelMetadataItem [source]¶
Constructs an instance of ModelMetadataItem from a dictionary.
- Parameters:
data (Dict) – Metadata item in a dictionary format.
- Returns:
An instance of model metadata item.
- Return type:
- size() int [source]¶
Returns the size of the model metadata in bytes.
- Returns:
The size of model metadata in bytes.
- Return type:
int
- to_dict() dict [source]¶
Serializes model metadata item to dictionary.
- Returns:
The dictionary representation of model metadata item.
- Return type:
dict
- to_json()[source]¶
Serializes metadata item into a JSON.
- Returns:
The metadata item in a JSON representation.
- Return type:
JSON
- to_json_file(file_path: str, storage_options: dict | None = None) None [source]¶
Saves the metadata item value to a local file or object storage.
- Parameters:
file_path (str) – The file path to store the data. “oci://bucket_name@namespace/folder_name/” “oci://bucket_name@namespace/folder_name/result.json” “path/to/local/folder” “path/to/local/folder/result.json”
storage_options (dict. Default None) – Parameters passed on to the backend filesystem class. Defaults to options set using DatasetFactory.set_default_storage().
- Returns:
Nothing.
- Return type:
None
- Raises:
ValueError – When file path is empty.:
TypeError – When file path not a string.:
Examples
>>> metadata_item = ModelCustomMetadataItem(key="key1", value="value1")
>>> storage_options = {"config": oci.config.from_file(os.path.join("~/.oci", "config"))}
>>> storage_options
{'log_requests': False,
 'additional_user_agent': '',
 'pass_phrase': None,
 'user': '<user-id>',
 'fingerprint': '05:15:2b:b1:46:8a:32:ec:e2:69:5b:32:01:**:**:**',
 'tenancy': '<tenancy-id>',
 'region': 'us-ashburn-1',
 'key_file': '/home/datascience/.oci/oci_api_key.pem'}
>>> metadata_item.to_json_file(file_path='oci://bucket_name@namespace/folder_name/file.json', storage_options=storage_options)
>>> metadata_item.to_json_file("path/to/local/folder/file.json")
- class ads.model.model_metadata.ModelProvenanceMetadata(repo: str | None = None, git_branch: str | None = None, git_commit: str | None = None, repository_url: str | None = None, training_script_path: str | None = None, training_id: str | None = None, artifact_dir: str | None = None)[source]¶
Bases:
DataClassSerializable
ModelProvenanceMetadata class.
Examples
>>> provenance_metadata = ModelProvenanceMetadata.fetch_training_code_details()
ModelProvenanceMetadata(repo=<git.repo.base.Repo '/home/datascience/.git'>, git_branch='master', git_commit='99ad04c31803f1d4ffcc3bf4afbd6bcf69a06af2', repository_url='file:///home/datascience', "", "")
>>> provenance_metadata.assert_path_not_dirty("your_path", ignore=False)
- assert_path_not_dirty(path: str, ignore: bool)[source]¶
Checks if all the changes in this path have been committed.
- Parameters:
path (str) – The path to check.
ignore (bool) – Whether to ignore the changes or not.
- Raises:
ChangesNotCommitted – If there are changes that have not been committed.
- Returns:
Nothing.
- Return type:
None
- classmethod fetch_training_code_details(training_script_path: str | None = None, training_id: str | None = None, artifact_dir: str | None = None)[source]¶
Fetches the training code details: repo, git_branch, git_commit, repository_url, training_script_path and training_id.
- Parameters:
- Returns:
A ModelProvenanceMetadata instance.
- Return type:
- classmethod from_dict(data: Dict[str, str]) ModelProvenanceMetadata [source]¶
Constructs an instance of ModelProvenanceMetadata from a dictionary.
- class ads.model.model_metadata.ModelTaxonomyMetadata[source]¶
Bases:
ModelMetadata
Class that represents Model Taxonomy Metadata.
- get(self, key: str) ModelTaxonomyMetadataItem ¶
Returns the model metadata item by provided key.
- to_dict(self)¶
Serializes model metadata into a dictionary.
- from_dict(cls) ModelTaxonomyMetadata [source]¶
Constructs model metadata from dictionary.
- to_yaml(self)¶
Serializes model metadata into a YAML.
- to_json(self)¶
Serializes model metadata into a JSON.
- to_json_file(self, file_path: str, storage_options: dict = None) None ¶
Saves the metadata to a local file or object storage.
Examples
>>> metadata_taxonomy = ModelTaxonomyMetadata()
>>> metadata_taxonomy.to_dataframe()
                Key                   Value
--------------------------------------------
0       UseCaseType   binary_classification
1         Framework                 sklearn
2  FrameworkVersion                   0.2.2
3         Algorithm               algorithm
4   Hyperparameters                      {}
>>> metadata_taxonomy.reset()
>>> metadata_taxonomy.to_dataframe()
                Key    Value
--------------------------------------------
0       UseCaseType     None
1         Framework     None
2  FrameworkVersion     None
3         Algorithm     None
4   Hyperparameters     None
>>> metadata_taxonomy
metadata:
- key: UseCaseType
  category: None
  description: None
  value: None
Initializes Model Metadata.
- classmethod from_dict(data: Dict) ModelTaxonomyMetadata [source]¶
Constructs an instance of ModelTaxonomyMetadata from a dictionary.
- Parameters:
data (Dict) – Model metadata in a dictionary format.
- Returns:
An instance of model taxonomy metadata.
- Return type:
- Raises:
ValueError – In case of the wrong input data format.
- class ads.model.model_metadata.ModelTaxonomyMetadataItem(key: str, value: str | None = None)[source]¶
Bases:
ModelMetadataItem
Class that represents model taxonomy metadata item.
- to_dict(self) Dict ¶
Serializes model metadata item to dictionary.
- from_dict(cls) ModelTaxonomyMetadataItem ¶
Constructs model metadata item from dictionary.
- to_yaml(self)¶
Serializes model metadata item to YAML.
- to_json(self) JSON ¶
Serializes metadata item into a JSON.
- to_json_file(self, file_path: str, storage_options: dict = None) None ¶
Saves the metadata item value to a local file or object storage.
- reset() None [source]¶
Resets model metadata item.
Resets value to None.
- Returns:
Nothing.
- Return type:
None
- update(value: str) None [source]¶
Updates metadata item value.
- Parameters:
value (str) – The value of model metadata item.
- Returns:
Nothing.
- Return type:
None
- validate() bool [source]¶
Validates metadata item.
- Returns:
True if validation passed.
- Return type:
bool
- Raises:
ValueError – If invalid UseCaseType provided. If invalid Framework provided.
- class ads.model.model_metadata.UseCaseType[source]¶
Bases:
str
- ANOMALY_DETECTION = 'anomaly_detection'¶
- BINARY_CLASSIFICATION = 'binary_classification'¶
- CLUSTERING = 'clustering'¶
- DIMENSIONALITY_REDUCTION = 'dimensionality_reduction/representation'¶
- IMAGE_CLASSIFICATION = 'image_classification'¶
- MULTINOMIAL_CLASSIFICATION = 'multinomial_classification'¶
- NER = 'ner'¶
- OBJECT_LOCALIZATION = 'object_localization'¶
- OTHER = 'other'¶
- RECOMMENDER = 'recommender'¶
- REGRESSION = 'regression'¶
- SENTIMENT_ANALYSIS = 'sentiment_analysis'¶
- TIME_SERIES_FORECASTING = 'time_series_forecasting'¶
- TOPIC_MODELING = 'topic_modeling'¶
ads.model.model_metadata_mixin module¶
- class ads.model.model_metadata_mixin.MetadataMixin[source]¶
Bases:
object
MetadataMixin class which populates the custom metadata, taxonomy metadata, input/output schema and provenance metadata.
- populate_metadata(use_case_type: str | None = None, data_sample: ADSData | None = None, X_sample: list | tuple | DataFrame | Series | ndarray | None = None, y_sample: list | tuple | DataFrame | Series | ndarray | None = None, training_script_path: str | None = None, training_id: str | None = None, ignore_pending_changes: bool = True, max_col_num: int = 2000, ignore_conda_error: bool = False, **kwargs)[source]¶
Populates the input and output schemas. If a schema exceeds the 32 KB limit, it is saved as a JSON file in the artifact directory.
- Parameters:
use_case_type ((str, optional). Defaults to None.) – The use case type of the model.
data_sample ((ADSData, optional). Defaults to None.) – A sample of the data that will be used to generate input_schema and output_schema.
X_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of input data that will be used to generate input schema.
y_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]. Defaults to None.) – A sample of output data that will be used to generate output schema.
training_script_path (str. Defaults to None.) – Training script path.
training_id ((str, optional). Defaults to None.) – The training model OCID.
ignore_pending_changes (bool. Defaults to True.) – Whether to ignore pending changes in git.
max_col_num ((int, optional). Defaults to utils.DATA_SCHEMA_MAX_COL_NUM.) – The maximum number of columns allowed in auto generated schema.
- Returns:
Nothing.
- Return type:
None
- populate_schema(data_sample: ADSData | None = None, X_sample: List | Tuple | DataFrame | Series | ndarray | None = None, y_sample: List | Tuple | DataFrame | Series | ndarray | None = None, max_col_num: int = 2000, **kwargs)[source]¶
Populates the input and output schemas. If a schema exceeds the 32 KB limit, it is saved as a JSON file in the artifact directory.
- Parameters:
data_sample (ADSData) – A sample of the data that will be used to generate input_schema and output_schema.
X_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]) – A sample of input data that will be used to generate the input schema.
y_sample (Union[list, tuple, pd.Series, np.ndarray, pd.DataFrame]) – A sample of output data that will be used to generate the output schema.
max_col_num ((int, optional). Defaults to utils.DATA_SCHEMA_MAX_COL_NUM.) – The maximum number of columns allowed in auto generated schema.
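The 32 KB overflow behavior described for populate_schema can be sketched as: keep the schema inline while its JSON form fits within the metadata limit, otherwise persist it to the artifact directory. An illustrative stand-in (function name and cap assumed for the sketch, not the ads implementation):

```python
import json
import os
import tempfile

SCHEMA_MAX_BYTES = 32 * 1024  # the 32 KB limit mentioned above

def store_schema(schema: dict, artifact_dir: str, name: str):
    """Return the schema inline if small enough, else the path it was saved to."""
    payload = json.dumps(schema)
    if len(payload.encode("utf-8")) <= SCHEMA_MAX_BYTES:
        return schema  # fits within the metadata limit
    path = os.path.join(artifact_dir, f"{name}.json")
    with open(path, "w") as f:
        f.write(payload)  # too large: saved as a JSON file in the artifact dir
    return path

artifact_dir = tempfile.mkdtemp()
small = {"columns": [{"name": "age", "dtype": "int64"}]}
big = {"columns": [{"name": f"col_{i}", "dtype": "float64"} for i in range(2000)]}
print(store_schema(small, artifact_dir, "input_schema"))  # returned inline
print(store_schema(big, artifact_dir, "input_schema"))    # path to input_schema.json
```

Persisting the oversized schema as a file keeps the model catalog metadata within its limit while the full schema still ships with the artifact.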
ads.model.model_properties module¶
- class ads.model.model_properties.ModelProperties(inference_conda_env: str | None = None, inference_python_version: str | None = None, training_conda_env: str | None = None, training_python_version: str | None = None, training_resource_id: str | None = None, training_script_path: str | None = None, training_id: str | None = None, compartment_id: str | None = None, project_id: str | None = None, bucket_uri: str | None = None, remove_existing_artifact: bool | None = None, overwrite_existing_artifact: bool | None = None, deployment_instance_shape: str | None = None, deployment_instance_subnet_id: str | None = None, deployment_instance_count: int | None = None, deployment_bandwidth_mbps: int | None = None, deployment_log_group_id: str | None = None, deployment_access_log_id: str | None = None, deployment_predict_log_id: str | None = None, deployment_memory_in_gbs: float | int | None = None, deployment_ocpus: float | int | None = None, deployment_image: str | None = None)[source]¶
Bases:
BaseProperties
Represents properties required to save and deploy model.
ads.model.model_version_set module¶
- class ads.model.model_version_set.ModelVersionSet(spec: Dict | None = None, **kwargs)[source]¶
Bases:
Builder
Represents Model Version Set.
- delete(self, delete_model: bool | None = False) ModelVersionSet [source]¶
Removes a model version set.
- from_dict(cls, config: dict) 'ModelVersionSet' [source]¶
Load a model version set instance from a dictionary of configurations.
Examples
>>> mvs = (ModelVersionSet()
...        .with_compartment_id(os.environ["PROJECT_COMPARTMENT_OCID"])
...        .with_project_id(os.environ["PROJECT_OCID"])
...        .with_name("test_experiment")
...        .with_description("Experiment number one"))
>>> mvs.create()
>>> mvs.model_add(model_ocid, version_label="Version label 1")
>>> mvs.model_list()
>>> mvs.details_link
... https://console.<region>.oraclecloud.com/data-science/model-version-sets/<ocid>
>>> mvs.delete()
Initializes a model version set.
- Parameters:
spec ((Dict, optional). Defaults to None.) – Object specification.
kwargs (Dict) –
Specification as keyword arguments. If ‘spec’ contains the same key as the one in kwargs, the value from kwargs will be used.
project_id: str
compartment_id: str
name: str
description: str
defined_tags: Dict[str, Dict[str, object]]
freeform_tags: Dict[str, str]
- CONST_COMPARTMENT_ID = 'compartmentId'¶
- CONST_DEFINED_TAG = 'definedTags'¶
- CONST_DESCRIPTION = 'description'¶
- CONST_FREEFORM_TAG = 'freeformTags'¶
- CONST_ID = 'id'¶
- CONST_NAME = 'name'¶
- CONST_PROJECT_ID = 'projectId'¶
- LIFECYCLE_STATE_ACTIVE = 'ACTIVE'¶
- LIFECYCLE_STATE_DELETED = 'DELETED'¶
- LIFECYCLE_STATE_DELETING = 'DELETING'¶
- LIFECYCLE_STATE_FAILED = 'FAILED'¶
- attribute_map = {'compartmentId': 'compartment_id', 'definedTags': 'defined_tags', 'description': 'description', 'freeformTags': 'freeform_tags', 'id': 'id', 'name': 'name', 'projectId': 'project_id'}¶
- create(**kwargs) ModelVersionSet [source]¶
Creates a model version set.
- Parameters:
kwargs – Additional keyword arguments.
- Returns:
The ModelVersionSet instance (self)
- Return type:
- delete(delete_model: bool | None = False) ModelVersionSet [source]¶
Removes a model version set.
- Parameters:
delete_model ((bool, optional). Defaults to False.) – A model version set can only be deleted if all the models associated with it are already in the DELETED state. You can optionally set this parameter to True, which deletes all associated models for you.
- Returns:
The ModelVersionSet instance (self).
- Return type:
- property details_link: str¶
Link to details page in OCI console.
- Returns:
Link to details page in OCI console.
- Return type:
str
- classmethod from_dict(config: dict) ModelVersionSet