hsml.model #
[source] Model #
Metadata object representing a model in the Model Registry.
[source] NOT_FOUND_ERROR_CODE class-attribute instance-attribute #
NOT_FOUND_ERROR_CODE = 360000
[source] model_path property #
Path of the model with version folder omitted.
Resolves to /Projects/{project_name}/Models/{name}.
[source] version_path property #
Path of the model including version folder.
Resolves to /Projects/{project_name}/Models/{name}/{version}.
[source] model_files_path property #
Path of the model files including version and files folder.
Resolves to /Projects/{project_name}/Models/{name}/{version}/Files.
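A minimal sketch of how the three path properties nest, using hypothetical project and model names:

```python
# Hypothetical values illustrating how the three path properties relate.
project_name, name, version = "my_project", "my_model", 1

model_path = f"/Projects/{project_name}/Models/{name}"
version_path = f"{model_path}/{version}"
model_files_path = f"{version_path}/Files"

print(model_files_path)  # /Projects/my_project/Models/my_model/1/Files
```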
[source] shared_registry_project_name property writable #
Name of the project whose registry this model is shared from.
[source] save #
save(
model_path,
await_registration=480,
keep_original_files=False,
upload_configuration: dict[str, Any] | None = None,
)
Persist this model including model files and metadata to the model registry.
| PARAMETER | DESCRIPTION |
|---|---|
| `model_path` | Local or remote (Hopsworks file system) path to the folder containing the model files, or path to a specific model file. |
| `await_registration` | Maximum time in seconds to wait for the model to be registered in Hopsworks. DEFAULT: `480` |
| `keep_original_files` | If the model files are located in HopsFS, whether to copy (`True`) or move (`False`) those files into the Models dataset. DEFAULT: `False` |
| `upload_configuration` | When saving a model from outside Hopsworks, the model is uploaded to the model registry using the REST APIs. Each model artifact is divided into chunks, and each chunk is uploaded independently. This parameter can be used to control the upload chunk size, the parallelism, and the number of retries. DEFAULT: `None` |

| RETURNS | DESCRIPTION |
|---|---|
| `Model` | The model metadata object. |

| RAISES | DESCRIPTION |
|---|---|
| `hopsworks.client.exceptions.RestAPIError` | If the backend encounters an issue. |
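The `upload_configuration` argument is a plain Python dictionary. The key names in this sketch are assumptions based on the chunked-upload settings described above; verify them against your Hopsworks version before use:

```python
# Sketch of an upload configuration for saving a model from outside
# Hopsworks. Key names (chunk_size, simultaneous_uploads,
# max_chunk_retries) are assumptions; check them against your cluster.
upload_configuration = {
    "chunk_size": 10,            # chunk size in MB
    "simultaneous_uploads": 3,   # number of chunks uploaded in parallel
    "max_chunk_retries": 1,      # retries per failed chunk
}

# On a real cluster you would then call (not executed here):
# my_model.save("./model_dir", upload_configuration=upload_configuration)
```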
[source] download #
download(local_path=None) -> str
Download the model files.
| PARAMETER | DESCRIPTION |
|---|---|
| `local_path` | Path in the local filesystem where the model files are downloaded. DEFAULT: `None` |

| RETURNS | DESCRIPTION |
|---|---|
| `str` | Absolute path to the local folder containing the model files. |

| RAISES | DESCRIPTION |
|---|---|
| `hopsworks.client.exceptions.RestAPIError` | If the backend encounters an issue. |
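Because `download()` returns an absolute path, the result can be joined safely with file names inside the model folder. A sketch, with the actual `download()` call shown only as a comment (it requires a Hopsworks connection) and a stand-in value in its place:

```python
import os

# On a real cluster (not executed here):
# model_dir = my_model.download(local_path="./my_model_files")
model_dir = os.path.abspath("./my_model_files")  # stand-in for the returned value

# "model.pkl" is a hypothetical file name inside the model folder.
weights_file = os.path.join(model_dir, "model.pkl")
assert os.path.isabs(weights_file)
```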
[source] delete #
delete()
Delete the model.
Potentially dangerous operation
This operation drops all metadata associated with this version of the model and deletes the model files.
| RAISES | DESCRIPTION |
|---|---|
| `hopsworks.client.exceptions.RestAPIError` | If the backend encounters an issue. |
[source] deploy #
deploy(
name: str | None = None,
description: str | None = None,
artifact_version: str | None = None,
serving_tool: str | None = None,
script_file: str | None = None,
config_file: str | None = None,
resources: PredictorResources | dict | None = None,
inference_logger: InferenceLogger | dict | None = None,
inference_batcher: InferenceBatcher
| dict
| None = None,
scaling_configuration: PredictorScalingConfig
| dict
| None = None,
transformer: Transformer | dict | None = None,
api_protocol: str | None = IE.API_PROTOCOL_REST,
environment: str | None = None,
) -> deployment.Deployment
Deploy the model.
Example

```python
import hopsworks

project = hopsworks.login()

# get Hopsworks Model Registry handle
mr = project.get_model_registry()

# retrieve the trained model you want to deploy
my_model = mr.get_model("my_model", version=1)

my_deployment = my_model.deploy()
```

| PARAMETER | DESCRIPTION |
|---|---|
| `name` | Name of the deployment. |
| `description` | Description of the deployment. |
| `artifact_version` | (Deprecated) Version number of the model artifact to deploy, `CREATE` to create a new model artifact, or `MODEL-ONLY` to reuse the shared artifact containing only the model files. |
| `serving_tool` | Serving tool used to deploy the model server. |
| `script_file` | Path to a custom predictor script implementing the Predict class. |
| `config_file` | Model server configuration file to be passed to the model deployment. It can be accessed via the `CONFIG_FILE_PATH` environment variable from a predictor or transformer script. For LLM deployments without a predictor script, this file is used to configure the vLLM engine. |
| `resources` | Resources to be allocated for the predictor. |
| `inference_logger` | Inference logger configuration. |
| `inference_batcher` | Inference batcher configuration. |
| `scaling_configuration` | Scaling configuration for the predictor. |
| `transformer` | Transformer to be deployed together with the predictor. |
| `api_protocol` | API protocol to be enabled in the deployment (i.e., 'REST' or 'GRPC'). DEFAULT: `'REST'` |
| `environment` | The inference environment to use. |
| RETURNS | DESCRIPTION |
|---|---|
| `deployment.Deployment` | The deployment metadata object. |
| RAISES | DESCRIPTION |
|---|---|
hopsworks.client.exceptions.RestAPIError | If the backend encounters an issue. |
[source] add_tag #
add_tag(name, value)
Attach a tag to a model.
A tag consists of a name/value pair. The value can be any valid JSON: primitives, arrays, or objects.
| PARAMETER | DESCRIPTION |
|---|---|
| `name` | Name of the tag to be added. TYPE: `str` |
| `value` | Value of the tag to be added. |

| RAISES | DESCRIPTION |
|---|---|
| `hopsworks.client.exceptions.RestAPIError` | If the backend fails to add the tag. |
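Since tag values must be JSON serializable, a quick local round-trip check can catch invalid values before calling the backend. A sketch with hypothetical tag names and values:

```python
import json

# Hypothetical tag value: a dict of training metadata.
tag_value = {"framework": "sklearn", "dataset_version": 3}

# Verify the value survives JSON serialization before sending it.
assert json.loads(json.dumps(tag_value)) == tag_value

# On a real cluster (not executed here):
# my_model.add_tag(name="training_info", value=tag_value)
```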
[source] get_feature_view #
Get the parent feature view of this model, based on explicit provenance.
Only an accessible, usable feature view object is returned; otherwise an exception is raised. For more details, call the base method `get_feature_view_provenance`.
| PARAMETER | DESCRIPTION |
|---|---|
| `init` | By default set to `True`. If you require a more complex initialization of the feature view for online or batch scenarios, set `init` to `False` to retrieve a non-initialized feature view, then call `init_batch_scoring()` or `init_serving()` with the required parameters. DEFAULT: `True` |
| `online` | By default set to `False`, so initialization for batch scoring is the default scenario. If set to `True`, the online scenario is enabled and the `init_serving()` method is called. Inside a deployment, the only available scenario is the online one, so the parameter is ignored and `init_serving()` is always called (if `init` is `True`). To override this behaviour, set `init` to `False` and proceed with a custom initialization. DEFAULT: `False` |

| RETURNS | DESCRIPTION |
|---|---|
| `FeatureView` | The parent feature view of this model. |

| RAISES | DESCRIPTION |
|---|---|
| `hopsworks.client.exceptions.RestAPIError` | If the backend fails to retrieve the feature view. |
[source] get_feature_view_provenance #
get_feature_view_provenance() -> explicit_provenance.Links
Get the parent feature view of this model, based on explicit provenance.
This feature view can be accessible, deleted, or inaccessible. For deleted and inaccessible feature views, only minimal information is returned.
| RETURNS | DESCRIPTION |
|---|---|
| `explicit_provenance.Links` | Provenance links to the parent feature view. |

| RAISES | DESCRIPTION |
|---|---|
| `hopsworks.client.exceptions.RestAPIError` | If the backend fails to retrieve the feature view provenance. |
[source] get_training_dataset_provenance #
get_training_dataset_provenance() -> explicit_provenance.Links
Get the parent training dataset of this model, based on explicit provenance.
This training dataset can be accessible, deleted, or inaccessible. For deleted and inaccessible training datasets, only minimal information is returned.
| RETURNS | DESCRIPTION |
|---|---|
| `explicit_provenance.Links` | Provenance links to the parent training dataset. |

| RAISES | DESCRIPTION |
|---|---|
| `hopsworks.client.exceptions.RestAPIError` | If the backend fails to retrieve the training dataset provenance. |