hsml.model #

[source] Model #

Metadata object representing a model in the Model Registry.

[source] NOT_FOUND_ERROR_CODE class-attribute instance-attribute #

NOT_FOUND_ERROR_CODE = 360000

[source] id property writable #

Id of the model.

[source] name property writable #

Name of the model.

[source] version property writable #

Version of the model.

[source] description property writable #

Description of the model.

[source] created property writable #

Creation date of the model.

[source] creator property writable #

Creator of the model.

[source] environment property writable #

Environment of the model.

[source] training_metrics property writable #

Training metrics of the model.

[source] program property writable #

Executable used to export the model.

[source] user property writable #

User of the model.

[source] input_example property writable #

Input example of the model.

[source] framework property writable #

Framework of the model.

[source] model_schema property writable #

Model schema of the model.

[source] project_name property writable #

Project name of the model.

[source] model_registry_id property writable #

Model registry id of the model.

[source] model_path property #

Path of the model with version folder omitted.

Resolves to /Projects/{project_name}/Models/{name}.

[source] version_path property #

Path of the model including version folder.

Resolves to /Projects/{project_name}/Models/{name}/{version}.

[source] model_files_path property #

Path of the model files including version and files folder.

Resolves to /Projects/{project_name}/Models/{name}/{version}/Files.
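The three path properties build on each other. A minimal sketch of how they resolve, using illustrative values (project "my_project", model "my_model", version 1):

```python
# Sketch of how model_path, version_path and model_files_path resolve,
# following the patterns documented above. Values are illustrative.
project_name, name, version = "my_project", "my_model", 1

model_path = f"/Projects/{project_name}/Models/{name}"
version_path = f"{model_path}/{version}"
model_files_path = f"{version_path}/Files"

print(model_files_path)  # /Projects/my_project/Models/my_model/1/Files
```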

[source] shared_registry_project_name property writable #

Shared registry project name of the model.

[source] save #

save(
    model_path,
    await_registration=480,
    keep_original_files=False,
    upload_configuration: dict[str, Any] | None = None,
)

Persist this model including model files and metadata to the model registry.

PARAMETER DESCRIPTION
model_path

Local or remote (Hopsworks file system) path to the folder where the model files are located, or path to a specific model file.

await_registration

Maximum time in seconds to wait for the model to be registered in Hopsworks.

DEFAULT: 480

keep_original_files

If the model files are located in HopsFS, whether to copy (True) or move (False) those files into the Models dataset. Default is False, i.e., the model files will be moved.

DEFAULT: False

upload_configuration

When saving a model from outside Hopsworks, the model is uploaded to the model registry using the REST APIs. Each model artifact is divided into chunks and each chunk is uploaded independently. This parameter controls the upload chunk size, the parallelism, and the number of retries. upload_configuration can contain the following keys:

* chunk_size: size of each chunk in megabytes. Default 10.
* simultaneous_uploads: number of chunks to upload in parallel. Default 3.
* max_chunk_retries: number of times to retry the upload of a chunk in case of failure. Default 1.

TYPE: dict[str, Any] | None DEFAULT: None

RETURNS DESCRIPTION

Model: The model metadata object.

RAISES DESCRIPTION
hopsworks.client.exceptions.RestAPIError

In case the backend encounters an issue
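The upload_configuration parameter is a plain dict built from the keys described above. A minimal sketch (the values are illustrative, not the defaults; the save call is commented out since it requires a live model metadata object and a Hopsworks connection):

```python
# Sketch of an upload_configuration dict for Model.save(), using the
# three keys documented above. Values here are illustrative tunings.
upload_configuration = {
    "chunk_size": 20,           # MB per chunk (default 10)
    "simultaneous_uploads": 5,  # chunks uploaded in parallel (default 3)
    "max_chunk_retries": 2,     # retries per failed chunk (default 1)
}

# With a model metadata object in hand:
# model.save("/path/to/model_dir", upload_configuration=upload_configuration)
```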

[source] download #

download(local_path=None) -> str

Download the model files.

PARAMETER DESCRIPTION
local_path

Path in the local filesystem where the model files will be downloaded.

DEFAULT: None

RETURNS DESCRIPTION

str: Absolute path to the local folder containing the model files.

RAISES DESCRIPTION
hopsworks.client.exceptions.RestAPIError

In case the backend encounters an issue

[source] delete #

delete()

Delete the model.

Potentially dangerous operation

This operation drops all metadata associated with this version of the model and deletes the model files.

RAISES DESCRIPTION
hopsworks.client.exceptions.RestAPIError

In case the backend encounters an issue

[source] deploy #

deploy(
    name: str | None = None,
    description: str | None = None,
    artifact_version: str | None = None,
    serving_tool: str | None = None,
    script_file: str | None = None,
    config_file: str | None = None,
    resources: PredictorResources | dict | None = None,
    inference_logger: InferenceLogger | dict | None = None,
    inference_batcher: InferenceBatcher | dict | None = None,
    scaling_configuration: PredictorScalingConfig | dict | None = None,
    transformer: Transformer | dict | None = None,
    api_protocol: str | None = IE.API_PROTOCOL_REST,
    environment: str | None = None,
) -> deployment.Deployment

Deploy the model.

Example
import hopsworks

project = hopsworks.login()

# get Hopsworks Model Registry handle
mr = project.get_model_registry()

# retrieve the trained model you want to deploy
my_model = mr.get_model("my_model", version=1)

my_deployment = my_model.deploy()

Parameters:

name: Name of the deployment.
description: Description of the deployment.
artifact_version: (Deprecated) Version number of the model artifact to deploy, CREATE to create a new model artifact, or MODEL-ONLY to reuse the shared artifact containing only the model files.
serving_tool: Serving tool used to deploy the model server.
script_file: Path to a custom predictor script implementing the Predict class.
config_file: Model server configuration file to be passed to the model deployment. It can be accessed via the CONFIG_FILE_PATH environment variable from a predictor or transformer script. For LLM deployments without a predictor script, this file is used to configure the vLLM engine.
resources: Resources to be allocated for the predictor.
inference_logger: Inference logger configuration.
inference_batcher: Inference batcher configuration.
scaling_configuration: Scaling configuration for the predictor.
transformer: Transformer to be deployed together with the predictor.
api_protocol: API protocol to be enabled in the deployment (i.e., 'REST' or 'GRPC'). Defaults to 'REST'.
environment: The inference environment to use.

RETURNS DESCRIPTION
deployment.Deployment

Deployment: The deployment metadata object of a new or existing deployment.

RAISES DESCRIPTION
hopsworks.client.exceptions.RestAPIError

In case the backend encounters an issue

[source] add_tag #

add_tag(name: str, value: str | dict)

Attach a tag to a model.

A tag consists of a name/value pair. Tag names are unique identifiers across the whole cluster. The value of a tag can be any valid JSON: primitives, arrays, or JSON objects.

PARAMETER DESCRIPTION
name

Name of the tag to be added.

TYPE: str

value

Value of the tag to be added.

TYPE: str | dict

RAISES DESCRIPTION
hopsworks.client.exceptions.RestAPIError

In case the backend fails to add the tag.
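Since tag values must be valid JSON, a quick local round-trip check before attaching a tag can be sketched as follows (the add_tag call is commented out because it requires a retrieved model object and a live connection):

```python
import json

# Tag values can be any valid JSON: primitives, arrays, or objects.
tag_value = {"owner": "data-team", "stage": "production"}

# A JSON round-trip confirms the value is a legal tag payload.
assert json.loads(json.dumps(tag_value)) == tag_value

# model.add_tag("lifecycle", tag_value)
```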

[source] set_tag #

set_tag(name: str, value: str | dict)

Deprecated: Use add_tag instead.

[source] delete_tag #

delete_tag(name: str)

Delete a tag attached to a model.

PARAMETER DESCRIPTION
name

Name of the tag to be removed.

TYPE: str

RAISES DESCRIPTION
hopsworks.client.exceptions.RestAPIError

In case the backend fails to delete the tag.

[source] get_tag #

get_tag(name: str) -> str | None

Get a tag attached to a model by name.

PARAMETER DESCRIPTION
name

Name of the tag to get.

TYPE: str

RETURNS DESCRIPTION
str | None

Tag value, or None if it does not exist.

RAISES DESCRIPTION
hopsworks.client.exceptions.RestAPIError

In case the backend fails to retrieve the tag.

[source] get_tags #

get_tags() -> dict[str, tag.Tag]

Retrieve all tags attached to a model.

RETURNS DESCRIPTION
dict[str, tag.Tag]

Dictionary of tags, keyed by tag name.

RAISES DESCRIPTION
hopsworks.client.exceptions.RestAPIError

In case of a server error.

[source] get_url #

get_url()

Get the URL to the model in Hopsworks.

[source] get_feature_view #

get_feature_view(init: bool = True, online: bool = False)

Get the parent feature view of this model, based on explicit provenance.

Only an accessible, usable feature view object is returned; otherwise an exception is raised. For more details, use the base method get_feature_view_provenance.

Parameters:

init: By default this is set to True. If you require a more complex initialization of the feature view for online or batch scenarios, set init to False to retrieve a non-initialized feature view, then call init_batch_scoring() or init_serving() with the required parameters.

online: By default this is set to False, and initialization for batch scoring is the default scenario. If you set online to True, the online scenario is enabled and the init_serving() method is called. Inside a deployment, the only available scenario is the online one, so this parameter is ignored and init_serving() is always called (if init is set to True). To override this behaviour, set init to False and proceed with a custom initialization.

RETURNS DESCRIPTION

FeatureView: Feature view object, or None if it does not exist.

RAISES DESCRIPTION
hopsworks.client.exceptions.RestAPIError

In case the backend fails to retrieve the feature view.
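The initialization scenarios described above map to these keyword combinations. A sketch (model would be a Model object retrieved from the registry; the calls themselves are commented out since they need a live connection):

```python
# Keyword combinations for the get_feature_view() scenarios described above.
scenarios = {
    "batch_default": {"init": True, "online": False},   # batch scoring (default)
    "online_serving": {"init": True, "online": True},   # init_serving() is called
    "custom_init": {"init": False, "online": False},    # initialize manually
}

# fv = model.get_feature_view(**scenarios["online_serving"])
# For custom_init, follow up with fv.init_batch_scoring() or fv.init_serving().
```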

[source] get_feature_view_provenance #

get_feature_view_provenance() -> explicit_provenance.Links

Get the parent feature view of this model, based on explicit provenance.

This feature view can be accessible, deleted, or inaccessible. For deleted and inaccessible feature views, only minimal information is returned.

RETURNS DESCRIPTION
explicit_provenance.Links

Links: Object containing the requested section of the provenance graph, or None if it does not exist.

RAISES DESCRIPTION
hopsworks.client.exceptions.RestAPIError

In case the backend fails to retrieve the feature view provenance.

[source] get_training_dataset_provenance #

get_training_dataset_provenance() -> explicit_provenance.Links

Get the parent training dataset of this model, based on explicit provenance.

This training dataset can be accessible, deleted, or inaccessible. For deleted and inaccessible training datasets, only minimal information is returned.

RETURNS DESCRIPTION
explicit_provenance.Links

Links: Object containing the requested section of the provenance graph, or None if it does not exist.

RAISES DESCRIPTION
hopsworks.client.exceptions.RestAPIError

In case the backend fails to retrieve the training dataset provenance.