Model#

Creation of a TensorFlow model#

[source]

create_model#

hsml.model_registry.ModelRegistry.tensorflow.create_model(
    name,
    version=None,
    metrics=None,
    description=None,
    input_example=None,
    model_schema=None,
    feature_view=None,
    training_dataset_version=None,
)

Create a TensorFlow model metadata object.

Lazy

This method is lazy and does not, on its own, persist any metadata or upload model artifacts to the model registry. To save the model object and the model artifacts, call the save() method with a local file path to the directory containing the model artifacts.
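
Example

A minimal sketch of the lazy create-then-save flow, assuming a trained TensorFlow model has already been exported locally; the model name, metric value, and export directory are illustrative:

import hopsworks

project = hopsworks.login()

# get Hopsworks Model Registry handle
mr = project.get_model_registry()

# lazy: this only builds the metadata object, nothing is persisted yet
tf_model = mr.tensorflow.create_model(
    name="my_tf_model",
    metrics={"accuracy": 0.92},
    description="TensorFlow classifier",
)

# persist the metadata and upload the artifacts from the local export directory
tf_model.save("/tmp/my_tf_model")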

Arguments

  • name str: Name of the model to create.
  • version int | None: Optional version of the model to create. Defaults to None, in which case the version is incremented from the latest version in the model registry.
  • metrics dict | None: Optional dictionary of model evaluation metrics (e.g., accuracy, MAE).
  • description str | None: Optional description of the model. Defaults to the empty string "".
  • input_example pandas.DataFrame | pandas.core.series.Series | numpy.ndarray | list | None: Optional input example that represents a single input for the model. Defaults to None.
  • model_schema hsml.model_schema.ModelSchema | None: Optional model schema for the model inputs and/or outputs.

Returns

Model: The model metadata object.


Creation of a Torch model#

[source]

create_model#

hsml.model_registry.ModelRegistry.torch.create_model(
    name,
    version=None,
    metrics=None,
    description=None,
    input_example=None,
    model_schema=None,
    feature_view=None,
    training_dataset_version=None,
)

Create a Torch model metadata object.

Lazy

This method is lazy and does not, on its own, persist any metadata or upload model artifacts to the model registry. To save the model object and the model artifacts, call the save() method with a local file path to the directory containing the model artifacts.
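
Example

A minimal sketch, reusing the registry handle from the TensorFlow example above; the model name, metric value, input example, and export directory are illustrative:

import numpy as np

# lazy: only the metadata object is created here
torch_model = mr.torch.create_model(
    name="my_torch_model",
    metrics={"loss": 0.05},
    input_example=np.zeros((1, 784)),  # a single illustrative model input
)

# persist the metadata and upload the artifacts
torch_model.save("/tmp/my_torch_model")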

Arguments

  • name str: Name of the model to create.
  • version int | None: Optional version of the model to create. Defaults to None, in which case the version is incremented from the latest version in the model registry.
  • metrics dict | None: Optional dictionary of model evaluation metrics (e.g., accuracy, MAE).
  • description str | None: Optional description of the model. Defaults to the empty string "".
  • input_example pandas.DataFrame | pandas.core.series.Series | numpy.ndarray | list | None: Optional input example that represents a single input for the model. Defaults to None.
  • model_schema hsml.model_schema.ModelSchema | None: Optional model schema for the model inputs and/or outputs.

Returns

Model: The model metadata object.


Creation of a scikit-learn model#

[source]

create_model#

hsml.model_registry.ModelRegistry.sklearn.create_model(
    name,
    version=None,
    metrics=None,
    description=None,
    input_example=None,
    model_schema=None,
    feature_view=None,
    training_dataset_version=None,
)

Create a scikit-learn model metadata object.

Lazy

This method is lazy and does not, on its own, persist any metadata or upload model artifacts to the model registry. To save the model object and the model artifacts, call the save() method with a local file path to the directory containing the model artifacts.
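
Example

A minimal sketch, reusing the registry handle from the examples above; the model name, metric value, and export directory are illustrative:

# lazy: only the metadata object is created here
sk_model = mr.sklearn.create_model(
    name="my_sklearn_model",
    metrics={"f1_score": 0.88},
    description="scikit-learn classifier",
)

# persist the metadata and upload the artifacts (e.g., a pickled estimator)
sk_model.save("/tmp/my_sklearn_model")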

Arguments

  • name str: Name of the model to create.
  • version int | None: Optional version of the model to create. Defaults to None, in which case the version is incremented from the latest version in the model registry.
  • metrics dict | None: Optional dictionary of model evaluation metrics (e.g., accuracy, MAE).
  • description str | None: Optional description of the model. Defaults to the empty string "".
  • input_example pandas.DataFrame | pandas.core.series.Series | numpy.ndarray | list | None: Optional input example that represents a single input for the model. Defaults to None.
  • model_schema hsml.model_schema.ModelSchema | None: Optional model schema for the model inputs and/or outputs.

Returns

Model: The model metadata object.


Creation of a generic model#

[source]

create_model#

hsml.model_registry.ModelRegistry.python.create_model(
    name,
    version=None,
    metrics=None,
    description=None,
    input_example=None,
    model_schema=None,
    feature_view=None,
    training_dataset_version=None,
)

Create a generic Python model metadata object.

Lazy

This method is lazy and does not, on its own, persist any metadata or upload model artifacts to the model registry. To save the model object and the model artifacts, call the save() method with a local file path to the directory containing the model artifacts.
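
Example

A minimal sketch, reusing the registry handle from the examples above and pinning an explicit version; the model name, metric value, and export directory are illustrative:

# lazy: only the metadata object is created here
py_model = mr.python.create_model(
    name="my_generic_model",
    version=2,                    # pin an explicit version instead of auto-incrementing
    metrics={"rmse": 3.1},
)

# persist the metadata and upload the artifacts
py_model.save("/tmp/my_generic_model")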

Arguments

  • name str: Name of the model to create.
  • version int | None: Optional version of the model to create. Defaults to None, in which case the version is incremented from the latest version in the model registry.
  • metrics dict | None: Optional dictionary of model evaluation metrics (e.g., accuracy, MAE).
  • description str | None: Optional description of the model. Defaults to the empty string "".
  • input_example pandas.DataFrame | pandas.core.series.Series | numpy.ndarray | list | None: Optional input example that represents a single input for the model. Defaults to None.
  • model_schema hsml.model_schema.ModelSchema | None: Optional model schema for the model inputs and/or outputs.

Returns

Model: The model metadata object.


Retrieval#

[source]

get_model#

ModelRegistry.get_model(name, version=None)

Get a model entity from the model registry. Getting a model from the Model Registry means getting its metadata handle so you can subsequently download the model directory.
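
Example

A minimal retrieval sketch; the model name and version are illustrative:

import hopsworks

project = hopsworks.login()

# get Hopsworks Model Registry handle
mr = project.get_model_registry()

# get the metadata handle for a specific model version
my_model = mr.get_model("my_model", version=1)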

Arguments

  • name str: Name of the model to get.
  • version int | None: Version of the model to retrieve. Defaults to None, in which case version 1 is returned.

Returns

Model: The model metadata object.

Raises

  • RestAPIError: If unable to retrieve the model from the model registry.

Properties#

[source]

created#

Creation date of the model.


[source]

creator#

Creator of the model.


[source]

description#

Description of the model.


[source]

environment#

Environment of the model.


[source]

framework#

Framework of the model.


[source]

id#

Id of the model.


[source]

input_example#

Input example of the model.


[source]

model_files_path#

Path of the model files, including the version and Files folders. Resolves to /Projects/{project_name}/Models/{name}/{version}/Files.


[source]

model_path#

Path of the model with the version folder omitted. Resolves to /Projects/{project_name}/Models/{name}.


[source]

model_registry_id#

Model registry ID of the model.


[source]

model_schema#

Model schema of the model.


[source]

name#

Name of the model.


[source]

program#

Executable used to export the model.


[source]

project_name#

Project name of the model.


[source]

shared_registry_project_name#

Shared registry project name of the model.


[source]

training_dataset#

Training dataset of the model.


[source]

training_dataset_version#

Training dataset version of the model.

[source]

training_metrics#

Training metrics of the model.


[source]

user#

User of the model.


[source]

version#

Version of the model.


[source]

version_path#

Path of the model, including the version folder. Resolves to /Projects/{project_name}/Models/{name}/{version}.


Methods#

[source]

delete#

Model.delete()

Delete the model.

Potentially dangerous operation

This operation drops all metadata associated with this version of the model and deletes the model files.
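
Example

A minimal sketch, reusing a registry handle as in the retrieval example above; the model name and version are illustrative and the operation is irreversible:

my_model = mr.get_model("my_model", version=1)

# drops this version's metadata and deletes its model files
my_model.delete()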

Raises

RestAPIError in case the backend fails to delete the model.


[source]

delete_tag#

Model.delete_tag(name)

Delete a tag attached to a model.

Arguments

  • name str: Name of the tag to be removed.

Raises

RestAPIError in case the backend fails to delete the tag.


[source]

deploy#

Model.deploy(
    name=None,
    description=None,
    artifact_version="CREATE",
    serving_tool=None,
    script_file=None,
    resources=None,
    inference_logger=None,
    inference_batcher=None,
    transformer=None,
    api_protocol="REST",
    environment=None,
)

Deploy the model.

Example

import hopsworks

project = hopsworks.login()

# get Hopsworks Model Registry handle
mr = project.get_model_registry()

# retrieve the trained model you want to deploy
my_model = mr.get_model("my_model", version=1)

my_deployment = my_model.deploy()

Arguments

  • name str | None: Name of the deployment.
  • description str | None: Description of the deployment.
  • artifact_version str | None: Version number of the model artifact to deploy, CREATE to create a new model artifact or MODEL-ONLY to reuse the shared artifact containing only the model files.
  • serving_tool str | None: Serving tool used to deploy the model server.
  • script_file str | None: Path to a custom predictor script implementing the Predict class.
  • resources hsml.resources.PredictorResources | dict | None: Resources to be allocated for the predictor.
  • inference_logger hsml.inference_logger.InferenceLogger | dict | None: Inference logger configuration.
  • inference_batcher hsml.inference_batcher.InferenceBatcher | dict | None: Inference batcher configuration.
  • transformer hsml.transformer.Transformer | dict | None: Transformer to be deployed together with the predictor.
  • api_protocol str | None: API protocol to be enabled in the deployment (i.e., 'REST' or 'GRPC'). Defaults to 'REST'.
  • environment str | None: The inference environment to use.

Returns

Deployment: The deployment metadata object of a new or existing deployment.


[source]

download#

Model.download(local_path=None)

Download the model files.
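
Example

A minimal sketch; my_model is a model metadata object retrieved as shown above, and the target path is illustrative:

# download to a location chosen by the library and get its absolute path
local_dir = my_model.download()

# or download into a specific local folder
local_dir = my_model.download(local_path="/tmp/my_model")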

Arguments

  • local_path: Path in the local filesystem where the model files will be downloaded.

Returns

str: Absolute path to local folder containing the model files.


[source]

get_feature_view#

Model.get_feature_view(init=True, online=None)

Get the parent feature view of this model, based on explicit provenance. Only accessible, usable feature view objects are returned; otherwise an exception is raised. For more details, call the base method get_feature_view_provenance().
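
Example

A minimal sketch; my_model is a model metadata object retrieved as shown above:

# resolve the parent feature view via explicit provenance
fv = my_model.get_feature_view()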

Returns

FeatureView: Feature View Object.

Raises

Exception in case the backend fails to retrieve the feature view.


[source]

get_feature_view_provenance#

Model.get_feature_view_provenance()

Get the parent feature view of this model, based on explicit provenance. This feature view can be accessible, deleted or inaccessible. For deleted and inaccessible feature views, only minimal information is returned.

Returns

ProvenanceLinks: Object containing the section of provenance graph requested.


[source]

get_tag#

Model.get_tag(name)

Get a tag attached to a model by name.

Arguments

  • name str: Name of the tag to get.

Returns

The value of the tag.

Raises

RestAPIError in case the backend fails to retrieve the tag.


[source]

get_tags#

Model.get_tags()

Retrieves all tags attached to a model.
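
Example

A minimal sketch; my_model is a model metadata object retrieved as shown above, and the tag name is illustrative:

# all tags attached to the model
tags = my_model.get_tags()

# a single tag value, using get_tag documented above
quality = my_model.get_tag("quality")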

Returns

Dict[str, obj] of tags.

Raises

RestAPIError in case the backend fails to retrieve the tags.


[source]

get_training_dataset_provenance#

Model.get_training_dataset_provenance()

Get the parent training dataset of this model, based on explicit provenance. This training dataset can be accessible, deleted or inaccessible. For deleted and inaccessible training datasets, only minimal information is returned.

Returns

ProvenanceLinks: Object containing the section of provenance graph requested.


[source]

get_url#

Model.get_url()

Get the URL of the model in the Hopsworks UI.

[source]

save#

Model.save(
    model_path, await_registration=480, keep_original_files=False, upload_configuration=None
)

Persist this model including model files and metadata to the model registry.
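
Example

A minimal sketch of persisting a model created with one of the create_model methods above; the local path and values are illustrative, and the upload_configuration keys are the ones documented below:

model = mr.python.create_model(name="my_generic_model")

model.save(
    "/tmp/my_generic_model",
    upload_configuration={
        "chunk_size": 50,             # upload in 50 MB chunks instead of the default 10
        "simultaneous_uploads": 5,    # upload 5 chunks in parallel
        "max_chunk_retries": 2,       # retry a failed chunk upload up to 2 times
    },
)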

Arguments

  • model_path: Local or remote (Hopsworks file system) path to the folder where the model files are located, or path to a specific model file.
  • await_registration: Time to wait for the model to be registered in Hopsworks.
  • keep_original_files: If the model files are located in HopsFS, whether to keep the original files (copy them) or move them into the Models dataset. Defaults to False (i.e., the model files are moved).
  • upload_configuration Dict[str, Any] | None: When saving a model from outside Hopsworks, the model is uploaded to the model registry using the REST APIs. Each model artifact is divided into chunks and each chunk is uploaded independently. This parameter can be used to control the upload chunk size, the parallelism, and the number of retries. upload_configuration can contain the following keys:
    • key chunk_size: size of each chunk in megabytes. Default 10.
    • key simultaneous_uploads: number of chunks to upload in parallel. Default 3.
    • key max_chunk_retries: number of times to retry the upload of a chunk in case of failure. Default 1.

Returns

Model: The model metadata object.


[source]

set_tag#

Model.set_tag(name, value)

Attach a tag to a model.

A tag consists of a name/value pair. Tag names are unique identifiers across the whole cluster. The value of a tag can be any valid JSON: primitives, arrays or JSON objects.
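
Example

A minimal sketch; my_model is a model metadata object retrieved as shown above, and the tag name and value are illustrative:

# attach a JSON-valued tag to the model
my_model.set_tag("quality", {"stage": "production", "approved": True})

# remove it again with delete_tag, documented above
my_model.delete_tag("quality")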

Arguments

  • name str: Name of the tag to be added.
  • value str | dict: Value of the tag to be added.

Raises

RestAPIError in case the backend fails to add the tag.