Predictor#
Handle#
get_model_serving#
Connection.get_model_serving()
Get a reference to model serving to perform operations on. Model serving operates on top of a model registry, defaulting to the project's default model registry.
Example
import hopsworks
project = hopsworks.login()
ms = project.get_model_serving()
Returns
ModelServing. A model serving handle object to perform operations on.
Creation#
create_predictor#
ModelServing.create_predictor(
model,
name=None,
artifact_version="CREATE",
serving_tool=None,
script_file=None,
resources=None,
inference_logger=None,
inference_batcher=None,
transformer=None,
api_protocol="REST",
)
Create a Predictor metadata object.
Example
# log in to Hopsworks using hopsworks.login()
# get Hopsworks Model Registry handle
mr = project.get_model_registry()
# retrieve the trained model you want to deploy
my_model = mr.get_model("my_model", version=1)
# get Hopsworks Model Serving handle
ms = project.get_model_serving()
my_predictor = ms.create_predictor(my_model)
my_deployment = my_predictor.deploy()
Lazy
This method is lazy and does not persist any metadata or deploy any model on its own. To create a deployment using this predictor, call the deploy() method.
Arguments
- model: hsml.model.Model. Model to be deployed.
- name: str | None. Name of the predictor.
- artifact_version: str | None. Version number of the model artifact to deploy, CREATE to create a new model artifact, or MODEL-ONLY to reuse the shared artifact containing only the model files.
- serving_tool: str | None. Serving tool used to deploy the model server.
- script_file: str | None. Path to a custom predictor script implementing the Predict class.
- resources: hsml.resources.PredictorResources | dict | None. Resources to be allocated for the predictor (see the example below).
- inference_logger: hsml.inference_logger.InferenceLogger | dict | str | None. Inference logger configuration.
- inference_batcher: hsml.inference_batcher.InferenceBatcher | dict | None. Inference batcher configuration.
- transformer: hsml.transformer.Transformer | dict | None. Transformer to be deployed together with the predictor.
- api_protocol: str | None. API protocol to be enabled in the deployment (i.e., 'REST' or 'GRPC'). Defaults to 'REST'.
Returns
Predictor. The predictor metadata object.
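The optional arguments compose as in this sketch; the resource keys, the transformer script path, and the predictor name below are illustrative assumptions, not values prescribed by the API.
Example
# Hypothetical example: create a predictor with explicit resources and a
# transformer. Resource keys and the script path are illustrative.
from hsml.transformer import Transformer
my_predictor = ms.create_predictor(
    my_model,
    name="mymodel",
    resources={"num_instances": 1, "requests": {"cores": 1, "memory": 1024, "gpus": 0}},
    transformer=Transformer(script_file="/Projects/my_project/Resources/transformer.py"),
    api_protocol="REST",
)
my_deployment = my_predictor.deploy()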
Retrieval#
deployment.predictor#
Predictors can be accessed from the deployment metadata objects.
deployment.predictor
To retrieve a deployment, see the Deployment Reference.
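For instance, a predictor can be read off a retrieved deployment; a minimal sketch, assuming a deployment named "mymodel" already exists in the project:
Example
import hopsworks
project = hopsworks.login()
ms = project.get_model_serving()
# "mymodel" is an illustrative deployment name
my_deployment = ms.get_deployment("mymodel")
my_predictor = my_deployment.predictor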
Properties#
api_protocol#
API protocol enabled in the predictor (i.e., 'REST' or 'GRPC').
artifact_files_path#
Path of the artifact files deployed by the predictor.
artifact_path#
Path of the model artifact deployed by the predictor. Resolves to /Projects/{project_name}/Models/{name}/{version}/Artifacts/{artifact_version}/{name}{version}.zip.
artifact_version#
Artifact version deployed by the predictor.
created_at#
Created at date of the predictor.
creator#
Creator of the predictor.
description#
Description of the predictor.
environment#
Name of the inference environment.
id#
Id of the predictor.
inference_batcher#
Configuration of the inference batcher attached to the deployment component (i.e., predictor or transformer).
inference_logger#
Configuration of the inference logger attached to this predictor.
model_framework#
Model framework of the model to be deployed by the predictor.
model_name#
Name of the model deployed by the predictor.
model_path#
Model path deployed by the predictor.
model_server#
Model server used by the predictor.
model_version#
Model version deployed by the predictor.
name#
Name of the predictor.
project_namespace#
Kubernetes project namespace.
requested_instances#
Total number of requested instances in the predictor.
resources#
Resource configuration for the deployment component (i.e., predictor or transformer).
script_file#
Script file used to load and run the model.
serving_tool#
Serving tool used to run the model server.
transformer#
Transformer configuration attached to the predictor.
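These properties are read directly from the predictor metadata object; a brief sketch, assuming my_predictor was created as in the examples above:
Example
# Inspect predictor metadata (properties listed in this section)
print(my_predictor.name)
print(my_predictor.model_name, my_predictor.model_version)
print(my_predictor.serving_tool)
print(my_predictor.requested_instances)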
Methods#
deploy#
Predictor.deploy()
Create a deployment for this predictor and persist it in Model Serving.
Example
import hopsworks
project = hopsworks.login()
# get Hopsworks Model Registry handle
mr = project.get_model_registry()
# retrieve the trained model you want to deploy
my_model = mr.get_model("my_model", version=1)
# get Hopsworks Model Serving handle
ms = project.get_model_serving()
my_predictor = ms.create_predictor(my_model)
my_deployment = my_predictor.deploy()
print(my_deployment.get_state())
Returns
Deployment
. The deployment metadata object of a new or existing deployment.
describe#
Predictor.describe()
Print a description of the predictor.
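A brief usage sketch, assuming my_predictor was created as in the examples above:
Example
my_predictor.describe()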
to_dict#
Predictor.to_dict()
To be implemented by the component type.