Feature View#
FeatureView#
hsfs.feature_view.FeatureView(
name,
query,
featurestore_id,
id=None,
version=None,
description="",
labels=None,
inference_helper_columns=None,
training_helper_columns=None,
transformation_functions=None,
featurestore_name=None,
serving_keys=None,
logging_enabled=False,
**kwargs
)
Creation#
create_feature_view#
FeatureStore.create_feature_view(
name,
query,
version=None,
description="",
labels=None,
inference_helper_columns=None,
training_helper_columns=None,
transformation_functions=None,
logging_enabled=False,
)
Create a feature view metadata object and save it to Hopsworks.
Example
# connect to the Feature Store
fs = ...
# get the feature group instances
fg1 = fs.get_or_create_feature_group(...)
fg2 = fs.get_or_create_feature_group(...)
# construct the query
query = fg1.select_all().join(fg2.select_all())
# define the transformation function as a Hopsworks UDF
@udf(int)
def plus_one(value):
    return value + 1
# construct a list of transformation functions to apply to features
transformation_functions = [plus_one("feature1"), plus_one("feature2")]
feature_view = fs.create_feature_view(
name='air_quality_fv',
version=1,
transformation_functions=transformation_functions,
query=query
)
Example
# get feature store instance
fs = ...
# define query object
query = ...
# define list of transformation functions
mapping_transformers = ...
# create feature view
feature_view = fs.create_feature_view(
name='feature_view_name',
version=1,
transformation_functions=mapping_transformers,
query=query
)
Warning
as_of
argument in the Query
will be ignored because feature views do not support time travel queries.
Arguments
- name
str
: Name of the feature view to create. - query
hsfs.constructor.query.Query
: Feature store Query
. - version
int | None
: Version of the feature view to create, defaults to None
and will create the feature view with incremented version from the last version in the feature store. - description
str | None
: A string describing the contents of the feature view to improve discoverability for Data Scientists, defaults to empty string ""
. - labels
List[str] | None
: A list of feature names constituting the prediction label/feature of the feature view. When replaying a Query
during model inference, the label features can be omitted from the feature vector retrieval. Defaults to []
, no label. - inference_helper_columns
List[str] | None
: A list of feature names that are not used in training the model itself but can be used during batch or online inference for extra information. Inference helper column name(s) must be part of the Query
object. If inference helper column name(s) belong to a feature group that is part of a Join
with a prefix
defined, then this prefix needs to be prepended to the original column name when defining the inference_helper_columns
list. When replaying a Query
during model inference, the inference helper columns can optionally be omitted during batch inference (get_batch_data
) and will be omitted during online inference (get_feature_vector(s)
). To get inference helper column(s) during online inference use the get_inference_helper(s)
method. Defaults to [], no helper columns. - training_helper_columns
List[str] | None
: A list of feature names that are not part of the model schema itself but can be used during training as a helper for extra information. Training helper column name(s) must be part of the Query
object. If training helper column name(s) belong to a feature group that is part of a Join
with a prefix
defined, then this prefix needs to be prepended to the original column name when defining the training_helper_columns
list. When replaying a Query
during model inference, the training helper columns will be omitted during both batch and online inference. Training helper columns can optionally be fetched with training data. For more details see the documentation for the feature view's get training data methods. Defaults to [], no training helper columns. - transformation_functions
List[hsfs.transformation_function.TransformationFunction | hsfs.hopsworks_udf.HopsworksUdf] | None
: Model-dependent transformation functions attached to the feature view. It can be a list of user-defined functions defined using the Hopsworks @udf
decorator. Defaults to None
, no transformations.
Returns:
FeatureView
: The feature view metadata object.
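For illustration, the following hedged sketch sets labels and inference helper columns at creation time; the feature group and the column names fraud_label and customer_zip are hypothetical, not part of this API.
# a minimal sketch, assuming a feature group whose schema contains a
# prediction target "fraud_label" and a column "customer_zip" that is only
# useful at inference time; both names are illustrative
fg = fs.get_or_create_feature_group(...)
feature_view = fs.create_feature_view(
    name='fraud_detection_fv',
    version=1,
    query=fg.select_all(),
    labels=['fraud_label'],                     # excluded from served feature vectors
    inference_helper_columns=['customer_zip'],  # retrievable via get_inference_helper(s)
)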
Retrieval#
get_feature_view#
FeatureStore.get_feature_view(name, version=None)
Get a feature view entity from the feature store.
Getting a feature view from the Feature Store means getting its metadata.
Example
# get feature store instance
fs = ...
# get feature view instance
feature_view = fs.get_feature_view(
name='feature_view_name',
version=1
)
Arguments
- name
str
: Name of the feature view to get. - version
int
: Version of the feature view to retrieve, defaults to None
and will return version=1
.
Returns
FeatureView
: The feature view metadata object.
Raises
hsfs.client.exceptions.RestAPIError
: If unable to retrieve feature view from the feature store.
get_feature_views#
FeatureStore.get_feature_views(name)
Get a list of all versions of a feature view entity from the feature store.
Getting a feature view from the Feature Store means getting its metadata.
Example
# get feature store instance
fs = ...
# get a list of all versions of a feature view
feature_view = fs.get_feature_views(
name='feature_view_name'
)
Arguments
- name
str
: Name of the feature view to get.
Returns
List[FeatureView]
: List of feature view metadata objects.
Raises
hsfs.client.exceptions.RestAPIError
: If unable to retrieve feature view from the feature store.
Properties#
description#
Description of the feature view.
feature_logging#
feature_store_name#
Name of the feature store in which the feature view is located.
features#
Schema of untransformed features in the Feature view. (alias)
featurestore_id#
Feature store id.
id#
Feature view id.
inference_helper_columns#
The inference helper columns of the feature view.
Can be a composite of multiple features.
labels#
The labels/prediction feature of the feature view.
Can be a composite of multiple features.
logging_enabled#
model_dependent_transformations#
Get model-dependent transformations as a dictionary mapping transformed feature names to transformation functions.
name#
Name of the feature view.
on_demand_transformations#
Get on-demand transformations as a dictionary mapping on-demand feature names to transformation functions.
primary_keys#
Set of primary key names that are required as keys in the input dict object for the get_feature_vector(s)
method. When there are duplicated primary key names and no prefix is defined in the query, a prefix is generated and prepended to the primary key name in the format "fgId_{feature_group_id}_{join_index}" where join_index
is the order of the join.
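As a hedged sketch, the set returned by this property can be used to build the entry dictionary expected by get_feature_vector(s); the key names below are illustrative, not part of this API.
# a minimal sketch: inspect the required serving keys and pass one value per key
required_keys = feature_view.primary_keys   # e.g. {"customer_id", "account_id"} (illustrative)
feature_view.get_feature_vector(
    entry={"customer_id": 1, "account_id": 42}
)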
query#
Query of the feature view.
schema#
Schema of untransformed features in the Feature view.
serving_keys#
All primary keys of the feature groups included in the query.
training_helper_columns#
The training helper columns of the feature view.
Can be a composite of multiple features.
transformation_functions#
Get transformation functions.
version#
Version number of the feature view.
Methods#
add_tag#
FeatureView.add_tag(name, value)
Attach a tag to a feature view.
A tag consists of a name and value pair. Tag names are unique identifiers across the whole cluster. The value of a tag can be any valid json - primitives, arrays or json objects.
Example
# get feature store instance
fs = ...
# get feature view instance
feature_view = fs.get_feature_view(...)
# attach a tag to a feature view
feature_view.add_tag(name="tag_schema", value={"key", "value"})
Arguments
- name
str
: Name of the tag to be added. - value
Any
: Value of the tag to be added.
Raises
hsfs.client.exceptions.RestAPIError
in case the backend fails to add the tag.
add_training_dataset_tag#
FeatureView.add_training_dataset_tag(training_dataset_version, name, value)
Attach a tag to a training dataset.
Example
# get feature store instance
fs = ...
# get feature view instance
feature_view = fs.get_feature_view(...)
# attach a tag to a training dataset
feature_view.add_training_dataset_tag(
training_dataset_version=1,
name="tag_schema",
value={"key", "value"}
)
Arguments
- training_dataset_version
int
: training dataset version - name
str
: Name of the tag to be added. - value
Dict[str, Any] | hopsworks_common.tag.Tag
: Value of the tag to be added.
Raises
hsfs.client.exceptions.RestAPIError
in case the backend fails to add the tag.
clean#
FeatureView.clean(feature_store_id, feature_view_name, feature_view_version)
Delete the feature view and all associated metadata and training data. This can delete a corrupted feature view that cannot be retrieved, for example due to a corrupted query.
Example
# delete a feature view and all associated metadata
from hsfs.feature_view import FeatureView
FeatureView.clean(
feature_store_id=1,
feature_view_name='feature_view_name',
feature_view_version=1
)
Potentially dangerous operation
This operation drops all metadata associated with this version of the feature view and related training dataset and materialized data in HopsFS.
Arguments
- feature_store_id
int
: Id of the feature store. - feature_view_name
str
: Name of the feature view. - feature_view_version
str
: Version of the feature view.
Raises
hsfs.client.exceptions.RestAPIError
.
compute_on_demand_features#
FeatureView.compute_on_demand_features(feature_vector, request_parameters=None, external=None)
Compute the on-demand features present in the feature view.
Arguments
- feature_vector
List[Any] | List[List[Any]] | pandas.DataFrame | polars.dataframe.frame.DataFrame
:Union[List[Any], List[List[Any]], pd.DataFrame, pl.DataFrame]
. The feature vector to be transformed. - request_parameters
List[Dict[str, Any]] | Dict[str, Any] | None
: Request parameters required by on-demand transformation functions to compute on-demand features present in the feature view.
Returns
Union[List[Any], List[List[Any]], pd.DataFrame, pl.DataFrame]
: The feature vector that contains all on-demand features in the feature view.
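The sketch below shows one hedged way to combine this method with an untransformed feature vector; the request parameter name current_time is an assumption for illustration, not part of this API.
# a minimal sketch: fetch an untransformed vector, then compute on-demand features
untransformed_vector = feature_view.get_feature_vector(
    entry={"pk1": 1, "pk2": 2},
    transform=False,
)
feature_vector = feature_view.compute_on_demand_features(
    feature_vector=untransformed_vector,
    request_parameters={"current_time": "2024-01-01 00:00:00"},  # illustrative parameter
)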
create_feature_monitoring#
FeatureView.create_feature_monitoring(
name,
feature_name,
description=None,
start_date_time=None,
end_date_time=None,
cron_expression="0 0 12 ? * * *",
)
Enable feature monitoring to compare statistics on snapshots of feature data over time.
Experimental
Public API is subject to change, this feature is not suitable for production use-cases.
Example
# fetch feature view
fg = fs.get_feature_view(name="my_feature_view", version=1)
# enable feature monitoring
my_config = fg.create_feature_monitoring(
name="my_monitoring_config",
feature_name="my_feature",
description="my monitoring config description",
cron_expression="0 0 12 ? * * *",
).with_detection_window(
# Data inserted in the last day
time_offset="1d",
window_length="1d",
).with_reference_window(
# compare to a given value
specific_value=0.5,
).compare_on(
metric="mean",
threshold=0.5,
).save()
Arguments
- name
str
: Name of the feature monitoring configuration. name must be unique for all configurations attached to the feature view. - feature_name
str
: Name of the feature to monitor. - description
str | None
: Description of the feature monitoring configuration. - start_date_time
int | str | datetime.datetime | datetime.date | pandas._libs.tslibs.timestamps.Timestamp | None
: Start date and time from which to start computing statistics. - end_date_time
int | str | datetime.datetime | datetime.date | pandas._libs.tslibs.timestamps.Timestamp | None
: End date and time at which to stop computing statistics. - cron_expression
str | None
: Cron expression to use to schedule the job. The cron expression must be in UTC and follow the Quartz specification. Default is '0 0 12 ? * * *', every day at 12pm UTC.
Raises
hsfs.client.exceptions.FeatureStoreException
.
Return
FeatureMonitoringConfig
Configuration with minimal information about the feature monitoring. Additional information is required before feature monitoring is enabled.
create_statistics_monitoring#
FeatureView.create_statistics_monitoring(
name,
feature_name=None,
description=None,
start_date_time=None,
end_date_time=None,
cron_expression="0 0 12 ? * * *",
)
Run a job to compute statistics on snapshot of feature data on a schedule.
Experimental
Public API is subject to change, this feature is not suitable for production use-cases.
Example
# fetch feature view
fv = fs.get_feature_view(name="my_feature_view", version=1)
# enable statistics monitoring
my_config = fv.create_statistics_monitoring(
name="my_config",
start_date_time="2021-01-01 00:00:00",
description="my description",
cron_expression="0 0 12 ? * * *",
).with_detection_window(
# Statistics computed on 10% of the last week of data
time_offset="1w",
row_percentage=0.1,
).save()
Arguments
- name
str
: Name of the feature monitoring configuration. name must be unique for all configurations attached to the feature view. - feature_name
str | None
: Name of the feature to monitor. If not specified, statistics will be computed for all features. - description
str | None
: Description of the feature monitoring configuration. - start_date_time
int | str | datetime.datetime | datetime.date | pandas._libs.tslibs.timestamps.Timestamp | None
: Start date and time from which to start computing statistics. - end_date_time
int | str | datetime.datetime | datetime.date | pandas._libs.tslibs.timestamps.Timestamp | None
: End date and time at which to stop computing statistics. - cron_expression
str | None
: Cron expression to use to schedule the job. The cron expression must be in UTC and follow the Quartz specification. Default is '0 0 12 ? * * *', every day at 12pm UTC.
Raises
hsfs.client.exceptions.FeatureStoreException
.
Return
FeatureMonitoringConfig
Configuration with minimal information about the feature monitoring. Additional information is required before feature monitoring is enabled.
create_train_test_split#
FeatureView.create_train_test_split(
test_size=None,
train_start="",
train_end="",
test_start="",
test_end="",
storage_connector=None,
location="",
description="",
extra_filter=None,
data_format="parquet",
coalesce=False,
seed=None,
statistics_config=None,
write_options=None,
spine=None,
primary_key=False,
event_time=False,
training_helper_columns=False,
**kwargs
)
Create the metadata for a training dataset and save the corresponding training data into location
. The training data is split into train and test set at random or according to time ranges. The training data can be retrieved by calling feature_view.get_train_test_split
.
Create random splits
# get feature store instance
fs = ...
# get feature view instance
feature_view = fs.get_feature_view(...)
# create a train-test split dataset
version, job = feature_view.create_train_test_split(
test_size=0.2,
description='Description of a dataset',
# you can have different data formats such as csv, tsv, tfrecord, parquet and others
data_format='csv'
)
Create time series splits by specifying date as string
# get feature store instance
fs = ...
# get feature view instance
feature_view = fs.get_feature_view(...)
# set up dates
train_start = "2022-01-01 00:00:00"
train_end = "2022-06-06 23:59:59"
test_start = "2022-06-07 00:00:00"
test_end = "2022-12-25 23:59:59"
# create a train-test split dataset
version, job = feature_view.create_train_test_split(
train_start=train_start,
train_end=train_end,
test_start=test_start,
test_end=test_end,
description='Description of a dataset',
# you can have different data formats such as csv, tsv, tfrecord, parquet and others
data_format='csv'
)
Create time series splits by specifying date as datetime object
# get feature store instance
fs = ...
# get feature view instance
feature_view = fs.get_feature_view(...)
# set up dates
from datetime import datetime
date_format = "%Y-%m-%d %H:%M:%S"
train_start = datetime.strptime("2022-01-01 00:00:00", date_format)
train_end = datetime.strptime("2022-06-06 23:59:59", date_format)
test_start = datetime.strptime("2022-06-07 00:00:00", date_format)
test_end = datetime.strptime("2022-12-25 23:59:59" , date_format)
# create a train-test split dataset
version, job = feature_view.create_train_test_split(
train_start=train_start,
train_end=train_end,
test_start=test_start,
test_end=test_end,
description='Description of a dataset',
# you can have different data formats such as csv, tsv, tfrecord, parquet and others
data_format='csv'
)
Write training dataset to external storage
# get feature store instance
fs = ...
# get feature view instance
feature_view = fs.get_feature_view(...)
# get storage connector instance
external_storage_connector = fs.get_storage_connector("storage_connector_name")
# create a train-test split dataset
version, job = feature_view.create_train_test_split(
train_start=...,
train_end=...,
test_start=...,
test_end=...,
storage_connector = external_storage_connector,
description=...,
# you can have different data formats such as csv, tsv, tfrecord, parquet and others
data_format=...
)
Data Formats
The feature store currently supports the following data formats for training datasets:
- tfrecord
- csv
- tsv
- parquet
- avro
- orc
Currently not supported: petastorm, hdf5 and npy file formats.
Warning: the following code will fail because the category column contains sparse values and the training dataset may not have all values available in the test split.
import pandas as pd
df = pd.DataFrame({
'category_col':['category_a','category_b','category_c','category_d'],
'numeric_col': [40,10,60,40]
})
feature_group = fs.get_or_create_feature_group(
name='feature_group_name',
version=1,
primary_key=['category_col']
)
feature_group.insert(df)
label_encoder = fs.get_transformation_function(name='label_encoder')
feature_view = fs.create_feature_view(
name='feature_view_name',
query=feature_group.select_all(),
transformation_functions={'category_col':label_encoder}
)
feature_view.create_train_test_split(
test_size=0.5
)
# Output: KeyError: 'category_c'
Spine Groups/Dataframes
Spine groups and dataframes are currently only supported with the Spark engine and Spark dataframes.
Arguments
- test_size
float | None
: size of test set. - train_start
int | str | datetime.datetime | datetime.date | None
: Start event time for the train split query, inclusive. Strings should be formatted in one of the following formats%Y-%m-%d
,%Y-%m-%d %H
,%Y-%m-%d %H:%M
,%Y-%m-%d %H:%M:%S
, or%Y-%m-%d %H:%M:%S.%f
. Int, i.e Unix Epoch should be in seconds. - train_end
int | str | datetime.datetime | datetime.date | None
: End event time for the train split query, exclusive. Strings should be formatted in one of the following formats%Y-%m-%d
,%Y-%m-%d %H
,%Y-%m-%d %H:%M
,%Y-%m-%d %H:%M:%S
, or%Y-%m-%d %H:%M:%S.%f
. Int, i.e Unix Epoch should be in seconds. - test_start
int | str | datetime.datetime | datetime.date | None
: Start event time for the test split query, inclusive. Strings should be formatted in one of the following formats%Y-%m-%d
,%Y-%m-%d %H
,%Y-%m-%d %H:%M
,%Y-%m-%d %H:%M:%S
, or%Y-%m-%d %H:%M:%S.%f
. Int, i.e Unix Epoch should be in seconds. - test_end
int | str | datetime.datetime | datetime.date | None
: End event time for the test split query, exclusive. Strings should be formatted in one of the following formats %Y-%m-%d
,%Y-%m-%d %H
,%Y-%m-%d %H:%M
,%Y-%m-%d %H:%M:%S
, or%Y-%m-%d %H:%M:%S.%f
. Int, i.e Unix Epoch should be in seconds. - storage_connector
hsfs.StorageConnector | None
: Storage connector defining the sink location for the training dataset, defaults toNone
, and materializes training dataset on HopsFS. - location
str | None
: Path to complement the sink storage connector with, e.g if the storage connector points to an S3 bucket, this path can be used to define a sub-directory inside the bucket to place the training dataset. Defaults to""
, saving the training dataset at the root defined by the storage connector. - description
str | None
: A string describing the contents of the training dataset to improve discoverability for Data Scientists, defaults to empty string""
. - extra_filter
hsfs.constructor.filter.Filter | hsfs.constructor.filter.Logic | None
: Additional filters to be attached to the training dataset. The filters will be also applied inget_batch_data
. - data_format
str | None
: The data format used to save the training dataset, defaults to"parquet"
-format. - coalesce
bool | None
: If true the training dataset data will be coalesced into a single partition before writing. The resulting training dataset will be a single file per split. Default False. - seed
int | None
: Optionally, define a seed to create the random splits with, in order to guarantee reproducibility, defaults to None
. - statistics_config
hsfs.StatisticsConfig | bool | dict | None
: A configuration object, or a dictionary with keys "enabled
" to generally enable descriptive statistics computation for this feature group,"correlations
" to turn on feature correlation computation and"histograms"
to compute feature value frequencies. The values should be booleans indicating the setting. To fully turn off statistics computation passstatistics_config=False
. Defaults toNone
and will compute only descriptive statistics. - write_options
Dict[Any, Any] | None
: Additional options as key/value pairs to pass to the execution engine. For spark engine: Dictionary of read options for Spark. When using thepython
engine, write_options can contain the following entries:- key
use_spark
and valueTrue
to materialize training dataset with Spark instead of Hopsworks Feature Query Service. - key
spark
and value an object of type hsfs.core.job_configuration.JobConfiguration to configure the Hopsworks Job used to compute the training dataset. - key
wait_for_job
and valueTrue
orFalse
to configure whether or not the save call should return only after the Hopsworks Job has finished. By default it waits. Defaults to {}
.
- key
- spine
pandas.DataFrame | hsfs.feature_view.pyspark.sql.DataFrame | hsfs.feature_view.pyspark.RDD | numpy.ndarray | List[List[Any]] | hsfs.feature_view.SpineGroup | None
: Spine dataframe with primary key, event time and label column to use for point in time join when fetching features. Defaults toNone
and is only required when feature view was created with spine group in the feature query. It is possible to directly pass a spine group instead of a dataframe to overwrite the left side of the feature join, however, the same features as in the original feature group that is being replaced need to be available in the spine group. - primary_key
bool
: whether to include primary key features or not. Defaults toFalse
, no primary key features. - event_time
bool
: whether to include event time feature or not. Defaults toFalse
, no event time feature. - training_helper_columns
bool
: whether to include training helper columns or not. Training helper columns are a list of feature names in the feature view, defined during its creation, that are not the part of the model schema itself but can be used during training as a helper for extra information. If training helper columns were not defined in the feature view thentraining_helper_columns=True
will not have any effect. Defaults toFalse
, no training helper columns.
Returns
(td_version, Job
): Tuple of training dataset version and job. When using the python
engine, it returns the Hopsworks Job that was launched to create the training dataset.
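As a hedged follow-up, the returned version can be passed back to feature_view.get_train_test_split to read the materialized split; the keyword name training_dataset_version is assumed here, see that method's documentation for the exact signature.
# a minimal sketch: create the split, then read it back by version
version, job = feature_view.create_train_test_split(
    test_size=0.2,
    data_format='csv'
)
X_train, X_test, y_train, y_test = feature_view.get_train_test_split(
    training_dataset_version=version  # keyword name assumed
)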
create_train_validation_test_split#
FeatureView.create_train_validation_test_split(
validation_size=None,
test_size=None,
train_start="",
train_end="",
validation_start="",
validation_end="",
test_start="",
test_end="",
storage_connector=None,
location="",
description="",
extra_filter=None,
data_format="parquet",
coalesce=False,
seed=None,
statistics_config=None,
write_options=None,
spine=None,
primary_key=False,
event_time=False,
training_helper_columns=False,
**kwargs
)
Create the metadata for a training dataset and save the corresponding training data into location
. The training data is split into train, validation, and test set at random or according to time range. The training data can be retrieved by calling feature_view.get_train_validation_test_split
.
Create random splits
# get feature store instance
fs = ...
# get feature view instance
feature_view = fs.get_feature_view(...)
# create a train-validation-test split dataset
version, job = feature_view.create_train_validation_test_split(
validation_size=0.3,
test_size=0.2,
description='Description of a dataset',
data_format='csv'
)
Create time series splits by specifying date as string
# get feature store instance
fs = ...
# get feature view instance
feature_view = fs.get_feature_view(...)
# set up dates
train_start = "2022-01-01 00:00:00"
train_end = "2022-06-01 23:59:59"
validation_start = "2022-06-02 00:00:00"
validation_end = "2022-07-01 23:59:59"
test_start = "2022-07-02 00:00:00"
test_end = "2022-08-01 23:59:59"
# create a train-validation-test split dataset
version, job = feature_view.create_train_validation_test_split(
train_start=train_start,
train_end=train_end,
validation_start=validation_start,
validation_end=validation_end,
test_start=test_start,
test_end=test_end,
description='Description of a dataset',
# you can have different data formats such as csv, tsv, tfrecord, parquet and others
data_format='csv'
)
Create time series splits by specifying date as datetime object
# get feature store instance
fs = ...
# get feature view instance
feature_view = fs.get_feature_view(...)
# set up dates
from datetime import datetime
date_format = "%Y-%m-%d %H:%M:%S"
train_start = datetime.strptime("2022-01-01 00:00:00", date_format)
train_end = datetime.strptime("2022-06-06 23:59:59", date_format)
validation_start = datetime.strptime("2022-06-02 00:00:00", date_format)
validation_end = datetime.strptime("2022-07-01 23:59:59", date_format)
test_start = datetime.strptime("2022-06-07 00:00:00", date_format)
test_end = datetime.strptime("2022-12-25 23:59:59", date_format)
# create a train-validation-test split dataset
version, job = feature_view.create_train_validation_test_split(
train_start=train_start,
train_end=train_end,
validation_start=validation_start,
validation_end=validation_end,
test_start=test_start,
test_end=test_end,
description='Description of a dataset',
# you can have different data formats such as csv, tsv, tfrecord, parquet and others
data_format='csv'
)
Write training dataset to external storage
# get feature store instance
fs = ...
# get feature view instance
feature_view = fs.get_feature_view(...)
# get storage connector instance
external_storage_connector = fs.get_storage_connector("storage_connector_name")
# create a train-validation-test split dataset
version, job = feature_view.create_train_validation_test_split(
train_start=...,
train_end=...,
validation_start=...,
validation_end=...,
test_start=...,
test_end=...,
description=...,
storage_connector = external_storage_connector,
# you can have different data formats such as csv, tsv, tfrecord, parquet and others
data_format=...
)
Data Formats
The feature store currently supports the following data formats for training datasets:
- tfrecord
- csv
- tsv
- parquet
- avro
- orc
Currently not supported: petastorm, hdf5 and npy file formats.
Spine Groups/Dataframes
Spine groups and dataframes are currently only supported with the Spark engine and Spark dataframes.
Arguments
- validation_size
float | None
: size of validation set. - test_size
float | None
: size of test set. - train_start
int | str | datetime.datetime | datetime.date | None
: Start event time for the train split query, inclusive. Strings should be formatted in one of the following formats%Y-%m-%d
,%Y-%m-%d %H
,%Y-%m-%d %H:%M
,%Y-%m-%d %H:%M:%S
, or%Y-%m-%d %H:%M:%S.%f
. Int, i.e Unix Epoch should be in seconds. - train_end
int | str | datetime.datetime | datetime.date | None
: End event time for the train split query, exclusive. Strings should be formatted in one of the following formats%Y-%m-%d
,%Y-%m-%d %H
,%Y-%m-%d %H:%M
,%Y-%m-%d %H:%M:%S
, or%Y-%m-%d %H:%M:%S.%f
. Int, i.e Unix Epoch should be in seconds. - validation_start
int | str | datetime.datetime | datetime.date | None
: Start event time for the validation split query, inclusive. Strings should be formatted in one of the following formats%Y-%m-%d
,%Y-%m-%d %H
,%Y-%m-%d %H:%M
,%Y-%m-%d %H:%M:%S
, or%Y-%m-%d %H:%M:%S.%f
. Int, i.e Unix Epoch should be in seconds. - validation_end
int | str | datetime.datetime | datetime.date | None
: End event time for the validation split query, exclusive. Strings should be formatted in one of the following formats%Y-%m-%d
,%Y-%m-%d %H
,%Y-%m-%d %H:%M
,%Y-%m-%d %H:%M:%S
, or%Y-%m-%d %H:%M:%S.%f
. Int, i.e Unix Epoch should be in seconds. - test_start
int | str | datetime.datetime | datetime.date | None
: Start event time for the test split query, inclusive. Strings should be formatted in one of the following formats%Y-%m-%d
,%Y-%m-%d %H
,%Y-%m-%d %H:%M
,%Y-%m-%d %H:%M:%S
, or%Y-%m-%d %H:%M:%S.%f
. Int, i.e Unix Epoch should be in seconds. - test_end
int | str | datetime.datetime | datetime.date | None
: End event time for the test split query, exclusive. Strings should be formatted in one of the following formats%Y-%m-%d
,%Y-%m-%d %H
,%Y-%m-%d %H:%M
,%Y-%m-%d %H:%M:%S
, or%Y-%m-%d %H:%M:%S.%f
. Int, i.e Unix Epoch should be in seconds. - storage_connector
hsfs.StorageConnector | None
: Storage connector defining the sink location for the training dataset, defaults toNone
, and materializes training dataset on HopsFS. - location
str | None
: Path to complement the sink storage connector with, e.g if the storage connector points to an S3 bucket, this path can be used to define a sub-directory inside the bucket to place the training dataset. Defaults to""
, saving the training dataset at the root defined by the storage connector. - description
str | None
: A string describing the contents of the training dataset to improve discoverability for Data Scientists, defaults to empty string""
. - extra_filter
hsfs.constructor.filter.Filter | hsfs.constructor.filter.Logic | None
: Additional filters to be attached to the training dataset. The filters will be also applied inget_batch_data
. - data_format
str | None
: The data format used to save the training dataset, defaults to"parquet"
-format. - coalesce
bool | None
: If true the training dataset data will be coalesced into a single partition before writing. The resulting training dataset will be a single file per split. Default False. - seed
int | None
: Optionally, define a seed to create the random splits with, in order to guarantee reproducibility, defaults to None
. - statistics_config
hsfs.StatisticsConfig | bool | dict | None
: A configuration object, or a dictionary with keys "enabled
" to generally enable descriptive statistics computation for this feature group,"correlations
" to turn on feature correlation computation and"histograms"
to compute feature value frequencies. The values should be booleans indicating the setting. To fully turn off statistics computation passstatistics_config=False
. Defaults toNone
and will compute only descriptive statistics. - write_options
Dict[Any, Any] | None
: Additional options as key/value pairs to pass to the execution engine. For spark engine: Dictionary of read options for Spark. When using thepython
engine, write_options can contain the following entries:- key
use_spark
and valueTrue
to materialize training dataset with Spark instead of Hopsworks Feature Query Service. - key
spark
and value an object of type hsfs.core.job_configuration.JobConfiguration to configure the Hopsworks Job used to compute the training dataset. - key
wait_for_job
and valueTrue
orFalse
to configure whether or not the save call should return only after the Hopsworks Job has finished. By default it waits. Defaults to {}
.
- key
- spine
pandas.DataFrame | hsfs.feature_view.pyspark.sql.DataFrame | hsfs.feature_view.pyspark.RDD | numpy.ndarray | List[List[Any]] | hsfs.feature_view.SpineGroup | None
: Spine dataframe with primary key, event time and label column to use for point in time join when fetching features. Defaults toNone
and is only required when feature view was created with spine group in the feature query. It is possible to directly pass a spine group instead of a dataframe to overwrite the left side of the feature join, however, the same features as in the original feature group that is being replaced need to be available in the spine group. - primary_key
bool
: whether to include primary key features or not. Defaults toFalse
, no primary key features. - event_time
bool
: whether to include event time feature or not. Defaults toFalse
, no event time feature. - training_helper_columns
bool
: whether to include training helper columns or not. Training helper columns are a list of feature names in the feature view, defined during its creation, that are not the part of the model schema itself but can be used during training as a helper for extra information. If training helper columns were not defined in the feature view thentraining_helper_columns=True
will not have any effect. Defaults toFalse
, no training helper columns.
Returns
(td_version, Job
): Tuple of training dataset version and job. When using the python
engine, it returns the Hopsworks Job that was launched to create the training dataset.
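Analogously to the train-test case, here is a hedged sketch of reading back the three splits with the returned version; the exact signature of feature_view.get_train_validation_test_split is documented separately and assumed here.
# a minimal sketch: create the three splits, then read them back by version
version, job = feature_view.create_train_validation_test_split(
    validation_size=0.3,
    test_size=0.2,
    data_format='csv'
)
X_train, X_val, X_test, y_train, y_val, y_test = feature_view.get_train_validation_test_split(
    training_dataset_version=version  # keyword name assumed
)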
create_training_data#
FeatureView.create_training_data(
start_time="",
end_time="",
storage_connector=None,
location="",
description="",
extra_filter=None,
data_format="parquet",
coalesce=False,
seed=None,
statistics_config=None,
write_options=None,
spine=None,
primary_key=False,
event_time=False,
training_helper_columns=False,
**kwargs
)
Create the metadata for a training dataset and save the corresponding training data into location
. The training data can be retrieved by calling feature_view.get_training_data
.
Create training dataset
# get feature store instance
fs = ...
# get feature view instance
feature_view = fs.get_feature_view(...)
# create a training dataset
version, job = feature_view.create_training_data(
description='Description of a dataset',
data_format='csv',
# async creation in order not to wait till finish of the job
write_options={"wait_for_job": False}
)
Create training data specifying date range with dates as strings
# get feature store instance
fs = ...
# get feature view instance
feature_view = fs.get_feature_view(...)
# set up dates
start_time = "2022-01-01 00:00:00"
end_time = "2022-06-06 23:59:59"
# create a training dataset
version, job = feature_view.create_training_data(
start_time=start_time,
end_time=end_time,
description='Description of a dataset',
# you can have different data formats such as csv, tsv, tfrecord, parquet and others
data_format='csv'
)
# When we want to read the training data, we need to supply the training data version returned by the create_training_data method:
X_train, X_test, y_train, y_test = feature_view.get_training_data(version)
Create training data specifying date range with dates as datetime objects
# get feature store instance
fs = ...
# get feature view instance
feature_view = fs.get_feature_view(...)
# set up dates
from datetime import datetime
date_format = "%Y-%m-%d %H:%M:%S"
start_time = datetime.strptime("2022-01-01 00:00:00", date_format)
end_time = datetime.strptime("2022-06-06 23:59:59", date_format)
# create a training dataset
version, job = feature_view.create_training_data(
start_time=start_time,
end_time=end_time,
description='Description of a dataset',
# you can have different data formats such as csv, tsv, tfrecord, parquet and others
data_format='csv'
)
Write training dataset to external storage
# get feature store instance
fs = ...
# get feature view instance
feature_view = fs.get_feature_view(...)
# get storage connector instance
external_storage_connector = fs.get_storage_connector("storage_connector_name")
# create a train-test split dataset
version, job = feature_view.create_training_data(
start_time=...,
end_time=...,
storage_connector = external_storage_connector,
description=...,
# you can have different data formats such as csv, tsv, tfrecord, parquet and others
data_format=...
)
Data Formats
The feature store currently supports the following data formats for training datasets:
- tfrecord
- csv
- tsv
- parquet
- avro
- orc
Currently not supported: petastorm, hdf5 and npy file formats.
Spine Groups/Dataframes
Spine groups and dataframes are currently only supported with the Spark engine and Spark dataframes.
Arguments
- start_time
int | str | datetime.datetime | datetime.date | None
: Start event time for the training dataset query, inclusive. Optional. Strings should be formatted in one of the following formats%Y-%m-%d
,%Y-%m-%d %H
,%Y-%m-%d %H:%M
,%Y-%m-%d %H:%M:%S
, or%Y-%m-%d %H:%M:%S.%f
. Int, i.e Unix Epoch should be in seconds. - end_time
int | str | datetime.datetime | datetime.date | None
: End event time for the training dataset query, exclusive. Optional. Strings should be formatted in one of the following formats%Y-%m-%d
,%Y-%m-%d %H
,%Y-%m-%d %H:%M
,%Y-%m-%d %H:%M:%S
, or%Y-%m-%d %H:%M:%S.%f
. Int, i.e Unix Epoch should be in seconds. - storage_connector
hsfs.StorageConnector | None
: Storage connector defining the sink location for the training dataset, defaults toNone
, and materializes training dataset on HopsFS. - location
str | None
: Path to complement the sink storage connector with, e.g if the storage connector points to an S3 bucket, this path can be used to define a sub-directory inside the bucket to place the training dataset. Defaults to""
, saving the training dataset at the root defined by the storage connector. - description
str | None
: A string describing the contents of the training dataset to improve discoverability for Data Scientists, defaults to empty string""
. - extra_filter
hsfs.constructor.filter.Filter | hsfs.constructor.filter.Logic | None
: Additional filters to be attached to the training dataset. The filters will be also applied inget_batch_data
. - data_format
str | None
: The data format used to save the training dataset, defaults to"parquet"
-format. - coalesce
bool | None
: If true the training dataset data will be coalesced into a single partition before writing. The resulting training dataset will be a single file per split. Default False. - seed
int | None
: Optionally, define a seed to create the random splits with, in order to guarantee reproducibility, defaults to None
. - statistics_config
hsfs.StatisticsConfig | bool | dict | None
: A configuration object, or a dictionary with keys "enabled
" to generally enable descriptive statistics computation for this feature group,"correlations
" to turn on feature correlation computation and"histograms"
to compute feature value frequencies. The values should be booleans indicating the setting. To fully turn off statistics computation passstatistics_config=False
. Defaults toNone
and will compute only descriptive statistics. - write_options
Dict[Any, Any] | None
: Additional options as key/value pairs to pass to the execution engine. For spark engine: Dictionary of read options for Spark. When using thepython
engine, write_options can contain the following entries:- key
use_spark
and valueTrue
to materialize training dataset with Spark instead of Hopsworks Feature Query Service. - key
spark
and value an object of type hsfs.core.job_configuration.JobConfiguration to configure the Hopsworks Job used to compute the training dataset. - key
wait_for_job
and valueTrue
orFalse
to configure whether or not the save call should return only after the Hopsworks Job has finished. By default it waits. Defaults to {}
.
- key
- spine
pandas.DataFrame | hsfs.feature_view.pyspark.sql.DataFrame | hsfs.feature_view.pyspark.RDD | numpy.ndarray | List[List[Any]] | hsfs.feature_view.SpineGroup | None
: Spine dataframe with primary key, event time and label column to use for point in time join when fetching features. Defaults toNone
and is only required when feature view was created with spine group in the feature query. It is possible to directly pass a spine group instead of a dataframe to overwrite the left side of the feature join, however, the same features as in the original feature group that is being replaced need to be available in the spine group. - primary_key
bool
: whether to include primary key features or not. Defaults toFalse
, no primary key features. - event_time
bool
: whether to include event time feature or not. Defaults toFalse
, no event time feature. - training_helper_columns
bool
: whether to include training helper columns or not. Training helper columns are a list of feature names in the feature view, defined during its creation, that are not the part of the model schema itself but can be used during training as a helper for extra information. If training helper columns were not defined in the feature view thentraining_helper_columns=True
will not have any effect. Defaults toFalse
, no training helper columns.
Returns
(td_version, Job
): Tuple of training dataset version and job. When using the python
engine, it returns the Hopsworks Job that was launched to create the training dataset.
delete#
FeatureView.delete()
Delete current feature view, all associated metadata and training data.
Example
# get feature store instance
fs = ...
# get feature view instance
feature_view = fs.get_feature_view(...)
# delete a feature view
feature_view.delete()
Potentially dangerous operation
This operation drops all metadata associated with this version of the feature view and related training dataset and materialized data in HopsFS.
Raises
hsfs.client.exceptions.RestAPIError
.
delete_all_training_datasets#
FeatureView.delete_all_training_datasets()
Delete all training datasets. This will delete both metadata and training data.
Example
# get feature store instance
fs = ...
# get feature view instance
feature_view = fs.get_feature_view(...)
# delete all training datasets
feature_view.delete_all_training_datasets()
Raises
hsfs.client.exceptions.RestAPIError
in case the backend fails to delete the training datasets.
delete_log#
FeatureView.delete_log(transformed=None)
Delete the logged feature data for the current feature view.
Arguments
- transformed
bool | None
: Whether to delete transformed logs. Defaults to None, which deletes both transformed and untransformed logs.
Example
# delete log
feature_view.delete_log()
Raises
hsfs.client.exceptions.RestAPIError
in case the backend fails to delete the log.
delete_tag#
FeatureView.delete_tag(name)
Delete a tag attached to a feature view.
Example
# get feature store instance
fs = ...
# get feature view instance
feature_view = fs.get_feature_view(...)
# delete a tag
feature_view.delete_tag('name_of_tag')
Arguments
- name
str
: Name of the tag to be removed.
Raises
hsfs.client.exceptions.RestAPIError
in case the backend fails to delete the tag.
delete_training_dataset#
FeatureView.delete_training_dataset(training_dataset_version)
Delete a training dataset. This will delete both metadata and training data.
Example
# get feature store instance
fs = ...
# get feature view instance
feature_view = fs.get_feature_view(...)
# delete a training dataset
feature_view.delete_training_dataset(
training_dataset_version=1
)
Arguments
- training_dataset_version
int
: Version of the training dataset to be removed.
Raises
hsfs.client.exceptions.RestAPIError
in case the backend fails to delete the training dataset.
delete_training_dataset_tag#
FeatureView.delete_training_dataset_tag(training_dataset_version, name)
Delete a tag attached to a training dataset.
Example
# get feature store instance
fs = ...
# get feature view instance
feature_view = fs.get_feature_view(...)
# delete training dataset tag
feature_view.delete_training_dataset_tag(
training_dataset_version=1,
name='name_of_tag'
)
Arguments
- training_dataset_version
int
: training dataset version - name
str
: Name of the tag to be removed.
Raises
hsfs.client.exceptions.RestAPIError
in case the backend fails to delete the tag.
enable_logging#
FeatureView.enable_logging()
Enable feature logging for the current feature view.
This method activates logging of features.
Example
# get feature store instance
fs = ...
# get feature view instance
feature_view = fs.get_feature_view(...)
# enable logging
feature_view.enable_logging()
Raises
hsfs.client.exceptions.RestAPIError
in case the backend fails to enable feature logging.
find_neighbors#
FeatureView.find_neighbors(
embedding, feature=None, k=10, filter=None, external=None, return_type="list"
)
Finds the nearest neighbors for a given embedding in the vector database.
If filter
is specified, or if the embedding feature is stored in the default project index, the number of results returned may be less than k. Try using a larger value of k and extract the top k items from the results if needed.
Duplicate column error in Polars
If the feature view has duplicate column names, attempting to create a polars DataFrame will raise an error. To avoid this, set return_type
to "list"
or "pandas"
.
Arguments
- embedding
List[int | float]
: The target embedding for which neighbors are to be found. - feature
hsfs.feature.Feature | None
: The feature used to compute similarity score. Required only if there are multiple embeddings (optional). - k
int | None
: The number of nearest neighbors to retrieve (default is 10). - filter
hsfs.constructor.filter.Filter | hsfs.constructor.filter.Logic | None
: A filter expression to restrict the search space (optional). - external
bool | None
: boolean, optional. If set to True, the connection to the online feature store is established using the same host as for thehost
parameter in thehopsworks.login()
method. If set to False, the online feature store storage connector is used which relies on the private IP. Defaults to True if connection to Hopsworks is established from external environment (e.g AWS Sagemaker or Google Colab), otherwise to False. - return_type
Literal['list', 'polars', 'pandas']
:"list"
,"pandas"
or"polars"
. Defaults to"list"
.
Returns
list
, pd.DataFrame
or polars.DataFrame
if return_type
is set to "list"
, "pandas"
or "polars"
respectively. Defaults to list
.
Example
embedding_index = EmbeddingIndex()
embedding_index.add_embedding(name="user_vector", dimension=3)
fg = fs.create_feature_group(
name='air_quality',
embedding_index=embedding_index,
version=1,
primary_key=['id1'],
online_enabled=True,
)
fg.insert(data)
fv = fs.create_feature_view("air_quality", fg.select_all())
fv.find_neighbors(
[0.1, 0.2, 0.3],
k=5,
)
# apply filter
fv.find_neighbors(
[0.1, 0.2, 0.3],
k=5,
feature=fg.user_vector, # optional
filter=(fg.id1 > 10) & (fg.id1 < 30)
)
from_response_json#
FeatureView.from_response_json(json_dict)
Function that constructs the class object from its json serialization.
Arguments
- json_dict
Dict[str, Any]
:Dict[str, Any]
. Json serialized dictionary for the class.
Returns
FeatureView
: Json deserialized class object.
get_batch_data#
FeatureView.get_batch_data(
start_time=None,
end_time=None,
read_options=None,
spine=None,
primary_key=False,
event_time=False,
inference_helper_columns=False,
dataframe_type="default",
transformed=True,
**kwargs
)
Get a batch of data from an event time interval from the offline feature store.
Batch data for the last 24 hours
# get feature store instance
fs = ...
# get feature view instance
feature_view = fs.get_feature_view(...)
# set up dates
import datetime
start_date = (datetime.datetime.now() - datetime.timedelta(hours=24))
end_date = (datetime.datetime.now())
# get a batch of data
df = feature_view.get_batch_data(
start_time=start_date,
end_time=end_date
)
Spine Groups/Dataframes
Spine groups and dataframes are currently only supported with the Spark engine and Spark dataframes.
Arguments
- start_time
int | str | datetime.datetime | datetime.date | None
: Start event time for the batch query, inclusive. Optional. Strings should be formatted in one of the following formats%Y-%m-%d
,%Y-%m-%d %H
,%Y-%m-%d %H:%M
,%Y-%m-%d %H:%M:%S
, or%Y-%m-%d %H:%M:%S.%f
. Int, i.e Unix Epoch should be in seconds. - end_time
int | str | datetime.datetime | datetime.date | None
: End event time for the batch query, exclusive. Optional. Strings should be formatted in one of the following formats%Y-%m-%d
,%Y-%m-%d %H
,%Y-%m-%d %H:%M
,%Y-%m-%d %H:%M:%S
, or%Y-%m-%d %H:%M:%S.%f
. Int, i.e Unix Epoch should be in seconds. - read_options
Dict[str, Any] | None
: User provided read options for python engine, defaults to{}
:- key
"arrow_flight_config"
to pass a dictionary of arrow flight configurations. For example:{"arrow_flight_config": {"timeout": 900}}
- key
- spine
pandas.DataFrame | hsfs.feature_view.pyspark.sql.DataFrame | hsfs.feature_view.pyspark.RDD | numpy.ndarray | List[List[Any]] | hsfs.feature_view.SpineGroup | None
: Spine dataframe with primary key, event time and label column to use for point in time join when fetching features. Defaults toNone
and is only required when feature view was created with spine group in the feature query. It is possible to directly pass a spine group instead of a dataframe to overwrite the left side of the feature join, however, the same features as in the original feature group that is being replaced need to be available in the spine group. - primary_key
bool
: whether to include primary key features or not. Defaults toFalse
, no primary key features. - event_time
bool
: whether to include event time feature or not. Defaults toFalse
, no event time feature. - inference_helper_columns
bool
: whether to include inference helper columns or not. Inference helper columns are a list of feature names in the feature view, defined during its creation, that may not be used in training the model itself but can be used during batch or online inference for extra information. If inference helper columns were not defined in the feature view, inference_helper_columns=True
will not have any effect. Defaults to False
, no helper columns. - dataframe_type
str | None
: str, optional. The type of the returned dataframe. Possible values are"default"
,"spark"
,"pandas"
,"polars"
,"numpy"
or"python"
. Defaults to "default", which maps to Spark dataframe for the Spark Engine and Pandas dataframe for the Python engine. - transformed
bool | None
: Setting toFalse
returns the untransformed feature vectors.
Returns
DataFrame
: The dataframe containing the feature data. pyspark.DataFrame
. A Spark DataFrame. pandas.DataFrame
. A Pandas DataFrame. polars.DataFrame
. A Polars DataFrame. numpy.ndarray
. A two-dimensional Numpy array. list
. A two-dimensional Python list.
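As a hedged sketch combining several of the flags documented above, the call below returns untransformed batch data as a Pandas dataframe together with primary keys, event time and inference helper columns.
# a minimal sketch: untransformed batch data with keys and helper columns
df = feature_view.get_batch_data(
    start_time="2022-01-01",
    end_time="2022-01-31",
    primary_key=True,
    event_time=True,
    inference_helper_columns=True,
    transformed=False,
    dataframe_type="pandas",
)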
get_batch_query#
FeatureView.get_batch_query(start_time=None, end_time=None)
Get a query string of the batch query.
Batch query for the last 24 hours
# get feature store instance
fs = ...
# get feature view instance
feature_view = fs.get_feature_view(...)
# set up dates
import datetime
start_date = (datetime.datetime.now() - datetime.timedelta(hours=24))
end_date = (datetime.datetime.now())
# get a query string of batch query
query_str = feature_view.get_batch_query(
start_time=start_date,
end_time=end_date
)
# print query string
print(query_str)
Arguments
- start_time
int | str | datetime.datetime | datetime.date | None
: Start event time for the batch query, inclusive. Optional. Strings should be formatted in one of the following formats%Y-%m-%d
,%Y-%m-%d %H
,%Y-%m-%d %H:%M
,%Y-%m-%d %H:%M:%S
, or%Y-%m-%d %H:%M:%S.%f
. Int, i.e Unix Epoch should be in seconds. - end_time
int | str | datetime.datetime | datetime.date | None
: End event time for the batch query, exclusive. Optional. Strings should be formatted in one of the following formats%Y-%m-%d
,%Y-%m-%d %H
,%Y-%m-%d %H:%M
,%Y-%m-%d %H:%M:%S
, or%Y-%m-%d %H:%M:%S.%f
. Int, i.e Unix Epoch should be in seconds.
Returns
str
: batch query
get_feature_monitoring_configs#
FeatureView.get_feature_monitoring_configs(name=None, feature_name=None, config_id=None)
Fetch feature monitoring configs attached to the feature view. If no arguments are provided the method will return all feature monitoring configs attached to the feature view, meaning all feature monitoring configs that are attached to a feature in the feature view. If you wish to fetch a single config, provide its name. If you wish to fetch all configs attached to a particular feature, provide the feature name.
Example
# fetch your feature view
fv = fs.get_feature_view(name="my_feature_view", version=1)
# fetch all feature monitoring configs attached to the feature view
fm_configs = fv.get_feature_monitoring_configs()
# fetch a single feature monitoring config by name
fm_config = fv.get_feature_monitoring_configs(name="my_config")
# fetch all feature monitoring configs attached to a particular feature
fm_configs = fv.get_feature_monitoring_configs(feature_name="my_feature")
# fetch a single feature monitoring config with a particular id
fm_config = fv.get_feature_monitoring_configs(config_id=1)
Arguments
- name
str | None
: If provided fetch only the feature monitoring config with the given name. Defaults to None. - feature_name
str | None
: If provided, fetch only configs attached to a particular feature. Defaults to None. - config_id
int | None
: If provided, fetch only the feature monitoring config with the given id. Defaults to None.
Raises
hsfs.client.exceptions.RestAPIError
. hsfs.client.exceptions.FeatureStoreException
. - ValueError: if both name and feature_name are provided. - TypeError: if name or feature_name are not string or None.
Return
Union[FeatureMonitoringConfig
, List[FeatureMonitoringConfig
], None] A list of feature monitoring configs. If name provided, returns either a single config or None if not found.
get_feature_monitoring_history#
FeatureView.get_feature_monitoring_history(
config_name=None, config_id=None, start_time=None, end_time=None, with_statistics=True
)
Fetch feature monitoring history for a given feature monitoring config.
Example
# fetch your feature view
fv = fs.get_feature_view(name="my_feature_group", version=1)
# fetch feature monitoring history for a given feature monitoring config
fm_history = fv.get_feature_monitoring_history(
config_name="my_config",
start_time="2020-01-01",
)
# or use the config id
fm_history = fv.get_feature_monitoring_history(
config_id=1,
start_time=datetime.now() - timedelta(weeks=2),
end_time=datetime.now() - timedelta(weeks=1),
with_statistics=False,
)
Arguments
- config_name
str | None
: The name of the feature monitoring config to fetch history for. Defaults to None. - config_id
int | None
: The id of the feature monitoring config to fetch history for. Defaults to None. - start_date: The start date of the feature monitoring history to fetch. Defaults to None.
- end_date: The end date of the feature monitoring history to fetch. Defaults to None.
- with_statistics
bool | None
: Whether to include statistics in the feature monitoring history. Defaults to True. If False, only metadata about the monitoring will be fetched.
Raises
hsfs.client.exceptions.RestAPIError
. hsfs.client.exceptions.FeatureStoreException
. - ValueError: if both config_name and config_id are provided. - TypeError: if config_name or config_id are not respectively string, int or None.
Return
List[FeatureMonitoringResult
] A list of feature monitoring results containing the monitoring metadata as well as the computed statistics for the detection and reference window if requested.
get_feature_vector#
FeatureView.get_feature_vector(
entry,
passed_features=None,
external=None,
return_type="list",
allow_missing=False,
force_rest_client=False,
force_sql_client=False,
transform=True,
request_parameters=None,
)
Returns assembled feature vector from online feature store. Call feature_view.init_serving
before this method if the following configurations are needed: 1. the training dataset version of the transformation statistics, 2. additional configurations of the online serving engine.
Missing primary key entries
If the provided primary key entry
can't be found in one or more of the feature groups used by this feature view the call to this method will raise an exception. Alternatively, setting allow_missing
to True
returns a feature vector with missing values.
Example
# get feature store instance
fs = ...
# get feature view instance
feature_view = fs.get_feature_view(...)
# get assembled serving vector as a python list
feature_view.get_feature_vector(
entry = {"pk1": 1, "pk2": 2}
)
# get assembled serving vector as a pandas dataframe
feature_view.get_feature_vector(
entry = {"pk1": 1, "pk2": 2},
return_type = "pandas"
)
# get assembled serving vector as a numpy array
feature_view.get_feature_vector(
entry = {"pk1": 1, "pk2": 2},
return_type = "numpy"
)
Get feature vector with user-supplied features
# get feature store instance
fs = ...
# get feature view instance
feature_view = fs.get_feature_view(...)
# the application provides a feature value 'app_attr'
app_attr = ...
# get a feature vector
feature_view.get_feature_vector(
entry = {"pk1": 1, "pk2": 2},
passed_features = { "app_feature" : app_attr }
)
Arguments
- entry
Dict[str, Any]
: dictionary of feature group primary keys and values provided by the serving application. The set of required primary keys is feature_view.primary_keys
If the required primary keys are not provided, the primary key names of the feature groups are looked up in the entry. - passed_features
Dict[str, Any] | None
: dictionary of feature values provided by the application at runtime. They can replace feature values fetched from the feature store as well as provide feature values which are not available in the feature store. - external
bool | None
: boolean, optional. If set to True, the connection to the online feature store is established using the same host as for thehost
parameter in thehopsworks.login()
method. If set to False, the online feature store storage connector is used which relies on the private IP. Defaults to True if connection to Hopsworks is established from external environment (e.g AWS Sagemaker or Google Colab), otherwise to False. - return_type
Literal['list', 'polars', 'numpy', 'pandas']
:"list"
,"pandas"
,"polars"
or"numpy"
. Defaults to"list"
. - force_rest_client
bool
: boolean, defaults to False. If set to True, reads from online feature store using the REST client if initialised. - force_sql_client
bool
: boolean, defaults to False. If set to True, reads from online feature store using the SQL client if initialised. - allow_missing
bool
: Setting toTrue
returns feature vectors with missing values. - transform: Setting to
False
returns the untransformed feature vectors. - request_parameters
Dict[str, Any] | None
: Request parameters required by on-demand transformation functions to compute on-demand features present in the feature view (see the sketch after this section).
Returns
list
, pd.DataFrame
, polars.DataFrame
or np.ndarray
if return_type
is set to "list"
, "pandas"
, "polars"
or "numpy"
respectively. Defaults to list
. Returned list
, pd.DataFrame
, polars.DataFrame
or np.ndarray
contains feature values related to the provided primary keys, ordered according to the positions of these features in the feature view query.
Raises
Exception
. When primary key entry cannot be found in one or more of the feature groups used by this feature view.
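For feature views with on-demand transformation functions, request-time values are supplied via request_parameters. A minimal, hedged sketch (the parameter name "transaction_time" is an illustrative assumption, not part of this API):
# hedged sketch: assumes this feature view has an on-demand transformation
# function expecting a request-time parameter named "transaction_time"
feature_vector = feature_view.get_feature_vector(
    entry={"pk1": 1, "pk2": 2},
    request_parameters={"transaction_time": "2024-01-01 12:00:00"},
)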
get_feature_vectors#
FeatureView.get_feature_vectors(
entry,
passed_features=None,
external=None,
return_type="list",
allow_missing=False,
force_rest_client=False,
force_sql_client=False,
transform=True,
request_parameters=None,
)
Returns assembled feature vectors in batches from online feature store. Call feature_view.init_serving
before this method if any of the following configurations are needed: 1. the training dataset version of the transformation statistics, 2. additional configurations of the online serving engine.
Missing primary key entries
If any of the provided primary key elements in entry
can't be found in any of the feature groups, no feature vector for that primary key value will be returned. If it can be found in at least one but not all feature groups used by this feature view, the call to this method will raise an exception. Alternatively, setting allow_missing
to True
returns feature vectors with missing values.
Example
# get feature store instance
fs = ...
# get feature view instance
feature_view = fs.get_feature_view(...)
# get assembled serving vectors as a python list of lists
feature_view.get_feature_vectors(
entry = [
{"pk1": 1, "pk2": 2},
{"pk1": 3, "pk2": 4},
{"pk1": 5, "pk2": 6}
]
)
# get assembled serving vectors as a pandas dataframe
feature_view.get_feature_vectors(
entry = [
{"pk1": 1, "pk2": 2},
{"pk1": 3, "pk2": 4},
{"pk1": 5, "pk2": 6}
],
return_type = "pandas"
)
# get assembled serving vectors as a numpy array
feature_view.get_feature_vectors(
entry = [
{"pk1": 1, "pk2": 2},
{"pk1": 3, "pk2": 4},
{"pk1": 5, "pk2": 6}
],
return_type = "numpy"
)
Arguments
- entry
List[Dict[str, Any]]
: a list of dictionaries of feature group primary keys and values provided by the serving application. The set of required primary keys is feature_view.primary_keys
If the required primary keys are not provided, the primary key names of the feature groups are looked up in the entry. - passed_features
List[Dict[str, Any]] | None
: a list of dictionaries of feature values provided by the application at runtime. They can replace feature values fetched from the feature store as well as provide feature values which are not available in the feature store. - external
bool | None
: boolean, optional. If set to True, the connection to the online feature store is established using the same host as for thehost
parameter in thehopsworks.login()
method. If set to False, the online feature store storage connector is used which relies on the private IP. Defaults to True if connection to Hopsworks is established from external environment (e.g AWS Sagemaker or Google Colab), otherwise to False. - return_type
Literal['list', 'polars', 'numpy', 'pandas']
:"list"
,"pandas"
,"polars"
or"numpy"
. Defaults to"list"
. - force_sql_client
bool
: boolean, defaults to False. If set to True, reads from online feature store using the SQL client if initialised. - force_rest_client
bool
: boolean, defaults to False. If set to True, reads from online feature store using the REST client if initialised. - allow_missing
bool
: Setting toTrue
returns feature vectors with missing values. - transform: Setting to
False
returns the untransformed feature vectors. - request_parameters
List[Dict[str, Any]] | None
: Request parameters required by on-demand transformation functions to compute on-demand features present in the feature view.
Returns
List[list]
, pd.DataFrame
, polars.DataFrame
or np.ndarray
if return_type
is set to "list", "pandas", "polars" or "numpy" respectively. Defaults to List[list].
Returned List[list]
, pd.DataFrame
, polars.DataFrame
or np.ndarray
contains feature values related to the provided primary keys, ordered according to the positions of these features in the feature view query.
Raises
Exception
. When primary key entry cannot be found in one or more of the feature groups used by this feature view.
get_inference_helper#
FeatureView.get_inference_helper(
entry, external=None, return_type="pandas", force_rest_client=False, force_sql_client=False
)
Returns assembled inference helper column vectors from online feature store.
Example
# get feature store instance
fs = ...
# get feature view instance
feature_view = fs.get_feature_view(...)
# get assembled inference helper column vector
feature_view.get_inference_helper(
entry = {"pk1": 1, "pk2": 2}
)
Arguments
- entry
Dict[str, Any]
: dictionary of feature group primary keys and values provided by the serving application. The set of required primary keys is feature_view.primary_keys
- external
bool | None
: boolean, optional. If set to True, the connection to the online feature store is established using the same host as for thehost
parameter in thehopsworks.login()
method. If set to False, the online feature store storage connector is used which relies on the private IP. Defaults to True if connection to Hopsworks is established from external environment (e.g AWS Sagemaker or Google Colab), otherwise to False. - return_type
Literal['pandas', 'dict', 'polars']
:"pandas"
,"polars"
or"dict"
. Defaults to"pandas"
.
Returns
pd.DataFrame
, polars.DataFrame
or dict
. Defaults to pd.DataFrame
.
Raises
Exception
. When primary key entry cannot be found in one or more of the feature groups used by this feature view.
get_inference_helpers#
FeatureView.get_inference_helpers(
entry, external=None, return_type="pandas", force_sql_client=False, force_rest_client=False
)
Returns assembled inference helper column vectors in batches from online feature store.
Missing primary key entries
If any of the provided primary key elements in entry
can't be found in any of the feature groups, no inference helper column vectors for that primary key value will be returned. If it can be found in at least one but not all feature groups used by this feature view, the call to this method will raise an exception.
Example
# get feature store instance
fs = ...
# get feature view instance
feature_view = fs.get_feature_view(...)
# get assembled inference helper column vectors
feature_view.get_inference_helpers(
entry = [
{"pk1": 1, "pk2": 2},
{"pk1": 3, "pk2": 4},
{"pk1": 5, "pk2": 6}
]
)
Arguments
- entry
List[Dict[str, Any]]
: a list of dictionaries of feature group primary keys and values provided by the serving application. The set of required primary keys is feature_view.primary_keys
- external
bool | None
: boolean, optional. If set to True, the connection to the online feature store is established using the same host as for thehost
parameter in thehopsworks.login()
method. If set to False, the online feature store storage connector is used which relies on the private IP. Defaults to True if connection to Hopsworks is established from external environment (e.g AWS Sagemaker or Google Colab), otherwise to False. - return_type
Literal['pandas', 'dict', 'polars']
:"pandas"
,"polars"
or"dict"
. Defaults to"pandas"
.
Returns
pd.DataFrame
, polars.DataFrame
or List[Dict[str, Any]]
. Defaults to pd.DataFrame
.
Returned pd.DataFrame
or List[dict]
contains feature values related to the provided primary keys, ordered according to the positions of these features in the feature view query.
Raises
Exception
. When primary key entry cannot be found in one or more of the feature groups used by this feature view.
get_last_accessed_training_dataset#
FeatureView.get_last_accessed_training_dataset()
get_log_timeline#
FeatureView.get_log_timeline(wallclock_time=None, limit=None, transformed=False)
Retrieve the log timeline for the current feature view.
Arguments
- wallclock_time
str | int | datetime.datetime | datetime.date | None
: Specific time to get the log timeline for. Can be a string, integer, datetime, or date. Defaults to None. - limit
int | None
: Maximum number of entries to retrieve. Defaults to None. - transformed
bool | None
: Whether to include transformed logs. Defaults to False.
Example
# get log timeline
log_timeline = feature_view.get_log_timeline(limit=10)
Returns
Dict[str, Dict[str, str]]
. Dictionary object of commit metadata timeline, where the key is the commit id and the value is Dict[str, str]
with key-value pairs of the date committed on and the number of rows updated, inserted, and deleted.
Raises
hsfs.client.exceptions.RestAPIError
in case the backend fails to retrieve the log timeline.
get_models#
FeatureView.get_models(training_dataset_version=None)
Get the generated models using this feature view, based on explicit provenance. Only the accessible models are returned. For more items use the base method - get_models_provenance
Arguments
- training_dataset_version
int | None
: Filter generated models based on the used training dataset version.
Returns
List[Model]: List of models.
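A minimal usage sketch (hedged; it follows the placeholder style of the other examples in this page):
# get feature store instance
fs = ...
# get feature view instance
feature_view = fs.get_feature_view(...)
# get all accessible models generated from this feature view
models = feature_view.get_models()
# restrict to models trained on a specific training dataset version
models_v1 = feature_view.get_models(training_dataset_version=1)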
get_models_provenance#
FeatureView.get_models_provenance(training_dataset_version=None)
Get the generated models using this feature view, based on explicit provenance. These models can be accessible or inaccessible. Explicit provenance does not track deleted generated model links, so deleted will always be empty. For inaccessible models, only minimal information is returned.
Arguments
- training_dataset_version
int | None
: Filter generated models based on the used training dataset version.
Returns
ProvenanceLinks
: Object containing the section of provenance graph requested.
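A hedged sketch of inspecting the returned provenance object; the accessible and inaccessible attributes below are assumptions based on the description above, not a confirmed API:
# get provenance links for models generated from this feature view
links = feature_view.get_models_provenance(training_dataset_version=1)
# assumed attributes, mirroring the accessible/inaccessible distinction above
accessible_models = links.accessible
inaccessible_models = links.inaccessible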
get_newest_model#
FeatureView.get_newest_model(training_dataset_version=None)
Get the latest generated model using this feature view, based on explicit provenance. Search only through the accessible models. For more items use the base method - get_models_provenance
Arguments
- training_dataset_version
int | None
: Filter generated models based on the used training dataset version.
Returns
Model
: Newest Generated Model.
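A minimal sketch (assuming at least one accessible model has been generated from this feature view):
# get the newest accessible model generated from this feature view
latest_model = feature_view.get_newest_model()
# or restrict the search to a specific training dataset version
latest_model_v1 = feature_view.get_newest_model(training_dataset_version=1)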
get_parent_feature_groups#
FeatureView.get_parent_feature_groups()
Get the parents of this feature view, based on explicit provenance. Parents are feature groups or external feature groups. These feature groups can be accessible, deleted or inaccessible. For deleted and inaccessible feature groups, only minimal information is returned.
Returns
ProvenanceLinks
: Object containing the section of provenance graph requested.
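A hedged sketch; the accessible attribute on the returned object is an assumption based on the accessible/deleted/inaccessible distinction above:
# get the feature groups this feature view was built from
parent_links = feature_view.get_parent_feature_groups()
# assumed attribute, see the note above
for fg in parent_links.accessible:
    print(fg.name)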
get_tag#
FeatureView.get_tag(name)
Get a tag of a feature view by name.
Example
# get feature store instance
fs = ...
# get feature view instance
feature_view = fs.get_feature_view(...)
# get a tag of a feature view
name = feature_view.get_tag('tag_name')
Arguments
- name
str
: Name of the tag to get.
Returns
tag value
Raises
hsfs.client.exceptions.RestAPIError
in case the backend fails to retrieve the tag.
get_tags#
FeatureView.get_tags()
Returns all tags attached to a feature view.
Example
# get feature store instance
fs = ...
# get feature view instance
feature_view = fs.get_feature_view(...)
# get tags
list_tags = feature_view.get_tags()
Returns
Dict[str, obj]
of tags.
Raises
hsfs.client.exceptions.RestAPIError
in case the backend fails to retrieve the tags.
get_train_test_split#
FeatureView.get_train_test_split(
training_dataset_version,
read_options=None,
primary_key=False,
event_time=False,
training_helper_columns=False,
dataframe_type="default",
**kwargs
)
Get training data created by feature_view.create_train_test_split
or feature_view.train_test_split
.
Example
# get feature store instance
fs = ...
# get feature view instance
feature_view = fs.get_feature_view(...)
# get training data
X_train, X_test, y_train, y_test = feature_view.get_train_test_split(training_dataset_version=1)
Arguments
- training_dataset_version
int
: training dataset version - read_options
Dict[Any, Any] | None
: Additional options as key/value pairs to pass to the execution engine. For spark engine: Dictionary of read options for Spark. For python engine:- key
"arrow_flight_config"
to pass a dictionary of arrow flight configurations. For example:{"arrow_flight_config": {"timeout": 900}}
Defaults to{}
.
- primary_key
bool
: whether to include primary key features or not. Defaults toFalse
, no primary key features. - event_time
bool
: whether to include event time feature or not. Defaults toFalse
, no event time feature. - training_helper_columns
bool
: whether to include training helper columns or not. Training helper columns are a list of feature names in the feature view, defined during its creation, that are not the part of the model schema itself but can be used during training as a helper for extra information. If training helper columns were not defined in the feature view or during materializing training dataset in the file system thentraining_helper_columns=True
will not have any effect. Defaults toFalse
, no training helper columns. - dataframe_type
str | None
: str, optional. The type of the returned dataframe. Possible values are"default"
,"spark"
,"pandas"
,"polars"
,"numpy"
or"python"
. Defaults to "default", which maps to Spark dataframe for the Spark Engine and Pandas dataframe for the Python engine.
Returns
(X_train, X_test, y_train, y_test): Tuple of dataframes of features and labels
get_train_validation_test_split#
FeatureView.get_train_validation_test_split(
training_dataset_version,
read_options=None,
primary_key=False,
event_time=False,
training_helper_columns=False,
dataframe_type="default",
**kwargs
)
Get training data created by feature_view.create_train_validation_test_split
or feature_view.train_validation_test_split
.
Example
# get feature store instance
fs = ...
# get feature view instance
feature_view = fs.get_feature_view(...)
# get training data
X_train, X_val, X_test, y_train, y_val, y_test = feature_view.get_train_validation_test_split(training_dataset_version=1)
Arguments
- training_dataset_version
int
: training dataset version - read_options
Dict[str, Any] | None
: Additional options as key/value pairs to pass to the execution engine. For spark engine: Dictionary of read options for Spark. For python engine:- key
"arrow_flight_config"
to pass a dictionary of arrow flight configurations. For example:{"arrow_flight_config": {"timeout": 900}}
Defaults to{}
.
- primary_key
bool
: whether to include primary key features or not. Defaults toFalse
, no primary key features. - event_time
bool
: whether to include event time feature or not. Defaults toFalse
, no event time feature. - training_helper_columns
bool
: whether to include training helper columns or not. Training helper columns are a list of feature names in the feature view, defined during its creation, that are not the part of the model schema itself but can be used during training as a helper for extra information. If training helper columns were not defined in the feature view or during materializing training dataset in the file system thentraining_helper_columns=True
will not have any effect. Defaults toFalse
, no training helper columns. - dataframe_type
str
: str, optional. The type of the returned dataframe. Possible values are"default"
,"spark"
,"pandas"
,"polars"
,"numpy"
or"python"
. Defaults to "default", which maps to Spark dataframe for the Spark Engine and Pandas dataframe for the Python engine.
Returns
(X_train, X_val, X_test, y_train, y_val, y_test): Tuple of dataframes of features and labels
get_training_data#
FeatureView.get_training_data(
training_dataset_version,
read_options=None,
primary_key=False,
event_time=False,
training_helper_columns=False,
dataframe_type="default",
**kwargs
)
Get training data created by feature_view.create_training_data
or feature_view.training_data
.
Example
# get feature store instance
fs = ...
# get feature view instance
feature_view = fs.get_feature_view(...)
# get training data
features_df, labels_df = feature_view.get_training_data(training_dataset_version=1)
External Storage Support
Training data that was written to external storage using a Storage Connector other than S3 can currently not be read using HSFS APIs with the Python engine; instead, you will have to use the storage's native client.
Arguments
- training_dataset_version
int
: training dataset version - read_options
Dict[str, Any] | None
: Additional options as key/value pairs to pass to the execution engine. For spark engine: Dictionary of read options for Spark. For python engine:- key
"arrow_flight_config"
to pass a dictionary of arrow flight configurations. For example:{"arrow_flight_config": {"timeout": 900}}
Defaults to{}
.
- primary_key
bool
: whether to include primary key features or not. Defaults toFalse
, no primary key features. - event_time
bool
: whether to include event time feature or not. Defaults toFalse
, no event time feature. - training_helper_columns
bool
: whether to include training helper columns or not. Training helper columns are a list of feature names in the feature view, defined during its creation, that are not the part of the model schema itself but can be used during training as a helper for extra information. If training helper columns were not defined in the feature view or during materializing training dataset in the file system thentraining_helper_columns=True
will not have any effect. Defaults toFalse
, no training helper columns. - dataframe_type
str | None
: str, optional. The type of the returned dataframe. Possible values are"default"
,"spark"
,"pandas"
,"polars"
,"numpy"
or"python"
. Defaults to "default", which maps to Spark dataframe for the Spark Engine and Pandas dataframe for the Python engine.
Returns
(X, y): Tuple of dataframes of features and labels
get_training_dataset_schema#
FeatureView.get_training_dataset_schema(training_dataset_version=None)
Function that returns the schema of the training dataset that is generated from a feature view. It provides the schema of the features after all transformation functions have been applied.
Arguments
- training_dataset_version
int | None
: Specifies the version of the training dataset for which the schema should be generated. By default, this is set to None. However, if theone_hot_encoder
transformation function is used, the training dataset version must be provided. This is because the schema will then depend on the statistics of the training data used.
Example
schema = feature_view.get_training_dataset_schema(training_dataset_version=1)
Returns
List[training_dataset_feature.TrainingDatasetFeature]
: List of training dataset features objects.
Raises
ValueError
if the training dataset version provided cannot be found.
get_training_dataset_statistics#
FeatureView.get_training_dataset_statistics(
training_dataset_version, before_transformation=False, feature_names=None
)
Get statistics of a training dataset.
Example
# get feature store instance
fs = ...
# get feature view instance
feature_view = fs.get_feature_view(...)
# get training dataset statistics
statistics = feature_view.get_training_dataset_statistics(training_dataset_version=1)
Arguments
- training_dataset_version
int
: Training dataset version - before_transformation
bool
: Whether the statistics were computed before transformation functions or not. - feature_names
List[str] | None
: List of feature names of which statistics are retrieved.
Returns
Statistics
get_training_dataset_tag#
FeatureView.get_training_dataset_tag(training_dataset_version, name)
Get a tag of a training dataset by name.
Example
# get feature store instance
fs = ...
# get feature view instance
feature_view = fs.get_feature_view(...)
# get a training dataset tag
tag_str = feature_view.get_training_dataset_tag(
training_dataset_version=1,
name="tag_schema"
)
Arguments
- training_dataset_version
int
: training dataset version - name
str
: Name of the tag to get.
Returns
tag value
Raises
hsfs.client.exceptions.RestAPIError
in case the backend fails to retrieve the tag.
get_training_dataset_tags#
FeatureView.get_training_dataset_tags(training_dataset_version)
Returns all tags attached to a training dataset.
Example
# get feature store instance
fs = ...
# get feature view instance
feature_view = fs.get_feature_view(...)
# get a training dataset tags
list_tags = feature_view.get_training_dataset_tags(
training_dataset_version=1
)
Returns
Dict[str, obj]
of tags.
Raises
hsfs.client.exceptions.RestAPIError
in case the backend fails to retrieve the tags.
get_training_datasets#
FeatureView.get_training_datasets()
Returns the metadata of all training datasets created with this feature view.
Example
# get feature store instance
fs = ...
# get feature view instance
feature_view = fs.get_feature_view(...)
# get all training dataset metadata
list_tds_meta = feature_view.get_training_datasets()
Returns
List[TrainingDatasetBase]
List of training datasets metadata.
Raises
hsfs.client.exceptions.RestAPIError
in case the backend fails to retrieve the training datasets metadata.
init_batch_scoring#
FeatureView.init_batch_scoring(training_dataset_version=None)
Initialise feature view to retrieve feature vector from offline feature store.
Example
# get feature store instance
fs = ...
# get feature view instance
feature_view = fs.get_feature_view(...)
# initialise feature view to retrieve feature vector from offline feature store
feature_view.init_batch_scoring(training_dataset_version=1)
# get batch data
batch_data = feature_view.get_batch_data(...)
Arguments
- training_dataset_version
int | None
: int, optional. Defaults to None. Transformation statistics are fetched from the training dataset and applied to the feature vector.
init_serving#
FeatureView.init_serving(
training_dataset_version=None,
external=None,
options=None,
init_sql_client=None,
init_rest_client=False,
reset_rest_client=False,
config_rest_client=None,
default_client=None,
feature_logger=None,
**kwargs
)
Initialise feature view to retrieve feature vector from online and offline feature store.
Example
# get feature store instance
fs = ...
# get feature view instance
feature_view = fs.get_feature_view(...)
# initialise feature view to retrieve a feature vector
feature_view.init_serving(training_dataset_version=1)
Arguments
- training_dataset_version
int | None
: int, optional. Defaults to 1 for the online feature store. Transformation statistics are fetched from the training dataset and applied to the feature vector. - external
bool | None
: boolean, optional. If set to True, the connection to the online feature store is established using the same host as for thehost
parameter in thehopsworks.login()
method. If set to False, the online feature store storage connector is used which relies on the private IP. Defaults to True if connection to Hopsworks is established from external environment (e.g AWS Sagemaker or Google Colab), otherwise to False. - init_sql_client
bool | None
: boolean, optional. By default the SQL client is initialised if no client is specified, to match legacy behaviour. If set to True, this ensures the online store SQL client is initialised; otherwise, if init_rest_client is set to True, initialising the SQL client is skipped. - init_rest_client
bool
: boolean, defaults to False. By default the REST client is not initialised. If set to True, this ensures the online store REST client is initialised. Pass additional configuration options via the config_rest_client parameter (see the sketch after this list). Set reset_rest_client to True to reset the REST client. - default_client
Literal['sql', 'rest'] | None
: string, optional. Which client to default to if both are initialised. Defaults to None. - options
Dict[str, Any] | None
: Additional options as key/value pairs for configuring online serving engine.- key: kwargs of SqlAlchemy engine creation (See: https://docs.sqlalchemy.org/en/20/core/engines.html#sqlalchemy.create_engine). For example:
{"pool_size": 10}
- reset_rest_client
bool
: boolean, defaults to False. If set to True, the rest client will be reset and reinitialised with provided configuration. - config_rest_client
Dict[str, Any] | None
: dictionary, optional. Additional configuration options for the rest client. If the client is already initialised, this will be ignored. Options include:host
: string, optional. The host of the online store. Dynamically set if not provided.port
: int, optional. The port of the online store. Defaults to 4406.verify_certs
: boolean, optional. Verify the certificates of the online store server. Defaults to True.api_key
: string, optional. The API key to authenticate with the online store. The api key must be provided if initialising the rest client in an internal environment.timeout
: int, optional. The timeout for the rest client in seconds. Defaults to 2.use_ssl
: boolean, optional. Use SSL to connect to the online store. Defaults to True.
- feature_logger
hsfs.feature_logger.FeatureLogger | None
: Custom feature logger whichfeature_view.log()
uses to log feature vectors. If provided, feature vectors will not be inserted to logging feature group automatically whenfeature_view.log()
is called.
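A hedged sketch of initialising serving with the online store REST client in addition to the SQL client; the host and API key values are placeholders, not real configuration:
# initialise both the SQL and REST online store clients
feature_view.init_serving(
    training_dataset_version=1,
    init_sql_client=True,
    init_rest_client=True,
    config_rest_client={
        "host": "my-hopsworks-host",  # placeholder
        "api_key": "my-api-key",      # placeholder
        "timeout": 5,
    },
    default_client="rest",
)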
json#
FeatureView.json()
Convert class into its json serialized form.
Returns
str
: Json serialized object.
log#
FeatureView.log(
untransformed_features=None,
predictions=None,
transformed_features=None,
write_options=None,
training_dataset_version=None,
model=None,
)
Log features and optionally predictions for the current feature view. The logged features are written periodically to the offline store. If you need them to be available immediately, call materialize_log
.
Note: If features is a pyspark.DataFrame
, predictions need to be provided as columns in the dataframe, and values in predictions
will be ignored.
Arguments
- untransformed_features
pandas.DataFrame | list[list] | numpy.ndarray | hsfs.feature_view.pyspark.sql.DataFrame | None
: The untransformed features to be logged. Can be a pandas DataFrame, a list of lists, or a numpy ndarray. - predictions: The predictions to be logged. Can be a pandas DataFrame, a list of lists, or a numpy ndarray. Defaults to None.
- transformed_features
pandas.DataFrame | list[list] | numpy.ndarray | hsfs.feature_view.pyspark.sql.DataFrame | None
: The transformed features to be logged. Can be a pandas DataFrame, a list of lists, or a numpy ndarray. - write_options
Dict[str, Any] | None
: Options for writing the log. Defaults to None. - training_dataset_version
int | None
: Version of the training dataset. If a training dataset version is defined in init_serving
or init_batch_scoring
, or the model has a training dataset version, or a training dataset version was cached, then that version is used; otherwise defaults to None. - model
hsml.model.Model | None
:hsml.model.Model
Hopsworks model associated with the log. Defaults to None.
Returns
list[Job]
Job information for feature insertion if the Python engine is used.
Example
# log untransformed features
feature_view.log(features)
# log features and predictions
feature_view.log(features, prediction)
# log both untransformed and transformed features
feature_view.log(
untransformed_features=features,
transformed_features=transformed_features
)
Raises
hsfs.client.exceptions.RestAPIError
in case the backend fails to log features.
materialize_log#
FeatureView.materialize_log(wait=False, transformed=None)
Materialize the log for the current feature view.
Arguments
- wait
bool
: Whether to wait for the materialization to complete. Defaults to False. - transformed
bool | None
: Whether to materialize transformed or untransformed logs. Defaults to None, in which case the returned list contains a job for materialization of transformed features and then a job for untransformed features. Otherwise the list contains only transformed jobs if transformed is True and untransformed jobs if it is False.
Example
# materialize log
materialization_result = feature_view.materialize_log(wait=True)
Returns
List[Job
] Job information for the materialization jobs of transformed and untransformed features.
Raises
hsfs.client.exceptions.RestAPIError
in case the backend fails to materialize the log.
pause_logging#
FeatureView.pause_logging()
Pause scheduled materialization job for the current feature view.
Example
# pause logging
feature_view.pause_logging()
Raises
hsfs.client.exceptions.RestAPIError
in case the backend fails to pause feature logging.
purge_all_training_data#
FeatureView.purge_all_training_data()
Delete all training datasets (data only).
Example
# get feature store instance
fs = ...
# get feature view instance
feature_view = fs.get_feature_view(...)
# purge all training data
feature_view.purge_all_training_data()
Raises
hsfs.client.exceptions.RestAPIError
in case the backend fails to delete the training datasets.
purge_training_data#
FeatureView.purge_training_data(training_dataset_version)
Delete a training dataset (data only).
Example
# get feature store instance
fs = ...
# get feature view instance
feature_view = fs.get_feature_view(...)
# purge training data
feature_view.purge_training_data(training_dataset_version=1)
Arguments
- training_dataset_version
int
: Version of the training dataset to be removed.
Raises
hsfs.client.exceptions.RestAPIError
in case the backend fails to delete the training dataset.
read_log#
FeatureView.read_log(
start_time=None,
end_time=None,
filter=None,
transformed=False,
training_dataset_version=None,
model=None,
)
Read the log entries for the current feature view. Optionally, filters can be applied on start/end time, training dataset version, HSML model, and a custom filter.
Arguments
- start_time
str | int | datetime.datetime | datetime.date | None
: Start time for the log entries. Can be a string, integer, datetime, or date. Defaults to None. - end_time
str | int | datetime.datetime | datetime.date | None
: End time for the log entries. Can be a string, integer, datetime, or date. Defaults to None. - filter
hsfs.constructor.filter.Filter | hsfs.constructor.filter.Logic | None
: Filter to apply on the log entries. Can be a Filter or Logic object. Defaults to None. - transformed
bool | None
: Whether to include transformed logs. Defaults to False. - training_dataset_version
int | None
: Version of the training dataset. Defaults to None. - model
hsml.model.Model | None
: HSML model associated with the log. Defaults to None.
Example
# read all log entries
log_entries = feature_view.read_log()
# read log entries within time ranges
log_entries = feature_view.read_log(start_time="2022-01-01", end_time="2022-01-31")
# read log entries of a specific training dataset version
log_entries = feature_view.read_log(training_dataset_version=1)
# read log entries of a specific hopsworks model
log_entries = feature_view.read_log(model=Model(1, "dummy", version=1))
# read log entries by applying filter on features of feature group `fg` in the feature view
log_entries = feature_view.read_log(filter=fg.feature1 > 10)
Returns
DataFrame
: A pyspark.DataFrame, pandas.DataFrame, or polars.DataFrame containing the logged feature data.
Raises
hsfs.client.exceptions.RestAPIError
in case the backend fails to read the log entries.
recreate_training_dataset#
FeatureView.recreate_training_dataset(
training_dataset_version, statistics_config=None, write_options=None, spine=None
)
Recreate a training dataset.
Example
# get feature store instance
fs = ...
# get feature view instance
feature_view = fs.get_feature_view(...)
# recreate a training dataset that has been deleted
feature_view.recreate_training_dataset(training_dataset_version=1)
Info
If materialised training data has been deleted, use recreate_training_dataset()
to recreate the training data.
Spine Groups/Dataframes
Spine groups and dataframes are currently only supported with the Spark engine and Spark dataframes.
Arguments
- training_dataset_version
int
: training dataset version - statistics_config
hsfs.StatisticsConfig | bool | dict | None
: A configuration object, or a dictionary with keys "enabled
" to generally enable descriptive statistics computation for this feature group,"correlations
" to turn on feature correlation computation and"histograms"
to compute feature value frequencies. The values should be booleans indicating the setting. To fully turn off statistics computation passstatistics_config=False
. Defaults toNone
and will compute only descriptive statistics. - write_options
Dict[Any, Any] | None
: Additional options as key/value pairs to pass to the execution engine. For spark engine: Dictionary of read options for Spark. When using thepython
engine, write_options can contain the following entries:- key
use_spark
and valueTrue
to materialize training dataset with Spark instead of Hopsworks Feature Query Service. - key
spark
and value an object of type hsfs.core.job_configuration.JobConfiguration to configure the Hopsworks Job used to compute the training dataset. - key
wait_for_job
and valueTrue
orFalse
to configure whether or not the save call should return only after the Hopsworks Job has finished. By default it waits. Defaults to{}
.
- spine
pandas.DataFrame | hsfs.feature_view.pyspark.sql.DataFrame | hsfs.feature_view.pyspark.RDD | numpy.ndarray | List[List[Any]] | hsfs.feature_view.SplineGroup | None
: Spine dataframe with primary key, event time and label column to use for point in time join when fetching features. Defaults toNone
and is only required when feature view was created with spine group in the feature query. It is possible to directly pass a spine group instead of a dataframe to overwrite the left side of the feature join, however, the same features as in the original feature group that is being replaced need to be available in the spine group.
Returns
Job
: When using the python
engine, it returns the Hopsworks Job that was launched to create the training dataset.
resume_logging#
FeatureView.resume_logging()
Resume scheduled materialization job for the current feature view.
Example
# resume logging
feature_view.resume_logging()
Raises
hsfs.client.exceptions.RestAPIError
in case the backend fails to resume feature logging.
to_dict#
FeatureView.to_dict()
Convert class into a dictionary.
Returns
Dict
: Dictionary that contains all data required to json serialize the object.
train_test_split#
FeatureView.train_test_split(
test_size=None,
train_start="",
train_end="",
test_start="",
test_end="",
description="",
extra_filter=None,
statistics_config=None,
read_options=None,
spine=None,
primary_key=False,
event_time=False,
training_helper_columns=False,
dataframe_type="default",
**kwargs
)
Create the metadata for a training dataset and get the corresponding training data from the offline feature store. This returns the training data in memory and does not materialise data in storage. The training data is split into train and test set at random or according to time ranges. The training data can be recreated by calling feature_view.get_train_test_split
with the metadata created.
Create random train/test splits
# get feature store instance
fs = ...
# get feature view instance
feature_view = fs.get_feature_view(...)
# get training data
X_train, X_test, y_train, y_test = feature_view.train_test_split(
test_size=0.2
)
Create time-series train/test splits
# get feature store instance
fs = ...
# get feature view instance
feature_view = fs.get_feature_view(...)
# set up dates
train_start = "2022-05-01 00:00:00"
train_end = "2022-06-04 23:59:59"
test_start = "2022-07-01 00:00:00"
test_end = "2022-08-04 23:59:59"
# you can also pass dates as datetime objects
# get training data
X_train, X_test, y_train, y_test = feature_view.train_test_split(
train_start=train_start,
train_end=train_end,
test_start=test_start,
test_end=test_end,
description='Description of a dataset'
)
Spine Groups/Dataframes
Spine groups and dataframes are currently only supported with the Spark engine and Spark dataframes.
Arguments
- test_size
float | None
: size of test set. Should be between 0 and 1. - train_start
int | str | datetime.datetime | datetime.date | None
: Start event time for the train split query, inclusive. Strings should be formatted in one of the following formats%Y-%m-%d
,%Y-%m-%d %H
,%Y-%m-%d %H:%M
,%Y-%m-%d %H:%M:%S
, or%Y-%m-%d %H:%M:%S.%f
. - train_end
int | str | datetime.datetime | datetime.date | None
: End event time for the train split query, exclusive. Strings should be formatted in one of the following formats%Y-%m-%d
,%Y-%m-%d %H
,%Y-%m-%d %H:%M
,%Y-%m-%d %H:%M:%S
, or%Y-%m-%d %H:%M:%S.%f
. Int, i.e Unix Epoch should be in seconds. - test_start
int | str | datetime.datetime | datetime.date | None
: Start event time for the test split query, inclusive. Strings should be formatted in one of the following formats%Y-%m-%d
,%Y-%m-%d %H
,%Y-%m-%d %H:%M
,%Y-%m-%d %H:%M:%S
, or%Y-%m-%d %H:%M:%S.%f
. Int, i.e Unix Epoch should be in seconds. - test_end
int | str | datetime.datetime | datetime.date | None
: End event time for the test split query, exclusive. Strings should be formatted in one of the following formats%Y-%m-%d
,%Y-%m-%d %H
,%Y-%m-%d %H:%M
,%Y-%m-%d %H:%M:%S
, or%Y-%m-%d %H:%M:%S.%f
. Int, i.e Unix Epoch should be in seconds. - description
str | None
: A string describing the contents of the training dataset to improve discoverability for Data Scientists, defaults to empty string""
. - extra_filter
hsfs.constructor.filter.Filter | hsfs.constructor.filter.Logic | None
: Additional filters to be attached to the training dataset. The filters will be also applied inget_batch_data
. - statistics_config
hsfs.StatisticsConfig | bool | dict | None
: A configuration object, or a dictionary with keys "enabled
" to generally enable descriptive statistics computation for this feature group,"correlations
" to turn on feature correlation computation and"histograms"
to compute feature value frequencies. The values should be booleans indicating the setting. To fully turn off statistics computation passstatistics_config=False
. Defaults toNone
and will compute only descriptive statistics. - read_options
Dict[Any, Any] | None
: Additional options as key/value pairs to pass to the execution engine. For spark engine: Dictionary of read options for Spark. When using thepython
engine, read_options can contain the following entries:- key
"arrow_flight_config"
to pass a dictionary of arrow flight configurations. For example:{"arrow_flight_config": {"timeout": 900}}
- key
spark
and value an object of type hsfs.core.job_configuration.JobConfiguration to configure the Hopsworks Job used to compute the training dataset. Defaults to{}
.
- spine
pandas.DataFrame | hsfs.feature_view.pyspark.sql.DataFrame | hsfs.feature_view.pyspark.RDD | numpy.ndarray | List[List[Any]] | hsfs.feature_view.SplineGroup | None
: Spine dataframe with primary key, event time and label column to use for point in time join when fetching features. Defaults toNone
and is only required when feature view was created with spine group in the feature query. It is possible to directly pass a spine group instead of a dataframe to overwrite the left side of the feature join, however, the same features as in the original feature group that is being replaced need to be available in the spine group. - primary_key
bool
: whether to include primary key features or not. Defaults toFalse
, no primary key features. - event_time
bool
: whether to include event time feature or not. Defaults toFalse
, no event time feature. - training_helper_columns
bool
: whether to include training helper columns or not. Training helper columns are a list of feature names in the feature view, defined during its creation, that are not the part of the model schema itself but can be used during training as a helper for extra information. If training helper columns were not defined in the feature view thentraining_helper_columns=True
will not have any effect. Defaults toFalse
, no training helper columns. - dataframe_type
str | None
: str, optional. The type of the returned dataframe. Possible values are"default"
,"spark"
,"pandas"
,"polars"
,"numpy"
or"python"
. Defaults to "default", which maps to Spark dataframe for the Spark Engine and Pandas dataframe for the Python engine.
Returns
(X_train, X_test, y_train, y_test): Tuple of dataframes of features and labels
train_validation_test_split#
FeatureView.train_validation_test_split(
validation_size=None,
test_size=None,
train_start="",
train_end="",
validation_start="",
validation_end="",
test_start="",
test_end="",
description="",
extra_filter=None,
statistics_config=None,
read_options=None,
spine=None,
primary_key=False,
event_time=False,
training_helper_columns=False,
dataframe_type="default",
**kwargs
)
Create the metadata for a training dataset and get the corresponding training data from the offline feature store. This returns the training data in memory and does not materialise data in storage. The training data is split into train, validation, and test set at random or according to time ranges. The training data can be recreated by calling feature_view.get_train_validation_test_split
with the metadata created.
Example
# get feature store instance
fs = ...
# get feature view instance
feature_view = fs.get_feature_view(...)
# get training data
X_train, X_val, X_test, y_train, y_val, y_test = feature_view.train_validation_test_split(
validation_size=0.3,
test_size=0.2
)
Time Series split
# get feature store instance
fs = ...
# get feature view instance
feature_view = fs.get_feature_view(...)
# set up dates
start_time_train = '2017-01-01 00:00:01'
end_time_train = '2018-02-01 23:59:59'
start_time_val = '2018-02-02 23:59:59'
end_time_val = '2019-02-01 23:59:59'
start_time_test = '2019-02-02 23:59:59'
end_time_test = '2020-02-01 23:59:59'
# you can also pass dates as datetime objects
# get training data
X_train, X_val, X_test, y_train, y_val, y_test = feature_view.train_validation_test_split(
train_start=start_time_train,
train_end=end_time_train,
validation_start=start_time_val,
validation_end=end_time_val,
test_start=start_time_test,
test_end=end_time_test
)
Spine Groups/Dataframes
Spine groups and dataframes are currently only supported with the Spark engine and Spark dataframes.
Arguments
- validation_size
float | None
: size of validation set. Should be between 0 and 1. - test_size
float | None
: size of test set. Should be between 0 and 1. - train_start
int | str | datetime.datetime | datetime.date | None
: Start event time for the train split query, inclusive. Strings should be formatted in one of the following formats%Y-%m-%d
,%Y-%m-%d %H
,%Y-%m-%d %H:%M
,%Y-%m-%d %H:%M:%S
, or%Y-%m-%d %H:%M:%S.%f
. Int, i.e Unix Epoch should be in seconds. - train_end
int | str | datetime.datetime | datetime.date | None
: End event time for the train split query, exclusive. Strings should be formatted in one of the following formats%Y-%m-%d
,%Y-%m-%d %H
,%Y-%m-%d %H:%M
,%Y-%m-%d %H:%M:%S
, or%Y-%m-%d %H:%M:%S.%f
. Int, i.e Unix Epoch should be in seconds. - validation_start
int | str | datetime.datetime | datetime.date | None
: Start event time for the validation split query, inclusive. Strings should be formatted in one of the following formats%Y-%m-%d
,%Y-%m-%d %H
,%Y-%m-%d %H:%M
,%Y-%m-%d %H:%M:%S
, or%Y-%m-%d %H:%M:%S.%f
. Int, i.e Unix Epoch should be in seconds. - validation_end
int | str | datetime.datetime | datetime.date | None
: End event time for the validation split query, exclusive. Strings should be formatted in one of the following formats%Y-%m-%d
,%Y-%m-%d %H
,%Y-%m-%d %H:%M
,%Y-%m-%d %H:%M:%S
, or%Y-%m-%d %H:%M:%S.%f
. Int, i.e Unix Epoch should be in seconds. - test_start
int | str | datetime.datetime | datetime.date | None
: Start event time for the test split query, inclusive. Strings should be formatted in one of the following formats%Y-%m-%d
,%Y-%m-%d %H
,%Y-%m-%d %H:%M
,%Y-%m-%d %H:%M:%S
, or%Y-%m-%d %H:%M:%S.%f
. Int, i.e Unix Epoch should be in seconds. - test_end
int | str | datetime.datetime | datetime.date | None
: End event time for the test split query, exclusive. Strings should be formatted in one of the following formats%Y-%m-%d
,%Y-%m-%d %H
,%Y-%m-%d %H:%M
,%Y-%m-%d %H:%M:%S
, or%Y-%m-%d %H:%M:%S.%f
. Int, i.e Unix Epoch should be in seconds. - description
str | None
: A string describing the contents of the training dataset to improve discoverability for Data Scientists, defaults to empty string""
. - extra_filter
hsfs.constructor.filter.Filter | hsfs.constructor.filter.Logic | None
: Additional filters to be attached to the training dataset. The filters will be also applied inget_batch_data
. - statistics_config
hsfs.StatisticsConfig | bool | dict | None
: A configuration object, or a dictionary with keys "enabled
" to generally enable descriptive statistics computation for this feature group,"correlations
" to turn on feature correlation computation and"histograms"
to compute feature value frequencies. The values should be booleans indicating the setting. To fully turn off statistics computation passstatistics_config=False
. Defaults toNone
and will compute only descriptive statistics. - read_options
Dict[Any, Any] | None
: Additional options as key/value pairs to pass to the execution engine. For spark engine: Dictionary of read options for Spark. When using thepython
engine, read_options can contain the following entries:- key
"arrow_flight_config"
to pass a dictionary of arrow flight configurations. For example:{"arrow_flight_config": {"timeout": 900}}
- key
spark
and value an object of type hsfs.core.job_configuration.JobConfiguration to configure the Hopsworks Job used to compute the training dataset. Defaults to{}
.
- spine
pandas.DataFrame | hsfs.feature_view.pyspark.sql.DataFrame | hsfs.feature_view.pyspark.RDD | numpy.ndarray | List[List[Any]] | hsfs.feature_view.SplineGroup | None
: Spine dataframe with primary key, event time and label column to use for point in time join when fetching features. Defaults toNone
and is only required when feature view was created with spine group in the feature query. It is possible to directly pass a spine group instead of a dataframe to overwrite the left side of the feature join, however, the same features as in the original feature group that is being replaced need to be available in the spine group. - primary_key
bool
: whether to include primary key features or not. Defaults toFalse
, no primary key features. - event_time
bool
: whether to include event time feature or not. Defaults toFalse
, no event time feature. - training_helper_columns
bool
: whether to include training helper columns or not. Training helper columns are a list of feature names in the feature view, defined during its creation, that are not the part of the model schema itself but can be used during training as a helper for extra information. If training helper columns were not defined in the feature view thentraining_helper_columns=True
will not have any effect. Defaults toFalse
, no training helper columns. - dataframe_type
str | None
: str, optional. The type of the returned dataframe. Possible values are"default"
,"spark"
,"pandas"
,"polars"
,"numpy"
or"python"
. Defaults to "default", which maps to Spark dataframe for the Spark Engine and Pandas dataframe for the Python engine.
Returns
(X_train, X_val, X_test, y_train, y_val, y_test): Tuple of dataframes of features and labels
training_data#
FeatureView.training_data(
start_time=None,
end_time=None,
description="",
extra_filter=None,
statistics_config=None,
read_options=None,
spine=None,
primary_key=False,
event_time=False,
training_helper_columns=False,
dataframe_type="default",
**kwargs
)
Create the metadata for a training dataset and get the corresponding training data from the offline feature store. This returns the training data in memory and does not materialise data in storage. The training data can be recreated by calling feature_view.get_training_data
with the metadata created.
Create random splits
# get feature store instance
fs = ...
# get feature view instance
feature_view = fs.get_feature_view(...)
# get training data
features_df, labels_df = feature_view.training_data(
description='Description of a dataset',
)
Create time-series based splits
# get feature store instance
fs = ...
# get feature view instance
feature_view = fs.get_feature_view(...)
# set up a date
start_time = "2022-05-01 00:00:00"
end_time = "2022-06-04 23:59:59"
# you can also pass dates as datetime objects
# get training data
features_df, labels_df = feature_view.training_data(
start_time=start_time,
end_time=end_time,
description='Description of a dataset'
)
Spine Groups/Dataframes
Spine groups and dataframes are currently only supported with the Spark engine and Spark dataframes.
Arguments
- start_time
int | str | datetime.datetime | datetime.date | None
: Start event time for the training dataset query, inclusive. Strings should be formatted in one of the following formats%Y-%m-%d
,%Y-%m-%d %H
,%Y-%m-%d %H:%M
,%Y-%m-%d %H:%M:%S
, or%Y-%m-%d %H:%M:%S.%f
. Int, i.e Unix Epoch should be in seconds. - end_time
int | str | datetime.datetime | datetime.date | None
: End event time for the training dataset query, exclusive. Strings should be formatted in one of the following formats%Y-%m-%d
,%Y-%m-%d %H
,%Y-%m-%d %H:%M
,%Y-%m-%d %H:%M:%S
, or%Y-%m-%d %H:%M:%S.%f
. Int, i.e Unix Epoch should be in seconds. - description
str | None
: A string describing the contents of the training dataset to improve discoverability for Data Scientists, defaults to empty string""
. - extra_filter
hsfs.constructor.filter.Filter | hsfs.constructor.filter.Logic | None
: Additional filters to be attached to the training dataset. The filters will be also applied inget_batch_data
. - statistics_config
hsfs.StatisticsConfig | bool | dict | None
: A configuration object, or a dictionary with keys "enabled
" to generally enable descriptive statistics computation for this feature group,"correlations
" to turn on feature correlation computation and"histograms"
to compute feature value frequencies. The values should be booleans indicating the setting. To fully turn off statistics computation passstatistics_config=False
. Defaults toNone
and will compute only descriptive statistics. - read_options
Dict[Any, Any] | None
: Additional options as key/value pairs to pass to the execution engine. For spark engine: Dictionary of read options for Spark. When using thepython
engine, read_options can contain the following entries:- key
"arrow_flight_config"
to pass a dictionary of arrow flight configurations. For example:{"arrow_flight_config": {"timeout": 900}}
. - key
spark
and value an object of type hsfs.core.job_configuration.JobConfiguration to configure the Hopsworks Job used to compute the training dataset. Defaults to{}
.
- spine
pandas.DataFrame | hsfs.feature_view.pyspark.sql.DataFrame | hsfs.feature_view.pyspark.RDD | numpy.ndarray | List[List[Any]] | hsfs.feature_view.SplineGroup | None
: Spine dataframe with primary key, event time and label column to use for point in time join when fetching features. Defaults toNone
and is only required when feature view was created with spine group in the feature query. It is possible to directly pass a spine group instead of a dataframe to overwrite the left side of the feature join, however, the same features as in the original feature group that is being replaced need to be available in the spine group. - primary_key
bool
: whether to include primary key features or not. Defaults toFalse
, no primary key features. - event_time
bool
: whether to include event time feature or not. Defaults toFalse
, no event time feature. - training_helper_columns
bool
: whether to include training helper columns or not. Training helper columns are a list of feature names in the feature view, defined during its creation, that are not the part of the model schema itself but can be used during training as a helper for extra information. If training helper columns were not defined in the feature view thentraining_helper_columns=True
will not have any effect. Defaults toFalse
, no training helper columns. - dataframe_type
str | None
: str, optional. The type of the returned dataframe. Possible values are"default"
,"spark"
,"pandas"
,"polars"
,"numpy"
or"python"
. Defaults to "default", which maps to Spark dataframe for the Spark Engine and Pandas dataframe for the Python engine.
Returns
(X, y): Tuple of dataframes of features and labels. If there are no labels, y is None
.
transform#
FeatureView.transform(feature_vector, external=None)
Transform the input feature vector by applying Model-dependent transformations attached to the feature view.
List input must match the schema of the feature view
If features are provided as a list to the transform function, make sure that the input is ordered to match the schema
in the feature view.
Arguments
- feature_vector
List[Any] | List[List[Any]] | pandas.DataFrame | polars.dataframe.frame.DataFrame
:Union[List[Any], List[List[Any]], pd.DataFrame, pl.DataFrame]
. The feature vector to be transformed. - external
bool | None
: boolean, optional. If set to True, the connection to the online feature store is established using the same host as for thehost
parameter in thehopsworks.login()
method. If set to False, the online feature store storage connector is used which relies on the private IP. Defaults to True if connection to Hopsworks is established from external environment (e.g AWS Sagemaker or Google Colab), otherwise to False.
Returns
Union[List[Any], List[List[Any]], pd.DataFrame, pl.DataFrame]
: The transformed feature vector obtained by applying Model-Dependent Transformations.
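A minimal, hedged sketch: fetch an untransformed vector with transform=False and then apply the feature view's model-dependent transformations separately (assumes serving has been initialised):
# get an untransformed feature vector from the online store
raw_vector = feature_view.get_feature_vector(
    entry={"pk1": 1, "pk2": 2},
    transform=False,
)
# apply the model-dependent transformations attached to the feature view
transformed_vector = feature_view.transform(raw_vector)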
update#
FeatureView.update()
Update the description of the feature view.
Update the feature view with a new description.
# get feature store instance
fs = ...
# get feature view instance
feature_view = fs.get_feature_view(...)
feature_view.description = "new description"
feature_view.update()
# Description is updated in the metadata. Below should return "new description".
fs.get_feature_view("feature_view_name", 1).description
Returns
FeatureView
Updated feature view.
Raises
hsfs.client.exceptions.RestAPIError
.
update_from_response_json#
FeatureView.update_from_response_json(json_dict)
Function that updates the class object from its json serialization.
Arguments
- json_dict
Dict[str, Any]
:Dict[str, Any]
. Json serialized dictionary for the class.
Returns
FeatureView
: Json deserialized class object.
update_last_accessed_training_dataset#
FeatureView.update_last_accessed_training_dataset(version)