How To Run A Python Job#
Introduction#
All members of a project in Hopsworks can launch the following types of applications through a project's Jobs service:
- Python
- Apache Spark
Launching a job of any type follows a very similar process; what mostly differs between job types is the set of configuration parameters each one accepts. Hopsworks supports scheduling jobs to run on a regular basis, e.g. backfilling a Feature Group by running your feature engineering pipeline nightly. Scheduling can be done both through the UI and the Python API; see the Scheduling guide for details.
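For example, once a Job object has been created (as in the Code section below), a nightly schedule can be set through the API. A minimal sketch, assuming the schedule method of the Job object as described in the Scheduling guide; the cron expression is just an example:

```python
# Schedule the job to run every day at midnight (Quartz cron syntax)
job.schedule(cron_expression="0 0 0 * * ?")
```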
UI#
Step 1: Jobs overview#
The image below shows the Jobs overview page in Hopsworks, which is accessed by clicking Jobs in the sidebar.
Step 2: Create new job dialog#
Click New Job and the following dialog will appear.
Step 3: Set the job type#
By default, the dialog will create a Spark job. To instead configure a Python job, select PYTHON.
Step 4: Set the script#
The next step is to select the Python script to run. You can either select From project, if the file was previously uploaded to Hopsworks, or Upload new file, which lets you select a file from your local filesystem as demonstrated below. By default, the job name is the same as the file name, but you can customize it as shown.
Step 5 (optional): Set the Python script arguments#
In the job settings, you can specify arguments for your Python script. Remember to handle the arguments inside your Python script.
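For example, a script can read its arguments with the standard argparse module. A minimal sketch; the --start-date argument is invented for illustration:

```python
import argparse

# Parse the arguments passed to the job, e.g. "--start-date 2024-01-01"
parser = argparse.ArgumentParser()
parser.add_argument("--start-date", required=True)
args = parser.parse_args()

print(f"Running feature pipeline from {args.start_date}")
```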
Step 6 (optional): Additional configuration#
It is also possible to set the following configuration settings for a PYTHON job.
- Environment: The Python environment to use
- Container memory: The amount of memory in MB to be allocated to the Python script
- Container cores: The number of cores to be allocated for the Python script
- Additional files: List of files that will be locally accessible by the application
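These settings correspond to keys in the job configuration dictionary used in the Code section below. A quick way to discover the exact key names, which may vary by version, is to print the default configuration; a minimal sketch:

```python
import hopsworks

project = hopsworks.login()
jobs_api = project.get_jobs_api()

# Print the default PYTHON job configuration to see which keys
# (environment, memory, cores, additional files) can be overridden
print(jobs_api.get_configuration("PYTHON"))
```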
Step 7: Execute the job#
Now click the Run button to start the execution of the job. You will be redirected to the Executions page, where you can see the list of all executions.
Once the execution is finished, click on Logs to see the logs for the execution.
Code#
Step 1: Upload the Python script#
This snippet assumes the Python script is in the current working directory and named script.py. It will upload the script to the Resources dataset in your project.
```python
import hopsworks

# Connect to Hopsworks and get a handle to the dataset API
project = hopsworks.login()
dataset_api = project.get_dataset_api()

# Upload the script; returns the path of the uploaded file
uploaded_file_path = dataset_api.upload("script.py", "Resources")
```
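If script.py was uploaded previously, you can pass overwrite=True to dataset_api.upload() to replace the existing file.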
Step 2: Create Python job#
In this snippet we get the JobsApi object, fetch the default job configuration for a PYTHON job, set the Python script and override the environment to run in, and finally create the Job object.
```python
jobs_api = project.get_jobs_api()

# Get the default configuration for a PYTHON job
py_job_config = jobs_api.get_configuration("PYTHON")

# Set the application file
py_job_config['appPath'] = uploaded_file_path

# Override the python job environment
py_job_config['environmentName'] = "python-feature-pipeline"

# Create the job
job = jobs_api.create_job("py_job", py_job_config)
```
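If the job already exists, e.g. in a pipeline that creates it once and then only re-runs it, it can be retrieved by name instead:

```python
# Retrieve a previously created job by name
job = jobs_api.get_job("py_job")
```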
Step 3: Execute the job#
In this snippet we execute the job synchronously, that is, we wait until it reaches a terminal state, and then download and print the logs.
```python
# Run the job and block until it reaches a terminal state
execution = job.run(await_termination=True)

# Download the stdout and stderr logs and print them
out_path, err_path = execution.download_logs()

with open(out_path, "r") as f_out:
    print(f_out.read())

with open(err_path, "r") as f_err:
    print(f_err.read())
```
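To inspect earlier runs programmatically, the Job object can also list its executions. A minimal sketch; the id and state attributes are assumptions based on the Execution API and may differ by version:

```python
# List all executions of the job together with their states
for execution in job.get_executions():
    print(execution.id, execution.state)
```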
API Reference#
Conclusion#
In this guide you learned how to create and run a Python job.