Training in Tune (tune.Trainable, train.report)#
Training can be done with either a Function API (train.report()) or a Class API (tune.Trainable).
For the sake of example, let’s maximize this objective function:
def objective(x, a, b):
    return a * (x ** 0.5) + b
Function Trainable API#
Use the Function API to define a custom training function that Tune runs in Ray actor processes: each trial is placed into its own Ray actor process, and trials run in parallel.
The config argument in the function is a dictionary that Ray Tune populates automatically with the hyperparameters selected for the trial from the search space.
With the Function API, you can report intermediate metrics by simply calling train.report() within the function.
from ray import train, tune

def trainable(config: dict):
    intermediate_score = 0
    for x in range(20):
        intermediate_score = objective(x, config["a"], config["b"])
        train.report({"score": intermediate_score})  # This sends the score to Tune.

tuner = tune.Tuner(trainable, param_space={"a": 2, "b": 4})
results = tuner.fit()
Tip
Do not use train.report() within a Trainable class.
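The examples on this page pass fixed values for a and b through param_space. To actually maximize the objective, you can define a search space and tell Tune which metric to optimize. The following is a minimal sketch; the uniform ranges and the number of samples are illustrative assumptions, not part of the original example:
from ray import tune

search_space = {"a": tune.uniform(0, 5), "b": tune.uniform(0, 5)}

tuner = tune.Tuner(
    trainable,
    param_space=search_space,
    tune_config=tune.TuneConfig(metric="score", mode="max", num_samples=20),
)
results = tuner.fit()
print(results.get_best_result().config)  # Best (a, b) found by the search.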
In the previous example, we reported on every step, but this metric reporting frequency is configurable. For example, we could also report only a single time at the end with the final score:
from ray import train, tune

def trainable(config: dict):
    final_score = 0
    for x in range(20):
        final_score = objective(x, config["a"], config["b"])

    train.report({"score": final_score})  # This sends the score to Tune.

tuner = tune.Tuner(trainable, param_space={"a": 2, "b": 4})
results = tuner.fit()
It’s also possible to return a final set of metrics to Tune by returning them from your function:
def trainable(config: dict):
    final_score = 0
    for x in range(20):
        final_score = objective(x, config["a"], config["b"])

    return {"score": final_score}  # This sends the score to Tune.
Note that Ray Tune outputs extra values in addition to the user-reported metrics, such as iterations_since_restore. See How to use log metrics in Tune? for an explanation of these values.
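For example, both the user-reported and auto-generated values are available on the result objects after tuning finishes. A small sketch, assuming the results object returned by tuner.fit() in the examples above:
best_result = results.get_best_result(metric="score", mode="max")
print(best_result.metrics["score"])                     # User-reported metric.
print(best_result.metrics["training_iteration"])        # Auto-reported by Tune.
print(best_result.metrics["iterations_since_restore"])  # Also auto-reported.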
See how to configure checkpointing for a function trainable here.
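As a rough illustration of what that looks like, here is a minimal sketch that saves the loop counter with each report and restores it on resume. It assumes the train.get_checkpoint() / train.report(..., checkpoint=...) pattern from recent Ray versions; refer to the linked guide for the authoritative API.
import json
import os
import tempfile

from ray import train

def checkpointed_trainable(config: dict):
    start = 0
    checkpoint = train.get_checkpoint()
    if checkpoint:
        # Resume from the last saved step, if any.
        with checkpoint.as_directory() as checkpoint_dir:
            with open(os.path.join(checkpoint_dir, "state.json")) as f:
                start = json.load(f)["step"] + 1

    for x in range(start, 20):
        score = objective(x, config["a"], config["b"])
        with tempfile.TemporaryDirectory() as tmpdir:
            # Write the trial state, then attach it to the reported metrics.
            with open(os.path.join(tmpdir, "state.json"), "w") as f:
                json.dump({"step": x}, f)
            train.report(
                {"score": score},
                checkpoint=train.Checkpoint.from_directory(tmpdir),
            )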
Class Trainable API#
Caution
Do not use train.report() within a Trainable class.
The Trainable class API requires users to subclass ray.tune.Trainable. Here's a naive example of this API:
from ray import train, tune

class Trainable(tune.Trainable):
    def setup(self, config: dict):
        # config (dict): A dict of hyperparameters
        self.x = 0
        self.a = config["a"]
        self.b = config["b"]

    def step(self):  # This is called iteratively.
        score = objective(self.x, self.a, self.b)
        self.x += 1
        return {"score": score}

tuner = tune.Tuner(
    Trainable,
    run_config=train.RunConfig(
        # Train for 20 steps
        stop={"training_iteration": 20},
        checkpoint_config=train.CheckpointConfig(
            # We haven't implemented checkpointing yet. See below!
            checkpoint_at_end=False
        ),
    ),
    param_space={"a": 2, "b": 4},
)
results = tuner.fit()
When you subclass tune.Trainable, Tune creates a Trainable object on a separate process (using the Ray Actor API).
- setup is invoked once when training starts.
- step is invoked multiple times. Each time, the Trainable object executes one logical iteration of training in the tuning process, which may include one or more iterations of actual training.
- cleanup is invoked when training is finished.
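For instance, cleanup is the natural place to release resources acquired in setup. A minimal sketch (the log file is a made-up resource for illustration):
from ray import tune

class ResourceTrainable(tune.Trainable):
    def setup(self, config: dict):
        # Acquire a resource that must be released when the trial ends.
        self.log_file = open("/tmp/trial_log.txt", "w")

    def step(self):
        self.log_file.write("step\n")
        return {"score": 0.0}

    def cleanup(self):
        # Invoked once when training is finished.
        self.log_file.close()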
The config argument in the setup method is a dictionary that Tune populates automatically with the hyperparameters selected for the trial from the search space.
Tip
As a rule of thumb, the execution time of step should be long enough to avoid overheads (i.e., more than a few seconds), but short enough to report progress periodically (i.e., at most a few minutes).
You'll notice that Ray Tune outputs extra values in addition to the user-reported metrics, such as iterations_since_restore. See How to use log metrics in Tune? for an explanation/glossary of these values.
See how to configure checkpointing for a class trainable here.
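As a quick preview, the class API implements checkpointing by overriding save_checkpoint and load_checkpoint. The sketch below assumes the directory-based signatures used in recent Ray releases; see the linked guide for the authoritative pattern.
import json
import os

from ray import tune

class CheckpointedTrainable(tune.Trainable):
    def setup(self, config: dict):
        self.x = 0
        self.a = config["a"]
        self.b = config["b"]

    def step(self):
        score = objective(self.x, self.a, self.b)
        self.x += 1
        return {"score": score}

    def save_checkpoint(self, checkpoint_dir: str):
        # Persist enough state to resume this trial later.
        with open(os.path.join(checkpoint_dir, "state.json"), "w") as f:
            json.dump({"x": self.x}, f)

    def load_checkpoint(self, checkpoint_dir: str):
        with open(os.path.join(checkpoint_dir, "state.json")) as f:
            self.x = json.load(f)["x"]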
Advanced: Reusing Actors in Tune#
Note
This feature is only for the Trainable Class API.
Your Trainable can often take a long time to start. To avoid this, you can set tune.TuneConfig(reuse_actors=True) (which is passed to the Tuner) to reuse the same Trainable Python process and object for multiple hyperparameter configurations.
This requires you to implement Trainable.reset_config, which is given the new set of hyperparameters. It is up to you to correctly update your trainable's hyperparameters there.
import torch.optim as optim

from ray import tune

# ConvNet and get_data_loaders are assumed helpers defined elsewhere
# (e.g., a small PyTorch model and MNIST data loaders).

class PytorchTrainable(tune.Trainable):
    """Train a Pytorch ConvNet."""

    def setup(self, config):
        self.train_loader, self.test_loader = get_data_loaders()
        self.model = ConvNet()
        self.optimizer = optim.SGD(
            self.model.parameters(),
            lr=config.get("lr", 0.01),
            momentum=config.get("momentum", 0.9),
        )

    def reset_config(self, new_config):
        # Update optimizer hyperparameters in place instead of rebuilding the actor.
        for param_group in self.optimizer.param_groups:
            if "lr" in new_config:
                param_group["lr"] = new_config["lr"]
            if "momentum" in new_config:
                param_group["momentum"] = new_config["momentum"]

        self.model = ConvNet()
        self.config = new_config
        return True
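With the class above, enabling actor reuse is just a matter of setting reuse_actors=True on the TuneConfig passed to the Tuner. The search space below is an illustrative assumption:
tuner = tune.Tuner(
    PytorchTrainable,
    tune_config=tune.TuneConfig(
        num_samples=8,
        reuse_actors=True,  # The next trial reuses the live actor via reset_config.
    ),
    param_space={"lr": tune.uniform(0.001, 0.1), "momentum": 0.9},
)
results = tuner.fit()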
Comparing Tune’s Function API and Class API#
Here are a few key concepts and what they look like for the Function and Class APIs.
Concept | Function API | Class API
---|---|---
Training iteration | Increments on each train.report call | Increments on each Trainable.step call
Report metrics | train.report(metrics) | Return metrics from Trainable.step
Saving a checkpoint | train.report(metrics, checkpoint=...) | Trainable.save_checkpoint
Loading a checkpoint | train.get_checkpoint() | Trainable.load_checkpoint
Accessing config | Passed as an argument to the training function | Passed through Trainable.setup
Advanced Resource Allocation#
Trainables can themselves be distributed. If your trainable function / class creates further Ray actors or tasks that also consume CPU / GPU resources, you will want to add more bundles to the PlacementGroupFactory to reserve extra resource slots.
For example, if a trainable class requires 1 GPU itself, but also launches 4 actors, each using another GPU, then you should use tune.with_resources like this:
from ray import train, tune

tuner = tune.Tuner(
    tune.with_resources(my_trainable, tune.PlacementGroupFactory([
        {"CPU": 1, "GPU": 1},
        {"GPU": 1},
        {"GPU": 1},
        {"GPU": 1},
        {"GPU": 1},
    ])),
    run_config=train.RunConfig(name="my_trainable"),
)
The Trainable also provides the default_resource_request interface to automatically declare the resources per trial based on the given configuration.
It is also possible to specify memory ("memory", in bytes) and custom resource requirements.
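A minimal sketch of overriding the default_resource_request hook, assuming a hypothetical num_workers hyperparameter and illustrative memory numbers:
from ray import tune

class DistributedTrainable(tune.Trainable):
    @classmethod
    def default_resource_request(cls, config):
        # One bundle for the trainable itself, plus one GPU bundle per worker
        # actor it will launch. 2 GiB of memory is reserved for the head bundle.
        return tune.PlacementGroupFactory(
            [{"CPU": 1, "GPU": 1, "memory": 2 * 1024**3}]
            + [{"GPU": 1}] * config.get("num_workers", 4)
        )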
Function API#
For reporting results and checkpoints with the function API, see the Ray Train utilities documentation.
Trainable (Class API)#
Constructor#
Trainable | Abstract class for trainable models, functions, etc.
Trainable Methods to Implement#
setup | Subclasses should override this for custom initialization.
save_checkpoint | Subclasses should override this to implement save_checkpoint().
load_checkpoint | Subclasses should override this to implement restore().
step | Subclasses should override this to implement train().
reset_config | Resets configuration without restarting the trial.
cleanup | Subclasses should override this for any cleanup on stop.
default_resource_request | Provides a static resource requirement for the given configuration.
Tune Trainable Utilities#
Tune Data Ingestion Utilities#
tune.with_parameters | Wrapper for trainables to pass arbitrary large data objects.
Tune Resource Assignment Utilities#
tune.with_resources | Wrapper for trainables to specify resource requests.
tune.PlacementGroupFactory | Wrapper class that creates placement groups for trials.
tune.utils.wait_for_gpu | Checks if a given GPU has freed memory.
Tune Trainable Debugging Utilities#
tune.utils.diagnose_serialization | Utility for detecting why your trainable function isn't serializing.
tune.utils.validate_save_restore | Helper method to check if your Trainable class will resume correctly.
tune.utils.validate_warmstart | Generic validation of a Searcher's warm start functionality.