13 Jul 2018 | Richard Liaw*, Eric Liang*, Robert Nishihara, Philipp Moritz, Joseph E. Gonzalez, Ion Stoica
Tune is a research platform for distributed model selection and training. It provides a narrow-waist interface between training scripts and search algorithms: a single unified API through which a wide range of hyperparameter search algorithms can drive user training code. This separation makes it straightforward to scale experiments to large clusters and simplifies the implementation of new search algorithms. Tune is built on the Ray distributed computing framework, which supplies the underlying distributed execution and resource management.
Tune exposes two interfaces: a user API, through which model developers connect their training code to the platform, and a scheduler API, through which researchers implement and experiment with model search algorithms. The user API is based on cooperative control: with minimal changes, existing training code yields control to Tune's trial schedulers at well-defined points, as sketched below. The scheduler API provides a flexible interface for trial scheduling, supporting a variety of hyperparameter tuning algorithms, including the Median Stopping Rule, Bayesian Optimization, HyperBand, and Population-Based Training.
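To make the cooperative-control model concrete, here is a minimal sketch of the user side, in the style of the 2018-era function-based API: the training loop receives a `config` of sampled hyperparameters and a Tune-injected `reporter` callback, and reporting intermediate metrics is what gives trial schedulers their control points. The training logic itself is a toy stand-in, and exact field names and signatures may differ across Tune versions.

```python
import random

def train_dummy(config, reporter):
    # Toy stand-in for a real training loop: accuracy climbs toward a
    # ceiling determined by the sampled learning rate.
    ceiling = 1.0 - abs(config["lr"] - 0.1)
    accuracy = 0.0
    for step in range(100):
        accuracy += (ceiling - accuracy) * 0.1 + random.uniform(-0.01, 0.01)
        # The only Tune-specific change to user code: hand intermediate
        # metrics back to the platform. A trial scheduler can use these
        # results to decide whether to continue, pause, or stop the trial.
        reporter(timesteps_total=step, mean_accuracy=accuracy)
```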
Tune supports both sequential and parallel computation, making efficient use of cluster resources, and it accommodates the irregular computation patterns and resource requirements of arbitrary user code and third-party libraries. The platform also provides monitoring and visualization of trial progress and outcomes, and keeps experiments simple to specify and launch.
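Parallelism and per-trial resource requirements are declared in the experiment specification rather than in user code. The following is a hedged sketch using the dict-based `run_experiments` spec of that era; `train_dummy` is the toy trainable from the previous sketch, and key names such as `trial_resources` changed in later Ray versions.

```python
from ray import tune

tune.register_trainable("train_dummy", train_dummy)

tune.run_experiments({
    "dummy_search": {
        "run": "train_dummy",
        # Each trial declares its own resource footprint; Ray schedules
        # trials in parallel wherever the cluster has capacity.
        "trial_resources": {"cpu": 1, "gpu": 0},
        # Stop any trial that reaches this reported accuracy.
        "stop": {"mean_accuracy": 0.95},
        # A small grid over the learning rate: one trial per value.
        "config": {"lr": tune.grid_search([0.001, 0.01, 0.1, 0.5])},
    },
})
```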
Tune is available at http://ray.readthedocs.io/en/latest/tune.html. A variety of model selection algorithms have been implemented on it, including two versions of HyperBand. The platform is designed to be extensible, so that new distributed hyperparameter search algorithms can be added through the scheduler API while remaining easy for end users to incorporate into their model development processes. Future work includes developing new functionality to help with the tuning process, as well as with analyzing and debugging intermediate results.
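To illustrate that extensibility, the sketch below shows roughly what a custom trial scheduler might look like: a simplified median-stopping rule built on the `TrialScheduler` callback interface. Import paths and exact signatures varied across early Ray releases, so treat this as illustrative rather than drop-in code.

```python
import statistics
from ray.tune.schedulers import FIFOScheduler, TrialScheduler  # path varies by version

class SimpleMedianStopping(FIFOScheduler):
    """Illustrative sketch: stop a trial whose reported accuracy falls
    below the median of the best accuracies of all other trials."""

    def __init__(self, metric="mean_accuracy"):
        super().__init__()
        self._metric = metric
        self._best = {}  # trial -> best metric value reported so far

    def on_trial_result(self, trial_runner, trial, result):
        # Called by Tune each time a trial reports intermediate results.
        value = result[self._metric]
        self._best[trial] = max(value, self._best.get(trial, value))
        others = [v for t, v in self._best.items() if t is not trial]
        if others and value < statistics.median(others):
            return TrialScheduler.STOP  # free resources for better trials
        return TrialScheduler.CONTINUE
```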