Models - Nixtla

All the NeuralForecast models are “global” because we train them with all the series from the input pd.DataFrame Y_df; the optimization objective, however, is for now “univariate”, as it does not consider interactions between the output predictions across time series. Like the StatsForecast library, core.NeuralForecast allows you to explore collections of models efficiently and contains functions for convenient wrangling of the input and prediction pd.DataFrames.

First we load the AirPassengers dataset so that you can run all the examples.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

from neuralforecast.tsdataset import TimeSeriesDataset
from neuralforecast.utils import AirPassengersDF as Y_df
from neuralforecast.auto import AutoRNN  # the remaining Auto* classes used below come from the same module

# Split train/test and declare the time series dataset
Y_train_df = Y_df[Y_df.ds <= '1959-12-31']  # 132 monthly observations for training
Y_test_df = Y_df[Y_df.ds > '1959-12-31']    # 12 observations held out for testing
dataset, *_ = TimeSeriesDataset.from_df(Y_train_df)
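
The introduction above mentions that core.NeuralForecast handles collections of models and the wrangling of input and prediction pd.DataFrames. A minimal sketch of that high-level workflow on the same data (the model hyperparameters here are illustrative, not tuned values):

from neuralforecast import NeuralForecast
from neuralforecast.models import NHITS, RNN

# Train two global models on the long-format frame and retrieve a tidy forecast frame.
nf = NeuralForecast(
    models=[NHITS(h=12, input_size=24, max_steps=10),  # illustrative hyperparameters
            RNN(h=12, input_size=24, max_steps=10)],
    freq='M',  # monthly frequency of AirPassengers
)
nf.fit(df=Y_train_df)
forecasts_df = nf.predict()  # DataFrame with the ds timestamps and one forecast column per model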

1. Automatic Forecasting

A. RNN-Based


source

AutoRNN

 AutoRNN (h, loss=MAE(), valid_loss=None, config=None,
          search_alg=<ray.tune.search.basic_variant.BasicVariantGenerator
          object at 0x7fb7dd14aef0>, num_samples=10, refit_with_val=False,
          cpus=4, gpus=0, verbose=False, alias=None, backend='ray',
          callbacks=None)

Class for Automatic Hyperparameter Optimization. It builds on top of ray to give access to a wide variety of hyperparameter optimization tools, ranging from classic grid search to Bayesian optimization and the HyperBand algorithm.

The validation loss to be optimized is defined by the config['loss'] dictionary value; the config also contains the rest of the hyperparameter search space.

It is important to note that the success of this hyperparameter optimization heavily relies on a strong correlation between the validation and test periods.

| Parameter | Type | Default | Details |
|---|---|---|---|
| h | int | | Forecast horizon |
| loss | MAE | MAE() | Instantiated train loss class from losses collection. |
| valid_loss | NoneType | None | Instantiated valid loss class from losses collection. |
| config | NoneType | None | Dictionary with ray.tune defined search space or function that takes an optuna trial and returns a configuration dict. |
| search_alg | BasicVariantGenerator | BasicVariantGenerator() | For ray see https://docs.ray.io/en/latest/tune/api_docs/suggestion.html; for optuna see https://optuna.readthedocs.io/en/stable/reference/samplers/index.html |
| num_samples | int | 10 | Number of hyperparameter optimization steps/samples. |
| refit_with_val | bool | False | Refit of best model should preserve val_size. |
| cpus | int | 4 | Number of cpus to use during optimization. Only used with ray tune. |
| gpus | int | 0 | Number of gpus to use during optimization, default all available. Only used with ray tune. |
| verbose | bool | False | Track progress. |
| alias | NoneType | None | Custom name of the model. |
| backend | str | ray | Backend to use for searching the hyperparameter space, can be either 'ray' or 'optuna'. |
| callbacks | NoneType | None | List of functions to call during the optimization process. Ray reference: https://docs.ray.io/en/latest/tune/tutorials/tune-metrics.html; optuna reference: https://optuna.readthedocs.io/en/stable/tutorial/20_recipes/007_optuna_callback.html |

# Use your own config or AutoRNN.default_config
config = dict(max_steps=2, val_check_steps=1, input_size=-1, encoder_hidden_size=8)
model = AutoRNN(h=12, config=config, num_samples=1, cpus=1)

model.fit(dataset=dataset)
y_hat = model.predict(dataset=dataset)

# Optuna
model = AutoRNN(h=12, config=None, num_samples=1, cpus=1, backend='optuna')
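
As the parameter table above notes, config can be either a ray.tune search space dictionary or, with backend='optuna', a function that maps an optuna trial to a configuration dict. A small sketch of both forms (the sampled ranges below are illustrative, not the library defaults):

from ray import tune

# Ray backend: dictionary whose values are ray.tune sampling primitives.
ray_config = dict(max_steps=2, val_check_steps=1,
                  input_size=tune.choice([-1, 12]),
                  encoder_hidden_size=tune.choice([8, 16]),
                  learning_rate=tune.loguniform(1e-4, 1e-2))
model = AutoRNN(h=12, config=ray_config, num_samples=2, cpus=1)

# Optuna backend: function that draws each value from the trial object.
def optuna_config(trial):
    return dict(max_steps=2, val_check_steps=1,
                input_size=trial.suggest_categorical('input_size', [-1, 12]),
                encoder_hidden_size=trial.suggest_categorical('encoder_hidden_size', [8, 16]),
                learning_rate=trial.suggest_float('learning_rate', 1e-4, 1e-2, log=True))
model = AutoRNN(h=12, config=optuna_config, num_samples=2, backend='optuna')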

source

AutoLSTM

 AutoLSTM (h, loss=MAE(), valid_loss=None, config=None,
           search_alg=<ray.tune.search.basic_variant.BasicVariantGenerator
           object at 0x7fb7dd8313f0>, num_samples=10,
           refit_with_val=False, cpus=4, gpus=0, verbose=False,
           alias=None, backend='ray', callbacks=None)

Class for Automatic Hyperparameter Optimization. It builds on top of ray to give access to a wide variety of hyperparameter optimization tools, ranging from classic grid search to Bayesian optimization and the HyperBand algorithm.

The validation loss to be optimized is defined by the config['loss'] dictionary value; the config also contains the rest of the hyperparameter search space.

It is important to note that the success of this hyperparameter optimization heavily relies on a strong correlation between the validation and test periods.

| Parameter | Type | Default | Details |
|---|---|---|---|
| h | int | | Forecast horizon |
| loss | MAE | MAE() | Instantiated train loss class from losses collection. |
| valid_loss | NoneType | None | Instantiated valid loss class from losses collection. |
| config | NoneType | None | Dictionary with ray.tune defined search space or function that takes an optuna trial and returns a configuration dict. |
| search_alg | BasicVariantGenerator | BasicVariantGenerator() | For ray see https://docs.ray.io/en/latest/tune/api_docs/suggestion.html; for optuna see https://optuna.readthedocs.io/en/stable/reference/samplers/index.html |
| num_samples | int | 10 | Number of hyperparameter optimization steps/samples. |
| refit_with_val | bool | False | Refit of best model should preserve val_size. |
| cpus | int | 4 | Number of cpus to use during optimization. Only used with ray tune. |
| gpus | int | 0 | Number of gpus to use during optimization, default all available. Only used with ray tune. |
| verbose | bool | False | Track progress. |
| alias | NoneType | None | Custom name of the model. |
| backend | str | ray | Backend to use for searching the hyperparameter space, can be either 'ray' or 'optuna'. |
| callbacks | NoneType | None | List of functions to call during the optimization process. Ray reference: https://docs.ray.io/en/latest/tune/tutorials/tune-metrics.html; optuna reference: https://optuna.readthedocs.io/en/stable/tutorial/20_recipes/007_optuna_callback.html |

# Use your own config or AutoLSTM.default_config
config = dict(max_steps=2, val_check_steps=1, input_size=-1, encoder_hidden_size=8)
model = AutoLSTM(h=12, config=config, num_samples=1, cpus=1)

# Fit and predict
model.fit(dataset=dataset)
y_hat = model.predict(dataset=dataset)

# Optuna
model = AutoLSTM(h=12, config=None, backend='optuna')
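
Candidate configurations are scored on a validation window, so it can help to reserve one explicitly when fitting; refit_with_val=True keeps that window held out when the best configuration is refit. A sketch, assuming fit accepts a val_size argument as the underlying NeuralForecast models do:

# Reserve the last 12 training observations for validation during the search.
model = AutoLSTM(h=12, config=config, num_samples=1, cpus=1, refit_with_val=True)
model.fit(dataset=dataset, val_size=12)
y_hat = model.predict(dataset=dataset)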

source

AutoGRU

 AutoGRU (h, loss=MAE(), valid_loss=None, config=None,
          search_alg=<ray.tune.search.basic_variant.BasicVariantGenerator
          object at 0x7fb7db90b790>, num_samples=10, refit_with_val=False,
          cpus=4, gpus=0, verbose=False, alias=None, backend='ray',
          callbacks=None)

Class for Automatic Hyperparameter Optimization. It builds on top of ray to give access to a wide variety of hyperparameter optimization tools, ranging from classic grid search to Bayesian optimization and the HyperBand algorithm.

The validation loss to be optimized is defined by the config['loss'] dictionary value; the config also contains the rest of the hyperparameter search space.

It is important to note that the success of this hyperparameter optimization heavily relies on a strong correlation between the validation and test periods.

| Parameter | Type | Default | Details |
|---|---|---|---|
| h | int | | Forecast horizon |
| loss | MAE | MAE() | Instantiated train loss class from losses collection. |
| valid_loss | NoneType | None | Instantiated valid loss class from losses collection. |
| config | NoneType | None | Dictionary with ray.tune defined search space or function that takes an optuna trial and returns a configuration dict. |
| search_alg | BasicVariantGenerator | BasicVariantGenerator() | For ray see https://docs.ray.io/en/latest/tune/api_docs/suggestion.html; for optuna see https://optuna.readthedocs.io/en/stable/reference/samplers/index.html |
| num_samples | int | 10 | Number of hyperparameter optimization steps/samples. |
| refit_with_val | bool | False | Refit of best model should preserve val_size. |
| cpus | int | 4 | Number of cpus to use during optimization. Only used with ray tune. |
| gpus | int | 0 | Number of gpus to use during optimization, default all available. Only used with ray tune. |
| verbose | bool | False | Track progress. |
| alias | NoneType | None | Custom name of the model. |
| backend | str | ray | Backend to use for searching the hyperparameter space, can be either 'ray' or 'optuna'. |
| callbacks | NoneType | None | List of functions to call during the optimization process. Ray reference: https://docs.ray.io/en/latest/tune/tutorials/tune-metrics.html; optuna reference: https://optuna.readthedocs.io/en/stable/tutorial/20_recipes/007_optuna_callback.html |

# Use your own config or AutoGRU.default_config
config = dict(max_steps=2, val_check_steps=1, input_size=-1, encoder_hidden_size=8)
model = AutoGRU(h=12, config=config, num_samples=1, cpus=1)

# Fit and predict
model.fit(dataset=dataset)
y_hat = model.predict(dataset=dataset)

# Optuna
model = AutoGRU(h=12, config=None, backend='optuna')

source

AutoTCN

 AutoTCN (h, loss=MAE(), valid_loss=None, config=None,
          search_alg=<ray.tune.search.basic_variant.BasicVariantGenerator
          object at 0x7fb7dd2fdc30>, num_samples=10, refit_with_val=False,
          cpus=4, gpus=0, verbose=False, alias=None, backend='ray',
          callbacks=None)

Class for Automatic Hyperparameter Optimization. It builds on top of ray to give access to a wide variety of hyperparameter optimization tools, ranging from classic grid search to Bayesian optimization and the HyperBand algorithm.

The validation loss to be optimized is defined by the config['loss'] dictionary value; the config also contains the rest of the hyperparameter search space.

It is important to note that the success of this hyperparameter optimization heavily relies on a strong correlation between the validation and test periods.

| Parameter | Type | Default | Details |
|---|---|---|---|
| h | int | | Forecast horizon |
| loss | MAE | MAE() | Instantiated train loss class from losses collection. |
| valid_loss | NoneType | None | Instantiated valid loss class from losses collection. |
| config | NoneType | None | Dictionary with ray.tune defined search space or function that takes an optuna trial and returns a configuration dict. |
| search_alg | BasicVariantGenerator | BasicVariantGenerator() | For ray see https://docs.ray.io/en/latest/tune/api_docs/suggestion.html; for optuna see https://optuna.readthedocs.io/en/stable/reference/samplers/index.html |
| num_samples | int | 10 | Number of hyperparameter optimization steps/samples. |
| refit_with_val | bool | False | Refit of best model should preserve val_size. |
| cpus | int | 4 | Number of cpus to use during optimization. Only used with ray tune. |
| gpus | int | 0 | Number of gpus to use during optimization, default all available. Only used with ray tune. |
| verbose | bool | False | Track progress. |
| alias | NoneType | None | Custom name of the model. |
| backend | str | ray | Backend to use for searching the hyperparameter space, can be either 'ray' or 'optuna'. |
| callbacks | NoneType | None | List of functions to call during the optimization process. Ray reference: https://docs.ray.io/en/latest/tune/tutorials/tune-metrics.html; optuna reference: https://optuna.readthedocs.io/en/stable/tutorial/20_recipes/007_optuna_callback.html |

# Use your own config or AutoTCN.default_config
config = dict(max_steps=2, val_check_steps=1, input_size=-1, encoder_hidden_size=8)
model = AutoTCN(h=12, config=config, num_samples=1, cpus=1)

# Fit and predict
model.fit(dataset=dataset)
y_hat = model.predict(dataset=dataset)

# Optuna
model = AutoTCN(h=12, config=None, backend='optuna')

source

AutoDeepAR

 AutoDeepAR (h, loss=DistributionLoss(), valid_loss=MQLoss(), config=None,
             search_alg=<ray.tune.search.basic_variant.BasicVariantGenerat
             or object at 0x7fb7dd313160>, num_samples=10,
             refit_with_val=False, cpus=4, gpus=0, verbose=False,
             alias=None, backend='ray', callbacks=None)

Class for Automatic Hyperparameter Optimization. It builds on top of ray to give access to a wide variety of hyperparameter optimization tools, ranging from classic grid search to Bayesian optimization and the HyperBand algorithm.

The validation loss to be optimized is defined by the config['loss'] dictionary value; the config also contains the rest of the hyperparameter search space.

It is important to note that the success of this hyperparameter optimization heavily relies on a strong correlation between the validation and test periods.

| Parameter | Type | Default | Details |
|---|---|---|---|
| h | int | | Forecast horizon |
| loss | DistributionLoss | DistributionLoss() | Instantiated train loss class from losses collection. |
| valid_loss | MQLoss | MQLoss() | Instantiated valid loss class from losses collection. |
| config | NoneType | None | Dictionary with ray.tune defined search space or function that takes an optuna trial and returns a configuration dict. |
| search_alg | BasicVariantGenerator | BasicVariantGenerator() | For ray see https://docs.ray.io/en/latest/tune/api_docs/suggestion.html; for optuna see https://optuna.readthedocs.io/en/stable/reference/samplers/index.html |
| num_samples | int | 10 | Number of hyperparameter optimization steps/samples. |
| refit_with_val | bool | False | Refit of best model should preserve val_size. |
| cpus | int | 4 | Number of cpus to use during optimization. Only used with ray tune. |
| gpus | int | 0 | Number of gpus to use during optimization, default all available. Only used with ray tune. |
| verbose | bool | False | Track progress. |
| alias | NoneType | None | Custom name of the model. |
| backend | str | ray | Backend to use for searching the hyperparameter space, can be either 'ray' or 'optuna'. |
| callbacks | NoneType | None | List of functions to call during the optimization process. Ray reference: https://docs.ray.io/en/latest/tune/tutorials/tune-metrics.html; optuna reference: https://optuna.readthedocs.io/en/stable/tutorial/20_recipes/007_optuna_callback.html |

# Use your own config or AutoDeepAR.default_config
config = dict(max_steps=1, val_check_steps=1, input_size=12, lstm_hidden_size=8)
model = AutoDeepAR(h=12, config=config, num_samples=1, cpus=1)

# Fit and predict
model.fit(dataset=dataset)
y_hat = model.predict(dataset=dataset)

# Optuna
model = AutoDeepAR(h=12, config=None, backend='optuna')
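
Unlike the point-forecast models above, AutoDeepAR defaults to a probabilistic train loss (DistributionLoss) and a quantile validation loss (MQLoss). A sketch passing them explicitly with prediction-interval levels (the Normal distribution and the 80/90 levels are illustrative choices):

from neuralforecast.losses.pytorch import DistributionLoss, MQLoss

model = AutoDeepAR(h=12,
                   loss=DistributionLoss(distribution='Normal', level=[80, 90]),  # probabilistic train loss
                   valid_loss=MQLoss(level=[80, 90]),  # quantile loss used for model selection
                   config=config, num_samples=1, cpus=1)
model.fit(dataset=dataset)
y_hat = model.predict(dataset=dataset)  # forecasts include the requested interval levels in addition to the point forecast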

source

AutoDilatedRNN

 AutoDilatedRNN (h, loss=MAE(), valid_loss=None, config=None,
                 search_alg=<ray.tune.search.basic_variant.BasicVariantGen
                 erator object at 0x7fb7dd28b190>, num_samples=10,
                 refit_with_val=False, cpus=4, gpus=0, verbose=False,
                 alias=None, backend='ray', callbacks=None)

Class for Automatic Hyperparameter Optimization. It builds on top of ray to give access to a wide variety of hyperparameter optimization tools, ranging from classic grid search to Bayesian optimization and the HyperBand algorithm.

The validation loss to be optimized is defined by the config['loss'] dictionary value; the config also contains the rest of the hyperparameter search space.

It is important to note that the success of this hyperparameter optimization heavily relies on a strong correlation between the validation and test periods.

| Parameter | Type | Default | Details |
|---|---|---|---|
| h | int | | Forecast horizon |
| loss | MAE | MAE() | Instantiated train loss class from losses collection. |
| valid_loss | NoneType | None | Instantiated valid loss class from losses collection. |
| config | NoneType | None | Dictionary with ray.tune defined search space or function that takes an optuna trial and returns a configuration dict. |
| search_alg | BasicVariantGenerator | BasicVariantGenerator() | For ray see https://docs.ray.io/en/latest/tune/api_docs/suggestion.html; for optuna see https://optuna.readthedocs.io/en/stable/reference/samplers/index.html |
| num_samples | int | 10 | Number of hyperparameter optimization steps/samples. |
| refit_with_val | bool | False | Refit of best model should preserve val_size. |
| cpus | int | 4 | Number of cpus to use during optimization. Only used with ray tune. |
| gpus | int | 0 | Number of gpus to use during optimization, default all available. Only used with ray tune. |
| verbose | bool | False | Track progress. |
| alias | NoneType | None | Custom name of the model. |
| backend | str | ray | Backend to use for searching the hyperparameter space, can be either 'ray' or 'optuna'. |
| callbacks | NoneType | None | List of functions to call during the optimization process. Ray reference: https://docs.ray.io/en/latest/tune/tutorials/tune-metrics.html; optuna reference: https://optuna.readthedocs.io/en/stable/tutorial/20_recipes/007_optuna_callback.html |

# Use your own config or AutoDilatedRNN.default_config
config = dict(max_steps=2, val_check_steps=1, input_size=-1, encoder_hidden_size=8)
model = AutoDilatedRNN(h=12, config=config, num_samples=1, cpus=1)

# Fit and predict
model.fit(dataset=dataset)
y_hat = model.predict(dataset=dataset)

# Optuna
model = AutoDilatedRNN(h=12, config=None, backend='optuna')

source

AutoBiTCN

 AutoBiTCN (h, loss=MAE(), valid_loss=None, config=None,
            search_alg=<ray.tune.search.basic_variant.BasicVariantGenerato
            r object at 0x7fb7dd2cbe80>, num_samples=10,
            refit_with_val=False, cpus=4, gpus=0, verbose=False,
            alias=None, backend='ray', callbacks=None)

Class for Automatic Hyperparameter Optimization. It builds on top of ray to give access to a wide variety of hyperparameter optimization tools, ranging from classic grid search to Bayesian optimization and the HyperBand algorithm.

The validation loss to be optimized is defined by the config['loss'] dictionary value; the config also contains the rest of the hyperparameter search space.

It is important to note that the success of this hyperparameter optimization heavily relies on a strong correlation between the validation and test periods.

| Parameter | Type | Default | Details |
|---|---|---|---|
| h | int | | Forecast horizon |
| loss | MAE | MAE() | Instantiated train loss class from losses collection. |
| valid_loss | NoneType | None | Instantiated valid loss class from losses collection. |
| config | NoneType | None | Dictionary with ray.tune defined search space or function that takes an optuna trial and returns a configuration dict. |
| search_alg | BasicVariantGenerator | BasicVariantGenerator() | For ray see https://docs.ray.io/en/latest/tune/api_docs/suggestion.html; for optuna see https://optuna.readthedocs.io/en/stable/reference/samplers/index.html |
| num_samples | int | 10 | Number of hyperparameter optimization steps/samples. |
| refit_with_val | bool | False | Refit of best model should preserve val_size. |
| cpus | int | 4 | Number of cpus to use during optimization. Only used with ray tune. |
| gpus | int | 0 | Number of gpus to use during optimization, default all available. Only used with ray tune. |
| verbose | bool | False | Track progress. |
| alias | NoneType | None | Custom name of the model. |
| backend | str | ray | Backend to use for searching the hyperparameter space, can be either 'ray' or 'optuna'. |
| callbacks | NoneType | None | List of functions to call during the optimization process. Ray reference: https://docs.ray.io/en/latest/tune/tutorials/tune-metrics.html; optuna reference: https://optuna.readthedocs.io/en/stable/tutorial/20_recipes/007_optuna_callback.html |

# Use your own config or AutoBiTCN.default_config
config = dict(max_steps=1, val_check_steps=1, input_size=12, hidden_size=8)
model = AutoBiTCN(h=12, config=config, num_samples=1, cpus=1)

# Fit and predict
model.fit(dataset=dataset)
y_hat = model.predict(dataset=dataset)

# Optuna
model = AutoBiTCN(h=12, config=None, backend='optuna')
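
The callbacks argument forwards a list of functions to the backend's optimization loop (see the ray and optuna references in the tables above). A sketch of an optuna-style callback, assuming the usual optuna (study, trial) callback signature:

def log_trial(study, trial):
    # Invoked after each optuna trial finishes; report its number, objective value, and parameters.
    print(f'trial {trial.number}: value={trial.value}, params={trial.params}')

model = AutoBiTCN(h=12, config=None, num_samples=2, backend='optuna',
                  callbacks=[log_trial])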

B. MLP-Based


source

AutoMLP

 AutoMLP (h, loss=MAE(), valid_loss=None, config=None,
          search_alg=<ray.tune.search.basic_variant.BasicVariantGenerator
          object at 0x7fb7dd2cbee0>, num_samples=10, refit_with_val=False,
          cpus=4, gpus=0, verbose=False, alias=None, backend='ray',
          callbacks=None)

Class for Automatic Hyperparameter Optimization. It builds on top of ray to give access to a wide variety of hyperparameter optimization tools, ranging from classic grid search to Bayesian optimization and the HyperBand algorithm.

The validation loss to be optimized is defined by the config['loss'] dictionary value; the config also contains the rest of the hyperparameter search space.

It is important to note that the success of this hyperparameter optimization heavily relies on a strong correlation between the validation and test periods.

| Parameter | Type | Default | Details |
|---|---|---|---|
| h | int | | Forecast horizon |
| loss | MAE | MAE() | Instantiated train loss class from losses collection. |
| valid_loss | NoneType | None | Instantiated valid loss class from losses collection. |
| config | NoneType | None | Dictionary with ray.tune defined search space or function that takes an optuna trial and returns a configuration dict. |
| search_alg | BasicVariantGenerator | BasicVariantGenerator() | For ray see https://docs.ray.io/en/latest/tune/api_docs/suggestion.html; for optuna see https://optuna.readthedocs.io/en/stable/reference/samplers/index.html |
| num_samples | int | 10 | Number of hyperparameter optimization steps/samples. |
| refit_with_val | bool | False | Refit of best model should preserve val_size. |
| cpus | int | 4 | Number of cpus to use during optimization. Only used with ray tune. |
| gpus | int | 0 | Number of gpus to use during optimization, default all available. Only used with ray tune. |
| verbose | bool | False | Track progress. |
| alias | NoneType | None | Custom name of the model. |
| backend | str | ray | Backend to use for searching the hyperparameter space, can be either 'ray' or 'optuna'. |
| callbacks | NoneType | None | List of functions to call during the optimization process. Ray reference: https://docs.ray.io/en/latest/tune/tutorials/tune-metrics.html; optuna reference: https://optuna.readthedocs.io/en/stable/tutorial/20_recipes/007_optuna_callback.html |

# Use your own config or AutoMLP.default_config
config = dict(max_steps=2, val_check_steps=1, input_size=12, hidden_size=8)
model = AutoMLP(h=12, config=config, num_samples=1, cpus=1)

# Fit and predict
model.fit(dataset=dataset)
y_hat = model.predict(dataset=dataset)

# Optuna
model = AutoMLP(h=12, config=None, backend='optuna')

source

AutoNBEATS

 AutoNBEATS (h, loss=MAE(), valid_loss=None, config=None,
             search_alg=<ray.tune.search.basic_variant.BasicVariantGenerat
             or object at 0x7fb8aee29b70>, num_samples=10,
             refit_with_val=False, cpus=4, gpus=0, verbose=False,
             alias=None, backend='ray', callbacks=None)

Class for Automatic Hyperparameter Optimization. It builds on top of ray to give access to a wide variety of hyperparameter optimization tools, ranging from classic grid search to Bayesian optimization and the HyperBand algorithm.

The validation loss to be optimized is defined by the config['loss'] dictionary value; the config also contains the rest of the hyperparameter search space.

It is important to note that the success of this hyperparameter optimization heavily relies on a strong correlation between the validation and test periods.

| Parameter | Type | Default | Details |
|---|---|---|---|
| h | int | | Forecast horizon |
| loss | MAE | MAE() | Instantiated train loss class from losses collection. |
| valid_loss | NoneType | None | Instantiated valid loss class from losses collection. |
| config | NoneType | None | Dictionary with ray.tune defined search space or function that takes an optuna trial and returns a configuration dict. |
| search_alg | BasicVariantGenerator | BasicVariantGenerator() | For ray see https://docs.ray.io/en/latest/tune/api_docs/suggestion.html; for optuna see https://optuna.readthedocs.io/en/stable/reference/samplers/index.html |
| num_samples | int | 10 | Number of hyperparameter optimization steps/samples. |
| refit_with_val | bool | False | Refit of best model should preserve val_size. |
| cpus | int | 4 | Number of cpus to use during optimization. Only used with ray tune. |
| gpus | int | 0 | Number of gpus to use during optimization, default all available. Only used with ray tune. |
| verbose | bool | False | Track progress. |
| alias | NoneType | None | Custom name of the model. |
| backend | str | ray | Backend to use for searching the hyperparameter space, can be either 'ray' or 'optuna'. |
| callbacks | NoneType | None | List of functions to call during the optimization process. Ray reference: https://docs.ray.io/en/latest/tune/tutorials/tune-metrics.html; optuna reference: https://optuna.readthedocs.io/en/stable/tutorial/20_recipes/007_optuna_callback.html |

# Use your own config or AutoNBEATS.default_config
config = dict(max_steps=2, val_check_steps=1, input_size=12,
              mlp_units=3*[[8, 8]])
model = AutoNBEATS(h=12, config=config, num_samples=1, cpus=1)

# Fit and predict
model.fit(dataset=dataset)
y_hat = model.predict(dataset=dataset)

# Optuna
model = AutoNBEATS(h=12, config=None, backend='optuna')

source

AutoNBEATSx

 AutoNBEATSx (h, loss=MAE(), valid_loss=None, config=None,
              search_alg=<ray.tune.search.basic_variant.BasicVariantGenera
              tor object at 0x7fb7dd2a3640>, num_samples=10,
              refit_with_val=False, cpus=4, gpus=0, verbose=False,
              alias=None, backend='ray', callbacks=None)

Class for Automatic Hyperparameter Optimization. It builds on top of ray to give access to a wide variety of hyperparameter optimization tools, ranging from classic grid search to Bayesian optimization and the HyperBand algorithm.

The validation loss to be optimized is defined by the config['loss'] dictionary value; the config also contains the rest of the hyperparameter search space.

It is important to note that the success of this hyperparameter optimization heavily relies on a strong correlation between the validation and test periods.

| Parameter | Type | Default | Details |
|---|---|---|---|
| h | int | | Forecast horizon |
| loss | MAE | MAE() | Instantiated train loss class from losses collection. |
| valid_loss | NoneType | None | Instantiated valid loss class from losses collection. |
| config | NoneType | None | Dictionary with ray.tune defined search space or function that takes an optuna trial and returns a configuration dict. |
| search_alg | BasicVariantGenerator | BasicVariantGenerator() | For ray see https://docs.ray.io/en/latest/tune/api_docs/suggestion.html; for optuna see https://optuna.readthedocs.io/en/stable/reference/samplers/index.html |
| num_samples | int | 10 | Number of hyperparameter optimization steps/samples. |
| refit_with_val | bool | False | Refit of best model should preserve val_size. |
| cpus | int | 4 | Number of cpus to use during optimization. Only used with ray tune. |
| gpus | int | 0 | Number of gpus to use during optimization, default all available. Only used with ray tune. |
| verbose | bool | False | Track progress. |
| alias | NoneType | None | Custom name of the model. |
| backend | str | ray | Backend to use for searching the hyperparameter space, can be either 'ray' or 'optuna'. |
| callbacks | NoneType | None | List of functions to call during the optimization process. Ray reference: https://docs.ray.io/en/latest/tune/tutorials/tune-metrics.html; optuna reference: https://optuna.readthedocs.io/en/stable/tutorial/20_recipes/007_optuna_callback.html |

# Use your own config or AutoNBEATSx.default_config
config = dict(max_steps=2, val_check_steps=1, input_size=12,
              mlp_units=3*[[8, 8]])
model = AutoNBEATSx(h=12, config=config, num_samples=1, cpus=1)

# Fit and predict
model.fit(dataset=dataset)
y_hat = model.predict(dataset=dataset)

# Optuna
model = AutoNBEATSx(h=12, config=None, backend='optuna')

source

AutoNHITS

 AutoNHITS (h, loss=MAE(), valid_loss=None, config=None,
            search_alg=<ray.tune.search.basic_variant.BasicVariantGenerato
            r object at 0x7fb7dd2cabc0>, num_samples=10,
            refit_with_val=False, cpus=4, gpus=0, verbose=False,
            alias=None, backend='ray', callbacks=None)

Class for Automatic Hyperparameter Optimization. It builds on top of ray to give access to a wide variety of hyperparameter optimization tools, ranging from classic grid search to Bayesian optimization and the HyperBand algorithm.

The validation loss to be optimized is defined by the config['loss'] dictionary value; the config also contains the rest of the hyperparameter search space.

It is important to note that the success of this hyperparameter optimization heavily relies on a strong correlation between the validation and test periods.

| Parameter | Type | Default | Details |
|---|---|---|---|
| h | int | | Forecast horizon |
| loss | MAE | MAE() | Instantiated train loss class from losses collection. |
| valid_loss | NoneType | None | Instantiated valid loss class from losses collection. |
| config | NoneType | None | Dictionary with ray.tune defined search space or function that takes an optuna trial and returns a configuration dict. |
| search_alg | BasicVariantGenerator | BasicVariantGenerator() | For ray see https://docs.ray.io/en/latest/tune/api_docs/suggestion.html; for optuna see https://optuna.readthedocs.io/en/stable/reference/samplers/index.html |
| num_samples | int | 10 | Number of hyperparameter optimization steps/samples. |
| refit_with_val | bool | False | Refit of best model should preserve val_size. |
| cpus | int | 4 | Number of cpus to use during optimization. Only used with ray tune. |
| gpus | int | 0 | Number of gpus to use during optimization, default all available. Only used with ray tune. |
| verbose | bool | False | Track progress. |
| alias | NoneType | None | Custom name of the model. |
| backend | str | ray | Backend to use for searching the hyperparameter space, can be either 'ray' or 'optuna'. |
| callbacks | NoneType | None | List of functions to call during the optimization process. Ray reference: https://docs.ray.io/en/latest/tune/tutorials/tune-metrics.html; optuna reference: https://optuna.readthedocs.io/en/stable/tutorial/20_recipes/007_optuna_callback.html |

# Use your own config or AutoNHITS.default_config
config = dict(max_steps=2, val_check_steps=1, input_size=12, 
              mlp_units=3 * [[8, 8]])
model = AutoNHITS(h=12, config=config, num_samples=1, cpus=1)

# Fit and predict
model.fit(dataset=dataset)
y_hat = model.predict(dataset=dataset)

# Optuna
model = AutoNHITS(h=12, config=None, backend='optuna')
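
After fit, the completed tuning trials can be inspected on the fitted model, which is useful for auditing the search (here, the ray-backed model fitted above rather than the fresh optuna instance on the previous line). A sketch, assuming the fitted model exposes a results attribute holding a ray tune ResultGrid:

# One row per trial, with its validation loss and the sampled hyperparameters.
trials_df = model.results.get_dataframe()
print(trials_df.head())

# Configuration of the best trial found by the search.
print(model.results.get_best_result().config)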

source

AutoDLinear

 AutoDLinear (h, loss=MAE(), valid_loss=None, config=None,
              search_alg=<ray.tune.search.basic_variant.BasicVariantGenera
              tor object at 0x7fb7dd2b0880>, num_samples=10,
              refit_with_val=False, cpus=4, gpus=0, verbose=False,
              alias=None, backend='ray', callbacks=None)

Class for Automatic Hyperparameter Optimization. It builds on top of ray to give access to a wide variety of hyperparameter optimization tools, ranging from classic grid search to Bayesian optimization and the HyperBand algorithm.

The validation loss to be optimized is defined by the config['loss'] dictionary value; the config also contains the rest of the hyperparameter search space.

It is important to note that the success of this hyperparameter optimization heavily relies on a strong correlation between the validation and test periods.

| Parameter | Type | Default | Details |
|---|---|---|---|
| h | int | | Forecast horizon |
| loss | MAE | MAE() | Instantiated train loss class from losses collection. |
| valid_loss | NoneType | None | Instantiated valid loss class from losses collection. |
| config | NoneType | None | Dictionary with ray.tune defined search space or function that takes an optuna trial and returns a configuration dict. |
| search_alg | BasicVariantGenerator | BasicVariantGenerator() | For ray see https://docs.ray.io/en/latest/tune/api_docs/suggestion.html; for optuna see https://optuna.readthedocs.io/en/stable/reference/samplers/index.html |
| num_samples | int | 10 | Number of hyperparameter optimization steps/samples. |
| refit_with_val | bool | False | Refit of best model should preserve val_size. |
| cpus | int | 4 | Number of cpus to use during optimization. Only used with ray tune. |
| gpus | int | 0 | Number of gpus to use during optimization, default all available. Only used with ray tune. |
| verbose | bool | False | Track progress. |
| alias | NoneType | None | Custom name of the model. |
| backend | str | ray | Backend to use for searching the hyperparameter space, can be either 'ray' or 'optuna'. |
| callbacks | NoneType | None | List of functions to call during the optimization process. Ray reference: https://docs.ray.io/en/latest/tune/tutorials/tune-metrics.html; optuna reference: https://optuna.readthedocs.io/en/stable/tutorial/20_recipes/007_optuna_callback.html |

# Use your own config or AutoDLinear.default_config
config = dict(max_steps=2, val_check_steps=1, input_size=12)
model = AutoDLinear(h=12, config=config, num_samples=1, cpus=1)

# Fit and predict
model.fit(dataset=dataset)
y_hat = model.predict(dataset=dataset)

# Optuna
model = AutoDLinear(h=12, config=None, backend='optuna')

source

AutoNLinear

 AutoNLinear (h, loss=MAE(), valid_loss=None, config=None,
              search_alg=<ray.tune.search.basic_variant.BasicVariantGenera
              tor object at 0x7fb7dd450be0>, num_samples=10,
              refit_with_val=False, cpus=4, gpus=0, verbose=False,
              alias=None, backend='ray', callbacks=None)

Class for Automatic Hyperparameter Optimization. It builds on top of ray to give access to a wide variety of hyperparameter optimization tools, ranging from classic grid search to Bayesian optimization and the HyperBand algorithm.

The validation loss to be optimized is defined by the config['loss'] dictionary value; the config also contains the rest of the hyperparameter search space.

It is important to note that the success of this hyperparameter optimization heavily relies on a strong correlation between the validation and test periods.

| Parameter | Type | Default | Details |
|---|---|---|---|
| h | int | | Forecast horizon |
| loss | MAE | MAE() | Instantiated train loss class from losses collection. |
| valid_loss | NoneType | None | Instantiated valid loss class from losses collection. |
| config | NoneType | None | Dictionary with ray.tune defined search space or function that takes an optuna trial and returns a configuration dict. |
| search_alg | BasicVariantGenerator | BasicVariantGenerator() | For ray see https://docs.ray.io/en/latest/tune/api_docs/suggestion.html; for optuna see https://optuna.readthedocs.io/en/stable/reference/samplers/index.html |
| num_samples | int | 10 | Number of hyperparameter optimization steps/samples. |
| refit_with_val | bool | False | Refit of best model should preserve val_size. |
| cpus | int | 4 | Number of cpus to use during optimization. Only used with ray tune. |
| gpus | int | 0 | Number of gpus to use during optimization, default all available. Only used with ray tune. |
| verbose | bool | False | Track progress. |
| alias | NoneType | None | Custom name of the model. |
| backend | str | ray | Backend to use for searching the hyperparameter space, can be either 'ray' or 'optuna'. |
| callbacks | NoneType | None | List of functions to call during the optimization process. Ray reference: https://docs.ray.io/en/latest/tune/tutorials/tune-metrics.html; optuna reference: https://optuna.readthedocs.io/en/stable/tutorial/20_recipes/007_optuna_callback.html |

# Use your own config or AutoNLinear.default_config
config = dict(max_steps=2, val_check_steps=1, input_size=12)
model = AutoNLinear(h=12, config=config, num_samples=1, cpus=1)

# Fit and predict
model.fit(dataset=dataset)
y_hat = model.predict(dataset=dataset)

# Optuna
model = AutoNLinear(h=12, config=None, backend='optuna')

source

AutoTiDE

 AutoTiDE (h, loss=MAE(), valid_loss=None, config=None,
           search_alg=<ray.tune.search.basic_variant.BasicVariantGenerator
           object at 0x7fb7dd27d090>, num_samples=10,
           refit_with_val=False, cpus=4, gpus=0, verbose=False,
           alias=None, backend='ray', callbacks=None)

Class for Automatic Hyperparameter Optimization. It builds on top of ray to give access to a wide variety of hyperparameter optimization tools, ranging from classic grid search to Bayesian optimization and the HyperBand algorithm.

The validation loss to be optimized is defined by the config['loss'] dictionary value; the config also contains the rest of the hyperparameter search space.

It is important to note that the success of this hyperparameter optimization heavily relies on a strong correlation between the validation and test periods.

| Parameter | Type | Default | Details |
|---|---|---|---|
| h | int | | Forecast horizon |
| loss | MAE | MAE() | Instantiated train loss class from losses collection. |
| valid_loss | NoneType | None | Instantiated valid loss class from losses collection. |
| config | NoneType | None | Dictionary with ray.tune defined search space or function that takes an optuna trial and returns a configuration dict. |
| search_alg | BasicVariantGenerator | BasicVariantGenerator() | For ray see https://docs.ray.io/en/latest/tune/api_docs/suggestion.html; for optuna see https://optuna.readthedocs.io/en/stable/reference/samplers/index.html |
| num_samples | int | 10 | Number of hyperparameter optimization steps/samples. |
| refit_with_val | bool | False | Refit of best model should preserve val_size. |
| cpus | int | 4 | Number of cpus to use during optimization. Only used with ray tune. |
| gpus | int | 0 | Number of gpus to use during optimization, default all available. Only used with ray tune. |
| verbose | bool | False | Track progress. |
| alias | NoneType | None | Custom name of the model. |
| backend | str | ray | Backend to use for searching the hyperparameter space, can be either 'ray' or 'optuna'. |
| callbacks | NoneType | None | List of functions to call during the optimization process. Ray reference: https://docs.ray.io/en/latest/tune/tutorials/tune-metrics.html; optuna reference: https://optuna.readthedocs.io/en/stable/tutorial/20_recipes/007_optuna_callback.html |

# Use your own config or AutoTiDE.default_config
config = dict(max_steps=2, val_check_steps=1, input_size=12)
model = AutoTiDE(h=12, config=config, num_samples=1, cpus=1)

# Fit and predict
model.fit(dataset=dataset)
y_hat = model.predict(dataset=dataset)

# Optuna
model = AutoTiDE(h=12, config=None, backend='optuna')

source

AutoDeepNPTS

 AutoDeepNPTS (h, loss=MAE(), valid_loss=None, config=None,
               search_alg=<ray.tune.search.basic_variant.BasicVariantGener
               ator object at 0x7fb7dd288640>, num_samples=10,
               refit_with_val=False, cpus=4, gpus=0, verbose=False,
               alias=None, backend='ray', callbacks=None)

Class for Automatic Hyperparameter Optimization. It builds on top of ray to give access to a wide variety of hyperparameter optimization tools, ranging from classic grid search to Bayesian optimization and the HyperBand algorithm.

The validation loss to be optimized is defined by the config['loss'] dictionary value; the config also contains the rest of the hyperparameter search space.

It is important to note that the success of this hyperparameter optimization heavily relies on a strong correlation between the validation and test periods.

| Parameter | Type | Default | Details |
|---|---|---|---|
| h | int | | Forecast horizon |
| loss | MAE | MAE() | Instantiated train loss class from losses collection. |
| valid_loss | NoneType | None | Instantiated valid loss class from losses collection. |
| config | NoneType | None | Dictionary with ray.tune defined search space or function that takes an optuna trial and returns a configuration dict. |
| search_alg | BasicVariantGenerator | BasicVariantGenerator() | For ray see https://docs.ray.io/en/latest/tune/api_docs/suggestion.html; for optuna see https://optuna.readthedocs.io/en/stable/reference/samplers/index.html |
| num_samples | int | 10 | Number of hyperparameter optimization steps/samples. |
| refit_with_val | bool | False | Refit of best model should preserve val_size. |
| cpus | int | 4 | Number of cpus to use during optimization. Only used with ray tune. |
| gpus | int | 0 | Number of gpus to use during optimization, default all available. Only used with ray tune. |
| verbose | bool | False | Track progress. |
| alias | NoneType | None | Custom name of the model. |
| backend | str | ray | Backend to use for searching the hyperparameter space, can be either 'ray' or 'optuna'. |
| callbacks | NoneType | None | List of functions to call during the optimization process. Ray reference: https://docs.ray.io/en/latest/tune/tutorials/tune-metrics.html; optuna reference: https://optuna.readthedocs.io/en/stable/tutorial/20_recipes/007_optuna_callback.html |

# Use your own config or AutoDeepNPTS.default_config
config = dict(max_steps=2, val_check_steps=1, input_size=12)
model = AutoDeepNPTS(h=12, config=config, num_samples=1, cpus=1)

# Fit and predict
model.fit(dataset=dataset)
y_hat = model.predict(dataset=dataset)

# Optuna
model = AutoDeepNPTS(h=12, config=None, backend='optuna')

C. Transformer-Based


source

AutoTFT

 AutoTFT (h, loss=MAE(), valid_loss=None, config=None,
          search_alg=<ray.tune.search.basic_variant.BasicVariantGenerator
          object at 0x7fb7db170220>, num_samples=10, refit_with_val=False,
          cpus=4, gpus=0, verbose=False, alias=None, backend='ray',
          callbacks=None)

Class for Automatic Hyperparameter Optimization. It builds on top of ray to give access to a wide variety of hyperparameter optimization tools, ranging from classic grid search to Bayesian optimization and the HyperBand algorithm.

The validation loss to be optimized is defined by the config['loss'] dictionary value; the config also contains the rest of the hyperparameter search space.

It is important to note that the success of this hyperparameter optimization heavily relies on a strong correlation between the validation and test periods.

| Parameter | Type | Default | Details |
|---|---|---|---|
| h | int | | Forecast horizon |
| loss | MAE | MAE() | Instantiated train loss class from losses collection. |
| valid_loss | NoneType | None | Instantiated valid loss class from losses collection. |
| config | NoneType | None | Dictionary with ray.tune defined search space or function that takes an optuna trial and returns a configuration dict. |
| search_alg | BasicVariantGenerator | BasicVariantGenerator() | For ray see https://docs.ray.io/en/latest/tune/api_docs/suggestion.html; for optuna see https://optuna.readthedocs.io/en/stable/reference/samplers/index.html |
| num_samples | int | 10 | Number of hyperparameter optimization steps/samples. |
| refit_with_val | bool | False | Refit of best model should preserve val_size. |
| cpus | int | 4 | Number of cpus to use during optimization. Only used with ray tune. |
| gpus | int | 0 | Number of gpus to use during optimization, default all available. Only used with ray tune. |
| verbose | bool | False | Track progress. |
| alias | NoneType | None | Custom name of the model. |
| backend | str | ray | Backend to use for searching the hyperparameter space, can be either 'ray' or 'optuna'. |
| callbacks | NoneType | None | List of functions to call during the optimization process. Ray reference: https://docs.ray.io/en/latest/tune/tutorials/tune-metrics.html; optuna reference: https://optuna.readthedocs.io/en/stable/tutorial/20_recipes/007_optuna_callback.html |

# Use your own config or AutoTFT.default_config
config = dict(max_steps=1, val_check_steps=1, input_size=12, hidden_size=8)
model = AutoTFT(h=12, config=config, num_samples=1, cpus=1)

# Fit and predict
model.fit(dataset=dataset)
y_hat = model.predict(dataset=dataset)

# Optuna
model = AutoTFT(h=12, config=None, backend='optuna')

source

AutoVanillaTransformer

 AutoVanillaTransformer (h, loss=MAE(), valid_loss=None, config=None,
                         search_alg=<ray.tune.search.basic_variant.BasicVa
                         riantGenerator object at 0x7fb7dd260910>,
                         num_samples=10, refit_with_val=False, cpus=4,
                         gpus=0, verbose=False, alias=None, backend='ray',
                         callbacks=None)

Class for Automatic Hyperparameter Optimization. It builds on top of ray to give access to a wide variety of hyperparameter optimization tools, ranging from classic grid search to Bayesian optimization and the HyperBand algorithm.

The validation loss to be optimized is defined by the config['loss'] dictionary value; the config also contains the rest of the hyperparameter search space.

It is important to note that the success of this hyperparameter optimization heavily relies on a strong correlation between the validation and test periods.

| Parameter | Type | Default | Details |
|---|---|---|---|
| h | int | | Forecast horizon |
| loss | MAE | MAE() | Instantiated train loss class from losses collection. |
| valid_loss | NoneType | None | Instantiated valid loss class from losses collection. |
| config | NoneType | None | Dictionary with ray.tune defined search space or function that takes an optuna trial and returns a configuration dict. |
| search_alg | BasicVariantGenerator | BasicVariantGenerator() | For ray see https://docs.ray.io/en/latest/tune/api_docs/suggestion.html; for optuna see https://optuna.readthedocs.io/en/stable/reference/samplers/index.html |
| num_samples | int | 10 | Number of hyperparameter optimization steps/samples. |
| refit_with_val | bool | False | Refit of best model should preserve val_size. |
| cpus | int | 4 | Number of cpus to use during optimization. Only used with ray tune. |
| gpus | int | 0 | Number of gpus to use during optimization, default all available. Only used with ray tune. |
| verbose | bool | False | Track progress. |
| alias | NoneType | None | Custom name of the model. |
| backend | str | ray | Backend to use for searching the hyperparameter space, can be either 'ray' or 'optuna'. |
| callbacks | NoneType | None | List of functions to call during the optimization process. Ray reference: https://docs.ray.io/en/latest/tune/tutorials/tune-metrics.html; optuna reference: https://optuna.readthedocs.io/en/stable/tutorial/20_recipes/007_optuna_callback.html |

# Use your own config or AutoVanillaTransformer.default_config
config = dict(max_steps=1, val_check_steps=1, input_size=12, hidden_size=8)
model = AutoVanillaTransformer(h=12, config=config, num_samples=1, cpus=1)

# Fit and predict
model.fit(dataset=dataset)
y_hat = model.predict(dataset=dataset)

# Optuna
model = AutoVanillaTransformer(h=12, config=None, backend='optuna')

source

AutoInformer

 AutoInformer (h, loss=MAE(), valid_loss=None, config=None,
               search_alg=<ray.tune.search.basic_variant.BasicVariantGener
               ator object at 0x7fb7dd2b2920>, num_samples=10,
               refit_with_val=False, cpus=4, gpus=0, verbose=False,
               alias=None, backend='ray', callbacks=None)

Class for Automatic Hyperparameter Optimization. It builds on top of ray to give access to a wide variety of hyperparameter optimization tools, ranging from classic grid search to Bayesian optimization and the HyperBand algorithm.

The validation loss to be optimized is defined by the config['loss'] dictionary value; the config also contains the rest of the hyperparameter search space.

It is important to note that the success of this hyperparameter optimization heavily relies on a strong correlation between the validation and test periods.

| Parameter | Type | Default | Details |
|---|---|---|---|
| h | int | | Forecast horizon |
| loss | MAE | MAE() | Instantiated train loss class from losses collection. |
| valid_loss | NoneType | None | Instantiated valid loss class from losses collection. |
| config | NoneType | None | Dictionary with ray.tune defined search space or function that takes an optuna trial and returns a configuration dict. |
| search_alg | BasicVariantGenerator | BasicVariantGenerator() | For ray see https://docs.ray.io/en/latest/tune/api_docs/suggestion.html; for optuna see https://optuna.readthedocs.io/en/stable/reference/samplers/index.html |
| num_samples | int | 10 | Number of hyperparameter optimization steps/samples. |
| refit_with_val | bool | False | Refit of best model should preserve val_size. |
| cpus | int | 4 | Number of cpus to use during optimization. Only used with ray tune. |
| gpus | int | 0 | Number of gpus to use during optimization, default all available. Only used with ray tune. |
| verbose | bool | False | Track progress. |
| alias | NoneType | None | Custom name of the model. |
| backend | str | ray | Backend to use for searching the hyperparameter space, can be either 'ray' or 'optuna'. |
| callbacks | NoneType | None | List of functions to call during the optimization process. Ray reference: https://docs.ray.io/en/latest/tune/tutorials/tune-metrics.html; optuna reference: https://optuna.readthedocs.io/en/stable/tutorial/20_recipes/007_optuna_callback.html |

# Use your own config or AutoInformer.default_config
config = dict(max_steps=1, val_check_steps=1, input_size=12, hidden_size=8)
model = AutoInformer(h=12, config=config, num_samples=1, cpus=1)

# Fit and predict
model.fit(dataset=dataset)
y_hat = model.predict(dataset=dataset)

# Optuna
model = AutoInformer(h=12, config=None, backend='optuna')

source

AutoAutoformer

 AutoAutoformer (h, loss=MAE(), valid_loss=None, config=None,
                 search_alg=<ray.tune.search.basic_variant.BasicVariantGen
                 erator object at 0x7fb7dd2605b0>, num_samples=10,
                 refit_with_val=False, cpus=4, gpus=0, verbose=False,
                 alias=None, backend='ray', callbacks=None)

Class for Automatic Hyperparameter Optimization. It builds on top of ray to give access to a wide variety of hyperparameter optimization tools, ranging from classic grid search to Bayesian optimization and the HyperBand algorithm.

The validation loss to be optimized is defined by the config['loss'] dictionary value; the config also contains the rest of the hyperparameter search space.

It is important to note that the success of this hyperparameter optimization heavily relies on a strong correlation between the validation and test periods.

| Parameter | Type | Default | Details |
|---|---|---|---|
| h | int | | Forecast horizon |
| loss | MAE | MAE() | Instantiated train loss class from losses collection. |
| valid_loss | NoneType | None | Instantiated valid loss class from losses collection. |
| config | NoneType | None | Dictionary with ray.tune defined search space or function that takes an optuna trial and returns a configuration dict. |
| search_alg | BasicVariantGenerator | BasicVariantGenerator() | For ray see https://docs.ray.io/en/latest/tune/api_docs/suggestion.html; for optuna see https://optuna.readthedocs.io/en/stable/reference/samplers/index.html |
| num_samples | int | 10 | Number of hyperparameter optimization steps/samples. |
| refit_with_val | bool | False | Refit of best model should preserve val_size. |
| cpus | int | 4 | Number of cpus to use during optimization. Only used with ray tune. |
| gpus | int | 0 | Number of gpus to use during optimization, default all available. Only used with ray tune. |
| verbose | bool | False | Track progress. |
| alias | NoneType | None | Custom name of the model. |
| backend | str | ray | Backend to use for searching the hyperparameter space, can be either 'ray' or 'optuna'. |
| callbacks | NoneType | None | List of functions to call during the optimization process. Ray reference: https://docs.ray.io/en/latest/tune/tutorials/tune-metrics.html; optuna reference: https://optuna.readthedocs.io/en/stable/tutorial/20_recipes/007_optuna_callback.html |

# Use your own config or AutoAutoformer.default_config
config = dict(max_steps=1, val_check_steps=1, input_size=12, hidden_size=8)
model = AutoAutoformer(h=12, config=config, num_samples=1, cpus=1)

# Fit and predict
model.fit(dataset=dataset)
y_hat = model.predict(dataset=dataset)

# Optuna
model = AutoAutoformer(h=12, config=None, backend='optuna')

source

AutoFEDformer

 AutoFEDformer (h, loss=MAE(), valid_loss=None, config=None,
                search_alg=<ray.tune.search.basic_variant.BasicVariantGene
                rator object at 0x7fb7dd4ae980>, num_samples=10,
                refit_with_val=False, cpus=4, gpus=0, verbose=False,
                alias=None, backend='ray', callbacks=None)

Class for Automatic Hyperparameter Optimization. It builds on top of ray to give access to a wide variety of hyperparameter optimization tools, ranging from classic grid search to Bayesian optimization and the HyperBand algorithm.

The validation loss to be optimized is defined by the config['loss'] dictionary value; the config also contains the rest of the hyperparameter search space.

It is important to note that the success of this hyperparameter optimization heavily relies on a strong correlation between the validation and test periods.

| Parameter | Type | Default | Details |
|---|---|---|---|
| h | int | | Forecast horizon |
| loss | MAE | MAE() | Instantiated train loss class from losses collection. |
| valid_loss | NoneType | None | Instantiated valid loss class from losses collection. |
| config | NoneType | None | Dictionary with ray.tune defined search space or function that takes an optuna trial and returns a configuration dict. |
| search_alg | BasicVariantGenerator | BasicVariantGenerator() | For ray see https://docs.ray.io/en/latest/tune/api_docs/suggestion.html; for optuna see https://optuna.readthedocs.io/en/stable/reference/samplers/index.html |
| num_samples | int | 10 | Number of hyperparameter optimization steps/samples. |
| refit_with_val | bool | False | Refit of best model should preserve val_size. |
| cpus | int | 4 | Number of cpus to use during optimization. Only used with ray tune. |
| gpus | int | 0 | Number of gpus to use during optimization, default all available. Only used with ray tune. |
| verbose | bool | False | Track progress. |
| alias | NoneType | None | Custom name of the model. |
| backend | str | ray | Backend to use for searching the hyperparameter space, can be either 'ray' or 'optuna'. |
| callbacks | NoneType | None | List of functions to call during the optimization process. Ray reference: https://docs.ray.io/en/latest/tune/tutorials/tune-metrics.html; optuna reference: https://optuna.readthedocs.io/en/stable/tutorial/20_recipes/007_optuna_callback.html |

# Use your own config or AutoFEDformer.default_config
config = dict(max_steps=1, val_check_steps=1, input_size=12, hidden_size=64)
model = AutoFEDformer(h=12, config=config, num_samples=1, cpus=1)

# Fit and predict
model.fit(dataset=dataset)
y_hat = model.predict(dataset=dataset)

# Optuna
model = AutoFEDformer(h=12, config=None, backend='optuna')

source

AutoPatchTST

 AutoPatchTST (h, loss=MAE(), valid_loss=None, config=None,
               search_alg=<ray.tune.search.basic_variant.BasicVariantGener
               ator object at 0x7fb7dd28a290>, num_samples=10,
               refit_with_val=False, cpus=4, gpus=0, verbose=False,
               alias=None, backend='ray', callbacks=None)

Class for Automatic Hyperparameter Optimization. It builds on top of ray to give access to a wide variety of hyperparameter optimization tools, ranging from classic grid search to Bayesian optimization and the HyperBand algorithm.

The validation loss to be optimized is defined by the config['loss'] dictionary value; the config also contains the rest of the hyperparameter search space.

It is important to note that the success of this hyperparameter optimization heavily relies on a strong correlation between the validation and test periods.

| Parameter | Type | Default | Details |
|---|---|---|---|
| h | int | | Forecast horizon |
| loss | MAE | MAE() | Instantiated train loss class from losses collection. |
| valid_loss | NoneType | None | Instantiated valid loss class from losses collection. |
| config | NoneType | None | Dictionary with ray.tune defined search space or function that takes an optuna trial and returns a configuration dict. |
| search_alg | BasicVariantGenerator | BasicVariantGenerator() | For ray see https://docs.ray.io/en/latest/tune/api_docs/suggestion.html; for optuna see https://optuna.readthedocs.io/en/stable/reference/samplers/index.html |
| num_samples | int | 10 | Number of hyperparameter optimization steps/samples. |
| refit_with_val | bool | False | Refit of best model should preserve val_size. |
| cpus | int | 4 | Number of cpus to use during optimization. Only used with ray tune. |
| gpus | int | 0 | Number of gpus to use during optimization, default all available. Only used with ray tune. |
| verbose | bool | False | Track progress. |
| alias | NoneType | None | Custom name of the model. |
| backend | str | ray | Backend to use for searching the hyperparameter space, can be either 'ray' or 'optuna'. |
| callbacks | NoneType | None | List of functions to call during the optimization process. Ray reference: https://docs.ray.io/en/latest/tune/tutorials/tune-metrics.html; optuna reference: https://optuna.readthedocs.io/en/stable/tutorial/20_recipes/007_optuna_callback.html |

# Use your own config or AutoPatchTST.default_config
config = dict(max_steps=1, val_check_steps=1, input_size=12, hidden_size=16)
model = AutoPatchTST(h=12, config=config, num_samples=1, cpus=1)

# Fit and predict
model.fit(dataset=dataset)
y_hat = model.predict(dataset=dataset)

# Optuna
model = AutoPatchTST(h=12, config=None, backend='optuna')

source

AutoiTransformer

 AutoiTransformer (h, n_series, loss=MAE(), valid_loss=None, config=None,
                   search_alg=<ray.tune.search.basic_variant.BasicVariantG
                   enerator object at 0x7fb8aee28d90>, num_samples=10,
                   refit_with_val=False, cpus=4, gpus=0, verbose=False,
                   alias=None, backend='ray', callbacks=None)

Class for Automatic Hyperparameter Optimization. It builds on top of ray to give access to a wide variety of hyperparameter optimization tools, ranging from classic grid search to Bayesian optimization and the HyperBand algorithm.

The validation loss to be optimized is defined by the config['loss'] dictionary value; the config also contains the rest of the hyperparameter search space.

It is important to note that the success of this hyperparameter optimization heavily relies on a strong correlation between the validation and test periods.

| Parameter | Type | Default | Details |
|---|---|---|---|
| h | int | | Forecast horizon |
| n_series | | | |
| loss | MAE | MAE() | Instantiated train loss class from losses collection. |
| valid_loss | NoneType | None | Instantiated valid loss class from losses collection. |
| config | NoneType | None | Dictionary with ray.tune defined search space or function that takes an optuna trial and returns a configuration dict. |
| search_alg | BasicVariantGenerator | BasicVariantGenerator() | For ray see https://docs.ray.io/en/latest/tune/api_docs/suggestion.html; for optuna see https://optuna.readthedocs.io/en/stable/reference/samplers/index.html |
| num_samples | int | 10 | Number of hyperparameter optimization steps/samples. |
| refit_with_val | bool | False | Refit of best model should preserve val_size. |
| cpus | int | 4 | Number of cpus to use during optimization. Only used with ray tune. |
| gpus | int | 0 | Number of gpus to use during optimization, default all available. Only used with ray tune. |
| verbose | bool | False | Track progress. |
| alias | NoneType | None | Custom name of the model. |
| backend | str | ray | Backend to use for searching the hyperparameter space, can be either 'ray' or 'optuna'. |
| callbacks | NoneType | None | List of functions to call during the optimization process. Ray reference: https://docs.ray.io/en/latest/tune/tutorials/tune-metrics.html; optuna reference: https://optuna.readthedocs.io/en/stable/tutorial/20_recipes/007_optuna_callback.html |

# Use your own config or AutoiTransformer.default_config
config = dict(max_steps=1, val_check_steps=1, input_size=12, hidden_size=16)
model = AutoiTransformer(h=12, n_series=1, config=config, num_samples=1, cpus=1)

# Fit and predict
model.fit(dataset=dataset)
y_hat = model.predict(dataset=dataset)

# Optuna
model = AutoiTransformer(h=12, n_series=1, config=None, backend='optuna')
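For multivariate models, n_series should match the number of series in the training dataset. A small sketch, assuming a long-format frame built by duplicating the training frame under two illustrative ids:

# Sketch: build a two-series dataset so that n_series=2 matches it
# (the duplicated series are only for illustration).
import pandas as pd

two_series_df = pd.concat([
    Y_train_df.assign(unique_id='series_1'),
    Y_train_df.assign(unique_id='series_2'),
])
multi_dataset, *_ = TimeSeriesDataset.from_df(two_series_df)

config = dict(max_steps=1, val_check_steps=1, input_size=12, hidden_size=16)
model = AutoiTransformer(h=12, n_series=2, config=config, num_samples=1, cpus=1)
model.fit(dataset=multi_dataset)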

D. CNN-Based


source

AutoTimesNet

 AutoTimesNet (h, loss=MAE(), valid_loss=None, config=None,
               search_alg=<ray.tune.search.basic_variant.BasicVariantGener
               ator object at 0x7fb7dd42f190>, num_samples=10,
               refit_with_val=False, cpus=4, gpus=0, verbose=False,
               alias=None, backend='ray', callbacks=None)

*Class for Automatic Hyperparameter Optimization. It builds on top of ray to give access to a wide variety of hyperparameter optimization tools, ranging from classic grid search to Bayesian optimization and the HyperBand algorithm.

The validation loss to be optimized is defined by the config['loss'] dictionary value; the config also contains the rest of the hyperparameter search space.

It is important to note that the success of this hyperparameter optimization heavily relies on a strong correlation between the validation and test periods.*

Parameters:
h (int): Forecast horizon
loss (MAE, default=MAE()): Instantiated train loss class from losses collection.
valid_loss (NoneType, default=None): Instantiated valid loss class from losses collection.
config (NoneType, default=None): Dictionary with ray.tune defined search space or function that takes an optuna trial and returns a configuration dict.
search_alg (BasicVariantGenerator, default=BasicVariantGenerator()): For ray see https://docs.ray.io/en/latest/tune/api_docs/suggestion.html; for optuna see https://optuna.readthedocs.io/en/stable/reference/samplers/index.html.
num_samples (int, default=10): Number of hyperparameter optimization steps/samples.
refit_with_val (bool, default=False): Refit of best model should preserve val_size.
cpus (int, default=4): Number of cpus to use during optimization. Only used with ray tune.
gpus (int, default=0): Number of gpus to use during optimization, default all available. Only used with ray tune.
verbose (bool, default=False): Track progress.
alias (NoneType, default=None): Custom name of the model.
backend (str, default='ray'): Backend to use for searching the hyperparameter space, can be either 'ray' or 'optuna'.
callbacks (NoneType, default=None): List of functions to call during the optimization process.
ray reference: https://docs.ray.io/en/latest/tune/tutorials/tune-metrics.html
optuna reference: https://optuna.readthedocs.io/en/stable/tutorial/20_recipes/007_optuna_callback.html
# Use your own config or AutoTimesNet.default_config
config = dict(max_steps=1, val_check_steps=1, input_size=12, hidden_size=32)
model = AutoTimesNet(h=12, config=config, num_samples=1, cpus=1)

# Fit and predict
model.fit(dataset=dataset)
y_hat = model.predict(dataset=dataset)

# Optuna
model = AutoTimesNet(h=12, config=None, backend='optuna')
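The valid_loss used for model selection can differ from the training loss. A brief sketch using MAE for training and MSE for validation, both available in neuralforecast.losses.pytorch:

# Sketch: train with MAE but select hyperparameters on MSE via valid_loss.
from neuralforecast.losses.pytorch import MAE, MSE

config = dict(max_steps=1, val_check_steps=1, input_size=12, hidden_size=32)
model = AutoTimesNet(h=12, loss=MAE(), valid_loss=MSE(),
                     config=config, num_samples=1, cpus=1)
model.fit(dataset=dataset)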

E. Multivariate


source

AutoStemGNN

 AutoStemGNN (h, n_series, loss=MAE(), valid_loss=None, config=None,
              search_alg=<ray.tune.search.basic_variant.BasicVariantGenera
              tor object at 0x7fb7dd82e2c0>, num_samples=10,
              refit_with_val=False, cpus=4, gpus=0, verbose=False,
              alias=None, backend='ray', callbacks=None)

*Class for Automatic Hyperparameter Optimization. It builds on top of ray to give access to a wide variety of hyperparameter optimization tools, ranging from classic grid search to Bayesian optimization and the HyperBand algorithm.

The validation loss to be optimized is defined by the config['loss'] dictionary value; the config also contains the rest of the hyperparameter search space.

It is important to note that the success of this hyperparameter optimization heavily relies on a strong correlation between the validation and test periods.*

Parameters:
h (int): Forecast horizon
n_series (int): Number of time series.
loss (MAE, default=MAE()): Instantiated train loss class from losses collection.
valid_loss (NoneType, default=None): Instantiated valid loss class from losses collection.
config (NoneType, default=None): Dictionary with ray.tune defined search space or function that takes an optuna trial and returns a configuration dict.
search_alg (BasicVariantGenerator, default=BasicVariantGenerator()): For ray see https://docs.ray.io/en/latest/tune/api_docs/suggestion.html; for optuna see https://optuna.readthedocs.io/en/stable/reference/samplers/index.html.
num_samples (int, default=10): Number of hyperparameter optimization steps/samples.
refit_with_val (bool, default=False): Refit of best model should preserve val_size.
cpus (int, default=4): Number of cpus to use during optimization. Only used with ray tune.
gpus (int, default=0): Number of gpus to use during optimization, default all available. Only used with ray tune.
verbose (bool, default=False): Track progress.
alias (NoneType, default=None): Custom name of the model.
backend (str, default='ray'): Backend to use for searching the hyperparameter space, can be either 'ray' or 'optuna'.
callbacks (NoneType, default=None): List of functions to call during the optimization process.
ray reference: https://docs.ray.io/en/latest/tune/tutorials/tune-metrics.html
optuna reference: https://optuna.readthedocs.io/en/stable/tutorial/20_recipes/007_optuna_callback.html
# Use your own config or AutoStemGNN.default_config
config = dict(max_steps=1, val_check_steps=1, input_size=12)
model = AutoStemGNN(h=12, n_series=1, config=config, num_samples=1, cpus=1)

# Fit and predict
model.fit(dataset=dataset)
y_hat = model.predict(dataset=dataset)

# Optuna
model = AutoStemGNN(h=12, n_series=1, config=None, backend='optuna')
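Instead of a fixed dict, config can be a ray.tune search space so that each of the num_samples trials draws a different candidate. A small sketch restricted to hyperparameters already used in this page; the candidate values are illustrative:

# Sketch of a ray.tune search space (candidate values are assumptions).
from ray import tune

ray_config = {
    "max_steps": tune.choice([1]),
    "val_check_steps": tune.choice([1]),
    "input_size": tune.choice([12, 24]),
    "random_seed": tune.randint(1, 10),
}
model = AutoStemGNN(h=12, n_series=1, config=ray_config, num_samples=2, cpus=1)
model.fit(dataset=dataset)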

source

AutoHINT

 AutoHINT (cls_model, h, loss, valid_loss, S, config,
           search_alg=<ray.tune.search.basic_variant.BasicVariantGenerator
           object at 0x7fb7dd3408b0>, num_samples=10, cpus=4, gpus=0,
           refit_with_val=False, verbose=False, alias=None, backend='ray',
           callbacks=None)

*Class for Automatic Hyperparameter Optimization. It builds on top of ray to give access to a wide variety of hyperparameter optimization tools, ranging from classic grid search to Bayesian optimization and the HyperBand algorithm.

The validation loss to be optimized is defined by the config['loss'] dictionary value; the config also contains the rest of the hyperparameter search space.

It is important to note that the success of this hyperparameter optimization heavily relies on a strong correlation between the validation and test periods.*

Parameters:
cls_model (PyTorch/PyTorchLightning model): See the neuralforecast.models collection.
h (int): Forecast horizon
loss (PyTorch module): Instantiated train loss class from losses collection.
valid_loss (PyTorch module): Instantiated valid loss class from losses collection.
S: Summing matrix that defines the hierarchical aggregation constraints.
config (dict or callable): Dictionary with ray.tune defined search space or function that takes an optuna trial and returns a configuration dict.
search_alg (BasicVariantGenerator, default=BasicVariantGenerator()): For ray see https://docs.ray.io/en/latest/tune/api_docs/suggestion.html; for optuna see https://optuna.readthedocs.io/en/stable/reference/samplers/index.html.
num_samples (int, default=10): Number of hyperparameter optimization steps/samples.
cpus (int, default=4): Number of cpus to use during optimization. Only used with ray tune.
gpus (int, default=0): Number of gpus to use during optimization, default all available. Only used with ray tune.
refit_with_val (bool, default=False): Refit of best model should preserve val_size.
verbose (bool, default=False): Track progress.
alias (NoneType, default=None): Custom name of the model.
backend (str, default='ray'): Backend to use for searching the hyperparameter space, can be either 'ray' or 'optuna'.
callbacks (NoneType, default=None): List of functions to call during the optimization process.
ray reference: https://docs.ray.io/en/latest/tune/tutorials/tune-metrics.html
optuna reference: https://optuna.readthedocs.io/en/stable/tutorial/20_recipes/007_optuna_callback.html
# Perform a simple hyperparameter optimization with
# NHITS and then reconcile with HINT.
# `S_df` (hierarchy summing matrix), `hint_dataset` (dataset containing all
# of the hierarchy's series) and `quantiles` are assumed to be defined beforehand.
from neuralforecast.losses.pytorch import GMM, sCRPS

base_config = dict(max_steps=1, val_check_steps=1, input_size=8)
base_model = AutoNHITS(h=4, loss=GMM(n_components=2, quantiles=quantiles),
                       config=base_config, num_samples=1, cpus=1)
model = HINT(h=4, S=S_df.values,
             model=base_model,  reconciliation='MinTraceOLS')

model.fit(dataset=dataset)
y_hat = model.predict(dataset=hint_dataset)

# Perform a joint hyperparameter optimization over
# NHITS + HINT reconciliation configurations
nhits_config = {
       "learning_rate": tune.choice([1e-3]),                                     # Initial Learning rate
       "max_steps": tune.choice([1]),                                            # Number of SGD steps
       "val_check_steps": tune.choice([1]),                                      # Number of steps between validation
       "input_size": tune.choice([5 * 12]),                                      # input_size = multiplier * horizon
       "batch_size": tune.choice([7]),                                           # Number of series in windows
       "windows_batch_size": tune.choice([256]),                                 # Number of windows in batch
       "n_pool_kernel_size": tune.choice([[2, 2, 2], [16, 8, 1]]),               # MaxPool's Kernelsize
       "n_freq_downsample": tune.choice([[168, 24, 1], [24, 12, 1], [1, 1, 1]]), # Interpolation expressivity ratios
       "activation": tune.choice(['ReLU']),                                      # Type of non-linear activation
       "n_blocks":  tune.choice([[1, 1, 1]]),                                    # Blocks per each 3 stacks
       "mlp_units":  tune.choice([[[512, 512], [512, 512], [512, 512]]]),        # 2 512-Layers per block for each stack
       "interpolation_mode": tune.choice(['linear']),                            # Type of multi-step interpolation
       "random_seed": tune.randint(1, 10),
       "reconciliation": tune.choice(['BottomUp', 'MinTraceOLS', 'MinTraceWLS'])
    }
model = AutoHINT(h=4, S=S_df.values,
                 cls_model=NHITS,
                 config=nhits_config,
                 loss=GMM(n_components=2, level=[80, 90]),
                 valid_loss=sCRPS(level=[80, 90]),
                 num_samples=1, cpus=1)
model.fit(dataset=dataset)
y_hat = model.predict(dataset=hint_dataset)
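Both snippets above assume a hierarchy: S_df is the summing matrix that maps bottom-level series to every series in the hierarchy, and hint_dataset contains all of those series. A toy sketch of such a matrix for one total and two bottom series (the names are illustrative):

# Toy summing matrix: total = bottom_1 + bottom_2.
import pandas as pd

S_df = pd.DataFrame(
    [[1.0, 1.0],   # total
     [1.0, 0.0],   # bottom_1
     [0.0, 1.0]],  # bottom_2
    index=['total', 'bottom_1', 'bottom_2'],
    columns=['bottom_1', 'bottom_2'],
)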

source

AutoTSMixer

 AutoTSMixer (h, n_series, loss=MAE(), valid_loss=None, config=None,
              search_alg=<ray.tune.search.basic_variant.BasicVariantGenera
              tor object at 0x7fb7dd406f50>, num_samples=10,
              refit_with_val=False, cpus=4, gpus=0, verbose=False,
              alias=None, backend='ray', callbacks=None)

*Class for Automatic Hyperparameter Optimization. It builds on top of ray to give access to a wide variety of hyperparameter optimization tools, ranging from classic grid search to Bayesian optimization and the HyperBand algorithm.

The validation loss to be optimized is defined by the config['loss'] dictionary value; the config also contains the rest of the hyperparameter search space.

It is important to note that the success of this hyperparameter optimization heavily relies on a strong correlation between the validation and test periods.*

Parameters:
h (int): Forecast horizon
n_series (int): Number of time series.
loss (MAE, default=MAE()): Instantiated train loss class from losses collection.
valid_loss (NoneType, default=None): Instantiated valid loss class from losses collection.
config (NoneType, default=None): Dictionary with ray.tune defined search space or function that takes an optuna trial and returns a configuration dict.
search_alg (BasicVariantGenerator, default=BasicVariantGenerator()): For ray see https://docs.ray.io/en/latest/tune/api_docs/suggestion.html; for optuna see https://optuna.readthedocs.io/en/stable/reference/samplers/index.html.
num_samples (int, default=10): Number of hyperparameter optimization steps/samples.
refit_with_val (bool, default=False): Refit of best model should preserve val_size.
cpus (int, default=4): Number of cpus to use during optimization. Only used with ray tune.
gpus (int, default=0): Number of gpus to use during optimization, default all available. Only used with ray tune.
verbose (bool, default=False): Track progress.
alias (NoneType, default=None): Custom name of the model.
backend (str, default='ray'): Backend to use for searching the hyperparameter space, can be either 'ray' or 'optuna'.
callbacks (NoneType, default=None): List of functions to call during the optimization process.
ray reference: https://docs.ray.io/en/latest/tune/tutorials/tune-metrics.html
optuna reference: https://optuna.readthedocs.io/en/stable/tutorial/20_recipes/007_optuna_callback.html
# Use your own config or AutoTSMixer.default_config
config = dict(max_steps=1, val_check_steps=1, input_size=12)
model = AutoTSMixer(h=12, n_series=1, config=config, num_samples=1, cpus=1)

# Fit and predict
model.fit(dataset=dataset)
y_hat = model.predict(dataset=dataset)

# Optuna
model = AutoTSMixer(h=12, n_series=1, config=None, backend='optuna')
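refit_with_val controls whether the final refit of the best configuration preserves the validation split passed to fit. A brief sketch (the val_size value is an assumption for illustration):

# Sketch: search with a 12-step validation split and keep it during the final refit.
model = AutoTSMixer(h=12, n_series=1, config=config, num_samples=1, cpus=1,
                    refit_with_val=True)
model.fit(dataset=dataset, val_size=12)
y_hat = model.predict(dataset=dataset)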

source

AutoTSMixerx

 AutoTSMixerx (h, n_series, loss=MAE(), valid_loss=None, config=None,
               search_alg=<ray.tune.search.basic_variant.BasicVariantGener
               ator object at 0x7fb7dd831e40>, num_samples=10,
               refit_with_val=False, cpus=4, gpus=0, verbose=False,
               alias=None, backend='ray', callbacks=None)

*Class for Automatic Hyperparameter Optimization. It builds on top of ray to give access to a wide variety of hyperparameter optimization tools, ranging from classic grid search to Bayesian optimization and the HyperBand algorithm.

The validation loss to be optimized is defined by the config['loss'] dictionary value; the config also contains the rest of the hyperparameter search space.

It is important to note that the success of this hyperparameter optimization heavily relies on a strong correlation between the validation and test periods.*

Parameters:
h (int): Forecast horizon
n_series (int): Number of time series.
loss (MAE, default=MAE()): Instantiated train loss class from losses collection.
valid_loss (NoneType, default=None): Instantiated valid loss class from losses collection.
config (NoneType, default=None): Dictionary with ray.tune defined search space or function that takes an optuna trial and returns a configuration dict.
search_alg (BasicVariantGenerator, default=BasicVariantGenerator()): For ray see https://docs.ray.io/en/latest/tune/api_docs/suggestion.html; for optuna see https://optuna.readthedocs.io/en/stable/reference/samplers/index.html.
num_samples (int, default=10): Number of hyperparameter optimization steps/samples.
refit_with_val (bool, default=False): Refit of best model should preserve val_size.
cpus (int, default=4): Number of cpus to use during optimization. Only used with ray tune.
gpus (int, default=0): Number of gpus to use during optimization, default all available. Only used with ray tune.
verbose (bool, default=False): Track progress.
alias (NoneType, default=None): Custom name of the model.
backend (str, default='ray'): Backend to use for searching the hyperparameter space, can be either 'ray' or 'optuna'.
callbacks (NoneType, default=None): List of functions to call during the optimization process.
ray reference: https://docs.ray.io/en/latest/tune/tutorials/tune-metrics.html
optuna reference: https://optuna.readthedocs.io/en/stable/tutorial/20_recipes/007_optuna_callback.html
# Use your own config or AutoTSMixerx.default_config
config = dict(max_steps=1, val_check_steps=1, input_size=12)
model = AutoTSMixerx(h=12, n_series=1, config=config, num_samples=1, cpus=1)

# Fit and predict
model.fit(dataset=dataset)
y_hat = model.predict(dataset=dataset)

# Optuna
model = AutoTSMixerx(h=12, n_series=1, config=None, backend='optuna')

source

AutoMLPMultivariate

 AutoMLPMultivariate (h, n_series, loss=MAE(), valid_loss=None,
                      config=None, search_alg=<ray.tune.search.basic_varia
                      nt.BasicVariantGenerator object at 0x7fb7dd3ea200>,
                      num_samples=10, refit_with_val=False, cpus=4,
                      gpus=0, verbose=False, alias=None, backend='ray',
                      callbacks=None)

*Class for Automatic Hyperparameter Optimization. It builds on top of ray to give access to a wide variety of hyperparameter optimization tools, ranging from classic grid search to Bayesian optimization and the HyperBand algorithm.

The validation loss to be optimized is defined by the config['loss'] dictionary value; the config also contains the rest of the hyperparameter search space.

It is important to note that the success of this hyperparameter optimization heavily relies on a strong correlation between the validation and test periods.*

Parameters:
h (int): Forecast horizon
n_series (int): Number of time series.
loss (MAE, default=MAE()): Instantiated train loss class from losses collection.
valid_loss (NoneType, default=None): Instantiated valid loss class from losses collection.
config (NoneType, default=None): Dictionary with ray.tune defined search space or function that takes an optuna trial and returns a configuration dict.
search_alg (BasicVariantGenerator, default=BasicVariantGenerator()): For ray see https://docs.ray.io/en/latest/tune/api_docs/suggestion.html; for optuna see https://optuna.readthedocs.io/en/stable/reference/samplers/index.html.
num_samples (int, default=10): Number of hyperparameter optimization steps/samples.
refit_with_val (bool, default=False): Refit of best model should preserve val_size.
cpus (int, default=4): Number of cpus to use during optimization. Only used with ray tune.
gpus (int, default=0): Number of gpus to use during optimization, default all available. Only used with ray tune.
verbose (bool, default=False): Track progress.
alias (NoneType, default=None): Custom name of the model.
backend (str, default='ray'): Backend to use for searching the hyperparameter space, can be either 'ray' or 'optuna'.
callbacks (NoneType, default=None): List of functions to call during the optimization process.
ray reference: https://docs.ray.io/en/latest/tune/tutorials/tune-metrics.html
optuna reference: https://optuna.readthedocs.io/en/stable/tutorial/20_recipes/007_optuna_callback.html
# Use your own config or AutoMLPMultivariate.default_config
config = dict(max_steps=1, val_check_steps=1, input_size=12)
model = AutoMLPMultivariate(h=12, n_series=1, config=config, num_samples=1, cpus=1)

# Fit and predict
model.fit(dataset=dataset)
y_hat = model.predict(dataset=dataset)

# Optuna
model = AutoMLPMultivariate(h=12, n_series=1, config=None, backend='optuna')

source

AutoSOFTS

 AutoSOFTS (h, n_series, loss=MAE(), valid_loss=None, config=None,
            search_alg=<ray.tune.search.basic_variant.BasicVariantGenerato
            r object at 0x7fb7dd3d3280>, num_samples=10,
            refit_with_val=False, cpus=4, gpus=0, verbose=False,
            alias=None, backend='ray', callbacks=None)

*Class for Automatic Hyperparameter Optimization. It builds on top of ray to give access to a wide variety of hyperparameter optimization tools, ranging from classic grid search to Bayesian optimization and the HyperBand algorithm.

The validation loss to be optimized is defined by the config['loss'] dictionary value; the config also contains the rest of the hyperparameter search space.

It is important to note that the success of this hyperparameter optimization heavily relies on a strong correlation between the validation and test periods.*

Parameters:
h (int): Forecast horizon
n_series (int): Number of time series.
loss (MAE, default=MAE()): Instantiated train loss class from losses collection.
valid_loss (NoneType, default=None): Instantiated valid loss class from losses collection.
config (NoneType, default=None): Dictionary with ray.tune defined search space or function that takes an optuna trial and returns a configuration dict.
search_alg (BasicVariantGenerator, default=BasicVariantGenerator()): For ray see https://docs.ray.io/en/latest/tune/api_docs/suggestion.html; for optuna see https://optuna.readthedocs.io/en/stable/reference/samplers/index.html.
num_samples (int, default=10): Number of hyperparameter optimization steps/samples.
refit_with_val (bool, default=False): Refit of best model should preserve val_size.
cpus (int, default=4): Number of cpus to use during optimization. Only used with ray tune.
gpus (int, default=0): Number of gpus to use during optimization, default all available. Only used with ray tune.
verbose (bool, default=False): Track progress.
alias (NoneType, default=None): Custom name of the model.
backend (str, default='ray'): Backend to use for searching the hyperparameter space, can be either 'ray' or 'optuna'.
callbacks (NoneType, default=None): List of functions to call during the optimization process.
ray reference: https://docs.ray.io/en/latest/tune/tutorials/tune-metrics.html
optuna reference: https://optuna.readthedocs.io/en/stable/tutorial/20_recipes/007_optuna_callback.html
# Use your own config or AutoSOFTS.default_config
config = dict(max_steps=1, val_check_steps=1, input_size=12, hidden_size=16)
model = AutoSOFTS(h=12, n_series=1, config=config, num_samples=1, cpus=1)

# Fit and predict
model.fit(dataset=dataset)
y_hat = model.predict(dataset=dataset)

# Optuna
model = AutoSOFTS(h=12, n_series=1, config=None, backend='optuna')
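With the optuna backend, search_alg can be set following the samplers reference linked in the table above. A short, hedged sketch using optuna's TPESampler (whether a given sampler suits your search space is an assumption to verify):

# Sketch: optuna backend with an explicit sampler passed as search_alg.
import optuna

model = AutoSOFTS(h=12, n_series=1, config=None, backend='optuna',
                  search_alg=optuna.samplers.TPESampler(seed=1), num_samples=2)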

TESTS