gluonts.mx.model.n_beats package#

class gluonts.mx.model.n_beats.NBEATSEnsembleEstimator(freq: str, prediction_length: int, meta_context_length: Optional[List[int]] = None, meta_loss_function: Optional[List[str]] = None, meta_bagging_size: int = 10, trainer: gluonts.mx.trainer._base.Trainer = gluonts.mx.trainer._base.Trainer(add_default_callbacks=True, callbacks=None, clip_gradient=10.0, ctx=None, epochs=100, hybridize=True, init='xavier', learning_rate=0.001, num_batches_per_epoch=50, weight_decay=1e-08), num_stacks: int = 30, widths: Optional[List[int]] = None, num_blocks: Optional[List[int]] = None, num_block_layers: Optional[List[int]] = None, expansion_coefficient_lengths: Optional[List[int]] = None, sharing: Optional[List[bool]] = None, stack_types: Optional[List[str]] = None, aggregation_method: str = 'median', **kwargs)[source]#

Bases: gluonts.model.estimator.Estimator

An ensemble N-BEATS Estimator, approximately as described in the paper: https://arxiv.org/abs/1905.10437.

The three meta parameters ‘meta_context_length’, ‘meta_loss_function’ and ‘meta_bagging_size’ together define how the sub-models are assembled. The total number of models used for the ensemble is:

|meta_context_length| x |meta_loss_function| x meta_bagging_size

With the default settings (5 context lengths, 3 loss functions, bagging size 10), this amounts to 150 models. A construction sketch is shown after the parameter list below.

Noteworthy differences in this implementation compared to the paper:

  • The parameter L_H is not implemented; we sample training sequences using the default GluonTS method, the “InstanceSplitter”.

Parameters
  • freq – Time granularity of the data

  • prediction_length (int) – Length of the prediction. Also known as ‘horizon’.

  • meta_context_length – The different ‘context_length’ values (also known as ‘lookback period’) to use for training the models. The ‘context_length’ is the number of time units that condition the predictions. Default and recommended value: [multiplier * prediction_length for multiplier in range(2, 7)]

  • meta_loss_function – The different ‘loss_function’ values (also known as metrics) to use for training the models. Unlike other models in GluonTS, this network does not use a distribution. Default and recommended value: [“sMAPE”, “MASE”, “MAPE”]

  • meta_bagging_size – The number of models that share the parameter combination of ‘context_length’ and ‘loss_function’. Each of these models gets a different random initialization. Default and recommended value: 10

  • trainer – Trainer object to be used (default: Trainer())

  • num_stacks – The number of stacks the network should contain. Default and recommended value for generic mode: 30. Recommended value for interpretable mode: 2.

  • num_blocks – The number of blocks per stack. A list of ints of length 1 or ‘num_stacks’. Default and recommended value for generic mode: [1]. Recommended value for interpretable mode: [3].

  • num_block_layers – Number of fully connected layers with ReLU activation per block. A list of ints of length 1 or ‘num_stacks’. Default and recommended value for generic mode: [4]. Recommended value for interpretable mode: [4].

  • widths – Widths of the fully connected layers with ReLU activation in the blocks. A list of ints of length 1 or ‘num_stacks’. Default and recommended value for generic mode: [512]. Recommended value for interpretable mode: [256, 2048].

  • sharing – Whether the weights are shared among the blocks of each stack. A list of bools of length 1 or ‘num_stacks’. Default and recommended value for generic mode: [False]. Recommended value for interpretable mode: [True].

  • expansion_coefficient_lengths – If the type is “G” (generic), then the length of the expansion coefficient. If the type is “T” (trend), then it corresponds to the degree of the polynomial. If the type is “S” (seasonal), then it is not used. A list of ints of length 1 or ‘num_stacks’. Default value for generic mode: [32]. Recommended value for interpretable mode: [3].

  • stack_types – One of the following values: “G” (generic), “S” (seasonal) or “T” (trend). A list of strings of length 1 or ‘num_stacks’. Default and recommended value for generic mode: [“G”]. Recommended value for interpretable mode: [“T”, “S”].

  • aggregation_method – The method by which to aggregate the individual predictions of the models. Either ‘median’, ‘mean’ or ‘none’, in which case no aggregation happens. Default is ‘median’.

  • **kwargs – Arguments passed down to the individual estimators.
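
A minimal construction sketch; the hourly frequency, 24-step horizon, and deliberately reduced meta settings are illustrative assumptions, not values taken from this documentation:

    from gluonts.mx.model.n_beats import NBEATSEnsembleEstimator
    from gluonts.mx.trainer import Trainer

    estimator = NBEATSEnsembleEstimator(
        freq="H",                              # hourly data (illustrative)
        prediction_length=24,                  # forecast horizon
        meta_context_length=[48, 72],          # two lookback windows instead of the default five
        meta_loss_function=["sMAPE", "MAPE"],  # two loss functions instead of the default three
        meta_bagging_size=2,                   # two random initializations per combination
        trainer=Trainer(epochs=10),
    )
    # 2 context lengths x 2 loss functions x bagging size 2 = 8 sub-models.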

classmethod from_hyperparameters(**hyperparameters) gluonts.mx.model.n_beats._ensemble.NBEATSEnsembleEstimator[source]#
lead_time: int#
prediction_length: int#
train(training_data: gluonts.dataset.Dataset, validation_data: Optional[gluonts.dataset.Dataset] = None) gluonts.mx.model.n_beats._ensemble.NBEATSEnsemblePredictor[source]#

Train the estimator on the given data.

Parameters
  • training_data – Dataset to train the model on.

  • validation_data – Dataset to validate the model on during training.

Returns

The predictor containing the trained model.

Return type

Predictor
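
A hedged end-to-end sketch of train(); the toy ListDataset below (its start date and target values) and the reduced meta settings are assumptions made purely for illustration:

    from gluonts.dataset.common import ListDataset
    from gluonts.mx.model.n_beats import NBEATSEnsembleEstimator
    from gluonts.mx.trainer import Trainer

    # Toy hourly training data with a daily pattern (made up for this example).
    train_ds = ListDataset(
        [{"start": "2021-01-01 00:00", "target": [float(i % 24) for i in range(400)]}],
        freq="H",
    )

    estimator = NBEATSEnsembleEstimator(
        freq="H",
        prediction_length=24,
        meta_context_length=[48],      # a single sub-model keeps the example small
        meta_loss_function=["MAPE"],
        meta_bagging_size=1,
        trainer=Trainer(epochs=5),
    )

    # train() returns an NBEATSEnsemblePredictor wrapping all trained sub-models.
    predictor = estimator.train(training_data=train_ds)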

train_from(predictor: gluonts.model.predictor.Predictor, training_data: gluonts.dataset.Dataset, validation_data: Optional[gluonts.dataset.Dataset] = None) gluonts.mx.model.n_beats._ensemble.NBEATSEnsemblePredictor[source]#
class gluonts.mx.model.n_beats.NBEATSEnsemblePredictor(prediction_length: int, predictors: List[gluonts.mx.model.predictor.RepresentableBlockPredictor], aggregation_method: Optional[str] = 'median')[source]#

Bases: gluonts.model.predictor.Predictor

An ensemble predictor for N-BEATS. Calling ‘.predict’ will result in:

|predictors| x |dataset|

predictions if aggregation_method is ‘none’, otherwise in:

|dataset|

predictions (see the sketch after the parameter list below).

Parameters
  • prediction_length – Prediction horizon.

  • predictors – The list of ‘RepresentableBlockPredictor’ that the ensemble consists of.

  • aggregation_method – The method by which to aggregate the individual predictions of the models. Either ‘median’, ‘mean’ or ‘none’, in which case no aggregation happens. Default is ‘median’.
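
A short sketch of how the aggregation setting changes the number of forecasts; ‘predictor’ (an NBEATSEnsemblePredictor obtained from training) and ‘test_ds’ (a gluonts Dataset) are assumed to already exist:

    # Assumes `predictor` is an NBEATSEnsemblePredictor and `test_ds` a gluonts Dataset.
    predictor.set_aggregation_method("median")
    aggregated = list(predictor.predict(test_ds))   # one aggregated forecast per series

    predictor.set_aggregation_method("none")
    individual = list(predictor.predict(test_ds))   # |predictors| x |dataset| forecasts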

classmethod deserialize(path: pathlib.Path, ctx: Optional[mxnet.context.Context] = None, **kwargs) gluonts.mx.model.n_beats._ensemble.NBEATSEnsemblePredictor[source]#

Load a serialized NBEATSEnsemblePredictor from the given path.

Parameters
  • path – Path to the serialized predictor files.

  • ctx – Optional mxnet context to be used with the predictor. If nothing is passed, the GPU is used if available, and the CPU otherwise.
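
A hedged serialize/deserialize round-trip sketch; the output directory name and the pre-existing ‘predictor’ object are assumptions of this example:

    from pathlib import Path

    from gluonts.mx.model.n_beats import NBEATSEnsemblePredictor

    model_dir = Path("nbeats-ensemble")   # hypothetical output directory
    model_dir.mkdir(exist_ok=True)

    # Assumes `predictor` is a trained NBEATSEnsemblePredictor.
    predictor.serialize(model_dir)

    # Later, possibly in another process: reload the ensemble. Without an explicit
    # mxnet context, the GPU is used if available, otherwise the CPU.
    restored = NBEATSEnsemblePredictor.deserialize(model_dir)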

hybridize(batch: Dict[str, Any]) None[source]#
predict(dataset: gluonts.dataset.Dataset, num_samples: Optional[int] = 1, **kwargs) Iterator[gluonts.model.forecast.Forecast][source]#

Compute forecasts for the time series in the provided dataset.

Parameters
  • dataset – The dataset containing the time series to predict.

Returns

Iterator over the forecasts, in the same order as the dataset iterable was provided.

Return type

Iterator[Forecast]
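
A brief sketch of consuming the returned iterator, assuming ‘predictor’ and ‘test_ds’ as above; accessing the mean and a quantile follows the generic gluonts Forecast interface:

    for forecast in predictor.predict(test_ds):
        print(forecast.start_date)          # first timestamp of the forecast horizon
        print(forecast.mean[:5])            # point forecast for the first five steps
        print(forecast.quantile(0.9)[:5])   # 0.9 quantile for the first five steps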

serialize(path: pathlib.Path) None[source]#
set_aggregation_method(aggregation_method: str)[source]#
class gluonts.mx.model.n_beats.NBEATSEstimator(freq: str, prediction_length: int, context_length: Optional[int] = None, trainer: gluonts.mx.trainer._base.Trainer = gluonts.mx.trainer._base.Trainer(add_default_callbacks=True, callbacks=None, clip_gradient=10.0, ctx=None, epochs=100, hybridize=True, init='xavier', learning_rate=0.001, num_batches_per_epoch=50, weight_decay=1e-08), num_stacks: int = 30, widths: Optional[List[int]] = None, num_blocks: Optional[List[int]] = None, num_block_layers: Optional[List[int]] = None, expansion_coefficient_lengths: Optional[List[int]] = None, sharing: Optional[List[bool]] = None, stack_types: Optional[List[str]] = None, loss_function: Optional[str] = 'MAPE', train_sampler: Optional[gluonts.transform.sampler.InstanceSampler] = None, validation_sampler: Optional[gluonts.transform.sampler.InstanceSampler] = None, batch_size: int = 32, scale: bool = False, **kwargs)[source]#

Bases: gluonts.mx.model.estimator.GluonEstimator

An Estimator based on a single (!) N-BEATS network, approximately as described in the paper: https://arxiv.org/abs/1905.10437. The actual N-BEATS model is an ensemble of N-BEATS networks and is implemented by the “NBEATSEnsembleEstimator”. A configuration sketch for the interpretable mode is shown after the parameter list below.

Noteworthy differences in this implementation compared to the paper:

  • The parameter L_H is not implemented; we sample training sequences using the default GluonTS method, the “InstanceSplitter”.

Parameters
  • freq – Time granularity of the data

  • prediction_length (int) – Length of the prediction. Also known as ‘horizon’.

  • context_length – Number of time units that condition the predictions, also known as the ‘lookback period’. Default is 2 * prediction_length.

  • trainer – Trainer object to be used (default: Trainer())

  • num_stacks – The number of stacks the network should contain. Default and recommended value for generic mode: 30. Recommended value for interpretable mode: 2.

  • num_blocks – The number of blocks per stack. A list of ints of length 1 or ‘num_stacks’. Default and recommended value for generic mode: [1]. Recommended value for interpretable mode: [3].

  • num_block_layers – Number of fully connected layers with ReLU activation per block. A list of ints of length 1 or ‘num_stacks’. Default and recommended value for generic mode: [4]. Recommended value for interpretable mode: [4].

  • widths – Widths of the fully connected layers with ReLU activation in the blocks. A list of ints of length 1 or ‘num_stacks’. Default and recommended value for generic mode: [512]. Recommended value for interpretable mode: [256, 2048].

  • sharing – Whether the weights are shared among the blocks of each stack. A list of bools of length 1 or ‘num_stacks’. Default and recommended value for generic mode: [False]. Recommended value for interpretable mode: [True].

  • expansion_coefficient_lengths – If the type is “G” (generic), then the length of the expansion coefficient. If the type is “T” (trend), then it corresponds to the degree of the polynomial. If the type is “S” (seasonal), then it is not used. A list of ints of length 1 or ‘num_stacks’. Default value for generic mode: [32]. Recommended value for interpretable mode: [3].

  • stack_types – One of the following values: “G” (generic), “S” (seasonal) or “T” (trend). A list of strings of length 1 or ‘num_stacks’. Default and recommended value for generic mode: [“G”]. Recommended value for interpretable mode: [“T”, “S”].

  • loss_function – The loss function (also known as metric) to use for training the network. Unlike other models in GluonTS this network does not use a distribution. One of the following: “sMAPE”, “MASE” or “MAPE”. The default value is “MAPE”.

  • train_sampler – Controls the sampling of windows during training.

  • validation_sampler – Controls the sampling of windows during validation.

  • batch_size – The size of the batches to be used for training and prediction.

  • scale – If True, scale the input observations by the mean.

  • kwargs – Arguments passed to ‘GluonEstimator’.
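
A hedged sketch of a single network configured with the interpretable-mode values recommended in the parameter descriptions above; the frequency and horizon are illustrative assumptions:

    from gluonts.mx.model.n_beats import NBEATSEstimator
    from gluonts.mx.trainer import Trainer

    estimator = NBEATSEstimator(
        freq="H",                        # hourly data (illustrative)
        prediction_length=24,
        num_stacks=2,                    # interpretable mode: trend + seasonality
        stack_types=["T", "S"],
        num_blocks=[3],
        num_block_layers=[4],
        widths=[256, 2048],
        sharing=[True],
        expansion_coefficient_lengths=[3],
        loss_function="MAPE",
        trainer=Trainer(epochs=10),
    )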

create_predictor(transformation: gluonts.transform._base.Transformation, trained_network: mxnet.gluon.block.HybridBlock) gluonts.model.predictor.Predictor[source]#

Create and return a predictor object.

Parameters
  • transformation – Transformation to be applied to data before it goes into the model.

  • trained_network – A trained HybridBlock object.

Returns

A predictor wrapping a HybridBlock used for inference.

Return type

Predictor

create_training_data_loader(data: gluonts.dataset.Dataset, **kwargs) Iterable[Dict[str, Any]][source]#

Create a data loader for training purposes.

Parameters

data – Dataset from which to create the data loader.

Returns

The data loader, i.e. an iterable over batches of data.

Return type

DataLoader

create_training_network() mxnet.gluon.block.HybridBlock[source]#

Create and return the network used for training (i.e., computing the loss).

Returns

The network that computes the loss given input data.

Return type

HybridBlock

create_transformation() gluonts.transform._base.Transformation[source]#

Create and return the transformation needed for training and inference.

Returns

The transformation that will be applied entry-wise to datasets, at training and inference time.

Return type

Transformation

create_validation_data_loader(data: gluonts.dataset.Dataset, **kwargs) Iterable[Dict[str, Any]][source]#

Create a data loader for validation purposes.

Parameters

data – Dataset from which to create the data loader.

Returns

The data loader, i.e. an iterable over batches of data.

Return type

DataLoader

lead_time: int#
prediction_length: int#