gluonts.torch.model.lag_tst.estimator module#

class gluonts.torch.model.lag_tst.estimator.LagTSTEstimator(freq: str, prediction_length: int, context_length: Optional[int] = None, d_model: int = 32, nhead: int = 4, dim_feedforward: int = 128, lags_seq: Optional[List[int]] = None, dropout: float = 0.1, activation: str = 'relu', norm_first: bool = False, num_encoder_layers: int = 2, lr: float = 0.001, weight_decay: float = 1e-08, scaling: Optional[str] = 'mean', distr_output: gluonts.torch.distributions.output.Output = gluonts.torch.distributions.studentT.StudentTOutput(beta=0.0), batch_size: int = 32, num_batches_per_epoch: int = 50, trainer_kwargs: Optional[Dict[str, Any]] = None, train_sampler: Optional[gluonts.transform.sampler.InstanceSampler] = None, validation_sampler: Optional[gluonts.transform.sampler.InstanceSampler] = None)[source]#

Bases: gluonts.torch.model.estimator.PyTorchLightningEstimator

An estimator training the LagTST model for forecasting.

This class uses the model defined in LagTSTModel and wraps it into a LagTSTLightningModule for training purposes: training is performed using PyTorch Lightning’s pl.Trainer class.

Parameters
  • freq – Frequency of the data to train on and predict.

  • prediction_length (int) – Length of the prediction horizon.

  • context_length – Number of time steps prior to prediction time that the model takes as inputs (default: 10 * prediction_length).

  • lags_seq – Indices of the lagged target values to use as inputs of the model (default: None, in which case these are automatically determined based on freq).

  • d_model – Dimension of the hidden representations in the Transformer encoder.

  • nhead – Number of attention heads in the Transformer encoder.

  • dim_feedforward – Dimension of the feedforward network in each Transformer encoder block.

  • dropout – Dropout probability in the Transformer encoder.

  • activation – Activation function in the Transformer encoder.

  • norm_first – Whether to apply layer normalization before (True) or after (False) the attention and feedforward blocks.

  • num_encoder_layers – Number of layers in the Transformer encoder.

  • lr – Learning rate (default: 1e-3).

  • weight_decay – Weight decay regularization parameter (default: 1e-8).

  • scaling – Which scaling to apply to the target; can be “mean”, “std” or None (default: “mean”).

  • distr_output – Distribution to use to evaluate observations and sample predictions (default: StudentTOutput()).

  • batch_size – The size of the batches to be used for training (default: 32).

  • num_batches_per_epoch – Number of batches to be processed in each training epoch (default: 50).

  • trainer_kwargs – Additional arguments to provide to pl.Trainer for construction.

  • train_sampler – Controls the sampling of windows during training.

  • validation_sampler – Controls the sampling of windows during validation.
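A minimal usage sketch; the toy dataset, frequency, and trainer settings below are illustrative assumptions, not values prescribed by the estimator.

```python
import numpy as np

from gluonts.dataset.common import ListDataset
from gluonts.torch.model.lag_tst.estimator import LagTSTEstimator

# Toy hourly dataset with a single sine-wave series (illustrative values only).
train_ds = ListDataset(
    [{"start": "2021-01-01 00:00", "target": np.sin(np.arange(400) / 10.0)}],
    freq="H",
)

estimator = LagTSTEstimator(
    freq="H",
    prediction_length=24,
    batch_size=32,
    num_batches_per_epoch=50,
    trainer_kwargs={"max_epochs": 5},  # forwarded to pl.Trainer
)

predictor = estimator.train(train_ds)          # returns a PyTorchPredictor
forecasts = list(predictor.predict(train_ds))  # one probabilistic forecast per series
```

train() builds the transformation, lightning module, and data loaders described by the methods below, fits the model with pl.Trainer, and returns a predictor for inference.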

create_lightning_module() lightning.pytorch.core.module.LightningModule[source]#

Create and return the network used for training (i.e., computing the loss).

Returns

The network that computes the loss given input data.

Return type

pl.LightningModule
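A short sketch, assuming estimator is the LagTSTEstimator constructed in the usage example above; it only checks that the returned (not yet trained) object is a pl.LightningModule.

```python
import lightning.pytorch as pl

module = estimator.create_lightning_module()
assert isinstance(module, pl.LightningModule)
print(type(module).__name__)  # LagTSTLightningModule
```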

create_predictor(transformation: gluonts.transform._base.Transformation, module) gluonts.torch.model.predictor.PyTorchPredictor[source]#

Create and return a predictor object.

Parameters
  • transformation – Transformation to be applied to data before it goes into the model.

  • module – A trained pl.LightningModule object.

Returns

A predictor wrapping a nn.Module used for inference.

Return type

Predictor
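A sketch of how create_transformation, create_lightning_module, and create_predictor fit together, assuming estimator and train_ds from the usage example above; train() performs the equivalent steps internally, with the module fitted by pl.Trainer before the predictor is built.

```python
transformation = estimator.create_transformation()
module = estimator.create_lightning_module()  # would normally be trained by pl.Trainer first
predictor = estimator.create_predictor(transformation, module)

forecast = next(iter(predictor.predict(train_ds)))
print(forecast.mean.shape)  # expected: (prediction_length,)
```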

create_training_data_loader(data: gluonts.dataset.Dataset, module: gluonts.torch.model.lag_tst.lightning_module.LagTSTLightningModule, shuffle_buffer_length: Optional[int] = None, **kwargs) Iterable[source]#

Create a data loader for training purposes.

Parameters
  • data – Dataset from which to create the data loader.

  • module – The pl.LightningModule object that will receive the batches from the data loader.

Returns

The data loader, i.e. an iterable over batches of data.

Return type

Iterable
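A sketch that peeks at one training batch, assuming estimator, train_ds, and module from the examples above; the exact field names and tensor shapes depend on the transformation and lag indices.

```python
loader = estimator.create_training_data_loader(
    train_ds, module, shuffle_buffer_length=100
)
batch = next(iter(loader))
print(list(batch.keys()))          # e.g. past_target, past_observed_values, future_target, ...
print(batch["past_target"].shape)  # (batch_size, past length); field name assumed
```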

create_transformation() gluonts.transform._base.Transformation[source]#

Create and return the transformation needed for training and inference.

Returns

The transformation that will be applied entry-wise to datasets, at training and inference time.

Return type

Transformation
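A sketch applying the transformation entry-wise, assuming train_ds from the usage example above and that Transformation.apply from gluonts.transform._base is available.

```python
transformation = estimator.create_transformation()
transformed = transformation.apply(train_ds, is_train=True)
entry = next(iter(transformed))
print(sorted(entry.keys()))  # original fields plus those added by the transformation
```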

create_validation_data_loader(data: gluonts.dataset.Dataset, module: gluonts.torch.model.lag_tst.lightning_module.LagTSTLightningModule, **kwargs) Iterable[source]#

Create a data loader for validation purposes.

Parameters
  • data – Dataset from which to create the data loader.

  • module – The pl.LightningModule object that will receive the batches from the data loader.

Returns

The data loader, i.e. an iterable over batches of data.

Return type

Iterable

lead_time: int#
prediction_length: int#