gluonts.torch.model.d_linear.estimator module#

class gluonts.torch.model.d_linear.estimator.DLinearEstimator(prediction_length: int, context_length: Optional[int] = None, hidden_dimension: Optional[int] = None, lr: float = 0.001, weight_decay: float = 1e-08, scaling: Optional[str] = 'mean', distr_output: gluonts.torch.distributions.output.Output = gluonts.torch.distributions.studentT.StudentTOutput(beta=0.0), kernel_size: int = 25, batch_size: int = 32, num_batches_per_epoch: int = 50, trainer_kwargs: Optional[Dict[str, Any]] = None, train_sampler: Optional[gluonts.transform.sampler.InstanceSampler] = None, validation_sampler: Optional[gluonts.transform.sampler.InstanceSampler] = None)[source]#

Bases: gluonts.torch.model.estimator.PyTorchLightningEstimator

An estimator for training the DLinear model from the paper https://arxiv.org/pdf/2205.13504.pdf, extended for probabilistic forecasting.

This class uses the model defined in DLinearModel and wraps it into a DLinearLightningModule for training purposes: training is performed using PyTorch Lightning’s pl.Trainer class.

Parameters
  • prediction_length (int) – Length of the prediction horizon.

  • context_length – Number of time steps prior to prediction time that the model takes as inputs (default: 10 * prediction_length).

  • hidden_dimension – Size of representation.

  • lr – Learning rate (default: 1e-3).

  • weight_decay – Weight decay regularization parameter (default: 1e-8).

  • scaling – Which scaling to apply to the target values (default: "mean").

  • distr_output – Distribution to use to evaluate observations and sample predictions (default: StudentTOutput()).

  • kernel_size – Kernel size of the moving average used to decompose the series into trend and remainder components (default: 25).

  • batch_size – The size of the batches to be used for training (default: 32).

  • num_batches_per_epoch – Number of batches to be processed in each training epoch (default: 50).

  • trainer_kwargs – Additional arguments to provide to pl.Trainer for construction.

  • train_sampler – Controls the sampling of windows during training.

  • validation_sampler – Controls the sampling of windows during validation.
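
A minimal training sketch, assuming a toy hourly ListDataset; the series values and hyperparameter choices below are illustrative only, not recommended settings:

import numpy as np

from gluonts.dataset.common import ListDataset
from gluonts.torch.model.d_linear.estimator import DLinearEstimator

# Toy hourly dataset with a single sine-wave series (illustrative only).
training_data = ListDataset(
    [{"start": "2021-01-01 00:00", "target": np.sin(np.arange(400) / 10.0)}],
    freq="H",
)

estimator = DLinearEstimator(
    prediction_length=24,
    context_length=240,                 # defaults to 10 * prediction_length if omitted
    kernel_size=25,
    trainer_kwargs={"max_epochs": 5},   # forwarded to pl.Trainer
)

# train() builds the transformation, data loaders, and lightning module,
# runs pl.Trainer, and returns a PyTorchPredictor.
predictor = estimator.train(training_data)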

create_lightning_module() lightning.pytorch.core.module.LightningModule[source]#

Create and return the network used for training (i.e., computing the loss).

Returns

The network that computes the loss given input data.

Return type

pl.LightningModule

create_predictor(transformation: gluonts.transform._base.Transformation, module) gluonts.torch.model.predictor.PyTorchPredictor[source]#

Create and return a predictor object.

Parameters
  • transformation – Transformation to be applied to data before it goes into the model.

  • module – A trained pl.LightningModule object.

Returns

A predictor wrapping a nn.Module used for inference.

Return type

Predictor
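
create_predictor is normally invoked internally by train(), which returns the resulting predictor. A brief inference sketch, assuming the predictor and training_data from the earlier example:

# Forecast objects expose point summaries and quantiles of the
# predictive output produced by distr_output.
forecast = next(iter(predictor.predict(training_data)))
print(forecast.mean[:5])
print(forecast.quantile(0.9)[:5])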

create_training_data_loader(data: gluonts.dataset.Dataset, module: gluonts.torch.model.d_linear.lightning_module.DLinearLightningModule, shuffle_buffer_length: Optional[int] = None, **kwargs) Iterable[source]#

Create a data loader for training purposes.

Parameters
  • data – Dataset from which to create the data loader.

  • module – The pl.LightningModule object that will receive the batches from the data loader.

Returns

The data loader, i.e. an iterable over batches of data.

Return type

Iterable

create_transformation() gluonts.transform._base.Transformation[source]#

Create and return the transformation needed for training and inference.

Returns

The transformation that will be applied entry-wise to datasets, at training and inference time.

Return type

Transformation
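
The transformation can also be applied to a dataset directly; a short sketch, reusing the estimator and training_data from the earlier example:

transformation = estimator.create_transformation()
# Applied entry-wise; is_train toggles training-time vs. inference-time behaviour.
transformed_entries = list(transformation.apply(training_data, is_train=True))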

create_validation_data_loader(data: gluonts.dataset.Dataset, module: gluonts.torch.model.d_linear.lightning_module.DLinearLightningModule, **kwargs) Iterable[source]#

Create a data loader for validation purposes.

Parameters
  • data – Dataset from which to create the data loader.

  • module – The pl.LightningModule object that will receive the batches from the data loader.

Returns

The data loader, i.e. an iterable over batches of data.

Return type

Iterable

lead_time: int#
prediction_length: int#