gluonts.mx.model.renewal package#

class gluonts.mx.model.renewal.DeepRenewalProcessEstimator(prediction_length: int, context_length: int, num_cells: int, num_layers: int, dropout_rate: float = 0.1, interval_distr_output: gluonts.mx.distribution.distribution_output.DistributionOutput = gluonts.mx.distribution.neg_binomial.NegativeBinomialOutput(), size_distr_output: gluonts.mx.distribution.distribution_output.DistributionOutput = gluonts.mx.distribution.neg_binomial.NegativeBinomialOutput(), train_sampler: Optional[gluonts.transform.sampler.InstanceSampler] = None, validation_sampler: Optional[gluonts.transform.sampler.InstanceSampler] = None, trainer: gluonts.mx.trainer._base.Trainer = gluonts.mx.trainer._base.Trainer(add_default_callbacks=True, callbacks=None, clip_gradient=10.0, ctx=None, epochs=100, hybridize=False, init='xavier', learning_rate=0.001, num_batches_per_epoch=50, weight_decay=1e-08), batch_size: int = 32, num_parallel_samples: int = 100, **kwargs)[source]#

Bases: gluonts.mx.model.estimator.GluonEstimator

Implements a deep renewal process estimator designed to forecast intermittent time series sampled in discrete time, as described in [TWJ19].

In short, instead of viewing a sparse time series as a univariate stochastic process, this estimator transforms a sparse time series such as [0, 0, 0, 3, 0, 0, 7] into an interval-size format, [(4, 3), (3, 7)], where each ordered pair marks the time since the last positive time step (interval) and the value of the positive time step (size). Probabilistic prediction is then performed on this transformed time series, as is customary in the intermittent demand literature, e.g., Croston’s method.
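The interval-size transformation described above can be sketched in plain Python; the helper name below is illustrative and not part of gluonts:

```python
def to_interval_size(series):
    """Convert a sparse series to (interval, size) pairs, where each
    interval counts the time steps elapsed since the previous positive
    step (counting from before the start of the series)."""
    pairs = []
    last_positive = -1  # index of the previous positive step
    for t, value in enumerate(series):
        if value > 0:
            pairs.append((t - last_positive, value))
            last_positive = t
    return pairs


print(to_interval_size([0, 0, 0, 3, 0, 0, 7]))  # [(4, 3), (3, 7)]
```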

This construction is a self-modulated marked renewal process in discrete time, as one assumes the (conditional) distributions of the intervals are identical.
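The transformation is invertible up to trailing zeros after the last positive step; a plain-Python sketch of the inverse (again an illustrative helper, not part of gluonts):

```python
def from_interval_size(pairs):
    """Reconstruct the sparse series from (interval, size) pairs.
    Trailing zeros after the last positive step are not recoverable,
    since no pair records them."""
    series = []
    for interval, size in pairs:
        series.extend([0] * (interval - 1))  # the zero-demand steps
        series.append(size)                  # the positive step
    return series


print(from_interval_size([(4, 3), (3, 7)]))  # [0, 0, 0, 3, 0, 0, 7]
```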

Parameters
  • prediction_length (int) – Length of the prediction horizon

  • context_length – The number of time steps the model will condition on

  • num_cells – Number of hidden units used in the RNN cell (LSTM) and dense layer for projection to distribution arguments

  • num_layers – Number of layers in the LSTM

  • dropout_rate – Dropout regularization parameter (default: 0.1)

  • trainer – Trainer object to be used (default: Trainer())

  • interval_distr_output – Distribution output object for the intervals. Must be a distribution with support on positive integers, where the first argument corresponds to the (conditional) mean.

  • size_distr_output – Distribution output object for the demand sizes. Must be a distribution with support on positive integers, where the first argument corresponds to the (conditional) mean.

  • train_sampler – Controls the sampling of windows during training.

  • validation_sampler – Controls the sampling of windows during validation.

  • batch_size – The size of the batches to be used for training and prediction.

  • num_parallel_samples – Number of evaluation samples per time series to increase parallelism during inference. This is a model optimization that does not affect the accuracy (default: 100)

create_predictor(transformation: gluonts.transform._base.Transformation, trained_network: mxnet.gluon.block.HybridBlock) gluonts.model.predictor.Predictor[source]#

Create and return a predictor object.

Parameters
  • transformation – Transformation to be applied to data before it goes into the model.

  • trained_network – A trained HybridBlock object.

Returns

A predictor wrapping a HybridBlock used for inference.

Return type

Predictor

create_training_data_loader(data: gluonts.dataset.Dataset, **kwargs) Iterable[Dict[str, Any]][source]#

Create a data loader for training purposes.

Parameters

data – Dataset from which to create the data loader.

Returns

The data loader, i.e. an iterable over batches of data.

Return type

DataLoader

create_training_network() gluonts.mx.model.renewal._network.DeepRenewalTrainingNetwork[source]#

Create and return the network used for training (i.e., computing the loss).

Returns

The network that computes the loss given input data.

Return type

HybridBlock

create_transformation() gluonts.transform._base.Transformation[source]#

Create and return the transformation needed for training and inference.

Returns

The transformation that will be applied entry-wise to datasets, at training and inference time.

Return type

Transformation

create_validation_data_loader(data: gluonts.dataset.Dataset, **kwargs) Iterable[Dict[str, Any]][source]#

Create a data loader for validation purposes.

Parameters

data – Dataset from which to create the data loader.

Returns

The data loader, i.e. an iterable over batches of data.

Return type

DataLoader

lead_time: int#
prediction_length: int#