gluonts.mx.model.seq2seq package#

class gluonts.mx.model.seq2seq.MQCNNEstimator(freq: str, prediction_length: int, context_length: Optional[int] = None, use_past_feat_dynamic_real: bool = False, use_feat_dynamic_real: bool = False, use_feat_static_cat: bool = False, cardinality: Optional[List[int]] = None, embedding_dimension: Optional[List[int]] = None, add_time_feature: bool = True, add_age_feature: bool = False, enable_encoder_dynamic_feature: bool = True, enable_decoder_dynamic_feature: bool = True, seed: Optional[int] = None, decoder_mlp_dim_seq: Optional[List[int]] = None, channels_seq: Optional[List[int]] = None, dilation_seq: Optional[List[int]] = None, kernel_size_seq: Optional[List[int]] = None, use_residual: bool = True, quantiles: Optional[List[float]] = None, distr_output: Optional[gluonts.mx.distribution.distribution_output.DistributionOutput] = None, trainer: gluonts.mx.trainer._base.Trainer = gluonts.mx.trainer._base.Trainer(add_default_callbacks=True, callbacks=None, clip_gradient=10.0, ctx=None, epochs=100, hybridize=True, init='xavier', learning_rate=0.001, num_batches_per_epoch=50, weight_decay=1e-08), scaling: Optional[bool] = None, scaling_decoder_dynamic_feature: bool = False, num_forking: Optional[int] = None, max_ts_len: Optional[int] = None, is_iqf: bool = True, batch_size: int = 32, train_sampler: Optional[gluonts.transform.sampler.InstanceSampler] = None, validation_sampler: Optional[gluonts.transform.sampler.InstanceSampler] = None)[source]#

Bases: gluonts.mx.model.seq2seq._forking_estimator.ForkingSeq2SeqEstimator

An MQDNNEstimator with a Convolutional Neural Network (CNN) as an encoder and a multi-quantile MLP as a decoder. Implements the MQ-CNN Forecaster, proposed in [WTN+17].

Note that MQCNN uses ValidationSplitSampler as its default train_sampler. If context_length is less than the length of the input time series, only one example will be used for training.

Parameters
  • freq – Time granularity of the data.

  • prediction_length (int) – Length of the prediction, also known as ‘horizon’.

  • context_length – Number of time units that condition the predictions, also known as ‘lookback period’. (default: 4 * prediction_length)

  • use_past_feat_dynamic_real – Whether to use the past_feat_dynamic_real field from the data. (default: False) Automatically inferred when creating the MQCNNEstimator with the from_inputs class method.

  • use_feat_dynamic_real – Whether to use the feat_dynamic_real field from the data. (default: False) Automatically inferred when creating the MQCNNEstimator with the from_inputs class method.

  • use_feat_static_cat – Whether to use the feat_static_cat field from the data. (default: False) Automatically inferred when creating the MQCNNEstimator with the from_inputs class method.

  • cardinality – Number of values of each categorical feature. This must be set if use_feat_static_cat == True (default: None) Automatically inferred when creating the MQCNNEstimator with the from_inputs class method.

  • embedding_dimension – Dimension of the embeddings for categorical features. (default: [min(50, (cat+1)//2) for cat in cardinality])

  • add_time_feature – Adds a set of time features. (default: True)

  • add_age_feature – Adds an age feature. (default: False) The age feature starts with a small value at the start of the time series and grows over time.

  • enable_encoder_dynamic_feature – Whether the encoder should also be provided with the dynamic features (age, time, and feat_dynamic_real, where enabled). (default: True)

  • enable_decoder_dynamic_feature – Whether the decoder should also be provided with the dynamic features (age, time, and feat_dynamic_real, where enabled). (default: True) It makes sense to disable this if you don’t have feat_dynamic_real for the prediction range.

  • seed – If specified, sets this integer seed for NumPy and MXNet. (default: None)

  • decoder_mlp_dim_seq – The dimensionalities of the Multi Layer Perceptron layers of the decoder. (default: [30])

  • channels_seq – The number of channels (i.e. filters or convolutions) for each layer of the HierarchicalCausalConv1DEncoder. More channels usually correspond to better performance and larger network size. (default: [30, 30, 30])

  • dilation_seq – The dilation of the convolutions in each layer of the HierarchicalCausalConv1DEncoder. Greater numbers correspond to a greater receptive field of the network, which is usually better with longer context_length. (Same length as channels_seq) (default: [1, 3, 5])

  • kernel_size_seq – The kernel sizes (i.e. window size) of the convolutions in each layer of the HierarchicalCausalConv1DEncoder. (Same length as channels_seq) (default: [7, 3, 3])

  • use_residual – Whether the hierarchical encoder should additionally pass the unaltered past target to the decoder. (default: True)

  • quantiles – The list of quantiles that will be optimized for, and predicted by, the model. Optimizing for more quantiles than are of direct interest to you can result in improved performance due to a regularizing effect. (default: [0.025, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.975])

  • distr_output – DistributionOutput to use. Only one of quantiles and distr_output can be set. (default: None)

  • trainer – The GluonTS trainer to use for training. (default: Trainer())

  • scaling – Whether to automatically scale the target values. (default: False when quantiles are used, True when distr_output is used)

  • scaling_decoder_dynamic_feature – Whether to automatically scale the dynamic features for the decoder. (default: False)

  • num_forking – Decides how much forking to do in the decoder: a value of 1 reduces the model to a plain seq2seq setup, while a value of enc_len (the encoder length) recovers full MQ-CNN forking.

  • max_ts_len – Length of the longest time series in the dataset; used to bound context_length.

  • is_iqf – Whether to use the Incremental Quantile Function (IQF), which yields non-crossing quantile estimates, instead of the plain Quantile Function (QF). (default: True)

  • batch_size – The size of the batches to be used during training and prediction.

  • train_sampler – Controls the sampling of windows during training.

  • validation_sampler – Controls the sampling of windows during validation.
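
A minimal usage sketch (not part of the generated reference): training an MQCNNEstimator on a toy hourly dataset and producing quantile forecasts. The dataset contents and trainer settings below are illustrative assumptions, not recommended values.

    import numpy as np
    from gluonts.dataset.common import ListDataset
    from gluonts.mx.model.seq2seq import MQCNNEstimator
    from gluonts.mx.trainer import Trainer

    # Toy dataset: a single hourly series of length 200.
    train_ds = ListDataset(
        [{"start": "2021-01-01 00:00:00", "target": np.sin(0.1 * np.arange(200))}],
        freq="1H",
    )

    estimator = MQCNNEstimator(
        freq="1H",
        prediction_length=24,
        context_length=96,          # defaults to 4 * prediction_length if omitted
        quantiles=[0.1, 0.5, 0.9],  # quantile levels the decoder predicts
        trainer=Trainer(epochs=5, num_batches_per_epoch=10),
    )

    predictor = estimator.train(train_ds)
    forecast = next(iter(predictor.predict(train_ds)))
    print(forecast.quantile(0.5))   # median forecast, length == prediction_length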

classmethod derive_auto_fields(train_iter)[source]#
classmethod from_inputs(train_iter, **params)[source]#
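
A hedged sketch of the from_inputs convenience constructor: it inspects the training dataset (via derive_auto_fields) to fill in fields such as use_feat_dynamic_real, use_feat_static_cat and cardinality, and forwards the remaining keyword arguments to the regular constructor. train_ds is assumed to be a GluonTS dataset as in the example above.

    # Let the estimator derive the use_* flags and cardinality from the data;
    # freq and prediction_length are still supplied explicitly.
    estimator = MQCNNEstimator.from_inputs(
        train_ds,
        freq="1H",
        prediction_length=24,
    )
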
lead_time: int#
prediction_length: int#
class gluonts.mx.model.seq2seq.MQRNNEstimator(prediction_length: int, freq: str, context_length: Optional[int] = None, decoder_mlp_dim_seq: Optional[List[int]] = None, trainer: gluonts.mx.trainer._base.Trainer = gluonts.mx.trainer._base.Trainer(add_default_callbacks=True, callbacks=None, clip_gradient=10.0, ctx=None, epochs=100, hybridize=True, init='xavier', learning_rate=0.001, num_batches_per_epoch=50, weight_decay=1e-08), quantiles: Optional[List[float]] = None, distr_output: Optional[gluonts.mx.distribution.distribution_output.DistributionOutput] = None, scaling: Optional[bool] = None, scaling_decoder_dynamic_feature: bool = False, num_forking: Optional[int] = None, is_iqf: bool = True, batch_size: int = 32, train_sampler: Optional[gluonts.transform.sampler.InstanceSampler] = None, validation_sampler: Optional[gluonts.transform.sampler.InstanceSampler] = None)[source]#

Bases: gluonts.mx.model.seq2seq._forking_estimator.ForkingSeq2SeqEstimator

An MQDNNEstimator with a Recurrent Neural Network (RNN) as an encoder and a multi-quantile MLP as a decoder.

Implements the MQ-RNN Forecaster, proposed in [WTN+17].

Note that MQRNN uses ValidationSplitSampler as its default train_sampler. If context_length is less than the length of the input time series, only one example will be used for training.
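
A construction-only sketch, analogous to the MQCNNEstimator example above; the values are illustrative.

    from gluonts.mx.model.seq2seq import MQRNNEstimator
    from gluonts.mx.trainer import Trainer

    estimator = MQRNNEstimator(
        freq="1H",
        prediction_length=24,
        quantiles=[0.1, 0.5, 0.9],
        trainer=Trainer(epochs=5, num_batches_per_epoch=10),
    )
    # estimator.train(train_ds) and predictor.predict(...) then work exactly
    # as in the MQCNNEstimator example.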

lead_time: int#
prediction_length: int#
class gluonts.mx.model.seq2seq.RNN2QRForecaster(freq: str, prediction_length: int, cardinality: List[int], embedding_dimension: int, encoder_rnn_layer: int, encoder_rnn_num_hidden: int, decoder_mlp_layer: List[int], decoder_mlp_static_dim: int, encoder_rnn_model: str = 'lstm', encoder_rnn_bidirectional: bool = True, scaler: gluonts.mx.block.scaler.Scaler = gluonts.mx.block.scaler.NOPScaler(), context_length: Optional[int] = None, quantiles: Optional[List[float]] = None, trainer: gluonts.mx.trainer._base.Trainer = gluonts.mx.trainer._base.Trainer(add_default_callbacks=True, callbacks=None, clip_gradient=10.0, ctx=None, epochs=100, hybridize=True, init='xavier', learning_rate=0.001, num_batches_per_epoch=50, weight_decay=1e-08), num_parallel_samples: int = 100)[source]#

Bases: gluonts.mx.model.seq2seq._seq2seq_estimator.Seq2SeqEstimator

lead_time: int#
prediction_length: int#
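
A construction-only sketch with illustrative values. RNN2QRForecaster wires an RNN encoder into the generic Seq2SeqEstimator below; all numbers here are assumptions, not recommendations.

    from gluonts.mx.model.seq2seq import RNN2QRForecaster

    estimator = RNN2QRForecaster(
        freq="1H",
        prediction_length=24,
        cardinality=[5],            # one static categorical feature with 5 values
        embedding_dimension=4,
        encoder_rnn_layer=1,
        encoder_rnn_num_hidden=40,
        decoder_mlp_layer=[30],
        decoder_mlp_static_dim=3,
        encoder_rnn_model="lstm",
        encoder_rnn_bidirectional=True,
    )
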
class gluonts.mx.model.seq2seq.Seq2SeqEstimator(freq: str, prediction_length: int, cardinality: List[int], embedding_dimension: int, encoder: gluonts.mx.block.encoder.Seq2SeqEncoder, decoder_mlp_layer: List[int], decoder_mlp_static_dim: int, scaler: gluonts.mx.block.scaler.Scaler = gluonts.mx.block.scaler.NOPScaler(), context_length: Optional[int] = None, quantiles: Optional[List[float]] = None, trainer: gluonts.mx.trainer._base.Trainer = gluonts.mx.trainer._base.Trainer(add_default_callbacks=True, callbacks=None, clip_gradient=10.0, ctx=None, epochs=100, hybridize=True, init='xavier', learning_rate=0.001, num_batches_per_epoch=50, weight_decay=1e-08), train_sampler: Optional[gluonts.transform.sampler.InstanceSampler] = None, validation_sampler: Optional[gluonts.transform.sampler.InstanceSampler] = None, num_parallel_samples: int = 100, batch_size: int = 32)[source]#

Bases: gluonts.mx.model.estimator.GluonEstimator

Quantile-Regression Sequence-to-Sequence Estimator.
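
A construction-only sketch showing how a custom encoder block plugs into this estimator. The HierarchicalCausalConv1DEncoder constructor arguments are an assumption based on gluonts.mx.block.encoder and may differ between GluonTS versions; the numeric values are illustrative.

    from gluonts.mx.block.encoder import HierarchicalCausalConv1DEncoder
    from gluonts.mx.model.seq2seq import Seq2SeqEstimator

    # A causal-convolution encoder of the kind referenced by MQCNNEstimator above.
    encoder = HierarchicalCausalConv1DEncoder(
        dilation_seq=[1, 3, 5],
        kernel_size_seq=[7, 3, 3],
        channels_seq=[30, 30, 30],
        use_residual=True,
    )

    estimator = Seq2SeqEstimator(
        freq="1H",
        prediction_length=24,
        cardinality=[5],
        embedding_dimension=4,
        encoder=encoder,
        decoder_mlp_layer=[30],
        decoder_mlp_static_dim=3,
    )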

create_predictor(transformation: gluonts.transform._base.Transformation, trained_network: gluonts.mx.model.seq2seq._seq2seq_network.Seq2SeqTrainingNetwork) gluonts.model.predictor.Predictor[source]#

Create and return a predictor object.

Parameters
  • transformation – Transformation to be applied to data before it goes into the model.

  • trained_network – A trained HybridBlock object.

Returns

A predictor wrapping a HybridBlock used for inference.

Return type

Predictor

create_training_data_loader(data: gluonts.dataset.Dataset, **kwargs) Iterable[Dict[str, Any]][source]#

Create a data loader for training purposes.

Parameters

data – Dataset from which to create the data loader.

Returns

The data loader, i.e. an iterable over batches of data.

Return type

DataLoader

create_training_network() mxnet.gluon.block.HybridBlock[source]#

Create and return the network used for training (i.e., computing the loss).

Returns

The network that computes the loss given input data.

Return type

HybridBlock

create_transformation() gluonts.transform._base.Transformation[source]#

Create and return the transformation needed for training and inference.

Returns

The transformation that will be applied entry-wise to datasets, at training and inference time.

Return type

Transformation

create_validation_data_loader(data: gluonts.dataset.Dataset, **kwargs) Iterable[Dict[str, Any]][source]#

Create a data loader for validation purposes.

Parameters

data – Dataset from which to create the data loader.

Returns

The data loader, i.e. an iterable over batches of data.

Return type

DataLoader
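
For orientation, these create_* methods are not usually called directly; GluonEstimator.train chains them together. The following is a simplified sketch of that flow under stated assumptions (the real implementation additionally handles the MXNet training loop, callbacks, data caching and optional validation data), not the actual implementation.

    from gluonts.dataset.common import Dataset
    from gluonts.model.predictor import Predictor
    from gluonts.mx.model.estimator import GluonEstimator


    def simplified_train(estimator: GluonEstimator, data: Dataset) -> Predictor:
        # Entry-wise transformation applied to the raw dataset.
        transformation = estimator.create_transformation()
        transformed_data = transformation.apply(data, is_train=True)

        # Batched loader over training instances.
        data_loader = estimator.create_training_data_loader(transformed_data)

        # Network computing the training loss; its parameters are what the
        # trainer would optimize over batches drawn from data_loader.
        network = estimator.create_training_network()
        # ... gradient updates driven by estimator.trainer happen here ...

        # Wrap the trained network (plus the transformation) for inference.
        return estimator.create_predictor(transformation, network)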

lead_time: int#
prediction_length: int#