gluonts.torch.model.tide.module module#

class gluonts.torch.model.tide.module.DenseDecoder(num_layers: int, hidden_dim: int, output_dim: int, dropout_rate: float, layer_norm: bool)[source]#

Bases: torch.nn.modules.module.Module

forward(x)[source]#

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance instead of this method directly, since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool#
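
In the TiDE architecture the dense decoder maps the encoding produced by DenseEncoder back to the stacked per-step decoder outputs. A minimal usage sketch follows; the assumption that forward maps tensors whose last dimension is hidden_dim to tensors whose last dimension is output_dim is inferred from the constructor arguments, not stated by this class:

    import torch

    from gluonts.torch.model.tide.module import DenseDecoder

    decoder = DenseDecoder(
        num_layers=2, hidden_dim=32, output_dim=8,
        dropout_rate=0.1, layer_norm=True,
    )
    encoding = torch.randn(4, 32)  # (batch, hidden_dim) -- assumed input layout
    decoded = decoder(encoding)    # expected last dimension: output_dim = 8
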
class gluonts.torch.model.tide.module.DenseEncoder(num_layers: int, input_dim: int, hidden_dim: int, dropout_rate: float, layer_norm: bool)[source]#

Bases: torch.nn.modules.module.Module

forward(x)[source]#

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance instead of this method directly, since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool#
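
In the TiDE architecture the dense encoder maps the flattened past target, projected covariates, and static attributes into a single encoding (per the TiDE paper). A minimal sketch, assuming forward maps tensors whose last dimension is input_dim to tensors whose last dimension is hidden_dim:

    import torch

    from gluonts.torch.model.tide.module import DenseEncoder

    encoder = DenseEncoder(
        num_layers=2, input_dim=64, hidden_dim=32,
        dropout_rate=0.1, layer_norm=True,
    )
    x = torch.randn(4, 64)   # (batch, input_dim) -- assumed input layout
    encoding = encoder(x)    # expected last dimension: hidden_dim = 32
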
class gluonts.torch.model.tide.module.FeatureProjection(input_dim: int, hidden_dim: int, output_dim: int, dropout_rate: float, layer_norm: bool)[source]#

Bases: torch.nn.modules.module.Module

forward(x)[source]#

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance instead of this method directly, since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool#
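
In the TiDE architecture this block projects the dynamic covariates to a lower dimension before they enter the encoder and the temporal decoder (per the TiDE paper). A minimal sketch, assuming forward maps tensors whose last dimension is input_dim to tensors whose last dimension is output_dim:

    import torch

    from gluonts.torch.model.tide.module import FeatureProjection

    proj = FeatureProjection(
        input_dim=7, hidden_dim=16, output_dim=4,
        dropout_rate=0.1, layer_norm=True,
    )
    feats = torch.randn(4, 48, 7)  # (batch, time, input_dim) -- assumed layout
    projected = proj(feats)        # expected last dimension: output_dim = 4
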
class gluonts.torch.model.tide.module.ResBlock(dim_in: int, dim_hidden: int, dim_out: int, dropout_rate: float, layer_norm: bool)[source]#

Bases: torch.nn.modules.module.Module

forward(x)[source]#

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance instead of this method directly, since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool#
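
ResBlock is the basic residual MLP block of the TiDE architecture (per the TiDE paper). A minimal sketch, assuming forward maps tensors whose last dimension is dim_in to tensors whose last dimension is dim_out:

    import torch

    from gluonts.torch.model.tide.module import ResBlock

    block = ResBlock(
        dim_in=16, dim_hidden=32, dim_out=8,
        dropout_rate=0.1, layer_norm=True,
    )
    x = torch.randn(4, 16)  # (batch, dim_in) -- assumed input layout
    y = block(x)            # expected last dimension: dim_out = 8
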
class gluonts.torch.model.tide.module.TemporalDecoder(input_dim: int, hidden_dim: int, output_dim: int, dropout_rate: float, layer_norm: bool)[source]#

Bases: torch.nn.modules.module.Module

forward(x)[source]#

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance instead of this method directly, since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool#
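
In the TiDE architecture the temporal decoder combines, for each prediction step, the dense decoder output with the projected future covariates (per the TiDE paper). A minimal sketch, assuming forward maps tensors whose last dimension is input_dim to tensors whose last dimension is output_dim:

    import torch

    from gluonts.torch.model.tide.module import TemporalDecoder

    temporal = TemporalDecoder(
        input_dim=12, hidden_dim=16, output_dim=1,
        dropout_rate=0.1, layer_norm=True,
    )
    x = torch.randn(4, 24, 12)  # (batch, prediction_length, input_dim) -- assumed layout
    y = temporal(x)             # expected last dimension: output_dim = 1
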
class gluonts.torch.model.tide.module.TiDEModel(context_length: int, prediction_length: int, num_feat_dynamic_real: int, num_feat_dynamic_proj: int, num_feat_static_real: int, num_feat_static_cat: int, cardinality: List[int], embedding_dimension: List[int], feat_proj_hidden_dim: int, encoder_hidden_dim: int, decoder_hidden_dim: int, temporal_hidden_dim: int, distr_hidden_dim: int, decoder_output_dim: int, dropout_rate: float, num_layers_encoder: int, num_layers_decoder: int, layer_norm: bool, distr_output: gluonts.torch.distributions.output.Output, scaling: str)[source]#

Bases: torch.nn.modules.module.Module

Parameters
  • context_length – Number of time steps prior to prediction time that the model takes as inputs.

  • prediction_length – Length of the prediction horizon.

  • num_feat_dynamic_real – Number of dynamic real features in the data.

  • num_feat_dynamic_proj – Output size of the feature projection layer.

  • num_feat_static_real – Number of static real features in the data.

  • num_feat_static_cat – Number of static categorical features in the data.

  • cardinality – Number of values of each categorical feature. This must be set if num_feat_static_cat > 0.

  • embedding_dimension – Dimension of the embeddings for categorical features.

  • feat_proj_hidden_dim – Size of the feature projection layer.

  • encoder_hidden_dim – Size of the dense encoder layer.

  • decoder_hidden_dim – Size of the dense decoder layer.

  • temporal_hidden_dim – Size of the temporal decoder layer.

  • distr_hidden_dim – Size of the distribution projection layer.

  • decoder_output_dim – Output size of the dense decoder.

  • dropout_rate – Dropout regularization parameter.

  • num_layers_encoder – Number of layers in the dense encoder.

  • num_layers_decoder – Number of layers in the dense decoder.

  • layer_norm – Whether to enable layer normalization.

  • distr_output – Distribution to use to evaluate observations and sample predictions.

  • scaling – Which scaling method to use to scale the target values.
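
A hedged construction sketch is shown below. The hyperparameter values, the StudentTOutput distribution, and the "mean" scaling string are illustrative assumptions, not defaults of this class; see the describe_inputs example further below for running a forward pass:

    from gluonts.torch.distributions import StudentTOutput
    from gluonts.torch.model.tide.module import TiDEModel

    model = TiDEModel(
        context_length=48,
        prediction_length=24,
        num_feat_dynamic_real=1,
        num_feat_dynamic_proj=2,
        num_feat_static_real=1,
        num_feat_static_cat=1,
        cardinality=[5],
        embedding_dimension=[4],
        feat_proj_hidden_dim=4,
        encoder_hidden_dim=8,
        decoder_hidden_dim=8,
        temporal_hidden_dim=8,
        distr_hidden_dim=8,
        decoder_output_dim=4,
        dropout_rate=0.1,
        num_layers_encoder=2,
        num_layers_decoder=2,
        layer_norm=True,
        distr_output=StudentTOutput(),
        scaling="mean",  # assumed to be an accepted scaling string
    )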

describe_inputs(batch_size=1) → gluonts.model.inputs.InputSpec[source]#
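
Continuing the TiDEModel sketch above, describe_inputs reports the names, shapes, and dtypes of the tensors that forward expects; in GluonTS the returned InputSpec also provides a zeros() helper for building matching dummy tensors (treat the exact calls below as a sketch):

    spec = model.describe_inputs(batch_size=2)
    print(spec)                              # expected input names, shapes, and dtypes

    dummy = spec.zeros()                     # dict of zero-valued tensors matching the spec
    distr_args, loc, scale = model(**dummy)  # forward returns (distr_args, loc, scale)
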
forward(feat_static_real: torch.Tensor, feat_static_cat: torch.Tensor, past_time_feat: torch.Tensor, past_target: torch.Tensor, past_observed_values: torch.Tensor, future_time_feat: torch.Tensor) → Tuple[Tuple[torch.Tensor, ...], torch.Tensor, torch.Tensor][source]#

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance instead of this method directly, since the former takes care of running the registered hooks while the latter silently ignores them.

loss(feat_static_real: torch.Tensor, feat_static_cat: torch.Tensor, past_time_feat: torch.Tensor, past_target: torch.Tensor, past_observed_values: torch.Tensor, future_time_feat: torch.Tensor, future_target: torch.Tensor, future_observed_values: torch.Tensor)[source]#
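
For training, loss takes the same inputs as forward plus the future target and its observation mask, and returns loss values that can be aggregated and backpropagated. A hedged sketch, continuing the TiDEModel example above (the tensor shapes are assumptions based on the documented context and prediction lengths):

    import torch

    batch_size, prediction_length = 2, 24  # must match the values used to build the model
    future_target = torch.zeros(batch_size, prediction_length)
    future_observed_values = torch.ones(batch_size, prediction_length)

    loss_values = model.loss(
        **model.describe_inputs(batch_size=batch_size).zeros(),
        future_target=future_target,
        future_observed_values=future_observed_values,
    )
    loss_values.mean().backward()          # aggregate and backpropagate
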
training: bool#