gluonts.torch.distributions.implicit_quantile_network module#

class gluonts.torch.distributions.implicit_quantile_network.ImplicitQuantileModule(in_features: int, args_dim: Dict[str, int], domain_map: Callable[[...], Tuple[torch.Tensor]], concentration1: float = 1.0, concentration0: float = 1.0, output_domain_map=None, cos_embedding_dim: int = 64)[source]#

Bases: torch.nn.modules.module.Module

Implicit Quantile Network from the paper Implicit Quantile Networks for Distributional Reinforcement Learning (https://arxiv.org/abs/1806.06923) by Dabney et al., 2018.
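
A minimal usage sketch. The wiring below is an assumption: args_dim and domain_map are normally supplied by the paired DistributionOutput (here borrowed from ImplicitQuantileNetworkOutput, documented below), and the forward pass is assumed to return the projected quantile values together with the sampled taus.

    import torch

    from gluonts.torch.distributions.implicit_quantile_network import (
        ImplicitQuantileModule,
        ImplicitQuantileNetworkOutput,
    )

    # args_dim / domain_map borrowed from ImplicitQuantileNetworkOutput.
    module = ImplicitQuantileModule(
        in_features=32,
        args_dim=ImplicitQuantileNetworkOutput.args_dim,  # {"quantile_function": 1}
        domain_map=ImplicitQuantileNetworkOutput.domain_map,
    )

    features = torch.randn(8, 32)            # batch of 8 feature vectors
    quantile_preds, taus = module(features)  # assumed return: (outputs, taus)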

forward(inputs: torch.Tensor)[source]#

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool#

class gluonts.torch.distributions.implicit_quantile_network.ImplicitQuantileNetwork(outputs: torch.Tensor, taus: torch.Tensor, validate_args=None)[source]#

Bases: torch.distributions.distribution.Distribution

Distribution class for the Implicit Quantile Network, from which we can sample or compute the quantile loss.

Parameters
  • outputs – Outputs from the Implicit Quantile Network.

  • taus – Tensor of random quantile levels, drawn from the Beta or Uniform distribution, corresponding to the outputs.
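
A sketch of direct use, assuming outputs and taus are flat tensors of the same shape; quantile_loss applies the standard pinball loss (tau − 1[value < output]) · (value − output) elementwise, up to the library's exact scaling.

    import torch

    from gluonts.torch.distributions.implicit_quantile_network import (
        ImplicitQuantileNetwork,
    )

    outputs = torch.randn(8)  # quantile predictions for a batch of 8 targets
    taus = torch.rand(8)      # quantile levels in (0, 1), one per output

    iqn = ImplicitQuantileNetwork(outputs=outputs, taus=taus)

    target = torch.randn(8)
    loss = iqn.quantile_loss(target)  # pinball loss at the given taus
    samples = iqn.sample()            # a sample, shaped like `outputs`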

arg_constraints: Dict[str, torch.distributions.constraints.Constraint] = {}#
quantile_loss(value: torch.Tensor) → torch.Tensor[source]#
sample(sample_shape=torch.Size([])) → torch.Tensor[source]#

Generates a sample_shape-shaped sample, or a sample_shape-shaped batch of samples if the distribution parameters are batched.

class gluonts.torch.distributions.implicit_quantile_network.ImplicitQuantileNetworkOutput(output_domain: Optional[str] = None, concentration1: float = 1.0, concentration0: float = 1.0, cos_embedding_dim: int = 64)[source]#

Bases: gluonts.torch.distributions.distribution_output.DistributionOutput

DistributionOutput class for the IQN from the paper Probabilistic Time Series Forecasting with Implicit Quantile Networks (https://arxiv.org/abs/2107.03743) by Gouttes et al. 2021.

Parameters
  • output_domain – Optional domain mapping of the output. Can be “positive”, “unit” or None.

  • concentration1 – Alpha parameter of the Beta distribution when sampling the taus during training.

  • concentration0 – Beta parameter of the Beta distribution when sampling the taus during training.

  • cos_embedding_dim – The embedding dimension for the taus embedding layer of IQN. Default is 64.
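
In practice the output object is typically passed to a torch estimator through its distr_output argument; a sketch, assuming DeepAREstimator on hourly data (the estimator choice is illustrative):

    from gluonts.torch.distributions.implicit_quantile_network import (
        ImplicitQuantileNetworkOutput,
    )
    from gluonts.torch.model.deepar import DeepAREstimator

    # Map outputs to the positive domain, e.g. for sales or count series.
    distr_output = ImplicitQuantileNetworkOutput(output_domain="positive")

    estimator = DeepAREstimator(
        freq="H",
        prediction_length=24,
        distr_output=distr_output,
    )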

args_dim: Dict[str, int] = {'quantile_function': 1}#
distr_cls#

alias of gluonts.torch.distributions.implicit_quantile_network.ImplicitQuantileNetwork

distribution(distr_args, loc=0, scale=None) → gluonts.torch.distributions.implicit_quantile_network.ImplicitQuantileNetwork[source]#

Construct the associated distribution, given the collection of constructor arguments and, optionally, a scale tensor.

Parameters
  • distr_args – Constructor arguments for the underlying Distribution type.

  • loc – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.

  • scale – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.
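
A sketch of the expected call, assuming distr_args is the (outputs, taus) pair that ImplicitQuantileNetwork is constructed from (an assumption based on that class's constructor above):

    import torch

    from gluonts.torch.distributions.implicit_quantile_network import (
        ImplicitQuantileNetworkOutput,
    )

    distr_output = ImplicitQuantileNetworkOutput()

    quantile_preds = torch.randn(8)  # hypothetical projected outputs
    taus = torch.rand(8)             # matching quantile levels

    distr = distr_output.distribution((quantile_preds, taus), scale=torch.ones(8))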

classmethod domain_map(*args)[source]#

Converts arguments to the right shape and domain.

The domain depends on the type of distribution, while the correct shape is obtained by reshaping the trailing axis in such a way that the returned tensors define a distribution of the right event_shape.

property event_shape#

Shape of each individual event compatible with the output object.

get_args_proj(in_features: int) → torch.nn.modules.module.Module[source]#
in_features: int#
loss(target: torch.Tensor, distr_args: Tuple[torch.Tensor, ...], loc: Optional[torch.Tensor] = None, scale: Optional[torch.Tensor] = None) → torch.Tensor[source]#

Compute loss for target data given network output.

Parameters
  • target – Values of the target time series for which loss is to be computed.

  • distr_args – Arguments that can be used to construct the output distribution.

  • loc – Location parameter of the distribution, optional.

  • scale – Scale parameter of the distribution, optional.

Returns

Values of the loss, with the same shape as target.

Return type

loss_values
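
Tying the pieces together, a sketch under the same (outputs, taus) assumption as above:

    import torch

    from gluonts.torch.distributions.implicit_quantile_network import (
        ImplicitQuantileNetworkOutput,
    )

    distr_output = ImplicitQuantileNetworkOutput()

    target = torch.randn(8)
    quantile_preds = torch.randn(8)  # hypothetical network outputs
    taus = torch.rand(8)

    loss_values = distr_output.loss(target, (quantile_preds, taus))
    assert loss_values.shape == target.shape  # per the docstring above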

class gluonts.torch.distributions.implicit_quantile_network.QuantileLayer(num_output: int, cos_embedding_dim: int = 128)[source]#

Bases: torch.nn.modules.module.Module

Implicit Quantile Layer from the paper Implicit Quantile Networks for Distributional Reinforcement Learning (https://arxiv.org/abs/1806.06923) by Dabney et al., 2018.
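
In the paper, the quantile level tau is embedded with a cosine basis, cos(pi · i · tau) for i = 0, …, cos_embedding_dim − 1, followed by a learned projection. A minimal sketch of exercising the layer (the trailing output shape is an assumption):

    import torch

    from gluonts.torch.distributions.implicit_quantile_network import QuantileLayer

    layer = QuantileLayer(num_output=32, cos_embedding_dim=128)

    taus = torch.rand(8)  # quantile levels in (0, 1)
    emb = layer(taus)     # assumed shape: (8, 32), one embedding per tau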

forward(tau: torch.Tensor) → torch.Tensor[source]#

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool#