gluonts.mx.model.transformer.trans_decoder module#

class gluonts.mx.model.transformer.trans_decoder.TransformerDecoder(decoder_length: int, config: Dict, **kwargs)[source]#

Bases: mxnet.gluon.block.HybridBlock

cache_reset()[source]#
hybrid_forward(F, data: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol], enc_out: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol], mask: Optional[Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol]] = None, is_train: bool = True) → Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol][source]#

A transformer decoder block consists of a self-attention layer, an encoder-decoder attention layer, and a feed-forward layer, with pre/post process blocks in between.
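A minimal usage sketch (not taken from this page), assuming the config keys that TransformerEstimator assembles for its encoder/decoder blocks (model_dim, pre_seq, post_seq, dropout_rate, inner_ff_dim_scale, act_type, num_heads, use_residual); key names, shapes, and the calling convention below are illustrative assumptions to verify against your installed GluonTS version:

    # Sketch only: config keys and shapes are assumptions, not guaranteed API.
    import mxnet as mx
    from gluonts.mx.model.transformer.trans_decoder import TransformerDecoder

    config = {
        "model_dim": 32,          # assumed key: width of embeddings/attention
        "pre_seq": "dn",          # assumed key: pre-process ops (dropout, norm)
        "post_seq": "drn",        # assumed key: post-process ops (dropout, residual, norm)
        "dropout_rate": 0.1,
        "inner_ff_dim_scale": 4,  # assumed key: feed-forward hidden dim = 4 * model_dim
        "act_type": "softrelu",
        "num_heads": 8,
        "use_residual": True,
    }

    decoder = TransformerDecoder(decoder_length=24, config=config)
    decoder.initialize()  # parameter shapes are inferred on the first forward pass

    batch = 4
    data = mx.nd.random.normal(shape=(batch, 24, 32))     # decoder input: (batch, decoder_length, model_dim)
    enc_out = mx.nd.random.normal(shape=(batch, 48, 32))  # encoder output: (batch, enc_length, model_dim)

    decoder.cache_reset()  # clear cached self-attention state before a fresh decoding pass
    out = decoder(data, enc_out, None, False)  # positional: mask=None, is_train=False
    print(out.shape)  # expected: (4, 24, 32)

With is_train=False the block caches self-attention state across calls, which is why cache_reset() is called before starting a new pass; during training, a causal mask is passed via the mask argument instead.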