gluonts.evaluation.backtest module#

gluonts.evaluation.backtest.backtest_metrics(test_dataset: gluonts.dataset.Dataset, predictor: gluonts.model.predictor.Predictor, evaluator=<gluonts.evaluation._base.Evaluator object>, num_samples: int = 100, logging_file: typing.Optional[str] = None) Tuple[dict, pandas.core.frame.DataFrame][source]#
Parameters
  • test_dataset – Dataset to use for testing.

  • predictor – The predictor to test.

  • evaluator – Evaluator to use.

  • num_samples – Number of samples to use when generating sample-based forecasts. Only sampling-based models will use this.

  • logging_file – If specified, information about the backtest is logged to this file.

Returns

A tuple of aggregate metrics and per-time-series metrics, obtained by evaluating the forecasts that the predictor produces on the test_dataset with the provided evaluator.

Return type

Tuple[dict, pd.DataFrame]

gluonts.evaluation.backtest.make_evaluation_predictions(dataset: gluonts.dataset.Dataset, predictor: gluonts.model.predictor.Predictor, num_samples: int = 100) Tuple[Iterator[gluonts.model.forecast.Forecast], Iterator[pandas.core.series.Series]][source]#

Returns predictions for the trailing prediction_length observations of the given time series, using the given predictor.

The predictor will take as input the given time series without the trailing prediction_length observations.

Parameters
  • dataset – Dataset on which the evaluation happens. Only the portion of each series excluding the trailing prediction_length observations is passed to the predictor.

  • predictor – Model used to draw predictions.

  • num_samples – Number of samples to draw from the model when evaluating. Only sampling-based models will use this.

Returns

A pair of iterators, the first one yielding the forecasts, and the second one yielding the corresponding ground truth series.

Return type

Tuple[Iterator[Forecast], Iterator[pd.Series]]

gluonts.evaluation.backtest.serialize_message(logger, message: str, variable)[source]#