
From Surf Wiki (app.surf) — the open knowledge base

Symmetric mean absolute percentage error

Statistical accuracy measure

The symmetric mean absolute percentage error (SMAPE or sMAPE) is an accuracy measure based on percentage (or relative) errors. It is usually defined as follows:

: \text{SMAPE} = \frac{2}{n} \sum_{t=1}^n \frac{\left|F_t-A_t\right|}{|A_t|+|F_t|}

where A_t are the actual values and F_t are the forecasted values. Note that if A_t = F_t = 0, then term t is undefined (0/0), and is usually ignored in the summation.

Explaining this equation in words, the absolute difference between A_t and F_t is divided by half the sum of the absolute values of the actual value A_t and the forecast value F_t. The value of this calculation is summed for every fitted point t and divided again by the number of fitted points n.
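As a minimal sketch, the definition above translates directly into code; the function name `smape`, the NumPy dependency, and the choice to report the result as a percentage are illustrative, not part of the original article:

```python
import numpy as np

def smape(actual, forecast):
    """SMAPE on the 0%-200% scale, per the definition above.

    Terms where both actual and forecast are zero (the undefined
    0/0 case) are dropped from the summation, as the article notes.
    """
    a = np.asarray(actual, dtype=float)
    f = np.asarray(forecast, dtype=float)
    denom = np.abs(a) + np.abs(f)
    keep = denom != 0                      # skip undefined 0/0 terms
    # (2/n) * sum(|F - A| / (|A| + |F|)), expressed as a percentage
    return 200.0 * np.mean(np.abs(f[keep] - a[keep]) / denom[keep])
```

For example, `smape([100], [110])` evaluates to roughly 9.52, matching the worked comparison later in the article.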

History

The earliest reference to a similar formula appears to be Armstrong (1985, p. 348), where it is called "adjusted MAPE" and is defined without the absolute values in the denominator. It was later discussed, modified, and re-proposed by Flores (1986).

Armstrong's original definition is as follows:

: \text{SMAPE} = \frac 1 n \sum_{t=1}^n \frac{\left|F_t-A_t\right|}{(A_t+F_t)/2}

The problem is that this expression can be negative when A_t + F_t < 0, and is undefined when A_t + F_t = 0. Therefore, the currently accepted version of SMAPE places absolute values in the denominator.

Discussion

Comparison with MAPE

The idea behind SMAPE is that over- and under-forecasts are treated in a relative way, rather than an absolute way, as with the mean absolute percentage error (MAPE). For example, applying the formula above to some actual A and forecasted F values:

| A   | F   | MAPE | SMAPE  |
|-----|-----|------|--------|
| 100 | 110 | 10%  | 9.52%  |
| 100 | 90  | 10%  | 10.53% |

we see that MAPE considers an overestimation and an underestimation of 10% as equivalent, whereas SMAPE considers the underestimation to be slightly "worse" than the overestimation.

Extending this to larger forecast errors:

| A   | F   | MAPE | SMAPE  |
|-----|-----|------|--------|
| 100 | 200 | 100% | 66.67% |
| 100 | 50  | 50%  | 66.67% |

Here, double overestimation and half underestimation are treated equivalently by SMAPE, whereas MAPE considers the overestimation to be "twice as bad" as the underestimation.

Extending to an even more extreme case:

| A   | F     | MAPE | SMAPE   |
|-----|-------|------|---------|
| 100 | 1,000 | 900% | 163.64% |
| 100 | 10    | 90%  | 163.64% |

Here it becomes clear that MAPE is unbounded from above: it can assign extremely large penalties to overestimations, but cannot do the same for extreme underestimations. SMAPE, on the other hand, is bounded between 0% and 200%, and penalises these larger over- and underestimations in a more "symmetric" manner.
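The three comparisons above can be reproduced in a few lines of plain Python; the function names here are illustrative shorthand for the two formulas:

```python
def mape(actual, forecast):
    # Mean absolute percentage error: |F - A| / |A|, averaged, as a percent
    return 100 * sum(abs(f - a) / abs(a) for a, f in zip(actual, forecast)) / len(actual)

def smape(actual, forecast):
    # Symmetric MAPE on the 0%-200% scale
    return 200 * sum(abs(f - a) / (abs(a) + abs(f)) for a, f in zip(actual, forecast)) / len(actual)

# The six (A, F) pairs from the comparison tables
for a, f in [(100, 110), (100, 90), (100, 200), (100, 50), (100, 1000), (100, 10)]:
    print(f"A={a:4d}  F={f:5d}  MAPE={mape([a], [f]):7.2f}%  SMAPE={smape([a], [f]):6.2f}%")
```

Running this shows the pattern discussed above: SMAPE assigns identical scores within each of the last two pairs, while MAPE penalises the overestimation far more heavily.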

Therefore, the choice between MAPE and SMAPE depends entirely on the problem at hand, and on whether a relative metric is more appropriate. This may be the case when the expected forecasting errors greatly exceed 10%; for smaller errors, MAPE is more frequently chosen, due to its simplicity and ease of interpretation.

Alternative Versions

As a "percentage error", SMAPE values between 0% and 100% can be considered easier to interpret, and an alternative formula is sometimes used in practice:

: \text{SMAPE} = \frac{1}{n} \sum_{t=1}^n \frac{|F_t-A_t|}{|A_t|+|F_t|}

There is also a third version of SMAPE, which allows the direction of the bias in the data to be measured by generating a positive and a negative error at the line-item level. It is also better protected against outliers and the bias effect. The formula is:

: \text{SMAPE} = \frac{\sum_{t=1}^n \left|F_t-A_t\right|}{\sum_{t=1}^n (A_t+F_t)}

Alternatives

Provided the data are strictly positive, an alternative measure of relative accuracy can be obtained based on the log of the accuracy ratio: log(F_t / A_t). This measure is easier to analyze statistically and has valuable symmetry and unbiasedness properties. When used in constructing forecasting models, the resulting prediction corresponds to the geometric mean (Tofallis, 2015).
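The symmetry property is easy to see in a short sketch: overestimating by a factor k and underestimating by the same factor produce errors of equal magnitude and opposite sign (the function name is illustrative):

```python
import math

def log_accuracy_ratio(actual, forecast):
    # Per-point log accuracy ratio log(F_t / A_t); requires strictly
    # positive actual and forecast values.
    return [math.log(f / a) for a, f in zip(actual, forecast)]

# Doubling (100 -> 200) and halving (100 -> 50) the actual value
errs = log_accuracy_ratio([100, 100], [200, 50])
print(errs)  # equal magnitude, opposite sign
```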

References

  • Armstrong, J. S. (1985) Long-range Forecasting: From Crystal Ball to Computer, 2nd. ed. Wiley.
  • Flores, B. E. (1986) "A pragmatic view of accuracy measurement in forecasting", Omega (Oxford), 14(2), 93–98.
  • Tofallis, C. (2015) "A Better Measure of Relative Prediction Accuracy for Model Selection and Model Estimation", Journal of the Operational Research Society, 66(8), 1352–1362. archived preprint
Info: Wikipedia Source

This article was imported from Wikipedia and is available under the Creative Commons Attribution-ShareAlike 4.0 License. Content has been adapted to SurfDoc format. Original contributors can be found on the article history page.
