A clear guide to Mean Absolute Deviation, explaining how it measures variability and supports business decision-making.
Mean Absolute Deviation (MAD) is a statistical measure that quantifies the average distance between each data point and the dataset’s mean. It is used to assess variability, consistency, and forecasting accuracy.
Definition
MAD is the average of the absolute differences between each data point and the mean of the dataset.
MAD provides a clear sense of how spread out data points are. Unlike variance or standard deviation, MAD uses absolute values, which makes it less sensitive to extreme outliers.
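To illustrate that robustness, here is a minimal Python sketch (using hypothetical numbers) that compares MAD with the standard deviation on a small dataset containing one extreme value:

```python
# Hypothetical dataset with one extreme value (60).
import statistics

data = [10, 12, 11, 13, 12, 60]

mean = statistics.mean(data)
mad = sum(abs(x - mean) for x in data) / len(data)
std = statistics.pstdev(data)  # population standard deviation

print(f"MAD: {mad:.2f}")  # ~13.44: grows only linearly with the outlier
print(f"Std: {std:.2f}")  # ~18.06: grows faster because deviations are squared
```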
In business forecasting, MAD is a key performance metric for evaluating forecast accuracy. Lower MAD values indicate more accurate predictions.
MAD is also useful in identifying data consistency, quality variations, and distribution patterns.
If a dataset has values \(x_1, x_2, \ldots, x_n\) with mean \(\bar{x}\), then

\[ \text{MAD} = \frac{1}{n} \sum_{i=1}^{n} |x_i - \bar{x}| \]
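A minimal Python sketch of this formula, using only built-ins and an illustrative dataset:

```python
def mean_absolute_deviation(values):
    """Average absolute distance between each value and the mean."""
    mean = sum(values) / len(values)
    return sum(abs(x - mean) for x in values) / len(values)

print(mean_absolute_deviation([4, 8, 6, 5, 7]))  # 1.2
```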
A retail company evaluates the accuracy of its weekly sales forecasts. If the actual and forecasted values differ by an average of 150 units, the MAD is 150. Lower MAD helps improve inventory and planning decisions.
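The same calculation applies to forecast errors. In the sketch below, the weekly sales figures are hypothetical and chosen only to illustrate how MAD measures the average forecast miss:

```python
# Hypothetical actual vs. forecasted weekly sales (units).
actual   = [1200, 1350, 1100, 1500]
forecast = [1100, 1400, 1250, 1450]

errors = [abs(a - f) for a, f in zip(actual, forecast)]
mad = sum(errors) / len(errors)
print(f"Forecast MAD: {mad:.1f} units")  # 87.5: average miss per week
```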
MAD is essential for forecasting accuracy, financial modelling, operational planning, and quality management. It simplifies variability analysis and supports better decision-making.
Is MAD better than the standard deviation? It depends: MAD is easier to interpret and less affected by outliers.
Can MAD be zero? Yes, but only when all data points are identical.
Is MAD used in forecasting? Yes, it is a popular error metric for evaluating forecast accuracy.