
According to the Wikipedia page on moving averages, "This is also why sometimes an EMA is referred to as an $N$-day EMA. Despite the name suggesting there are $N$ periods, the terminology only specifies the $\alpha$ factor. $N$ is not a stopping point for the calculation in the way it is in an SMA or WMA."

I was very shocked to read this. It seems to suggest that the sequence of weights used to compute the Exponential Moving Average via discrete convolution with, say, historical market price data goes on forever. I understand that the sum of infinitely many weights can converge to $1$, but I do not see how an infinite weight sequence could be convolved with a finite sequence of historical market price data.

How is an $N$-period Exponential Moving Average computed as the convolution of a weight function and historical data? Is the weight sequence indeed infinite, or does it contain only $N (\pm 1?)$ elements, or does it contain as many elements as there are available historical data points $(\pm 1?)$, which is usually more than $N$? The distinction seems important because adding elements re-normalizes the significant ones, given that all weights sum to $1$. What is a typical example of how this convolution product would be formulated?

user10478
    Usually with EMAs for a time series $x_0,x_1,x_2,x_3,\ldots$, the EMA series $E_0, E_1, E_2, E_3,\ldots$ is calculated as below (or something similar): $$E_n = \begin{cases}x_0&\text{if }n=0\\ \alpha x_n + (1-\alpha)E_{n-1}&\text{if }n\ge 1\end{cases}.$$ (From this, you can obtain a summation formula for $E_n$ in terms of $\alpha$ and $x_0,x_1,\ldots,x_n$, for any $n$.) – Minus One-Twelfth Jan 01 '22 at 06:14
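The recursion in that comment can be sketched in a few lines of Python (the function name and sample prices are illustrative, not from the thread):

```python
import numpy as np

def ema_recursive(x, alpha):
    # E_0 = x_0;  E_n = alpha * x_n + (1 - alpha) * E_{n-1} for n >= 1
    e = np.empty(len(x))
    e[0] = x[0]
    for n in range(1, len(x)):
        e[n] = alpha * x[n] + (1 - alpha) * e[n - 1]
    return e

prices = np.array([100.0, 102.0, 99.0, 101.0])
print(ema_recursive(prices, alpha=0.5))  # each value blends the new price with the previous EMA
```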

1 Answer


As the comment by @MinusOneTwelfth suggests, the recursive formula for the exponential moving average $$E_n = \alpha x_n + (1-\alpha) E_{n-1}$$ admits an expression as a sum: $$\begin{align*} E_n &= \alpha x_n + (1-\alpha) E_{n-1} \\ &= \alpha x_n + (1 - \alpha) \left( \alpha x_{n-1} + (1-\alpha)E_{n-2} \right) \\ &= \alpha x_n +\alpha (1-\alpha) x_{n-1} +(1-\alpha)^2 E_{n-2} \\ &\vdots \\ &= \alpha\sum_{k=0}^\infty (1-\alpha)^kx_{n-k} \end{align*}$$ where we note that the sum is well-defined if, for instance, $0 < \alpha < 1$ and $\sup \mathbb{E}(X_n^2) < \infty$, via a geometric series argument.
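When the series actually starts at $x_0$ with $E_0 = x_0$ (as in the comment above), the unrolling terminates and gives the finite closed form $E_n = \alpha\sum_{k=0}^{n-1}(1-\alpha)^k x_{n-k} + (1-\alpha)^n x_0$. A short Python check (function names are mine) that this closed form agrees with the recursion:

```python
import numpy as np

def ema_recursive(x, alpha):
    # E_0 = x_0;  E_n = alpha * x_n + (1 - alpha) * E_{n-1}
    e = np.empty(len(x))
    e[0] = x[0]
    for n in range(1, len(x)):
        e[n] = alpha * x[n] + (1 - alpha) * e[n - 1]
    return e

def ema_unrolled(x, alpha, n):
    # E_n = alpha * sum_{k=0}^{n-1} (1-alpha)^k x_{n-k} + (1-alpha)^n x_0
    k = np.arange(n)
    return alpha * np.sum((1 - alpha) ** k * x[n - k]) + (1 - alpha) ** n * x[0]

x = np.array([100.0, 101.5, 99.8, 102.3, 103.1])
alpha = 0.3
print(np.allclose(ema_recursive(x, alpha)[-1], ema_unrolled(x, alpha, len(x) - 1)))  # True
```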

Sadly, in practice, we don't often have access to time series of infinite length indexed by $\mathbb{Z}$, so we must truncate this expression somehow. What we may do is set $$\hat{E}_n = \alpha\sum_{k=0}^{N-1}(1-\alpha)^kx_{n-k} + (1-\alpha)^N x_{n-N}$$ where $N$ is a parameter that may be chosen to agree with the terminology "$N$-day EMA"; nevertheless, in practice people may use the entire time series instead. Notice that this sum is indeed a convolution product between the time series and the sequence of exponentially decaying weights.
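As a sketch of that convolution product (the function name and the choice of NumPy's `np.convolve` are mine, not prescribed by the answer): the weight vector $[\alpha,\ \alpha(1-\alpha),\ \ldots,\ \alpha(1-\alpha)^{N-1},\ (1-\alpha)^N]$ sums exactly to $1$, so no renormalization is needed, and sliding it across the series reproduces $\hat{E}_n$:

```python
import numpy as np

def ema_truncated(x, alpha, N):
    # Weights: alpha*(1-alpha)^k for k = 0..N-1, plus (1-alpha)^N on x_{n-N}
    w = alpha * (1 - alpha) ** np.arange(N)
    w = np.append(w, (1 - alpha) ** N)
    assert np.isclose(w.sum(), 1.0)  # weights already sum to 1
    # np.convolve reverses the kernel, so output[j] = sum_k w[k] * x[j+N-k] = E_hat_{j+N}
    return np.convolve(x, w, mode="valid")

x = np.array([100.0, 102.0, 99.0, 101.0, 103.0])
print(ema_truncated(x, alpha=0.5, N=2))  # one E_hat value per full window of N+1 points
```

With `mode="valid"`, each output value consumes $N+1$ consecutive data points, so the first $N$ entries of the series yield no output; using the entire available series instead, as the answer notes people do in practice, corresponds to letting $N$ grow to the series length.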

You may find a gentle introduction to exponential smoothing in Chapter 7 of Rob Hyndman's book Forecasting: Principles and Practice.

Jose Avilez
  • The $(1-\alpha)^N x_{n-N}$ in the final expression looks a bit strange. $\hat{E}_n = \alpha\sum_{k=0}^{N-1}(1-\alpha)^k x_{n-k}$ would just be a partial sum of $E_n$, but the extra $(1-\alpha)^N x_{n-N}$ appears out of nowhere and doesn't vary with the summation? – user10478 Feb 09 '22 at 00:27