The Extended Kalman Filter: An Interactive Tutorial for Non-Experts
Part 6: Prediction and Update
We're almost ready to run our Kalman Filter and see some results. First, though, you may be wondering
what happened to the constant $a$ in our original state equation:
\[ x_k = a x_{k-1} \]
which seems to have vanished in our equation for the state estimate:
\[ \hat{x}_k = \hat{x}_{k-1} + g_k(z_k - \hat{x}_{k-1})\]
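For example (with purely illustrative numbers), if $\hat{x}_{k-1} = 1$, $z_k = 2$, and the gain $g_k = 0.5$, the update gives $\hat{x}_k = 1 + 0.5 \times (2 - 1) = 1.5$: halfway between the previous estimate and the observation.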
The answer is that we need both equations to estimate the state. Each represents an estimate based on a different kind of information: our original equation is a prediction of what the state should be, and our second equation is an update to that prediction, based on an observation.[7]
So we rewrite our original equation with a little hat on the $x$ to indicate an estimate:
\[ \hat{x}_k = a \hat{x}_{k-1} \]
Finally, we use the constant $a$ in a prediction of the error as well:[8]
\[ p_k = a p_{k-1} a \]
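For example (again with illustrative numbers), if $a = 0.9$ and $p_{k-1} = 1$, the predicted error is $p_k = 0.9 \times 1 \times 0.9 = 0.81$.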
Together, these two formulas represent the prediction phase of our Kalman Filter. The idea is that the cycle
predict / update, predict / update, ... is repeated for as many time steps as we like.
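To make the cycle concrete, here is a minimal Python sketch of the scalar filter, combining the two prediction formulas above with the gain and update formulas from the previous parts. The function name kalman_scalar and the parameter names are illustrative, not part of the tutorial's notation:

    # Minimal sketch of the scalar predict/update cycle.
    # Assumes the gain g = p / (p + r) and the update formulas from the
    # previous parts; all names here are illustrative.

    def kalman_scalar(zs, a, r, x0, p0):
        """Filter the observations zs, returning one estimate per step.

        zs -- sequence of observations z_k
        a  -- constant from the state equation x_k = a x_{k-1}
        r  -- measurement-noise constant used in the gain
        x0 -- initial state estimate
        p0 -- initial prediction-error estimate
        """
        x, p = x0, p0
        estimates = []
        for z in zs:
            # Predict: project the state estimate and its error forward.
            x = a * x
            p = a * p * a      # deliberately a*p*a, not a**2 * p; see [8]
            # Update: blend the prediction with the new observation.
            g = p / (p + r)
            x = x + g * (z - x)
            p = (1 - g) * p
            estimates.append(x)
        return estimates

Calling kalman_scalar on a list of noisy observations returns one filtered estimate per time step; the next part, Running the Filter, does exactly this kind of loop.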
[7] Technically, the first estimate is called a prior and the second a posterior, and most treatments introduce an
additional superscript or subscript to mark the distinction. Because I am trying to keep things simple (and easy to
code up in your favorite programming language!), I avoid complicating the notation any further.
[8] As Zichao Zhang has kindly pointed out to me, we multiply twice by $a$ because
the prediction error $p_k$ is itself a squared error; hence, it is scaled by the square of the coefficient associated
with the state value $x_k$. The reason for representing the error prediction as $a p_{k-1} a$ instead of $a^2p_{k-1}$ will become
clear in Part 12.