
One thing I wonder (maybe not a sensible question, I haven't really thought it through): A Kalman filter update is superficially like a HMM Forward Algorithm step, right? You have some belief, you have a new observation, and you have some new belief after.

Are there similar equivalences with other HMM algorithms for different questions? A Viterbi-like algorithm for the most likely path, for example, distinct from the series of Kalman-update argmaxes?




I've always just thought of Kalman filters as continuous time/space equivalents of HMMs, replacing multinomials with Gaussians.

So yeah, you can do things like EM to learn the parameters of the observation and state-update models from data (just like fitting an HMM in the fully observed case). Or, given the state and observation models plus past observations, you can predict future state from current state. That's what it was made for, where the state is the state of your rocket given the physics of motion and all the noisy sensors or whatever. If you think about it, that's like a Viterbi search for the best-fitting path through the continuous state space.
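To make the "continuous forward step" idea concrete, here is a minimal sketch of a scalar Kalman filter, with made-up parameter values (a, q, h, r are illustrative, not from the thread): each step pushes a Gaussian belief through linear dynamics and folds in one noisy observation, just as an HMM forward step multiplies a belief vector by transition and emission probabilities.

```python
# Minimal 1D Kalman filter: the continuous analogue of an HMM forward step.
# The multinomial transition/emission tables are replaced by Gaussians, so
# the belief is a single (mean, variance) pair instead of a probability vector.
# All parameter values here are illustrative.

def kf_step(mean, var, z, a=1.0, q=0.1, h=1.0, r=0.5):
    """One predict+update cycle given observation z."""
    # Predict: push the belief through the linear dynamics x' = a*x + noise.
    mean_pred = a * mean
    var_pred = a * a * var + q
    # Update: fold in the observation z = h*x + noise (the "forward" message).
    k = var_pred * h / (h * h * var_pred + r)   # Kalman gain
    mean_new = mean_pred + k * (z - h * mean_pred)
    var_new = (1.0 - k * h) * var_pred
    return mean_new, var_new

mean, var = 0.0, 1.0
for z in [0.9, 1.1, 1.0]:           # noisy observations of a state near 1.0
    mean, var = kf_step(mean, var, z)
```

Running the loop pulls the belief mean toward the observations while the variance shrinks, exactly the behavior of repeated forward-algorithm updates.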


These are all unified as different modes of message passing in probabilistic graphical models. Kalman filters are indeed structurally similar to an HMM without the backward message passing step if viewed as a PGM.


Agreed. And Bar-Shalom's forward/backward algorithm for maneuvering, nonlinear models from Maneuvering Target Tracking is essentially one full execution of PGO.


The KF update is just computing the Bayesian posterior of a Markov chain (with certain constraints): p(state | data) = p(data | state) p(state) / p(data).
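This equivalence is easy to check numerically in the scalar case (the numbers below are illustrative): the Kalman-gain form of the update and the direct precision-weighted product of prior and likelihood Gaussians give the same posterior.

```python
# Illustrative check: for a scalar state with an identity observation model,
# the Kalman update equals the conjugate Bayes posterior
# p(state | data) ∝ p(data | state) p(state).

m0, v0 = 0.0, 2.0   # prior belief p(state) = N(m0, v0)
z, r = 1.5, 0.5     # observation with likelihood p(data | state) = N(state, r)

# Kalman-gain form of the update.
k = v0 / (v0 + r)
m_kf, v_kf = m0 + k * (z - m0), (1.0 - k) * v0

# Precision-weighted product of Gaussians (conjugate Bayes posterior).
v_bayes = 1.0 / (1.0 / v0 + 1.0 / r)
m_bayes = v_bayes * (m0 / v0 + z / r)
```

The two posteriors agree to machine precision; the Kalman gain is just an algebraic rearrangement of the Gaussian product.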


The connection you are drawing is not incorrect in my opinion. But perhaps I can clarify.

The dynamics model in a KF is a linear system with an input and an output (often denoted u and y respectively).

A Hidden Markov Model typically does not have an input: there is an initial state, and then the system transitions "autonomously" (a term I'm borrowing from differential equations).

An "Markov model" with input is called a Markov Decision Process (MDP). And if the state of the system is not fully observable, then it's a POMDP.

So Kalman filters are most analogous to "belief updates" in POMDPs. The KF takes into account the known input to the system.
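A small sketch of that distinction, with made-up scalar parameters (a, b, h, q, r are assumptions for illustration): the known input u enters the predict step and shifts the belief mean before any observation arrives, which is the term a plain HMM forward step lacks.

```python
# Belief update with a known control input u, as in the POMDP analogy above.
# Parameters are illustrative scalars.

def predict(mean, var, u, a=1.0, b=0.5, q=0.1):
    # The known input shifts the predicted mean; an HMM has no such term.
    return a * mean + b * u, a * a * var + q

def update(mean, var, z, h=1.0, r=0.5):
    k = var * h / (h * h * var + r)   # Kalman gain
    return mean + k * (z - h * mean), (1.0 - k * h) * var

mean, var = 0.0, 1.0
mean, var = predict(mean, var, u=2.0)  # input alone moves the mean to 1.0
mean, var = update(mean, var, z=1.2)   # observation then refines the belief
```

Dropping the b * u term recovers the input-free (HMM-like) predict step.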





