Applications of Hidden Markov Chains

Hidden Markov model

In hidden Markov chains, the system's behavior depends on latent (or hidden) variables. Such models have many applications in contemporary AI. For now, focus on grasping the high-level themes and ideas; if the subject interests you, you can dive deeper into the technical details. The examples are particularly instructive.

A hidden Markov model (HMM) is a Markov model in which the observations are dependent on a latent (or "hidden") Markov process (referred to as X). An HMM requires that there be an observable process Y whose outcomes depend on the outcomes of X in a known way. Since X cannot be observed directly, the goal is to learn about the state of X by observing Y. By definition of being a Markov model, an HMM has an additional requirement that the outcome of Y at time t = t_0 must be "influenced" exclusively by the outcome of X at t = t_0, and that the outcomes of X and Y at t < t_0 must be conditionally independent of Y at t = t_0 given X at time t = t_0. Estimation of the parameters in an HMM can be performed using maximum likelihood. For linear chain HMMs, the Baum–Welch algorithm can be used to estimate the parameters.
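
Because the observation at each time step depends only on the current hidden state, the likelihood of an observed sequence can be computed efficiently with the forward algorithm, which sums over all hidden-state paths without enumerating them. The sketch below is a minimal illustration in Python with NumPy; the two hidden states, the two-symbol observation alphabet, and all of the probabilities are invented for the example rather than taken from the text above.

```python
import numpy as np

# Hypothetical two-state HMM parameters (invented for illustration).

# Transition matrix: A[i, j] = P(X_{t+1} = j | X_t = i)
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])

# Emission matrix: B[i, k] = P(Y_t = k | X_t = i).
# Each row conditions the observation only on the current hidden
# state, matching the HMM requirement described above.
B = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# Initial state distribution: pi[i] = P(X_1 = i)
pi = np.array([0.5, 0.5])

def forward(obs):
    """Return P(Y_1..Y_T = obs), summing over all hidden paths."""
    alpha = pi * B[:, obs[0]]          # alpha[i] = P(Y_1, X_1 = i)
    for y in obs[1:]:
        alpha = (alpha @ A) * B[:, y]  # propagate states, then emit
    return alpha.sum()

print(forward([0, 0, 1, 0]))  # likelihood of one observation sequence
```

The Baum–Welch algorithm mentioned above builds on this same forward pass (together with a symmetric backward pass) inside an expectation–maximization loop, iteratively re-estimating A, B, and pi so as to increase the likelihood of the observed sequences.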

Hidden Markov models are known for their applications to thermodynamics, statistical mechanics, physics, chemistry, economics, finance, signal processing, information theory, and pattern recognition, such as speech recognition, handwriting recognition, gesture recognition, part-of-speech tagging, musical score following, partial discharges, and bioinformatics.


Source: Wikipedia, https://en.wikipedia.org/wiki/Hidden_Markov_model
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 License.