
Notation

The state variables are $ s = (s_t)_{t=1,\dots,T}$ , the observed variables are $ x=(x_t)_{t=1,\dots,T}$ . The observation probabilities are $ p(x_t\vert s_t)$ and the transition probabilities of the unobserved ("hidden") states are $ p(s_t\vert s_{t-1})$ . Figure 1.1 shows the conditional dependency structure. The following abbreviations are used very frequently:
$\displaystyle x^t = (x_1,x_2,\dots,x_t), \quad\mathrm{and}\quad x^{[t} = (x_t,x_{t+1},\dots,x_T)\,.$

Fig.: Basic conditional dependencies of Hidden Markov Models.

The $ d$ -dimensional normal distribution is denoted by $ p_{\mathcal{N}}(x;\mu,V)$ , $ x\in\mathbb{R}^d$ ,

$\displaystyle p_{\mathcal{N}}(x;\mu,V) = (2\pi)^{-{d\over 2}}\,\vert V\vert^{-{1\over 2}} \, \exp\left\{-{1\over 2}(x-\mu)^\prime\, V^{-1} \,(x-\mu)\right\}$ (1)
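As a quick numerical sanity check of Eq. (1), the density can be evaluated directly; the following sketch assumes NumPy, and the function name is illustrative rather than from the text:

```python
# Minimal sketch evaluating Eq. (1); names here are illustrative assumptions.
import numpy as np

def normal_density(x, mu, V):
    """d-dimensional normal density p_N(x; mu, V) as in Eq. (1)."""
    d = len(mu)
    diff = x - mu
    # (2*pi)^(-d/2) * |V|^(-1/2)
    norm_const = (2 * np.pi) ** (-d / 2) * np.linalg.det(V) ** (-0.5)
    # exp{ -1/2 (x-mu)' V^{-1} (x-mu) }, using solve() instead of an explicit inverse
    return norm_const * np.exp(-0.5 * diff @ np.linalg.solve(V, diff))

# In one dimension with mu = 0, V = 1, the value at x = 0 is 1/sqrt(2*pi) ~ 0.3989.
print(normal_density(np.array([0.0]), np.array([0.0]), np.array([[1.0]])))
```

Using `np.linalg.solve` rather than forming $ V^{-1}$ explicitly is the numerically preferable way to compute the quadratic form.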



Markus Mayer 2009-06-22