
EM-Algorithm

The three terms that need to be maximised in the M-step are

    $\displaystyle \max_{p(s_1)} \int ds\, \alpha_1(s) \, \ln p(s),$ (M1)
    $\displaystyle \max_{p(s\vert s^\prime)} \sum_{t=2}^T \int ds\, ds^\prime \, \Omega_t(ss^\prime) \ln p(s\vert s^\prime),$ (M2)
    $\displaystyle \max_{p(x\vert s)} \sum_{t=1}^T \int ds\, \gamma_t(s) \ln p(x_t\vert s) \quad .$ (M3)

The maximisation of (M1) is achieved by

$\displaystyle 0 = \frac{\delta}{\delta p} \left\{\int ds \, \alpha_1(s) \, \ln p(s) - \lambda\left[\int ds p(s) -1 \right] \right\},\quad \int ds p(s) = 1$    

which leads to

$\displaystyle p(s) = \frac{\alpha_1(s)}{\int ds \alpha_1(s)}$ (25)
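In the discrete-state case the integral in (25) becomes a sum, and the update is simply a normalisation of the posterior at $t=1$. A minimal sketch (the array name `alpha1` is an assumption; it holds the unnormalised values $\alpha_1(s)$ over the discrete states):

```python
import numpy as np

# Discrete-state analogue of Eq. (25): the M-step update for the
# initial distribution normalises the posterior alpha_1(s).
def update_initial(alpha1):
    alpha1 = np.asarray(alpha1, dtype=float)
    return alpha1 / alpha1.sum()

# Each entry is divided by the total mass, here giving [0.5, 0.25, 0.25].
p0 = update_initial([0.2, 0.1, 0.1])
print(p0)
```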

The maximisation of (M2) is achieved by

$\displaystyle 0 = \frac{\delta}{\delta p} \left\{ \sum_{t=2}^T \int ds\, ds^\prime \, \Omega_t(ss^\prime) \ln p(s\vert s^\prime) - \int ds\, ds^\prime\, \lambda(s^\prime)\, p(s\vert s^\prime) \right\}\quad, \int ds\, p(s\vert s^\prime) = 1$    

which leads to

$\displaystyle p(s\vert s^\prime) = \frac{\sum_{t=2}^T\Omega_t(ss^\prime)}{\sum_{t=2}^T \int ds\,\Omega_t(ss^\prime)} = \frac{\sum_{t=2}^T\Omega_t(ss^\prime)}{\sum_{t=2}^T \gamma_{t-1}(s^\prime)}$ (26)
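For discrete states, Eq. (26) sums the pairwise posteriors $\Omega_t(ss^\prime)$ over $t = 2, \ldots, T$ and normalises over $s$ for each fixed $s^\prime$. A sketch under an assumed array convention (`Omega[t, i, j]` as the joint posterior of the current state $i$ and previous state $j$ at step $t$; the names are hypothetical):

```python
import numpy as np

# Discrete-state analogue of Eq. (26): accumulate the pairwise
# posteriors over t and normalise each column, so that column j of the
# result is the conditional distribution p(s | s' = j).
def update_transition(Omega):
    Omega = np.asarray(Omega, dtype=float)
    num = Omega.sum(axis=0)                 # sum_t Omega_t(s, s')
    denom = num.sum(axis=0, keepdims=True)  # sum over s = sum_t gamma_{t-1}(s')
    return num / denom

# Sanity check with arbitrary nonnegative pairwise posteriors:
rng = np.random.default_rng(0)
Omega = rng.random((5, 3, 3))   # t = 2..T stacked along axis 0
A = update_transition(Omega)
print(A.sum(axis=0))            # every column sums to one
```

Note that the denominator never has to be computed separately: summing the numerator over $s$ reproduces $\sum_t \gamma_{t-1}(s^\prime)$, exactly as in the second equality of (26).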



Markus Mayer 2009-06-22