Question
The start-to-hidden-state transitions are equiprobable: a0H = 0.5, a0L = 0.5. The transition from the last hidden state to the end state must occur, so aH0 = 1 and aL0 = 1. For the forward algorithm, give fH(i) and fL(i) for each i. For the hidden Markov model defined above, find the posterior probabilities of states H and L at the last position of x = GGCA and of GGCACTGA. Use the formula P(πi = k | x) = fk(i)bk(i)/P(x) (found on page 59 of Durbin et al.), where the f's are from the forward algorithm, the b's are from the backward algorithm, and P(x) is the result of the forward/backward calculation. What is the difference between Viterbi decoding (π*) and posterior decoding? Write down their mathematical definitions and then explain how they differ.
Explanation / Answer
The forward algorithm proceeds as follows.
Initialization:
i = 0, f0(0) = 1, fH(0) = 0, fL(0) = 0;
i = 1, fH(1) = eH(x1)a0H = 0.3 × 0.5 = 0.15, fL(1) = eL(x1)a0L = 0.2 × 0.5 = 0.1;
i = 2, fH(2) = eH(G)(fH(1)aHH + fL(1)aLH) = 0.3(0.15 × 0.5 + 0.1 × 0.4) = 0.0345,
fL(2) = eL(G)(fH(1)aHL + fL(1)aLL) = 0.2(0.15 × 0.5 + 0.1 × 0.6) = 0.027;
i = 3, fH(3) = eH(C)(fH(2)aHH + fL(2)aLH) = 0.3(0.0345 × 0.5 + 0.027 × 0.4) = 0.008415,
fL(3) = eL(C)(fH(2)aHL + fL(2)aLL) = 0.2(0.0345 × 0.5 + 0.027 × 0.6) = 0.00669;
i = 4, fH(4) = eH(A)(fH(3)aHH + fL(3)aLH) = 0.2(0.008415 × 0.5 + 0.00669 × 0.4) = 0.0013767,
fL(4) = eL(A)(fH(3)aHL + fL(3)aLL) = 0.3(0.008415 × 0.5 + 0.00669 × 0.6) = 0.00246645.
At the termination step we have P(x) = P(GGCA) = fH(4) + fL(4) = 0.00384315.
Here we assume that the probabilities of transitions to the end state aH0 and aL0 are equal to 1 at position L = 4 since we are dealing with sequences of length 4.
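As a cross-check, the forward recursion above can be sketched in a few lines of Python. The emission probabilities used below (eH(G) = eH(C) = 0.3, eH(A) = eH(T) = 0.2, with the complementary values for L) are inferred from the arithmetic in the calculation and are an assumption, since the model's emission table is not reproduced here.

```python
# Two-state HMM from the problem; emissions are inferred from the
# calculation above (an assumption, not quoted from the model table).
A = {('H', 'H'): 0.5, ('H', 'L'): 0.5,        # hidden-state transitions
     ('L', 'H'): 0.4, ('L', 'L'): 0.6}
E = {'H': {'A': 0.2, 'C': 0.3, 'G': 0.3, 'T': 0.2},
     'L': {'A': 0.3, 'C': 0.2, 'G': 0.2, 'T': 0.3}}
START = {'H': 0.5, 'L': 0.5}                  # a0H = a0L = 0.5

def forward(x):
    """Return the forward table f[i][k] = f_k(i) and P(x)."""
    f = [{k: 0.0 for k in 'HL'} for _ in range(len(x) + 1)]
    for k in 'HL':                            # initialization (i = 1)
        f[1][k] = E[k][x[0]] * START[k]
    for i in range(2, len(x) + 1):            # recursion
        for k in 'HL':
            f[i][k] = E[k][x[i - 1]] * sum(f[i - 1][j] * A[j, k] for j in 'HL')
    px = sum(f[len(x)].values())              # termination, with a_H0 = a_L0 = 1
    return f, px

f, px = forward('GGCA')
print(f[4]['H'], f[4]['L'], px)
```

Running the sketch reproduces fH(4) = 0.0013767, fL(4) = 0.00246645, and P(GGCA) = 0.00384315 up to floating-point rounding.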
The backward algorithm, starting with position 4, proceeds as follows:
i = 4, bH(4) = 1, bL(4) = 1;
i = 3, bH(3) = aHH eH(A)bH(4) + aHL eL(A)bL(4) = 0.5 × 0.2 + 0.5 × 0.3 = 0.25,
bL(3) = aLH eH(A)bH(4) + aLL eL(A)bL(4) = 0.4 × 0.2 + 0.6 × 0.3 = 0.26;
i = 2, bH(2) = aHH eH(C)bH(3) + aHL eL(C)bL(3) = 0.5 × 0.3 × 0.25 + 0.5 × 0.2 × 0.26 = 0.0635,
bL(2) = aLH eH(C)bH(3) + aLL eL(C)bL(3) = 0.4 × 0.3 × 0.25 + 0.6 × 0.2 × 0.26 = 0.0612;
i = 1, bH(1) = aHH eH(G)bH(2) + aHL eL(G)bL(2) = 0.5 × 0.3 × 0.0635 + 0.5 × 0.2 × 0.0612 = 0.015645,
bL(1) = aLH eH(G)bH(2) + aLL eL(G)bL(2) = 0.4 × 0.3 × 0.0635 + 0.6 × 0.2 × 0.0612 = 0.014964.
At the termination step we have P(x) = P(GGCA) = eH(G)bH(1)a0H + eL(G)bL(1)a0L = 0.3 × 0.015645 × 0.5 + 0.2 × 0.014964 × 0.5 = 0.00384315.
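The backward recursion can be sketched the same way. The model parameters are repeated so the snippet runs on its own; as before, the emission probabilities are an assumption inferred from the arithmetic.

```python
# Backward recursion for the same two-state HMM (emissions assumed,
# inferred from the worked calculation rather than a quoted model table).
A = {('H', 'H'): 0.5, ('H', 'L'): 0.5,
     ('L', 'H'): 0.4, ('L', 'L'): 0.6}
E = {'H': {'A': 0.2, 'C': 0.3, 'G': 0.3, 'T': 0.2},
     'L': {'A': 0.3, 'C': 0.2, 'G': 0.2, 'T': 0.3}}
START = {'H': 0.5, 'L': 0.5}

def backward(x):
    """Return the backward table b[i][k] = b_k(i) and P(x)."""
    n = len(x)
    b = [{k: 0.0 for k in 'HL'} for _ in range(n + 1)]
    for k in 'HL':                    # initialization: b_k(L) = a_k0 = 1
        b[n][k] = 1.0
    for i in range(n - 1, 0, -1):     # recursion, i = L-1, ..., 1
        for k in 'HL':
            b[i][k] = sum(A[k, j] * E[j][x[i]] * b[i + 1][j] for j in 'HL')
    # termination: P(x) = sum over j of a_0j * e_j(x1) * b_j(1)
    px = sum(START[j] * E[j][x[0]] * b[1][j] for j in 'HL')
    return b, px

b, px = backward('GGCA')
print(b[1]['H'], b[1]['L'], px)
```

The termination value agrees with the forward result, P(GGCA) = 0.00384315, which is the standard consistency check between the two passes.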
The posterior probabilities can be found by the formula P(πi = k | x) = fk(i)bk(i) / P(x). The forward and backward variables f(4) and b(4) were determined above, so
P(π4 = H | GGCA) = fH(4)bH(4) / P(GGCA) = (0.0013767 × 1) / 0.00384315 = 0.35822.
Similarly, P(π4 = L | GGCA) = fL(4)bL(4) / P(GGCA) = (0.00246645 × 1) / 0.00384315 = 0.64178 = 1 − P(π4 = H | GGCA).
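Putting the two recursions together gives a posterior-decoding sketch for any position, not just the last one. Parameters are repeated for self-containment, and the emissions remain an assumption inferred from the worked numbers.

```python
# Posterior probability P(pi_i = k | x) = f_k(i) * b_k(i) / P(x),
# combining the forward and backward passes (emissions assumed).
A = {('H', 'H'): 0.5, ('H', 'L'): 0.5,
     ('L', 'H'): 0.4, ('L', 'L'): 0.6}
E = {'H': {'A': 0.2, 'C': 0.3, 'G': 0.3, 'T': 0.2},
     'L': {'A': 0.3, 'C': 0.2, 'G': 0.2, 'T': 0.3}}
START = {'H': 0.5, 'L': 0.5}

def posterior(x, i):
    """Return {k: P(pi_i = k | x)} via the forward/backward formula."""
    n = len(x)
    f = [{k: 0.0 for k in 'HL'} for _ in range(n + 1)]
    b = [{k: 1.0 for k in 'HL'} for _ in range(n + 1)]  # b_k(L) = 1
    for k in 'HL':
        f[1][k] = E[k][x[0]] * START[k]
    for t in range(2, n + 1):                 # forward pass
        for k in 'HL':
            f[t][k] = E[k][x[t - 1]] * sum(f[t - 1][j] * A[j, k] for j in 'HL')
    for t in range(n - 1, 0, -1):             # backward pass
        for k in 'HL':
            b[t][k] = sum(A[k, j] * E[j][x[t]] * b[t + 1][j] for j in 'HL')
    px = sum(f[n].values())                   # P(x), with a_k0 = 1
    return {k: f[i][k] * b[i][k] / px for k in 'HL'}

print(posterior('GGCA', 4))   # L should come out as the more probable state
```

For position 4 of GGCA this reproduces the two posteriors above, about 0.35822 for H and 0.64178 for L.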
We see that state L is more likely than state H at position 4. Incidentally, state L also appears at position 4 in the most probable (Viterbi) path π* = HHHL. Still, we should remember that the states of the Viterbi path and the most probable states obtained by posterior decoding do not necessarily coincide.