Question
1.3. Prove that the LMS weight update rule described in this chapter performs a gradient descent to minimize the squared error. In particular, define the squared error E as in the text. Now calculate the derivative of E with respect to the weight $w_i$, assuming that $\hat{V}(b)$ is a linear function as defined in the text. Gradient descent is achieved by updating each weight in proportion to $-\frac{\partial E}{\partial w_i}$. Therefore, you must show that the LMS training rule alters weights in this proportion for each training example it encounters.

Explanation / Answer
SOLUTION:

From the question we have the following definitions:
$V_{train}(b) \leftarrow \hat{V}(Successor(b))$
$\hat{V}(b) = w_0 + \sum_{i=1}^{6} w_i x_i$
Calculation of the derivative of $E = \left(V_{train}(b) - \hat{V}(b)\right)^2$, the squared error on a single training example (the LMS rule processes one training example at a time):
Here it is important to note that each board state $b$ fixes the training value $V_{train}(b)$ and the feature values $x_1, \dots, x_6$; only $\hat{V}(b)$ depends on the weight $w_i$.
So $\frac{\partial E}{\partial w_i}$ is calculated as given below. Applying the chain rule:

$\frac{\partial E}{\partial w_i} = \frac{\partial}{\partial w_i}\left(V_{train}(b) - \hat{V}(b)\right)^2 = 2\left(V_{train}(b) - \hat{V}(b)\right) \cdot \frac{\partial}{\partial w_i}\left(V_{train}(b) - \hat{V}(b)\right)$

Since $V_{train}(b)$ does not depend on $w_i$, and $\hat{V}(b) = w_0 + \sum_{i=1}^{6} w_i x_i$ is linear in the weights,

$\frac{\partial}{\partial w_i}\left(V_{train}(b) - \hat{V}(b)\right) = -x_i$

Therefore:

$\frac{\partial E}{\partial w_i} = -2\left(V_{train}(b) - \hat{V}(b)\right) x_i$
Gradient descent updates each weight in proportion to $-\frac{\partial E}{\partial w_i}$, that is, $\Delta w_i = -\eta \frac{\partial E}{\partial w_i} = 2\eta\left(V_{train}(b) - \hat{V}(b)\right) x_i$ for some learning rate $\eta$. The LMS training rule, $w_i \leftarrow w_i + \eta\left(V_{train}(b) - \hat{V}(b)\right) x_i$, performs exactly this update, with the constant factor 2 absorbed into $\eta$. So the LMS training rule alters the weights in proportion to $-\frac{\partial E}{\partial w_i}$ for each training example it encounters.
(HENCE PROVED)
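As an illustrative sanity check (not part of the textbook's solution), here is a minimal Python sketch of the LMS rule. The learning rate, feature ranges, and the "true" weights used to generate the training values $V_{train}(b)$ are invented for this example:

```python
import random

# Minimal numerical sketch of the LMS rule as gradient descent.
# NOTE: the learning rate, feature ranges, and "true" weights below are
# invented for illustration; they do not come from the textbook.

ETA = 0.01          # learning rate (absorbs the factor of 2 from dE/dwi)
NUM_FEATURES = 6    # x1..x6, as in the linear evaluation function

# Hypothetical target weights used to generate training values V_train(b).
true_w = [5.0, -2.0, 1.5, 0.5, -1.0, 3.0, -0.5]  # w0..w6

def v_hat(w, x):
    """Linear evaluation function: V^(b) = w0 + sum_i wi * xi."""
    return w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))

def lms_update(w, x, v_train, eta=ETA):
    """One LMS step: wi <- wi + eta * (V_train(b) - V^(b)) * xi.
    This moves each weight opposite to dE/dwi = -2 (V_train - V^) xi."""
    error = v_train - v_hat(w, x)
    w[0] += eta * error               # bias weight w0 (feature x0 = 1)
    for i in range(NUM_FEATURES):
        w[i + 1] += eta * error * x[i]
    return w

random.seed(0)
w = [0.0] * (NUM_FEATURES + 1)        # start from all-zero weights

for _ in range(5000):
    x = [random.uniform(0, 5) for _ in range(NUM_FEATURES)]  # random board features
    v_train = v_hat(true_w, x)                               # noiseless training value
    w = lms_update(w, x, v_train)

print("learned:", [round(wi, 2) for wi in w])
print("true   :", true_w)
```

Because each update moves the weights opposite to the per-example gradient, the squared error shrinks over the training run and the learned weights approach the ones that generated the training values.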