Question
Consider the fixed point iteration x_{n+1} = g(x_n). Once the iterates are "close" to the root alpha, the ratio

(alpha - x_{n+1}) / (alpha - x_n) ≈ g'(alpha)

is nearly constant (using the MVT, and assuming g has enough derivatives), independent of n. In this case we can write

(alpha - x_{n+1}) / (alpha - x_n) ≈ (alpha - x_n) / (alpha - x_{n-1}),

or equivalently

(alpha - x_{n+1}) (alpha - x_{n-1}) ≈ (alpha - x_n)^2.

One can then solve this expression for alpha to get an improved approximation for the fixed point. If the assumption that g'(x_n) is nearly constant is true, the approximation for alpha obtained in this way is usually a significant improvement over the last x_n in the generated sequence. This procedure is called Aitken's extrapolation. Given below is a table of iterates generated from a linearly convergent sequence x_{n+1} = g(x_n) = x_n - (x_n^2 - 3)/2, used to find the root sqrt(3) of y = x^2 - 3. Use Aitken's extrapolation, and the last three iterates below, to obtain an improved estimate for the fixed point alpha.

Explanation / Answer
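A minimal sketch of Aitken's extrapolation in Python, applied to this g. The question's table of iterates is not reproduced here, so the script regenerates iterates from an assumed starting guess x0 = 2.0; with the actual table, the last three tabulated values would be used instead.

```python
def g(x):
    return x - (x**2 - 3) / 2

# Regenerate a few iterates (the question's table is not reproduced here;
# x0 = 2.0 is an assumed starting guess).
xs = [2.0]
for _ in range(6):
    xs.append(g(xs[-1]))

# Solving (alpha - x_{n+1}) (alpha - x_{n-1}) = (alpha - x_n)**2 for alpha gives
#   alpha ≈ x_{n+1} - (x_{n+1} - x_n)**2 / (x_{n+1} - 2*x_n + x_{n-1})
x_prev, x_mid, x_last = xs[-3], xs[-2], xs[-1]
alpha = x_last - (x_last - x_mid) ** 2 / (x_last - 2 * x_mid + x_prev)

print(alpha)   # much closer to sqrt(3) = 1.7320508... than x_last is
```

The formula above is the standard algebraic rearrangement of the quadratic relation for alpha; the extrapolated value typically gains one or two extra correct digits over the raw iterate.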
The above iterations can be written symbolically as

E1: x_{n+1} = 1 + 0.5 sin(x_n)
E2: x_{n+1} = 3 + 2 sin(x_n)

for n = 0, 1, 2, ... Why does one of these iterations converge, but not the other? The graphs show similar behaviour, so why the difference? As another example, note that the Newton method

x_{n+1} = x_n - f(x_n) / f'(x_n)

is also a fixed point iteration, for the equation

x = x - f(x) / f'(x).

In general, we are interested in solving equations x = g(x) by means of fixed point iteration:

x_{n+1} = g(x_n), n = 0, 1, 2, ...

It is called 'fixed point iteration' because the root alpha is a fixed point of the function g(x), meaning that alpha is a number for which g(alpha) = alpha.
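A short script (a sketch of my own, not from the notes) runs both iterations and exposes the difference in behaviour numerically. The starting guesses x0 = 0.0 and x0 = 3.0 are my assumptions; the root 3.09438341304928 for E2 is quoted later in the notes.

```python
import math

def fixed_point(g, x0, n_steps):
    """Iterate x_{n+1} = g(x_n) and return the whole sequence of iterates."""
    xs = [x0]
    for _ in range(n_steps):
        xs.append(g(xs[-1]))
    return xs

e1 = fixed_point(lambda x: 1 + 0.5 * math.sin(x), 0.0, 30)   # E1
e2 = fixed_point(lambda x: 3 + 2 * math.sin(x), 3.0, 30)     # E2

# E1 settles down: the residual |x - g(x)| at the last iterate is tiny.
print(abs(e1[-1] - (1 + 0.5 * math.sin(e1[-1]))))

# E2 does not converge to its root alpha2: even the very first step
# roughly doubles the distance from the root.
alpha2 = 3.09438341304928
print(abs(e2[1] - alpha2) / abs(e2[0] - alpha2))
```

The factor printed for E2 is close to 2, which the derivative analysis at the end of the notes explains.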
We begin by asking whether the equation x = g(x) has a solution. For this to occur, the graphs of y = x and y = g(x) must intersect, as seen on the earlier graphs. The lemmas and theorems in the book give conditions under which we are guaranteed there is a fixed point alpha.

Lemma: Let g(x) be a continuous function on the interval [a, b], and suppose it satisfies the property

a <= x <= b implies a <= g(x) <= b. (#)

Then the equation x = g(x) has at least one solution alpha in the interval [a, b]. See the graphs for examples.

The proof of this is fairly intuitive. Look at the function

f(x) = x - g(x), a <= x <= b.

Evaluating at the endpoints,

f(a) <= 0, f(b) >= 0.

The function f(x) is continuous on [a, b], and therefore by the intermediate value theorem it has a zero in the interval; that zero is a fixed point of g.
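As a quick sanity check on the lemma, a short script (my own illustration, using E1's g on the interval [0, 2]) confirms both that g maps the interval into itself and that f(x) = x - g(x) changes sign across it:

```python
import math

g = lambda x: 1 + 0.5 * math.sin(x)    # E1 from the notes
f = lambda x: x - g(x)

a, b = 0.0, 2.0
# Property (#): g maps [0, 2] into itself, since 1 + 0.5*sin(x) lies in [0.5, 1.5].
assert all(a <= g(a + i * (b - a) / 100) <= b for i in range(101))

# f changes sign across [a, b], so x = g(x) has a root in between.
print(f(a), f(b))   # f(a) < 0 < f(b)
```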
Theorem: Assume g(x) and g'(x) exist and are continuous on the interval [a, b]; and further, assume

a <= x <= b implies a <= g(x) <= b,
lambda := max over a <= x <= b of |g'(x)| < 1.

Then:

S1. The equation x = g(x) has a unique solution alpha in [a, b].
S2. For any initial guess x_0 in [a, b], the iteration x_{n+1} = g(x_n), n = 0, 1, 2, ... will converge to alpha.
S3. |alpha - x_n| <= (lambda^n / (1 - lambda)) |x_1 - x_0|, n >= 0.
S4. lim as n -> infinity of (alpha - x_{n+1}) / (alpha - x_n) = g'(alpha).

Thus for x_n close to alpha,

alpha - x_{n+1} ≈ g'(alpha) (alpha - x_n).
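The bound S3 can be checked numerically. The sketch below (my own; the interval [1, 2] for E1's g is an assumption, chosen so that g maps it into itself and lambda = |0.5 cos(1)| ≈ 0.27 < 1) verifies the bound at every step:

```python
import math

g = lambda x: 1 + 0.5 * math.sin(x)
lam = 0.5 * math.cos(1.0)       # max of |g'(x)| = |0.5*cos(x)| on [1, 2]

# Get an accurate alpha by iterating far past convergence.
alpha = 1.5
for _ in range(100):
    alpha = g(alpha)

x0 = 1.0
x1 = g(x0)
x = x0
ok = True
for n in range(12):
    bound = lam**n / (1 - lam) * abs(x1 - x0)
    ok = ok and abs(alpha - x) <= bound + 1e-12
    x = g(x)

print(ok)   # True: the S3 bound holds at every step checked
```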
The proof is given in the text, and I go over only a portion of it here. For S2, note that from (#), if x_0 is in [a, b], then x_1 = g(x_0) is also in [a, b]. Repeat the argument to show that x_2 = g(x_1) belongs to [a, b]. This can be continued by induction to show that every x_n belongs to [a, b].

We need the following general result. For any two points w and z in [a, b], the mean value theorem gives

g(w) - g(z) = g'(c) (w - z)

for some unknown point c between w and z. Therefore,

|g(w) - g(z)| <= lambda |w - z| for all a <= w, z <= b.
For S3, subtract x_{n+1} = g(x_n) from alpha = g(alpha) to get

alpha - x_{n+1} = g(alpha) - g(x_n) = g'(c_n) (alpha - x_n), ($)
|alpha - x_{n+1}| <= lambda |alpha - x_n|, (*)

with c_n between alpha and x_n. From (*), we have that the error is guaranteed to decrease by a factor of lambda with each iteration. This leads to

|alpha - x_n| <= lambda^n |alpha - x_0|, n >= 0.

With some extra manipulation, we can obtain the error bound in S3. For S4, use ($) to write

(alpha - x_{n+1}) / (alpha - x_n) = g'(c_n).

Since x_n -> alpha and c_n is between alpha and x_n, we have g'(c_n) -> g'(alpha).
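The limit in S4 can also be observed directly. This sketch (my own, again using E1's g with an assumed starting guess x = 1.2) tracks the error ratio and compares it with g'(alpha):

```python
import math

g = lambda x: 1 + 0.5 * math.sin(x)
gp = lambda x: 0.5 * math.cos(x)

# Get an accurate alpha first.
alpha = 1.5
for _ in range(100):
    alpha = g(alpha)

x = 1.2
for _ in range(6):
    x_next = g(x)
    ratio = (alpha - x_next) / (alpha - x)
    x = x_next

print(ratio, gp(alpha))   # the error ratio approaches g'(alpha)
```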
The statement

alpha - x_{n+1} ≈ g'(alpha) (alpha - x_n)

tells us that when near to the root alpha, the errors will decrease by a constant factor of g'(alpha). If this is negative, then the errors will oscillate between positive and negative, and the iterates will approach alpha from both sides. When g'(alpha) is positive, the iterates will approach alpha from only one side. The statements

alpha - x_{n+1} = g'(c_n) (alpha - x_n),
alpha - x_{n+1} ≈ g'(alpha) (alpha - x_n)

also tell us a bit more of what happens when |g'(alpha)| > 1. Then the errors will increase rather than decrease in size, and the iteration will move away from the root rather than converge to it.
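The oscillating case can be seen with the g from the question above, where g'(alpha) = 1 - sqrt(3) ≈ -0.73 is negative. This small sketch (mine, with an assumed starting guess x0 = 2.0) records which side of alpha each iterate lands on:

```python
# g(x) = x - (x**2 - 3)/2 from the question has g'(alpha) = 1 - sqrt(3) < 0,
# so the errors alpha - x_n should alternate in sign.
alpha = 3 ** 0.5
g = lambda x: x - (x**2 - 3) / 2

x = 2.0
signs = []
for _ in range(8):
    x = g(x)
    signs.append(alpha - x > 0)

print(signs)   # strictly alternating True/False
```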
Look at the earlier examples:

E1: x = 1 + 0.5 sin(x)
E2: x = 3 + 2 sin(x)

In the first case E1,

g(x) = 1 + 0.5 sin(x), g'(x) = 0.5 cos(x), |g'(alpha)| <= 1/2.

Therefore the fixed point iteration x_{n+1} = 1 + 0.5 sin(x_n) will converge for E1.

For the second case E2,

g(x) = 3 + 2 sin(x), g'(x) = 2 cos(x),
g'(alpha) = 2 cos(3.09438341304928) ≈ -1.998.

Therefore the fixed point iteration x_{n+1} = 3 + 2 sin(x_n) will diverge for E2.
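The two derivative values can be evaluated directly (a sketch of mine; the value 3.09438341304928 is the root for E2 quoted in the notes, and the fixed point of E1 is computed by iterating):

```python
import math

# Evaluate |g'| at the fixed point for both iterations.
alpha1 = 1.5
for _ in range(100):
    alpha1 = 1 + 0.5 * math.sin(alpha1)     # converges for E1

alpha2 = 3.09438341304928                   # root for E2 quoted in the notes

d1 = abs(0.5 * math.cos(alpha1))   # E1: well below 1 -> converges
d2 = abs(2.0 * math.cos(alpha2))   # E2: about 1.998  -> diverges

print(d1, d2)
```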