SCHOOL OF MATHEMATICAL SCIENCES
MTH3320 Computational Linear Algebra
Assignment 3

Due date: Thursday 21 May, 2020, 9pm (submit the electronic copy of your assignment and the Matlab code via Moodle).

This assignment contains four questions for a total of 50 marks (equal to 7.5% of the final mark for this unit). Late submissions will be subject to late penalties (see the unit guide for full details).

The relevant output of the programming exercises is to be submitted as part of your electronic submission. You can either include the outputs as part of your pdf file (if you can edit the pdf document), or include them in a separate Word document (you need to label the figures and outputs consistently with the question numbers). In addition, collect all your Matlab m-files in one zip file and submit this file through Moodle.

1. Unitary Similarity Transformation to a Lower Triangular Matrix. (10 marks)

Use induction to show that every square matrix can be transformed to a lower triangular matrix via a unitary similarity transformation. That is, for a matrix A ∈ R^{n×n}, we can always identify a unitary matrix Q and a lower triangular matrix L such that A = QLQ^∗.

Hint: use the same procedure as in the proof of the existence of the Schur factorisation, and start from the lower-rightmost entry.

2. Convergence of the Inverse Iteration on Symmetric Matrices. (15 marks)

Consider the inverse iteration (Algorithm 6.32) applied to a real symmetric matrix A ∈ R^{n×n}. Suppose λ_K is the closest eigenvalue to µ and λ_L is the second closest, that is,

    |λ_K − µ| < |λ_L − µ| ≤ |λ_i − µ|, for each i ≠ K.

Let q_1, . . . , q_n denote the eigenvectors corresponding to the eigenvalues of A. Suppose further we have an initial vector x such that x^∗ q_K ≠ 0. Prove that, at iteration k, the vector b^{(k)} in the inverse iteration converges as

    ‖b^{(k)} − q_K‖ = O( |(λ_K − µ)/(λ_L − µ)|^k ).

You may consider taking the following steps:

(i) Supposing all the eigenvectors q_1, q_2, . . . , q_n are orthonormal, write down the eigendecomposition of (A − µI)^{−1}.

(ii) Use induction to show that the inverse iteration method produces a sequence of vectors b^{(k)}, k = 1, 2, . . . , of the form

    b^{(k)} = (A − µI)^{−k} b^{(0)} / ‖(A − µI)^{−k} b^{(0)}‖.

(iii) Use the results in (i) and (ii) to complete the rest of the proof.

3. Implementation of the steepest descent and conjugate gradient methods. (15 marks)

You are asked to implement the steepest descent and conjugate gradient methods for Ax = b in Matlab, and apply the methods to the 2D model problem (download build_laplace_2D.m or build_laplace_2D_kron.m to build the 2D Laplacian matrix).

(a) Implement the following in the m-file steepest.m:

function [x res steps]=steepest(A,x0,b,tol,maxit)
% Performs a sequence of steepest descent iterations on
% A x = b using the initial estimate x0, until the 2-norm
% of the residual is smaller than tol (relative
% to the initial residual), or until the number of iterations
% reaches maxit. 'steps' is the number of steps
% that were performed, and 'res' is a vector with the
% residual size after every iteration (the 2-norm of
% the residual vector).
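For orientation only, here is a minimal sketch of one way the steepest descent loop in part (a) could be organised; the preallocation of res, the loop layout and the form of the relative stopping test are assumptions, not part of the required specification.

function [x res steps] = steepest(A, x0, b, tol, maxit)
% Illustrative sketch only: steepest descent for a symmetric positive definite A.
x = x0;
r = b - A*x;                    % initial residual
res0 = norm(r);                 % reference value for the relative stopping test
res = zeros(maxit, 1);          % residual history (trimmed before returning)
steps = 0;
for k = 1:maxit
    Ar = A*r;
    alpha = (r'*r) / (r'*Ar);   % exact line search along the residual direction
    x = x + alpha*r;            % update the iterate
    r = r - alpha*Ar;           % update the residual without a second multiply by A
    res(k) = norm(r);           % record the 2-norm of the residual
    steps = k;
    if res(k) < tol*res0        % stop once the relative residual drops below tol
        break;
    end
end
res = res(1:steps);
end

Applied to the 2D Laplacian, steepest descent converges but slowly: the number of iterations grows roughly in proportion to the condition number of A.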
(b) Implement the following in the m-file conjGrad.m (an illustrative sketch of the conjugate gradient recurrences appears after part (e)):

function [x res steps]=conjGrad(A,x0,b,tol,maxit)
% Performs a sequence of conjugate gradient iterations
% on A x = b using the initial estimate x0, until the 2-norm
% of the residual is smaller than tol (relative
% to the initial residual), or until the number of iterations
% reaches maxit. 'steps' is the number of steps
% that were performed, and 'res' is a vector with the
% residual size after every iteration (the 2-norm of
% the residual vector).

Submit your m-files steepest.m and conjGrad.m, and provide a printout in your main assignment package.

(c) Download test_steepest_cg.m to test the correctness of (a) and (b). Report on the errors and number of steps. (The errors should be smaller than 10^{−10}.)

(d) Make a driver program CG_driver.m that applies steepest.m and conjGrad.m to Ax = b, with A being the 2D model problem matrix generated by build_laplace_2D.m or build_laplace_2D_kron.m, and b a vector of all twos. Use maxit=400 and tol=1e-6. Generate a plot in which you compare, for N = 30, the residual convergence of the steepest descent and conjugate gradient methods (starting from a zero initial guess). For each of the methods, plot the 10-log of the residuals as a function of the iteration number. Which method requires the fewest iterations to obtain an accurate solution, and is this as you expected?

(e) Provide a table in which you list, for steepest descent and conjugate gradient, how many iterations you need to reduce the residual by a factor of 1e-4, for N = 16, 32, 64. (You can use maxit = 500.) What are the total number of unknowns and the condition number for each problem size? (You can use cond(full(A)); no need to do this for N = 64 though, because it may take too long.) For each method, do you see the expected behaviour in the number of required iterations as a function of the total number of unknowns? (Explain.) Using these numerical results, briefly discuss the computational cost/complexity of each of the methods as a function of the total number of unknowns (discuss the cost per iteration, the number of iterations required, and the total cost, as a function of total problem size). Which method is best for large problem sizes?
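For orientation only, a minimal sketch of the standard (unpreconditioned) conjugate gradient recurrences referred to in part (b) is shown below; the variable names and the relative stopping test are assumptions and should be adapted to match the required header and the pseudocode from class.

function [x res steps] = conjGrad(A, x0, b, tol, maxit)
% Illustrative sketch only: unpreconditioned conjugate gradient for SPD A.
x = x0;
r = b - A*x;                  % initial residual
p = r;                        % first search direction
rho = r'*r;
res0 = norm(r);               % reference value for the relative stopping test
res = zeros(maxit, 1);        % residual history (trimmed before returning)
steps = 0;
for k = 1:maxit
    Ap = A*p;
    alpha = rho / (p'*Ap);    % step length along the current search direction
    x = x + alpha*p;          % update the iterate
    r = r - alpha*Ap;         % update the residual
    rho_new = r'*r;
    res(k) = sqrt(rho_new);   % record the 2-norm of the residual
    steps = k;
    if res(k) < tol*res0      % stop once the relative residual drops below tol
        break;
    end
    beta = rho_new / rho;     % coefficient that keeps the directions A-conjugate
    p = r + beta*p;           % new search direction
    rho = rho_new;
end
res = res(1:steps);
end

In exact arithmetic the search directions are mutually A-conjugate, which is why the iteration count of conjugate gradient typically grows like the square root of the condition number, rather than in proportion to it as for steepest descent.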
4. Implementation of the Inverse Iteration and the Rayleigh Quotient Iteration. (10 marks)

(a) Implement the inverse iteration in Matlab according to the pseudocode discussed in class. Your implementation InverseIteration.m should store the estimated eigenvalues and eigenvectors at each step, except step 0. You should use the function header given below.

function [B, lamb] = InverseIteration(A, mu, x, k)
%
% Usage: [B, lamb] = InverseIteration(A, mu, x, k) carries out k steps
% of the inverse iteration for estimating the eigenvalue of A,
% with an initial shift mu, and an initial vector x.
% It generates outputs B and lamb.
% -- B is an (n x k) matrix that stores the estimated eigenvector at
%    the i-th step as its i-th column, B(:,i)
% -- lamb is a (1 x k) vector that stores the estimated eigenvalue at
%    the i-th step as its i-th element, lamb(i)
% Note that the initial vector and initial eigenvalue estimate are
% not stored.

n = size(A, 1);     % size of the matrix
B = zeros(n, k);    % matrix B
lamb = zeros(1, k); % vector lamb

% your code

end

(b) Implement the Rayleigh quotient iteration in Matlab according to the pseudocode discussed in class. Your implementation RayleighIteration.m should store the estimated eigenvalues and eigenvectors at each step, except step 0. You should use the function header given below. (An illustrative sketch of the Rayleigh quotient iteration appears after part (e).)

function [B, lamb] = RayleighIteration(A, x, k)
%
% Usage: [B, lamb] = RayleighIteration(A, x, k) carries out k steps
% of the Rayleigh quotient iteration for estimating the eigenvalue
% of A, with an initial vector x.
% It generates outputs B and lamb.
% -- B is an (n x k) matrix that stores the estimated eigenvector at
%    the i-th step as its i-th column, B(:,i)
% -- lamb is a (1 x k) vector that stores the estimated eigenvalue at
%    the i-th step as its i-th element, lamb(i)
% Note that the initial vector and initial eigenvalue estimate are
% not stored.

n = size(A, 1);     % size of the matrix
B = zeros(n, k);    % matrix B
lamb = zeros(1, k); % vector lamb

% your code

end

(c) Check the correctness of your implementation of RayleighIteration.m as follows. Download the Matlab files flip_vec.m, test_Rayleigh.m and matrix_data_Q4.mat under Assignment 4 on Moodle. Save test_Rayleigh.m as my_test_Rayleigh.m. Then modify my_test_Rayleigh.m as the driver to call your RayleighIteration.m to solve an eigenvalue problem. The matrix and initial vector are provided (see Step 1 in test_Rayleigh.m). Comment on whether the result plot shows that your implementation produces a desirable answer.

(d) Run the same test for your implementation of InverseIteration.m using the initial shift value given in my_test_Rayleigh.m and matrix_data_Q4.mat. Modify the convergence plots in my_test_Rayleigh.m to also plot the convergence of the eigenvalue and eigenvector estimates produced by the inverse iteration on the same graph showing the convergence of the Rayleigh quotient iteration. Include a printout of the plots produced by my_test_Rayleigh.m in the main assignment package. Comment on the performance of your implementation of the Rayleigh quotient iteration, compared with the inverse iteration.

(e) You should submit the code InverseIteration.m, RayleighIteration.m and my_test_Rayleigh.m to Moodle. Also, include a printout of your code in the main assignment package.
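For orientation only, a minimal sketch of the Rayleigh quotient iteration in the shape of the header required in part (b) is shown below; the normalisation and shift-update conventions are assumptions and should be checked against the pseudocode from class. The inverse iteration of part (a) has the same structure, except that the shift stays fixed at the supplied value mu while the stored eigenvalue estimate is still taken from the Rayleigh quotient of the current vector.

function [B, lamb] = RayleighIteration(A, x, k)
% Illustrative sketch only: Rayleigh quotient iteration.
n = size(A, 1);              % size of the matrix
B = zeros(n, k);             % estimated eigenvectors, one column per step
lamb = zeros(1, k);          % estimated eigenvalues, one entry per step
b = x / norm(x);             % normalised starting vector (step 0, not stored)
mu = b' * A * b;             % initial shift: the Rayleigh quotient of x
for i = 1:k
    w = (A - mu*eye(n)) \ b; % solve the shifted linear system
    b = w / norm(w);         % normalise the new eigenvector estimate
    mu = b' * A * b;         % refresh the shift with the Rayleigh quotient
    B(:, i) = b;             % store the step-i eigenvector estimate
    lamb(i) = mu;            % store the step-i eigenvalue estimate
end
end

Because the shift is refreshed at every step, the Rayleigh quotient iteration typically converges cubically on symmetric matrices, whereas the fixed-shift inverse iteration converges only linearly, at the rate analysed in Question 2.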
