Edward Neuman
Department of Mathematics
Southern Illinois University at Carbondale
[email protected]

This tutorial is devoted to a discussion of the computational methods used in numerical linear algebra. Topics discussed include matrix multiplication, matrix transformations, numerical methods for solving systems of linear equations, linear least squares, orthogonality, the singular value decomposition, the matrix eigenvalue problem, and computations with sparse matrices.

The following MATLAB functions will be used in this tutorial.

Function   Description
abs        Absolute value
chol       Cholesky factorization
cond       Condition number
det        Determinant
diag       Diagonal matrices and diagonals of a matrix
diff       Difference and approximate derivative
eps        Floating point relative accuracy
eye        Identity matrix
fliplr     Flip matrix in left/right direction
flipud     Flip matrix in up/down direction
flops      Floating point operation count
full       Convert sparse matrix to full matrix
funm       Evaluate general matrix function
hess       Hessenberg form
hilb       Hilbert matrix
imag       Complex imaginary part
inv        Matrix inverse
length     Length of vector
lu         LU factorization
max        Largest component
min        Smallest component
norm       Matrix or vector norm
ones       Ones array
pascal     Pascal matrix
pinv       Pseudoinverse
qr         Orthogonal-triangular decomposition
rand       Uniformly distributed random numbers
randn      Normally distributed random numbers
rank       Matrix rank
real       Complex real part
repmat     Replicate and tile an array
schur      Schur decomposition
sign       Signum function
size       Size of matrix
sqrt       Square root
sum        Sum of elements
svd        Singular value decomposition
tic        Start a stopwatch timer
toc        Read the stopwatch timer
trace      Sum of diagonal entries
tril       Extract lower triangular part
triu       Extract upper triangular part
zeros      Zeros array

Matrix Multiplication

Computation of the product of two or more matrices is one of the basic operations in numerical linear algebra.
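As a point of reference for the flop counts discussed below, here is a minimal sketch (not part of the tutorial; the name prodmat is an assumption) of the textbook triple-loop product, which performs one multiplication and one addition per innermost step, i.e. roughly 2mnp flops for an m-by-n times n-by-p product:

```matlab
function C = prodmat(A, B)
% Naive matrix product C = A*B via the textbook triple loop.
% Illustrative sketch only; in practice MATLAB's built-in
% operator * should always be used.
[m,n] = size(A);
[u,p] = size(B);
if n ~= u
    error('Inner dimensions must agree')
end
C = zeros(m,p);
for i=1:m
    for j=1:p
        for k=1:n
            % One multiply and one add per pass through this line.
            C(i,j) = C(i,j) + A(i,k)*B(k,j);
        end
    end
end
```

Structured multiplication routines, such as the triangular product below, save flops by restricting the ranges of i, j, and k to the entries that can actually be nonzero.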
The number of flops needed to compute the product of two matrices A and B can be decreased drastically if a special structure of A and B is utilized properly. For instance, if both A and B are upper (lower) triangular, then the product of A and B is an upper (lower) triangular matrix.

function C = prod2t(A, B)
% Product C = A*B of two upper triangular matrices A and B.
[m,n] = size(A);
[u,v] = size(B);
if (m ~= n) | (u ~= v)
    error('Matrices must be square')
end
if n ~= u
    error('Inner dimensions must agree')
end
C = zeros(n);
for i=1:n
    for j=i:n
        C(i,j) = A(i,i:j)*B(i:j,j);
    end
end

In the following example a product of two random triangular matrices is computed using the function prod2t. The number of flops is also determined.

A = triu(randn(4));
B = triu(rand(4));
flops(0)
C = prod2t(A, B)
nflps = flops

C =
    0.4110    1.2593    0.6637    1.4261
         0    0.9076    0.6371    1.7957
         0         0    0.1149    0.0882
         0         0         0    0.0462

nflps =
    36

For comparison, using MATLAB's "general purpose" matrix multiplication operator *, the number of flops needed for computing the product of matrices A and B is

flops(0)
A*B;
flops

ans =
   128

Product of two Hessenberg matrices...
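The text is truncated just as it turns to the product of two Hessenberg matrices. As a hedged sketch of how the same idea extends (the function name prod2h and all details here are assumptions, not the tutorial's own code): if A and B are both upper Hessenberg, then A(i,k) = 0 for k < i-1 and B(k,j) = 0 for k > j+1, so the inner product for C(i,j) only needs the terms with k between max(1, i-1) and min(n, j+1), and C(i,j) itself vanishes below the second subdiagonal:

```matlab
function C = prod2h(A, B)
% Product C = A*B of two upper Hessenberg matrices A and B.
% Sketch under the assumption that both inputs are square upper
% Hessenberg matrices of the same size; prod2h is an illustrative name.
[m,n] = size(A);
[u,v] = size(B);
if (m ~= n) | (u ~= v)
    error('Matrices must be square')
end
if n ~= u
    error('Inner dimensions must agree')
end
C = zeros(n);
for i=1:n
    % Entries of C below the second subdiagonal are identically zero,
    % so j starts at max(1, i-2).
    for j=max(1,i-2):n
        k = max(1,i-1):min(n,j+1);   % the only terms that can be nonzero
        C(i,j) = A(i,k)*B(k,j);
    end
end
```

As with prod2t, the saving over the general-purpose operator * comes from never touching entries that are known in advance to be zero.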
This note was uploaded on 05/05/2011 for the course FC gj, taught by Professor Glokgh during the Spring '97 term at Punjab Engineering College.