# Slides_2010_02_17 - Applied linear algebra and numerical...

## Applied linear algebra and numerical analysis, Session 18

Prof. Ulrich Hetmaniuk, Department of Applied Mathematics, February 17, 2010

## Operation counts

- The work required to solve a problem on a computer is often measured by the number of floating point operations, or flops, needed to do the calculation. A floating point operation is a multiplication, division, addition, or subtraction on floating point numbers (real numbers as represented on the computer).

**Example (inner product).** Computing the inner product of two vectors in R^n,

x^T y = x_1 y_1 + x_2 y_2 + ··· + x_n y_n,

requires n multiplications and n − 1 additions, so 2n − 1 flops.

- Computing an inner product takes about 2 times as many operations for a vector with 1000 entries as for a vector with 500 entries, so it should take about 2 times as long.

**Example (matrix-vector product).** Compute y = Ax, where A ∈ R^{m×n} and x ∈ R^n. The i-th element of y is the inner product of the i-th row of A with x, which requires 2n − 1 flops. There are m rows, so we compute y_i for i = 1, 2, …, m, and the total work is m(2n − 1) = 2mn − m flops. When the matrix A is square (m = n), the count becomes 2n² − n.

- Computing a matrix-vector product takes about 4 times as many operations (and about 4 times as long) for a 1000 × 1000 matrix as for a 500 × 500 matrix.

**Example (matrix-matrix product).** Consider A ∈ R^{m×r} and B ∈ R^{r×n}. Computing the matrix-matrix product C = AB requires computing mn entries c_ij. Each entry c_ij is the inner product of two vectors with r components, which requires 2r − 1 flops. The total work is mn(2r − 1) = 2mnr − mn flops.

- Computing a matrix-matrix product takes about 8 times as long for a 1000 × 1000 matrix as for a 500 × 500 matrix.
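The scaling claims in the examples above can be checked directly from the flop-count formulas. The helper names below are illustrative, not from the slides; this is a minimal sketch of the counts 2n − 1, 2mn − m, and 2mnr − mn:

```python
def inner_product_flops(n):
    # n multiplications plus (n - 1) additions
    return 2 * n - 1

def matvec_flops(m, n):
    # one length-n inner product per row of A
    return m * (2 * n - 1)

def matmul_flops(m, r, n):
    # one length-r inner product per entry c_ij, mn entries in total
    return m * n * (2 * r - 1)

# Doubling the problem size scales the work by roughly 2x, 4x, and 8x:
print(inner_product_flops(1000) / inner_product_flops(500))            # ≈ 2
print(matvec_flops(1000, 1000) / matvec_flops(500, 500))               # ≈ 4
print(matmul_flops(1000, 1000, 1000) / matmul_flops(500, 500, 500))    # ≈ 8
```

The ratios are not exactly 2, 4, and 8 because of the lower-order terms (the −1 per inner product), but those become negligible as n grows, which is exactly what big-O notation captures below.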
## "Big oh" notation

**Definition.** The function W(n) is "big oh" of n^k when the ratio W(n)/n^k remains bounded as n → +∞:

W(n) = O(n^k)  ⇔  |W(n)| ≤ C n^k for all sufficiently large n.

- If the work required by some algorithm for a system of size...
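As a concrete check of the definition, take the matrix-vector count from above, W(n) = 2n² − n. The ratio W(n)/n² equals 2 − 1/n, which stays bounded (by C = 2), so W(n) = O(n²). A minimal numerical sketch:

```python
def W(n):
    # flop count for a matrix-vector product with a square n x n matrix
    return 2 * n**2 - n

# W(n) / n^2 stays bounded as n grows (it tends to 2), so W(n) = O(n^2)
for n in [10, 100, 1000, 10000]:
    print(n, W(n) / n**2)
```

The printed ratios climb toward 2 but never exceed it, illustrating that the constant C in the definition can be taken as 2 here.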

## This note was uploaded on 03/31/2010 for the course AMATH 352 taught by Professor Leveque during the Winter '07 term at University of Washington.
