TVSC_Lecture_3



The target of state-space realization for these problems is to determine efficient computations, where the efficiency of a computation is measured in terms of the number of multiplications (arithmetic cost) and in terms of the number of registers (delays), which represents the memory cost of the computation.

As discussed earlier, truly time-invariant systems are associated with matrices which

1. are infinite dimensional and
2. exhibit Toeplitz structure.

Checking Equation 8 reveals that the matrix T neither has Toeplitz structure nor is it infinite dimensional. Hence it must correspond to a linear time-varying system. However, we can still interpret the columns of T as time-varying impulse responses.

Direct Realization

We proceed to draw a direct realization for the given matrix T as a time-varying system. As a start we redraw the state-space realization for one time step k, as shown in Figure 2.

[Figure 2: Redrawing and simplification. Both signal flow graphs connect the signals u_k, x_k, x_{k+1} and y_k through the blocks A_k, B_k, C_k, D_k and a delay element Z.]

The signal flow graph on the right-hand side of Figure 2 is the basic building block of a time-varying system. Based on this building block, the system that realizes the transfer function in Equation 8 is shown in Figure 3. The signal flow graph inherently depicts causality by its unidirectional signal flow, i.e. all arrows strictly point from top to bottom and from left to right. Other properties that can be observed are that this simple system uses 6 registers and 6 (non-trivial) multipliers for the realization of our given matrix T (adders are typically not accounted for in such complexity estimates).

[Figure 3: Direct implementation of the matrix T in Equation 8, with inputs u_1, ..., u_4, outputs y_1, ..., y_4, delay elements Z holding the states x_2, x_3, x_4, and the multipliers 1/2, 1/6, 1/3, 1/24, 1/12, 1/4.]

We denote the realization matrix for an individual block at time index k by

    \Sigma_k = \begin{bmatrix} A_k & C_k \\ B_k & D_k \end{bmatrix}.    (9)

For the direct realization shown in Figure 3 we can write down the individual realization matrices Σ_k by inspection as

    \Sigma_1 = \begin{bmatrix} \cdot & \cdot \\ 1 & 1 \end{bmatrix},    (10)

    \Sigma_2 = \begin{bmatrix} 1 & 0 & 1/2 \\ 0 & 1 & 1 \end{bmatrix},    (11)

    \Sigma_3 = \begin{bmatrix} 1 & 0 & 0 & 1/6 \\ 0 & 1 & 0 & 1/3 \\ 0 & 0 & 1 & 1 \end{bmatrix},    (12)

    \Sigma_4 = \begin{bmatrix} \cdot & 1/24 \\ \cdot & 1/12 \\ \cdot & 1/4 \\ \cdot & 1 \end{bmatrix},    (13)

which we can combine to specify the time-varying realization matrix

    \Sigma = \begin{bmatrix} A & C \\ B & D \end{bmatrix},

where the corresponding block-diagonal matrices are given as

    A = \operatorname{diag}\bigl( [A_1], [A_2], [A_3], [A_4] \bigr) = \operatorname{diag}\Bigl( [\cdot],\ \begin{bmatrix} 1 & 0 \end{bmatrix},\ \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix},\ [\cdot] \Bigr),

    B = \operatorname{diag}\bigl( [B_1], [B_2], [B_3], [B_4] \bigr) = \operatorname{diag}\Bigl( [1],\ \begin{bmatrix} 0 & 1 \end{bmatrix},\ \begin{bmatrix} 0 & 0 & 1 \end{bmatrix},\ [\cdot] \Bigr),

    C = \operatorname{diag}\bigl( [C_1], [C_2], [C_3], [C_4] \bigr) = \operatorname{diag}\Bigl( [\cdot],\ [1/2],\ \begin{bmatrix} 1/6 \\ 1/3 \end{bmatrix},\ \begin{bmatrix} 1/24 \\ 1/12 \\ 1/4 \end{bmatrix} \Bigr),

    D = \operatorname{diag}\bigl( [D_1], [D_2], [D_3], [D_4] \bigr) = \operatorname{diag}\bigl( [1],\ [1],\ [1],\ [1] \bigr).

Here and in the following a '[·]' as an entry represents a zero-dimensional matrix, i.e. a matrix one of whose dimensions is zero.

Alternative Realization

Theoretically speaking, an infinite number of realizations is possible for a given transfer operator T. In practice only a few alternatives will be of interest. For example, we consider the implementation shown in Figure 4, which realizes the same transfer function as in Equation 8 but does so with 3 registers (half the number used by the straightforward realization of Figure 3) and 5 multiplications. Although the simplification might not look significant at the moment, it is worth considering t...
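To make the bookkeeping behind Figures 2 and 3 concrete, the following short Python sketch (not part of the original lecture) assembles the realization matrices of Equations (10)-(13) and pushes an input sequence through them one time step at a time. It is a minimal sketch under stated assumptions: it uses the row-vector convention [x_{k+1}, y_k] = [x_k, u_k] Σ_k suggested by the block layout in Equation (9), NumPy as the matrix library, an illustrative function name run_realization, and a test matrix T reconstructed from the multipliers of Figure 3 (which may be the transpose of the T printed in Equation 8, depending on the vector convention used there).

```python
# Minimal sketch (not from the lecture notes): run the direct realization of
# Equations (10)-(13) step by step and compare the result with an ordinary
# matrix-vector product.  Assumed convention (row vectors acting from the left):
#     [x_{k+1}  y_k] = [x_k  u_k] * Sigma_k,   Sigma_k = [[A_k, C_k], [B_k, D_k]].
import numpy as np

# Realization matrices of the direct realization in Figure 3.  Zero-dimensional
# blocks (the '[.]' entries) simply contribute no rows or columns.
Sigma = [
    np.array([[1.0, 1.0]]),                          # Sigma_1 (x_1 has dimension 0)
    np.array([[1.0, 0.0, 1 / 2],
              [0.0, 1.0, 1.0]]),                     # Sigma_2
    np.array([[1.0, 0.0, 0.0, 1 / 6],
              [0.0, 1.0, 0.0, 1 / 3],
              [0.0, 0.0, 1.0, 1.0]]),                # Sigma_3
    np.array([[1 / 24], [1 / 12], [1 / 4], [1.0]]),  # Sigma_4 (x_5 has dimension 0)
]

def run_realization(u):
    """Push the input sequence u = (u_1, ..., u_4) through the realization."""
    x = np.zeros((1, 0))                    # x_1 is zero-dimensional
    y = []
    for k, S in enumerate(Sigma):
        out = np.hstack([x, [[u[k]]]]) @ S  # [x_{k+1}  y_k] = [x_k  u_k] Sigma_k
        x, y_k = out[:, :-1], out[0, -1]    # last column is the output y_k
        y.append(y_k)
    return np.array(y)

if __name__ == "__main__":
    u = np.array([1.0, 2.0, 3.0, 4.0])      # arbitrary test input

    # Transfer matrix reconstructed from the multipliers of Figure 3 (possibly
    # the transpose of Equation 8, depending on the lecture's convention).
    T = np.array([[1.0, 1 / 2, 1 / 6, 1 / 24],
                  [0.0, 1.0,   1 / 3, 1 / 12],
                  [0.0, 0.0,   1.0,   1 / 4],
                  [0.0, 0.0,   0.0,   1.0]])

    y = run_realization(u)
    assert np.allclose(y, u @ T)            # realization reproduces y = u T
    print(y)
```

Counting the non-trivial multiplications inside the loop (the entries 1/2, 1/6, 1/3, 1/24, 1/12, 1/4) and the state entries carried from one iteration to the next (1 + 2 + 3 = 6) reproduces the cost figures quoted above: 6 multipliers and 6 registers for the direct realization.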

