Tensor Algebra and Tensor Analysis for Engineers
With Applications to Continuum Mechanics
Second edition

Mikhail Itskov

Prof. Dr.-Ing. Mikhail Itskov
Department of Continuum Mechanics
RWTH Aachen University
Eilfschornsteinstr. 18
D-52062 Aachen, Germany
[email protected]

ISBN 978-3-540-93906-1
e-ISBN 978-3-540-93907-8
DOI 10.1007/978-3-540-93907-8
Springer Dordrecht Heidelberg London New York
Library of Congress Control Number: 2009926098

© Springer-Verlag Berlin Heidelberg 2007, 2009

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law.

The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

Cover design: eStudio Calamar S.L.
Printed on acid-free paper
Springer is part of Springer Science+Business Media

To my parents

Preface to the Second Edition

This second edition is supplemented by a number of additional examples and exercises. In response to comments and questions from students using this book, the solutions of many exercises have been improved for better understanding. Some changes and enhancements concern the treatment of skew-symmetric and rotation tensors in the first chapter. In addition, the text and formulae have been thoroughly reexamined and improved where necessary.

Aachen, January 2009
Mikhail Itskov

Preface to the First Edition

Like many other textbooks, the present one is based on a lecture course given by the author for master's students at RWTH Aachen University. In spite of the somewhat difficult subject matter, those students were able to endure it and, as far as I know, are still fine. I wish the same for the reader of this book.

Although the present book can be referred to as a textbook, one finds only little plain text inside. I have tried to explain the matter briefly, nevertheless going into detail where necessary. I have also avoided tedious introductions and lengthy remarks about the significance of one topic or another. A reader interested in tensor algebra and tensor analysis who nevertheless prefers words to equations can close this book immediately after reading this preface.

The reader is assumed to be familiar with the basics of matrix algebra and continuum mechanics, and is encouraged to solve at least some of the numerous exercises accompanying every chapter. Having read many other texts on mathematics and mechanics, I was often frustrated to search in vain for solutions to the exercises that seemed most interesting to me. For this reason, all the exercises here are supplied with solutions, which amount to a substantial part of the book. Without doubt, this part facilitates a deeper understanding of the subject. As a research work, this book is open to discussion, which will certainly contribute to improving the text for further editions.
In this sense, I am very grateful for comments, suggestions and constructive criticism from the reader. I already expect such criticism, for example, with respect to the list of references, which may be far from complete. Indeed, throughout the book I quote only the sources indispensable for following the exposition and notation. For this reason, I apologize to colleagues whose valuable contributions to the subject are not cited.

Finally, a word of acknowledgment is appropriate. I would like to thank Uwe Navrath for having prepared most of the figures for the book. Further, I am grateful to Alexander Ehret, who taught me the first steps, as well as some "dirty" tricks, in LaTeX, which were absolutely necessary to bring the manuscript to a printable form. He and Tran Dinh Tuyen are also acknowledged for careful proofreading and critical comments on an earlier version of the book. My special thanks go to Springer-Verlag, and in particular to Eva Hestermann-Beyerle and Monika Lempe, for their friendly support in getting this book published.

Aachen, November 2006
Mikhail Itskov

Contents

1 Vectors and Tensors in a Finite-Dimensional Space
  1.1 Notion of the Vector Space
  1.2 Basis and Dimension of the Vector Space
  1.3 Components of a Vector, Summation Convention
  1.4 Scalar Product, Euclidean Space, Orthonormal Basis
  1.5 Dual Bases
  1.6 Second-Order Tensor as a Linear Mapping
  1.7 Tensor Product, Representation of a Tensor with Respect to a Basis
  1.8 Change of the Basis, Transformation Rules
  1.9 Special Operations with Second-Order Tensors
  1.10 Scalar Product of Second-Order Tensors
  1.11 Decompositions of Second-Order Tensors
  1.12 Tensors of Higher Orders
  Exercises

2 Vector and Tensor Analysis in Euclidean Space
  2.1 Vector- and Tensor-Valued Functions, Differential Calculus
  2.2 Coordinates in Euclidean Space, Tangent Vectors
  2.3 Coordinate Transformation. Co-, Contra- and Mixed Variant Components
  2.4 Gradient, Covariant and Contravariant Derivatives
  2.5 Christoffel Symbols, Representation of the Covariant Derivative
  2.6 Applications in Three-Dimensional Space: Divergence and Curl
  Exercises

3 Curves and Surfaces in Three-Dimensional Euclidean Space
  3.1 Curves in Three-Dimensional Euclidean Space
  3.2 Surfaces in Three-Dimensional Euclidean Space
  3.3 Application to Shell Theory
  Exercises
4 Eigenvalue Problem and Spectral Decomposition of Second-Order Tensors
  4.1 Complexification
  4.2 Eigenvalue Problem, Eigenvalues and Eigenvectors
  4.3 Characteristic Polynomial
  4.4 Spectral Decomposition and Eigenprojections
  4.5 Spectral Decomposition of Symmetric Second-Order Tensors
  4.6 Spectral Decomposition of Orthogonal and Skew-Symmetric Second-Order Tensors
  4.7 Cayley-Hamilton Theorem
  Exercises

5 Fourth-Order Tensors
  5.1 Fourth-Order Tensors as a Linear Mapping
  5.2 Tensor Products, Representation of Fourth-Order Tensors with Respect to a Basis
  5.3 Special Operations with Fourth-Order Tensors
  5.4 Super-Symmetric Fourth-Order Tensors
  5.5 Special Fourth-Order Tensors
  Exercises

6 Analysis of Tensor Functions
  6.1 Scalar-Valued Isotropic Tensor Functions
  6.2 Scalar-Valued Anisotropic Tensor Functions
  6.3 Derivatives of Scalar-Valued Tensor Functions
  6.4 Tensor-Valued Isotropic and Anisotropic Tensor Functions
  6.5 Derivatives of Tensor-Valued Tensor Functions
  6.6 Generalized Rivlin's Identities
  Exercises

7 Analytic Tensor Functions
  7.1 Introduction
  7.2 Closed-Form Representation for Analytic Tensor Functions and Their Derivatives
  7.3 Special Case: Diagonalizable Tensor Functions
  7.4 Special Case: Three-Dimensional Space
  7.5 Recurrent Calculation of Tensor Power Series and Their Derivatives
  Exercises

8 Applications to Continuum Mechanics
  8.1 Polar Decomposition of the Deformation Gradient
  8.2 Basis-Free Representations for the Stretch and Rotation Tensor
  8.3 The Derivative of the Stretch and Rotation Tensor with Respect to the Deformation Gradient
  8.4 Time Rate of Generalized Strains
  8.5 Stress Conjugate to a Generalized Strain
  8.6 Finite Plasticity Based on the Additive Decomposition of Generalized Strains
  Exercises

Solutions

References

Index

1 Vectors and Tensors in a Finite-Dimensional Space

1.1 Notion of the Vector Space

We start with the definition of the vector space over the field of real numbers R.

Definition 1.1. A vector space is a set V of elements called vectors satisfying the following axioms.

A. To every pair x and y of vectors in V there corresponds a vector x + y, called the sum of x and y, such that
(A.1) x + y = y + x (addition is commutative),
(A.2) (x + y) + z = x + (y + z) (addition is associative),
(A.3) there exists in V a unique zero vector 0 such that 0 + x = x, ∀x ∈ V,
(A.4) to every vector x in V there corresponds a unique vector −x such that x + (−x) = 0.

B. To every pair α and x, where α is a real scalar and x is a vector in V, there corresponds a vector αx, called the product of α and x, such that
(B.1) α(βx) = (αβ)x (multiplication by scalars is associative),
(B.2) 1x = x,
(B.3) α(x + y) = αx + αy (multiplication by scalars is distributive with respect to vector addition),
(B.4) (α + β)x = αx + βx (multiplication by scalars is distributive with respect to scalar addition),
∀α, β ∈ R, ∀x, y ∈ V.

Examples of vector spaces.

1) The set of all real numbers R.

2) The set of all directional arrows in two or three dimensions. Applying the usual definitions of summation, multiplication by a scalar, and the negative and zero vectors (Fig. 1.1), one can easily see that the above axioms hold for directional arrows.

[Fig. 1.1. Geometric illustration of the vector axioms in two dimensions: vector addition (x + y = y + x), the negative vector −x, the zero vector, and multiplication by a real scalar (e.g. 2x, 2.5x)]

3) The set of all n-tuples of real numbers:
$$a = \begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{pmatrix}.$$
Indeed, the axioms (A) and (B) apply to the n-tuples if one defines addition, multiplication by a scalar, and the zero tuple, respectively, by
$$a + b = \begin{pmatrix} a_1 + b_1 \\ a_2 + b_2 \\ \vdots \\ a_n + b_n \end{pmatrix}, \qquad \alpha a = \begin{pmatrix} \alpha a_1 \\ \alpha a_2 \\ \vdots \\ \alpha a_n \end{pmatrix}, \qquad \mathbf{0} = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix}.$$

4) The set of all real-valued functions defined on the real line.
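The n-tuple example lends itself to a quick numerical check. The following sketch (added here as an illustration; it is not part of the original text) uses NumPy arrays as concrete n-tuples and spot-checks the axioms (A.1)-(B.4) for randomly chosen vectors and scalars. A finite number of random trials can of course only illustrate the axioms, not prove them.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
x, y, z = rng.normal(size=(3, n))   # three random n-tuples
alpha, beta = rng.normal(size=2)    # two random real scalars
zero = np.zeros(n)

# Axioms (A.1)-(A.4) for vector addition
assert np.allclose(x + y, y + x)              # (A.1) commutativity
assert np.allclose((x + y) + z, x + (y + z))  # (A.2) associativity
assert np.allclose(zero + x, x)               # (A.3) zero vector
assert np.allclose(x + (-x), zero)            # (A.4) negative vector

# Axioms (B.1)-(B.4) for multiplication by scalars
assert np.allclose(alpha * (beta * x), (alpha * beta) * x)    # (B.1)
assert np.allclose(1.0 * x, x)                                # (B.2)
assert np.allclose(alpha * (x + y), alpha * x + alpha * y)    # (B.3)
assert np.allclose((alpha + beta) * x, alpha * x + beta * x)  # (B.4)

print("all vector space axioms hold for these samples")
```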
1.2 Basis and Dimension of the Vector Space

Definition 1.2. A set of vectors x1, x2, ..., xn is called linearly dependent if there exists a set of corresponding scalars α1, α2, ..., αn ∈ R, not all zero, such that
$$\sum_{i=1}^{n} \alpha_i x_i = 0. \tag{1.1}$$
Otherwise, the vectors x1, x2, ..., xn are called linearly independent. In this case, none of the vectors xi is the zero vector (Exercise 1.2).

Definition 1.3. The vector
$$x = \sum_{i=1}^{n} \alpha_i x_i \tag{1.2}$$
is called a linear combination of the vectors x1, x2, ..., xn, where αi ∈ R (i = 1, 2, ..., n).

Theorem 1.1. The set of n non-zero vectors x1, x2, ..., xn is linearly dependent if and only if some vector xk (2 ≤ k ≤ n) is a linear combination of the preceding ones xi (i = 1, ..., k − 1).

Proof. If the vectors x1, x2, ..., xn are linearly dependent, then
$$\sum_{i=1}^{n} \alpha_i x_i = 0,$$
where not all αi are zero. Let αk (2 ≤ k ≤ n) be the last non-zero coefficient, so that αi = 0 (i = k + 1, ..., n). Then,
$$\sum_{i=1}^{k} \alpha_i x_i = 0 \quad\Rightarrow\quad x_k = \sum_{i=1}^{k-1} \left(-\frac{\alpha_i}{\alpha_k}\right) x_i.$$
Thereby, the case k = 1 is excluded because α1 x1 = 0 would imply that x1 = 0 (Exercise 1.1). Thus, the sufficiency is proved. The necessity is evident.

Definition 1.4. A basis of a vector space V is a set G of linearly independent vectors such that every vector in V is a linear combination of elements of G. A vector space V is finite-dimensional if it has a finite basis.

Within this book we restrict our attention to finite-dimensional vector spaces. Although one can find an infinite number of bases for a finite-dimensional vector space, they all have the same number of vectors.

Theorem 1.2. All the bases of a finite-dimensional vector space V contain the same number of vectors.

Proof. Let G = {g1, g2, ..., gn} and F = {f1, f2, ..., fm} be two arbitrary bases of V with different numbers of elements, say m > n. Then, every vector in V is a linear combination of the following vectors:
$$f_1, g_1, g_2, \ldots, g_n. \tag{1.3}$$
These vectors are non-zero and linearly dependent. Thus, according to Theorem 1.1, we can find a vector gk which is a linear combination of the preceding ones. Excluding this vector we obtain the set G′ given by
$$f_1, g_1, g_2, \ldots, g_{k-1}, g_{k+1}, \ldots, g_n,$$
again with the property that every vector in V is a linear combination of the elements of G′. Now we consider the vectors
$$f_1, f_2, g_1, g_2, \ldots, g_{k-1}, g_{k+1}, \ldots, g_n$$
and repeat the excluding procedure just as before. We see that none of the vectors fi can be eliminated in this way, because they are linearly independent. As soon as all gi (i = 1, 2, ..., n) are exhausted, we conclude that the vectors f1, f2, ..., fn+1 are linearly dependent. This contradicts, however, the previous assumption that they belong to the basis F.

Definition 1.5. The dimension of a finite-dimensional vector space V is the number of elements in a basis of V.

Theorem 1.3. Every set F = {f1, f2, ..., fn} of linearly independent vectors in an n-dimensional vector space V forms a basis of V. Every set of more than n vectors is linearly dependent.

Proof. The proof of this theorem is similar to the preceding one. Let G = {g1, g2, ..., gn} be a basis of V. Then, the vectors (1.3) are linearly dependent and non-zero. Excluding a vector gk we obtain a set of vectors, say G′, with the property that every vector in V is a linear combination of the elements of G′. Repeating this procedure we finally end up with the set F with the same property. Since the vectors fi (i = 1, 2, ..., n) are linearly independent, they form a basis of V. Any further vectors in V, say fn+1, fn+2, ..., are thus linear combinations of elements of F. Hence, any set of more than n vectors is linearly dependent.
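For n-tuples, Definitions 1.2-1.5 reduce to a rank computation: the vectors are linearly independent exactly when the matrix having them as columns has full column rank, and n independent vectors form a basis of an n-dimensional space (Theorem 1.3). The following sketch (again an added illustration, not part of the original text) checks this with NumPy:

```python
import numpy as np

def linearly_independent(vectors):
    """Return True if the given n-tuples are linearly independent.

    With the vectors stacked as columns of a matrix A, the equation
    sum_i alpha_i x_i = 0 reads A @ alpha = 0; it has only the trivial
    solution alpha = 0 exactly when rank(A) equals the number of
    vectors (Definition 1.2).
    """
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == A.shape[1]

g1 = np.array([1.0, 0.0, 0.0])
g2 = np.array([1.0, 1.0, 0.0])
g3 = np.array([1.0, 1.0, 1.0])

print(linearly_independent([g1, g2, g3]))       # True: three independent
                                                # vectors form a basis of R^3
print(linearly_independent([g1, g2, g1 - g2]))  # False: the last vector is a
                                                # linear combination (Theorem 1.1)
print(linearly_independent([g1, g2, g3, g1]))   # False: more than n = 3 vectors
                                                # are always dependent (Theorem 1.3)
```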
Theorem 1.4. Every set F = {f1, f2, ..., fm} of linearly independent vectors in an n-dimensional vector space V can be extended to a basis.

Proof. If m = n, then F is already a basis according to Theorem 1.3. If m < n, then we try to find n − m vectors fm+1, fm+2, ..., fn such that all the vectors fi, that is, f1, f2, ..., fm, fm+1, ..., fn, are linearly independent and consequently form a basis. Let us assume, on the contrary, that only k < n − m such vectors can be found. In this case, for all x ∈ V there exist scalars α, α1, α2, ..., αm+k, not all zero, such that
$$\alpha x + \alpha_1 f_1 + \alpha_2 f_2 + \ldots + \alpha_{m+k} f_{m+k} = 0,$$
where α ≠ 0, since otherwise the vectors fi (i = 1, 2, ..., m + k) would be linearly dependent. Thus, all the vectors x of V are linear combinations of fi (i = 1, 2, ..., m + k). Then, the dimension of V is m + k < n, which contradicts the assumption of this theorem.

1.3 Components of a Vector, Summation Convention

Let G = {g1, g2, ..., gn} be a basis of an n-dimensional vector space V. Then,
$$x = \sum_{i=1}^{n} x^i g_i, \quad \forall x \in V. \tag{1.4}$$

Theorem 1.5. The representation (1.4) with respect to a given basis G is unique.

Proof. Let
$$x = \sum_{i=1}^{n} x^i g_i \quad\text{and}\quad x = \sum_{i=1}^{n} y^i g_i$$
be two different representations of a vector x, where not all scalar coefficients x^i and y^i (i = 1, 2, ..., n) are pairwise identical. Then,
$$0 = x + (-x) = x + (-1)x = \sum_{i=1}^{n} x^i g_i + \sum_{i=1}^{n} \left(-y^i\right) g_i = \sum_{i=1}^{n} \left(x^i - y^i\right) g_i,$$
where we use the identity −x = (−1)x (Exercise 1.1). Thus, either the numbers x^i and y^i are pairwise equal, x^i = y^i (i = 1, 2, ..., n), or the vectors gi are linearly dependent. The latter is impossible because these vectors form a basis of V.

The scalar numbers x^i (i = 1, 2, ..., n) in the representation (1.4) are called components of the vector x with respect to the basis G = {g1, g2, ..., gn}.

Summation of the form (1.4) occurs so often in tensor analysis that it is usually written without the summation symbol, in the short form
$$x = \sum_{i=1}^{n} x^i g_i = x^i g_i, \tag{1.5}$$
referred to as Einstein's summation convention. Accordingly, summation is implied if an index appears twice in a multiplicative term, once as a superscript and once as a subscript. Such a repeated index (called a dummy index) takes the values from 1 to n (the dimension of the vector space under consideration).
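Einstein's summation convention has a direct computational counterpart in NumPy's einsum, which sums over every repeated index in its subscript string (without, of course, distinguishing super- from subscripts). A minimal sketch (an added illustration, not part of the original text), expanding the representation x = x^i g_i for a concrete basis of a three-dimensional space:

```python
import numpy as np

# A basis g_1, g_2, g_3 of a three-dimensional space, stored as the rows of G
G = np.array([[1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [1.0, 1.0, 1.0]])

xi = np.array([2.0, -1.0, 3.0])  # components x^i with respect to this basis

# x = x^i g_i: the dummy index i appears twice and is summed over, as in (1.5)
x = np.einsum('i,ij->j', xi, G)

# The same sum written out explicitly, as on the left-hand side of (1.5)
assert np.allclose(x, sum(xi[i] * G[i] for i in range(3)))
print(x)  # [4. 2. 3.]
```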