Tensor Products, Wedge Products and Differential Forms

Phil Lucht
Rimrock Digital Technology, Salt Lake City, Utah 84103
last update: June 4, 2016

Maple code is available upon request. Comments and errata are welcome. The material in this document is copyrighted by the author. The graphics look ratty in Windows Adobe PDF viewers when not scaled up, but look just fine in this excellent freeware viewer: http://www.tracker-software.com/product/pdf-xchange-viewer . The table of contents has live links. Most PDF viewers provide these links as bookmarks on the left.

Overview and Summary
Notation
1. The Tensor Product
   1.1 The Tensor Product as a Quotient Space
   1.2 The Tensor Product in Category Theory
2. A Review of Tensors in Covariant Notation
   2.1 R, S and how tensors transform : Picture A
   2.2 The metric tensors g and g' and the dot product
   2.3 The basis vectors en and e^n
   2.4 The basis vectors un and u^n
   2.5 The basis vectors e'n and u'n and a summary
   2.6 How to compute a viable x' = F(x) from a set of constant basis vectors en
   2.7 Expansions of vectors onto basis vectors
   2.8 The Outer Product of Tensors and Use of ⊗
   2.9 The Inner Product (Contraction) of Tensors
       Dot products in spaces V⊗V, V⊗W, V⊗V⊗V and V⊗W⊗X
   2.10 Tensor Expansions
       (a) Rank-2 Tensor Expansion and Projection
       (b) Rank-k Tensor Expansions and Projections
   2.11 Dual Spaces and Tensor Functions
       (a) The Dual Space V* in Matrix and Dirac Notation
       (b) Functional notation
       (c) Basis vectors for the dual space V*
       (d) Rank-2 functionals and tensor functions
       (e) Rank-k functionals and tensor functions
       (f) The Covariant Transpose
       (g) Linear Dirac Space Operators
       (h) Completeness
3. Outer Products and Kronecker Products
   3.1 Outer Products Reviewed: Compatibility of Chapter 1 and Chapter 2
   3.2 Kronecker Products
4. The Wedge Product of 2 vectors built on the Tensor Product
   4.1 The tensor product of 2 vectors in V2
   4.2 The tensor product of 2 dual vectors in V*2
   4.3 The wedge product of 2 vectors in L2
   4.4 The wedge product of 2 dual vectors in Λ2
5. The Tensor Product of k vectors : the vector spaces Vk and T(V)
   5.1 Pure elements, basis elements, and dimension of Vk
   5.2 Tensor Expansion for a tensor in Vk ; the ordinary multiindex
   5.3 Rules for product of k vectors
   5.4 The Tensor Algebra T(V)
   5.5 Comments about tensors
   5.6 The Tensor Product of two or more tensors in T(V)
6. The Tensor Product of k dual vectors : the vector spaces V*k and T(V*)
   6.1 Pure elements, basis elements, and dimension of V*k
   6.2 Tensor Expansion for a tensor in V*k ; the ordinary multiindex
   6.3 Rules for product of k vectors
   6.4 The Tensor Algebra T(V*)
   6.5 Comments about Tensor Functions
   6.6 The Tensor Product of two or more tensors in T(V*)
7. The Wedge Product of k vectors : the vector spaces Lk and L(V)
   7.1 Definition of the wedge product of k vectors
   7.2 Properties of the wedge product of k vectors
   7.3 The vector space Lk and its basis
   7.4 Tensor Expansions for a tensor in Lk
   7.5 Various expansions for the wedge product of k vectors
   7.6 Number of elements in Lk compared with Vk
   7.7 Multiindex notation
   7.8 The Exterior Algebra L(V)
       Associativity of the Wedge Product
   7.9 The Wedge Product of two or more tensors in L(V)
       (a) Wedge Product of two tensors T^ and S^
       (b) Special cases of the wedge product T^^ S^
       (c) Commutativity Rule for the Wedge Product of two tensors T^ and S^
       (d) Wedge Product of three or more tensors
       (e) Commutativity Rule for product of N tensors
       (f) Theorems from Appendix C : pre-antisymmetrization makes no difference
       (g) Spivak Normalization
8. The Wedge Product of k dual vectors : the vector spaces Λk and Λ(V)
   8.1 Definition of the wedge product of k dual vectors
   8.2 Properties of the wedge product of k dual vectors
   8.3 The vector space Λk and its basis
   8.4 Tensor Expansions for a dual tensor in Λk
   8.5 Various expansions for the wedge product of k dual vectors
   8.6 Number of elements in Λk compared with V*k
   8.7 Multiindex notation
   8.8 The Exterior Algebra Λ(V)
       Associativity of the Wedge Product
   8.9 The Wedge Product of two or more dual tensors in Λ(V)
       (a) Wedge Product of two dual tensors T^ and S^
       (b) Special cases of the wedge product T^^ S^
       (c) Commutativity Rule for the Wedge Product of two dual tensors T^ and S^
       (d) Wedge Product of three or more dual tensors
       (e) Commutativity Rule for product of N dual tensors
       (f) Theorems from Appendix C : pre-antisymmetrization makes no difference
       (g) Spivak Normalization
9. The Wedge Product as a Quotient Space
   9.1 Development of Lk as Vk/S
   9.2 Development of L as T/I
10. Differential Forms
   10.1 Differential Forms Defined
   10.2 Differential Forms on Manifolds
   10.3 The exterior derivative of a differential form
   10.4 Commutation properties of differential forms
   10.5 Closed and Exact, Poincaré and the Angle Form
   10.6 Transformation Kinematics
       (a) Axis-Aligned Vectors and Tangent Base Vectors : The Kinematics Package
       (b) What happens for a non-square tall R matrix?
       (c) Some Linear Algebra for non-square matrices
       (d) Implications for the Kinematics Package
       (e) Basis vectors for the Tangent Space at point x' on M
   10.7 The Pullback Operator R and properties of the Pullback Function F*
   10.8 Alternate ways to write the pullback of a k-form
   10.9 A Change of Notation and Comparison with Sjamaar and Spivak
   10.10 Integration of functions over surfaces and curves
   10.11 Integration of differential k-forms over Surfaces
   10.12 Integration of 1-forms
   10.13 Integration of 2-forms
Appendix A: Permutation Support
   A.1 Rearrangement Theorems and Determinants
   A.2 The Alt Operator in Generic Notation
   A.3 The Sym Operator in Generic Notation
   A.4 Alt, Sym and decomposition of functions
   A.5 Application to Tensors
       (a) Alt Equations (translated from Section A.2)
       (b) Sym Equations (translated from Section A.3)
       (c) Alt, Sym and decomposition of tensors (translated from Section A.4)
   A.6 The permutation tensor ε
   A.7 The wedge-product-of-vectors Alt equation
   A.8 Application to Tensor Functions
       (a) Alt Equations (translated from Section A.2)
       (b) Sym Equations (translated from Section A.3)
       (c) Alt/Sym and Other Equations (translated from Section A.4, A.6 and A.7)
       (d) Alt/Sym when there are two sets of indices
   A.9 The Ordered Sum Theorem
   A.10 Tensor Products in Generic Notation
Appendix B: Direct Sum of Vector Spaces
Appendix C: Theorems on Pre-Symmetrization
   C.1 Theorem One
   C.2 Theorem Two
   C.3 Theorem Three
   C.4 Summary and Generalization
Appendix D: A Unified View of Tensors and Tensor Functions
   D.1 Tensor functions in Dirac notation
   D.2 Basis change matrix
   D.3 Transformations of tensors and tensor functions
   D.4 Tensor Functions and Quantum Mechanics
Appendix E: Kinematics Package with x' = F(x) changed to x = φ(t)
Appendix F: The Volume of an n-piped embedded in Rm
   F.1 Volume of a 2-piped in R3
   F.2 Volume of a 2-piped in R4, R5 and Rm
   F.3 Volume of a 3-piped in R4
   F.4 Volume of a n-piped in Rm
   F.5 Application: The differential volume element of the tangent space Tx'M
Appendix G : The det(RTR) theorem and its relation to differential forms
   G.1 Theorem: det(RTR) is the sum of the squares of the full-width minors of R
   G.2 The Connection between Theorem G.1.1 and Differential Forms
Appendix H : Hodge Star, Differential Operators, Integral Theorems and Maxwell
   H.1 Properties of the Hodge star operator in Rn
   H.2 Gradient
   H.3 Laplacian
   H.4 Divergence
   H.5 Curl
   H.6 Exercise: Maxwell's Equations in Differential Forms
References

Overview

Overview and Summary This monograph is meant as a user guide for both tensor products and wedge products. These objects are sometimes glossed over in literature that makes heavy use of them, the assumption being that everything is obvious and not worth describing too much. As we shall show, there is in fact quite a lot to be said about tensor and wedge products, and much of it is not particularly obvious. Our final chapter discusses aspects of differential k-forms which inhabit the wedge product spaces, with an emphasis on the notion of pullbacks and integration on manifolds. We attempt to include both the mathematical view and the engineering/physics view of things, but the emphasis is on the latter. The discussion is more about activities in the engine room and less about why the ship travels where it does. The study of wedge products is known as the exterior algebra and is credited to Grassmann. Maple is used as appropriate to do basic calculations. Covariant notation is used throughout. Equations which are repeats of earlier ones are shown with italic equation numbers. Here is a brief summary of our document which has ten Chapters and eight Appendices : Chapter 1 surveys the mathematician's description of the tensor product as a quotient space, and then places the tensor product in the framework of category theory. This approach is resumed much later in Chapter 9 for the wedge product, after the reader is more familiar with that object. Chapter 2 reviews tensor algebra and then introduces a meaning for the tensor product symbol ⊗ in terms of outer products of tensors. After a quick review of tensor expansions and projections, the last section introduces the notion of a dual space and includes the use of the Dirac bra-ket notation. The notion of a tensor function is introduced. Chapter 3 discusses the theory of Chapter 1 versus the practicality of Chapter 2 in terms of outer products. It then derives the Kronecker product of two matrices in covariant notation. 
This topic is somewhat tangential to the main development, but is included since it is sometimes not explained very well in the literature. Maple is used to compute a few such Kronecker products. Chapter 4 has four parts involving products of two vectors and their vector spaces: tensor product, dual tensor product, wedge product, and then dual wedge product. This chapter serves as an introduction to the four chapters which follow. Chapters 5, 6, 7, 8 continue this order of presentation for products of k vectors and then for products of any number of general tensors. The order is: tensor product (Ch 5), dual tensor product (Ch 6), wedge product (Ch 7) and then dual wedge product (Ch 8). The chapters intentionally have a high degree of parallelism, though some details are omitted from the later chapters to reduce repetition. The dual tensor chapters involve tensor functions as the closure of tensor functionals onto a general set of vectors. The tensor-product tensor functions are multilinear, whereas the wedge-product ones are multilinear and totally antisymmetric. Alternate wedge product normalizations are discussed. The reader is warned that these four chapters (especially the last two) are exceedingly tedious because there is a huge amount of detail involved in laying out these subjects. The silver lining is that all notations are heavily exercised and many examples are provided.



Chapter 9 returns to the mathematician's world giving two descriptions of the wedge product in terms of quotient spaces.

Chapter 10 presents an outline of differential k-forms and pullbacks with an emphasis on underlying transformations. The contents of Chapter 2 on covariant tensor algebra and Chapter 8 on dual wedge products (exterior algebra) come into play. Various k-form facts are derived and cataloged. Manifolds are described without rigor, leading to a discussion of the integration of both functions and k-forms over manifolds.

Notation

A special notation is used to distinguish dual space functionals like λi from vectors and their Dirac kets. The main symbols are these:

V          real vector space of dimension n
v          vector in V                                                          |v>
ui         basis vector in V                                                    |ui>
vi         vector in V                                                          |vi>
ei         tangent base vector in V                                             |ei>
e^i        dual of the above                                                    |e^i>
W          real vector space of dimension n'
e'i        basis vector in W                                                    |e'i>
w          vector in W                                                          |w>
a⊗b        tensor product of two vectors = a pure element of the vector space V2 = V⊗V
ui⊗uj      tensor product of two basis vectors = basis vector of V2 = V⊗V
a^b        wedge product of two vectors = a pure element of the vector space L2 = V^V ⊂ V2
ui^uj      wedge product of two basis vectors = basis vector of L2 = V^V ⊂ V2
T          tensor of rank k                                                     |T>
S          tensor of rank k'                                                    |S>
R          tensor of rank k"                                                    |R>
T^         general element of Lk
S^         general element of Lk'
R^         general element of Lk"

T  = Σi1i2....ik Ti1i2....ik ui1⊗ ui2 .....⊗ uik
T^ = Σi1i2....ik Ti1i2....ik (ui1^ ui2^ .....^ uik)

V*         dual space to V
α, β       vector functionals in V*

|T⊗S> = ΣI,I' TI SI' |uI>⊗|uI'> = ΣI (T⊗S)I |uI>    (5.6.5)D

|T> ∈ Vk and |S> ∈ Vk'  ⇒  |T⊗S> = |T>⊗|S> ∈ Vk+k' ⊂ T(V) .    (5.6.6)D

Operators on the tensor product space

Recall from above the following tensor product space vector,

| T1,T2....TN> = | T1⊗T2....⊗TN> = |T1> ⊗ |T2> ...⊗ |TN>    (5.6.13)D

which is an element of the tensor product space Vk1 ⊗ Vk2 ⊗...⊗ VkN. The action of a linear operator P on such a tensor product vector is defined in terms of its action in the spaces from which the tensor product is composed,

P [ |T1> ⊗ |T2> ...⊗ |TN> ] = P|T1> ⊗ P|T2> ...⊗ P|TN> .    (5.6.17)
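Equation (5.6.17) can be spot-checked in coordinates for N = 2: acting with P factor by factor is, on components, the Kronecker product P⊗P of Chapter 3, so (P⊗P)(x⊗y) = (Px)⊗(Py). Below is a minimal pure-Python sketch (the document itself uses Maple for such checks; the helper names `kron_vec` and `kron_mat` are ours, not from the text):

```python
# Check (5.6.17) in coordinates for N = 2: if P acts in each factor space,
# then on the product space it acts as the Kronecker product P⊗P, and
#   (P⊗P)(x⊗y) = (Px)⊗(Py).
# Vectors are lists, matrices are lists of rows.

def matvec(P, x):
    return [sum(P[i][j] * x[j] for j in range(len(x))) for i in range(len(P))]

def kron_vec(x, y):                 # components (x⊗y)_{ij} = x_i y_j, flattened
    return [xi * yj for xi in x for yj in y]

def kron_mat(A, B):                 # (A⊗B)_{(i,k),(j,l)} = A_ij B_kl
    n, m = len(A), len(B)
    return [[A[i][j] * B[k][l] for j in range(n) for l in range(m)]
            for i in range(n) for k in range(m)]

P = [[1, 2], [3, 4]]
x, y = [1, -1], [2, 5]
lhs = matvec(kron_mat(P, P), kron_vec(x, y))   # P acting on the product space
rhs = kron_vec(matvec(P, x), matvec(P, y))     # P acting factor by factor
assert lhs == rhs
```

The same factor-wise rule extends to N factors by iterating the Kronecker product.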


Chapter 6: Dual Tensor Products

6. The Tensor Product of k dual vectors : the vector spaces V*k and T(V*)

Every equation in Chapter 5 can be converted to an appropriate equation of Chapter 6 using this simple set of translation rules:

1. |X> → <X|
2. ui → λi
3. <T| (|v1>⊗|v2> .....⊗|vk>) = T(v1,v2.....vk) = a tensor function (a new item)    (6.1)

In general, translation of a Chapter 5 equation to Chapter 6 is most easily done if the Chapter 5 equation is first stated in Dirac notation. We could end Chapter 6 right here, allowing the reader to apply the above rules, but that seems unsportsmanlike, so we proceed with a partial mimicry of Chapter 5.

6.1 Pure elements, basis elements, and dimension of V*k

A generic pure ("decomposable") element of V*k is this tensor product of k functionals,

α1 ⊗ α2 ⊗ .....⊗ αk ,    all αi ∈ V*    (6.1.1)

For k > n, all wedge products vanish since the vectors in the wedge product are linearly dependent, see (8.2.6). The dimensionality of the space Λ(V) is as follows, based on (8.8.1) and (B.10)',

dim[Λ(V)] = dim[Λ0 ⊕ Λ1 ⊕ Λ2 ⊕ Λ3 ⊕ ....] = dim(Λ0) + dim(Λ1) + dim(Λ2) + dim(Λ3) + ...

but for dim(V*) = n this series truncates with Λn and we find from (7.3.6), writing C(n,k) for the binomial coefficient,

dim[Λ(V)] = 1 + n + C(n,2) + C(n,3) + ... + C(n,n) = Σk=0n C(n,k) = 2^n    = a finite number    (8.8.12)
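The binomial sum in (8.8.12) is easy to verify numerically. A quick stand-alone Python check (the document itself uses Maple for such calculations):

```python
from math import comb

# Check (8.8.12): the series 1 + n + C(n,2) + ... + C(n,n) sums to 2^n,
# the dimension of the full exterior algebra Λ(V) when dim(V*) = n.
for n in range(1, 9):
    dims = [comb(n, k) for k in range(n + 1)]   # dim(Λk) = C(n, k)
    assert sum(dims) == 2 ** n
```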

Recall from the discussion above (4.4.34) that the space Λ2 of rank-2 tensor functionals is isomorphic to the space Λ2f of rank-2 tensor functions, where we added a subscript f to distinguish these two vector spaces. We apply similar notation to the full space Λ(V) to obtain this tensor function version of (8.8.1),


Chapter 8: Dual Wedge Products

Λf(V) ≡ Λ0f ⊕ Λ1f ⊕ Λ2f ⊕ Λ3f ⊕ ....    // Λf(V) = Σ⊕k=0∞ Λkf(V)    (8.8.13)

where now Λf(V) is the space of all multilinear totally antisymmetric (alternating) functions of any number of vector arguments.

8.9 The Wedge Product of two or more dual tensors in Λ(V)

(a) Wedge Product of two dual tensors T^ and S^

Rather than translate the many details of this section from Chapter 7, we will skip these details and state the conclusions. The details may be obtained from Section 7.9 by making these simple replacements:

ui1 → λi1 ,    uI → λI ,    u^I → λ^I
Ti1i2....ik → Ti1i2....ik ,    TI → TI ,    T^ → T^
Si1i2....ik → Si1i2....ik ,    SI → SI ,    S^ → S^ .

In subsection (d) below on the product of three tensors, more details are provided. Here then are selected results:

Wedge product of two dual tensors:

T^^ S^ = ΣI (T⊗S)I λ^I    I ≡ I, I' = i1,i2...ik+k' ,    λ^I ≡ (λi1^ λi2 .....^ λik+k') .    (8.9.a.5)

Closure:

T^ ∈ Λk and S^ ∈ Λk'  ⇒  T^^ S^ ∈ Λk+k' ⊂ Λ(V) .    (8.9.a.6)

Basis relation:

λ^I = Alt(λI) ,  i.e.  (λi1^ λi2 .....^ λik) = Alt(λi1 ⊗ λi2 ⊗ .... ⊗ λik) .    (8.3.8)

Recall also the expansion

T⊗S = ΣI (T⊗S)I λI    I ≡ I, I' = i1,i2...ik+k' ,    λI ≡ (λi1⊗ λi2 ....⊗ λik+k') .    (5.6.5)

Then

Alt(T⊗S)(vJ) = AltJ(T⊗S)(vJ) = ΣI (T⊗S)I AltJ (λI(vJ))    // (5.6.5) and (A.5.10): Alt is linear
= ΣI (T⊗S)I AltI (λI(vJ))    // (A.8.31), λI(vJ) has factored form λi1(vj1) λi2(vj2) ...
= ΣI (T⊗S)I λ^I(vJ)    // (8.3.8)
= (T^^ S^)(vJ)    // (8.9.a.5)

so that

T^^ S^ = Alt(T⊗S) .    (8.9.a.7)

The "components" (tensor functions) are



(T^^ S^)(vJ) = Alt(T⊗S)(vJ)
= [1/(k+k')!] ΣP (-1)S(P) (T⊗S)(vP(J))
= [1/(k+k')!] ΣP (-1)S(P) T(vP(J)) S(vP(J'))    (8.9.a.8)

where

J ≡ j1, j2...jk    J' ≡ jk+1, jk+2, ....jk+k'    J ≡ J, J' = j1,j2...jk+k'    (7.9.a.4)

The above is an explicit instruction for computing the "components" of the tensor T^^ S^ . We have added this new notation,

T(vP(J)) ≡ T(vjP(1), vjP(2)... vjP(k))    for J ≡ j1, j2...jk .    (8.9.a.9)

Example: Let S and T both be rank-2 dual tensors so k = k' = 2 . Then

(T^^ S^)(vI) = (T^^ S^)(vi1,vi2,vi3,vi4) = (1/4!) ΣP (-1)S(P) T(viP(1), viP(2)) S(viP(3), viP(4))
= (1/24) [ T(vi1,vi2)S(vi3,vi4) - T(vi2,vi1)S(vi3,vi4) + T(vi2,vi3)S(vi1,vi4) + 21 more terms ]    (8.9.a.10)

Here as elsewhere we show in red the indices to be swapped to make the next term. From (8.9.c.6) below,

T^^ S^ = (-1)2*2 S^^ T^ = S^^ T^ .    (8.9.a.11)
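The 24-term recipe of (8.9.a.8) and (8.9.a.10) can be exercised numerically. The sketch below (in Python rather than the Maple used elsewhere in the document; the bilinear forms T, S and all helper names are ours) builds the signed sum with its 1/(k+k')! prefactor and confirms the result is totally antisymmetric:

```python
from itertools import permutations
from math import factorial

# Check (8.9.a.8) for k = k' = 2: build (T^∧S^)(v1,v2,v3,v4) as the 24-term
# signed sum with the 1/(k+k')! prefactor, then confirm the result is totally
# antisymmetric: swapping two arguments flips the sign.

def sgn(p):                     # parity of a permutation tuple
    inv = sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))
    return -1 if inv % 2 else 1

def bilinear(M):                # rank-2 dual tensor with component matrix M
    return lambda u, v: sum(M[i][j] * u[i] * v[j]
                            for i in range(len(u)) for j in range(len(v)))

n = 4
T = bilinear([[(i + 2 * j) % 5 for j in range(n)] for i in range(n)])
S = bilinear([[(3 * i + j * j) % 7 for j in range(n)] for i in range(n)])

def wedge_TS(v):                # v = (v1, v2, v3, v4); the sum of (8.9.a.8)
    return sum(sgn(p) * T(v[p[0]], v[p[1]]) * S(v[p[2]], v[p[3]])
               for p in permutations(range(4))) / factorial(4)

v = ([1, 0, 2, -1], [0, 3, 1, 1], [2, 1, 0, 4], [-1, 1, 1, 0])
a = wedge_TS(v)
b = wedge_TS((v[1], v[0], v[2], v[3]))   # swap the first two arguments
assert abs(a + b) < 1e-12                # antisymmetry: the sign flips
```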

(b) Special cases of the wedge product T^^ S^

Same as Section 7.9 (b) with T^ → T^ and S^ → S^ . Here are the conclusions :

T^^S^ = κ^S^ = S^^T^ = S^^κ = κS^        if T^ = κ ∈ V*0    [ V*0 = V0 ]
T^^S^ = T^^κ' = S^^T^ = κ'^T^ = κ'T^     if S^ = κ' ∈ V*0
T^^S^ = κ^κ' = S^^T^ = κ'^κ = κκ'        if T^,S^ = κ,κ' ∈ V*0    (8.9.b.3)

(c) Commutativity Rule for the Wedge Product of two dual tensors T^ and S^

Same as Section 7.9 (c) with T^ → T^, S^ → S^ and u → λ. Here are some of the translated conclusions:

(λ^J ^ λ^I) = (-1)kk' (λ^I ^ λ^J)    // dual basis vectors    (8.9.c.5)

S^^ T^ = (-1)kk' T^^ S^    // ranks of the two dual tensors are k and k' .    (8.9.c.6)
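The graded sign (-1)kk' in (8.9.c.6) can be checked by evaluating both orderings of a wedge product at one argument list. A Python sketch (the document's own calculations use Maple; `wedge` realizes T^^S^ = Alt(T⊗S) with the 1/(k+k')! prefactor of (8.9.a.8), and all names are ours):

```python
from itertools import permutations
from math import factorial

# Check the commutation rule (8.9.c.6), S^∧T^ = (-1)^{kk'} T^∧S^,
# for (k,k') = (1,1) (sign -1) and (1,2) (sign +1).

def sgn(p):
    inv = sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))
    return -1 if inv % 2 else 1

def wedge(T, k, S, kp):         # T∧S = Alt(T⊗S) as a (k+kp)-linear function
    m = k + kp
    def f(*v):
        return sum(sgn(p) * T(*(v[i] for i in p[:k])) * S(*(v[i] for i in p[k:]))
                   for p in permutations(range(m))) / factorial(m)
    return f

dot = lambda a, b: sum(x * y for x, y in zip(a, b))
lam = lambda u: dot([1, 2, 3], u)                 # a 1-form (rank-1 dual tensor)
mu  = lambda u: dot([2, -1, 5], u)                # another 1-form
B   = lambda u, v: dot(u, [v[1], v[2], v[0]])     # some bilinear form

v1, v2, v3 = [1, 0, 2], [0, 3, 1], [2, 1, 4]

# k = k' = 1: sign is (-1)^1 = -1, so the two orderings cancel
assert abs(wedge(lam, 1, mu, 1)(v1, v2) + wedge(mu, 1, lam, 1)(v1, v2)) < 1e-12
# k = 1, k' = 2: sign is (-1)^2 = +1, so the two orderings agree
assert abs(wedge(lam, 1, B, 2)(v1, v2, v3) - wedge(B, 2, lam, 1)(v1, v2, v3)) < 1e-12
```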


(d) Wedge Product of three or more dual tensors

For this section we do a full translation of Section 7.9 (d) :

T^^S^^R^ = [ΣI TI λ^I] ^ [ΣJ SJ λ^J] ^ [ΣK RK λ^K]    (a)
= ΣI,J,K TI SJ RK (λ^I) ^ (λ^J) ^ (λ^K)    (b)
= ΣI,J,K TI SJ RK (λ^I ^ λ^J ^ λ^K)    // associativity of ^ used here    (c)
= ΣI,I',I" TI SI' RI" (λ^I ^ λ^I' ^ λ^I")    // rename multiindices J→I', K→I"    (d)
= ΣI (T⊗S⊗R)I λ^I    (e)    (8.9.d.1)

where

I ≡ i1, i2...ik                            λ^I ≡ (λi1^....^ λik)
I' ≡ ik+1, ik+2, ....ik+k'                 λ^I' ≡ (λik+1^...^ λik+k')
I" ≡ ik+k'+1, ik+k'+2, ....ik+k'+k"        λ^I" ≡ (λik+k'+1^...^ λik+k'+k")
I ≡ I, I', I" = i1,i2...ik+k'+k"           λ^I ≡ (λi1^....^ λik+k'+k")

The outer product form is TISI'RI" = (T⊗S⊗R)I,I',I" = (T⊗S⊗R)I . The conclusion is this: T^^S^^R^ = ΣI (T⊗S⊗R)Iλ^I

I ≡ I, I',I" = i1,i2...ik+k'+k" , λ^I ≡ (λi1^....^ λik+k'+k") (8.9.d.2)

Since the λ^I are basis vectors in Λk+k'+k", we have shown that:

T^ ∈ Λk and S^ ∈ Λk' and R^ ∈ Λk"  ⇒  T^^S^^R^ ∈ Λk+k'+k" ⊂ Λ(V) .    (8.9.d.3)

Recalling the Chapter 6 result,

T⊗S⊗R = ΣI (T⊗S⊗R)I λI    I ≡ I, I',I" = i1,i2...ik+k'+k" ,  λI ≡ (λi1⊗....⊗ λik+k'+k")    (6.6.5)

and (8.3.8) that λ^I = Alt(λI), we find,

Alt(T⊗S⊗R) = ΣI (T⊗S⊗R)I Alt(λI)    // Alt is linear, see (7.9.d.4)
= ΣI (T⊗S⊗R)I λ^I    // (8.3.8)
= T^^S^^R^    // (8.9.d.2)

so

T^^S^^R^ = Alt(T⊗S⊗R)    (8.9.d.4)

and then


[T^^S^^R^](vI) = [Alt(T⊗S⊗R)](vI) = ΣP(-1)S(P) (T⊗S⊗R)(vP(I))    // (A.5.3)
= ΣP(-1)S(P) T(vP(I))S(vP(I'))R(vP(I"))    (8.9.d.5)

which gives instructions for how to compute the "components" of T^^S^^R^ .

Using the systematic notation outlined in (5.6.10) through (5.6.12), and generalizing the above development for the wedge product of three tensors, we find the following expansion for the wedge product of N tensors of Λ(V),

(T1)^^(T2)^^...^(TN)^ = ΣI (T1)I1(T2)I2 .... (TN)IN λ^I = ΣI (T1⊗T2....⊗TN)I λ^I    (8.9.d.6)

where
λ^I = λi1^ λi2 .....^ λik1+k2+...+kN = λi1^ λi2 .....^ λiκ
and
(T1⊗T2....⊗TN)I = (T1)I1(T2)I2 .... (TN)IN .

The rank of this product tensor is then κ = Σi=1N ki and the tensor is an element of Λκ ⊂ Λ(V). Notice that if κ > n, the wedge product (8.9.d.6) vanishes since there are then > n factors in λ^I so one or more are then duplicated,

(T1)^^(T2)^^...^(TN)^ = 0    if κ = Σi=1N ki ≥ n+1 .    (8.9.d.7)

For example, if all the tensors are the same tensor T^ of rank k, then

T^N ≡ T^^T^^...^T^ = 0    if Nk ≥ n+1 or N ≥ (n+1)/k .    (8.9.d.8)

If N ≥ (n+1), then N ≥ (n+1)/k for any k ≥ 1. Thus

T^N = 0    for any N ≥ n+1 assuming k ≠ 0 .    (8.9.d.9)

Recall (6.6.16),

T1⊗T2⊗...⊗TN = ΣI (T1I1T2I2 .... TNIN) λI = ΣI (T1⊗T2....⊗TN)I λI .    (6.6.16)

Applying Alt to both sides again with λ^I = Alt(λI) shows that, as in (8.9.d.4),

(T1)^^(T2)^^...^(TN)^ = Alt(T1⊗T2⊗...⊗TN) .    (8.9.d.10)

"Components" (the tensor function) of this tensor are computed as follows:

[(T1)^^(T2)^^...^(TN)^](vI) = [Alt(T1⊗T2⊗...⊗TN)](vI)
= ΣP(-1)S(P) (T1⊗T2⊗...⊗TN)(vP(I))    // (A.5.3)
= ΣP(-1)S(P) T1(vP(I1))T2(vP(I2)) ...TN(vP(IN))    (8.9.d.11)

where

T1(vP(I1)) ≡ T1(viP(1),viP(2)...viP(κ1))    for I1 = i1, i2...iκ1
T2(vP(I2)) ≡ T2(viP(κ1+1), viP(κ1+2)...viP(κ2))    for I2 = {iκ1+1, iκ1+2.....iκ2}
etc.    // see (5.6.10 thru 12) for details

In the Dirac notation of Section 2.11 one can write (8.9.d.10) as are independent of x, and so the λi = k+k') and Σs = Σs=1n . Consider, using (10.3.6),

α = Σ'I fI(x) λ^I    // a k-form    ⇒    dα = Σ'I Σs [∂sfI(x)] λs ^ λ^I
β = Σ'J gJ(x) λ^J    // a k'-form    ⇒    dβ = Σ'J Σs [∂sgJ(x)] λs ^ λ^J

α ^ β = ( Σ'I fI(x) λ^I) ^ (Σ'J gJ(x) λ^J) = Σ'I Σ'J fI(x)gJ(x) λ^I ^ λ^J
⇒ d(α ^ β) = Σ'I Σ'J Σs ∂s[fI(x)gJ(x)] λs ^ λ^I ^ λ^J .

Then just evaluate the right side of (10.3.27) :

(dα) ^ β = ( Σ'I Σs [∂sfI(x)] λs ^ λ^I) ^ (Σ'J gJ(x) λ^J) = Σ'I Σ'J Σs [∂sfI(x)] gJ(x) λs ^ λ^I ^ λ^J

α ^ (dβ) = (Σ'I fI(x) λ^I) ^ (Σ'J Σs [∂sgJ(x)] λs ^ λ^J)
= Σ'I Σ'J Σs fI(x)[∂sgJ(x)] λ^I ^ λs ^ λ^J
= Σ'I Σ'J Σs fI(x)[∂sgJ(x)] (-1)k λs ^ λ^I ^ λ^J .

Here λ^I ^ λs = (-1)k λs ^ λ^I because λs has to slide left through k vector wedge products. Then

(dα) ^ β + (-1)k α ^ (dβ)
= Σ'I Σ'J Σs [∂sfI(x)] gJ(x) λs ^ λ^I ^ λ^J + Σ'I Σ'J Σs fI(x)[∂sgJ(x)] λs ^ λ^I ^ λ^J
= Σ'I Σ'J Σs { [∂sfI(x)] gJ(x) + fI(x)[∂sgJ(x)] } λs ^ λ^I ^ λ^J
= Σ'I Σ'J Σs ∂s[fI(x)gJ(x)] λs ^ λ^I ^ λ^J
= d(α ^ β) .    QED

(10.3.28)

Reader Exercises:
(a) Show that d is a linear operator so d(s1α + s2β) = s1dα + s2dβ for any forms α and β .
(b) Use (10.4.1) below three times in (10.3.27) and show result is consistent with (10.3.27) for d(β ^ α).
(c) Write an expression for d(α^β^γ) where α,β,γ are forms of rank k, k' and k".
(d) Write an expression for d(α1^ α2^ ...αM) where αi are ki-forms.
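The Leibniz rule d(α ^ β) = (dα) ^ β + (-1)k α ^ (dβ) can also be verified symbolically. Below is a sketch for two 1-forms on R3 handled in component form; the component formulas use the determinant convention, and the sample component functions f, g are arbitrary choices, not from the text:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
X = (x1, x2, x3)
f = [sp.sin(x1*x2), x2**2*x3, sp.exp(x3) + x1]   # components f_i of 1-form alpha
g = [x1*x3, sp.cos(x2), x1 + x2*x3]              # components g_i of 1-form beta

def d_of_1form(c):
    # d(sum_i c_i dx_i) = sum_{i<j} (d_i c_j - d_j c_i) dx_i ^ dx_j
    return {(i, j): sp.diff(c[j], X[i]) - sp.diff(c[i], X[j])
            for i in range(3) for j in range(i + 1, 3)}

def wedge_11(a, b):
    # (alpha ^ beta)_{ij} = a_i b_j - a_j b_i  for i < j
    return {(i, j): a[i]*b[j] - a[j]*b[i]
            for i in range(3) for j in range(i + 1, 3)}

def d_of_2form(c):
    # coefficient of dx1 ^ dx2 ^ dx3 in d(2-form)
    return sp.diff(c[(1, 2)], x1) - sp.diff(c[(0, 2)], x2) + sp.diff(c[(0, 1)], x3)

def wedge_21(c, b):
    # (2-form) ^ (1-form), coefficient of dx1 ^ dx2 ^ dx3
    return c[(0, 1)]*b[2] - c[(0, 2)]*b[1] + c[(1, 2)]*b[0]

lhs = d_of_2form(wedge_11(f, g))
# d(alpha ^ beta) = (d alpha) ^ beta + (-1)^1 alpha ^ (d beta); for a 1-form
# alpha and a 2-form omega, alpha ^ omega = omega ^ alpha, so
# alpha ^ (d beta) = wedge_21(d beta, alpha).
rhs = wedge_21(d_of_1form(f), g) - wedge_21(d_of_1form(g), f)
print(sp.simplify(lhs - rhs) == 0)   # True
```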


Chapter 10: Differential Forms

10.4. Commutation properties of differential forms

Recall these three results from Chapter 8 concerning elements of Λ(V),

• S^^ T^ = (-1)kk' T^^ S^    // ranks of the two dual tensors are k and k' .    (8.9.c.6)

• In a product of tensors (T1)^^(T2)^^(T3)^.... of rank k1, k2, k3 ... , if two tensors are swapped (Tr)^ ↔ (Ts)^ (with r < s), the resulting tensor incurs the following sign relative to the starting tensor,

  sign = (-1)m    where m = (kr+1+kr+2 ...+ks-1)(kr+ks) + krks    (8.9.e.6)

• T^N = 0    for any N ≥ n+1 assuming k ≠ 0 .    (8.9.d.9)

In the language of differential forms these three results become

• α ^ β = (-1)kk' β ^ α    α = k-form, β = k'-form    (10.4.1)

• α1 ^ α2 ^ ... αr ... αs ... ^ αk = (-1)m α1 ^ α2 ^ ... αs ... αr ... ^ αk
  where m = (kr+1+kr+2 ...+ks-1)(kr+ks) + krks    (10.4.2)

• αN = 0 for N ≥ n+1    α = any k-form with k ≥ 1,  dim(V) = n,  where αN ≡ α ^ α ... ^ α .    (10.4.3)

Equations (10.4.1) and (10.4.3) appear in Sjamaar as "2.1 Proposition" and the preceding equation on page 19 . In Sjamaar, Buck and many other sources all ^ symbols are suppressed, so (10.4.1) is written αβ = (-1)kk'βα and one must understand that these are wedge products in Λ(V).

10.5. Closed and Exact, Poincaré and the Angle Form

Closed: If dα = 0 for a k-form α, α is said to be closed. The analogous fact for a function f(x) with df = 0 would be that f(x) = constant.    (10.5.1)

Exact: Sometimes one finds that a form α can be written α = dβ where β is some other form. If α is a k-form, we know from (10.3.7) that β must be a (k-1)-form. When α = dβ for some form β, α is said to be exact.    (10.5.2)

We showed in (10.3.10) that d2β = 0 for any form β, so it follows that if α = dβ, then dα = 0 and α is closed. Thus we have shown that :

Fact: If α is exact, then α is closed.    (10.5.3)

In 1D calculus if f = dh/dx one says that dh = f dx is an "exact (perfect) differential" and one then writes


∫ab f(x) dx = ∫ab (dh/dx) dx = ∫ab dh = h(b) - h(a)    where dh = (dh/dx) dx .    (10.5.4)

In nD calculus if f = ∇h one says that dh = ∇h • dx is an exact (perfect) differential. The above integral then becomes a line integral over a smooth curve C having endpoints a and b,

∫ab f(x) • dx = ∫ab ∇h • dx = ∫C dh = h(b) - h(a)

where dh = ∇h • dx = Σi=1n (∂ih(x))dxi = Σi=1n fi(x) dxi = f(x) • dx .    (10.5.5)

The line integral depends only on the line endpoints a and b, and not on the particular shape of the curve C joining a and b. For a closed curve a = b and one finds

∮C dh = h(b) - h(a) = h(a) - h(a) = 0 .    (10.5.6)

In physics if f(x) is a "conservative force field" (like gravity) then h(a) - h(a) = 0 is the work done in moving a particle that senses the field (has mass) around a closed path. A similar theorem exists for α = dg where g is a 0-form (a function) and α is a 1-form. Here we provide a preview of things to come. C' is a curve in x'-space running from point a' to point b', while C is the pulled-back curve in x-space running from a to b, where a' = F(a) and b' = F(b) :

∫C' αx' = ∫C' dg(x')    // αx' = dg so αx' is an exact 1-form (g is a function)
= ∫C F*(dg)    // pullback of a 1-form, (10.11.2) with βx = F*(dg)
= ∫C d[F*(g(x'))]    // fact (10.7.22) that d commutes with F*
= ∫C d[g(F(x))]    // fact (10.7.19) item 1 (pullback of a function) that F*(f(x')) = f(F(x))
= g(F(b)) - g(F(a))    // think of g(F(x)) as h(x) so d[g(F(x))] = dh
= g(b') - g(a') .    (10.5.7)

Then for a closed curve C' the line integral of an exact 1-form vanishes,

∮C' αx' = g(a') - g(a') = 0    (10.5.8)

in analogy with (10.5.6). A 1-form α being exact is like dh being an exact differential.

Fact (10.5.3) above says α exact ⇒ α closed. Is it possibly also true that α closed ⇒ α exact, so that the two descriptions are one and the same? The answer is "not quite" as expressed in this claim:

Poincaré Lemma: If any differential form α on Rn is closed for x in some open star-shaped domain in Rn which includes the origin, then α is exact. (Poincare for PDF search)    (10.5.9)

This Lemma appears on p 94 of Spivak from which we quote,

and Spivak proceeds to give a detailed proof. In topological language, the star-shaped domain is any domain that is "contractible to a point". Certainly the Lemma is valid for a domain which is an open "cube" or "sphere" (n dimensions) about the origin. The domain need not be convex (as the star shows). A classic application of this theorem involves the so-called angle form defined on R2 with coordinates (x1,x2),

α = Σi=12 fi(x)λi    where f1(x) = -(x2/r2),  f2(x) = (x1/r2),  r2 = x12 + x22 .    (10.5.10)

Then dα = Σi dfi(x) ^ λi = Σij (∂jfi) λj ^ λi . Notice that, using the fact that ∂ir = xi/r,

(∂1f2) = ∂1(x1/r2) = [ r2 * 1 - x1 (∂1r2)] / r4 = [ r2 - x1 2r (∂1r)] / r4 = [ r2 - x1 2r (x1/r)] / r4
= [r2 - 2x12] / r4 = [x12 + x22 - 2x12] / r4 = (x22 - x12) / r4

and

(∂2f1) = - ∂2 (x2/r2) = - [ r2 * 1 - x2 (∂2r2)] / r4 = - [ r2 - x2 2r (∂2r)] / r4 = - [r2 - x2 2r (x2/r)] / r4
= - [r2 - 2x22] / r4 = - [x12 + x22 - 2x22] / r4 = (x22 - x12) / r4 = (∂1f2) .

Thus it turns out that the quantity (∂jfi) is symmetric under i ↔ j. Then by the argument (10.3.11) we get

dα = Σij (∂jfi) λj ^ λi = Σij (Sij)(Aji) = 0    ⇒ α = closed
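Both the symmetry of (∂jfi) and the 2π loop integral quoted below from (10.12.21) can be reproduced with sympy; the unit-circle parametrization x = (cos θ, sin θ) used here is an assumed choice:

```python
import sympy as sp

x1, x2, th = sp.symbols('x1 x2 theta')
r2 = x1**2 + x2**2
f1 = -x2 / r2            # angle form components from (10.5.10)
f2 = x1 / r2

# closedness: (d1 f2) - (d2 f1) simplifies to zero
print(sp.simplify(sp.diff(f2, x1) - sp.diff(f1, x2)) == 0)   # True

# line integral of alpha around the unit circle x = (cos t, sin t)
sub = {x1: sp.cos(th), x2: sp.sin(th)}
integrand = (f1.subs(sub) * sp.diff(sp.cos(th), th)
             + f2.subs(sub) * sp.diff(sp.sin(th), th))
print(sp.integrate(sp.simplify(integrand), (th, 0, 2*sp.pi)))  # 2*pi
```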

so α is a closed 1-form. As we shall show below in (10.12.21), the line integral of α around a circle centered at the origin gives ∮ α = 2π. Thus the angle form is not exact because if it were one would have ∮ α = 0 as in (10.5.8). So here is a form α which is closed, but which is not exact. The condition of the Poincaré Lemma must therefore be violated, and that is indeed the case since the form α is undefined for r = 0 where f1 and f2 blow up, so α is then defined only on R2 punctured at the origin, sometimes written R2/{0} or R2 - {0}. Thus we can't have any open star-shaped domain including the origin for α, so Poincaré's Lemma does not apply. Note that R2 - {0} is not "simply connected" due to the puncture hole. The presence of holes ("multiply connected") means that line integrals are no longer path independent. Here a line integral around the hole gives 2π, whereas one not looping the hole gives 0.

Our plan now is first to define the "pullback" of a differential form, and then in later sections to use the pullback to define the meaning of integration of a differential form over a manifold. But we wish to show how the notion of a pullback fits into the general transformation scenario of Chapter 2, and this requires several digressions before we get to the pullback discussion in Sections 10.7 through 10.9.

10.6 Transformation Kinematics

Much mathematical hardware accompanies a mapping. In mechanics, the selection of an appropriate set of coordinates and corresponding basis vectors is sometimes referred to as stating the kinematics of a problem (as opposed to the dynamics which involves equations of motion). Here we apply this term loosely to the cloud of equations associated with a mapping. Not all these equations will be used in our analysis, but we like being able to see them all in one place just in case something is needed. In the following Sections we shall move in and out of the Dirac notation of Section 2.11 in a somewhat repetitive fashion intended to make the reader more comfortable with that notation. The notion of a pullback is often presented as "something new", but the main point of the following sections is to show that the pullback operator is just the R/R matrix/operator of the underlying transformation.
In Chapter 2 we discussed the transformation x' = F(x) from x-space to x'-space using Picture A (2.1.1). The vector transformation and "the differential" (the R-matrix) of the transformation were given by

V'a = RabVb    (2.1.12)
Rab ≡ (∂x'a/∂xb) = ∂bx'a ≡ (∇F)ab ≡ (DF)ab ≡ (DF)ab    (2.1.2)
dx'a = Rabdxb    dx' = Rdx .    (10.6.2)

Here V'a = RabVb shows the transformation of a contravariant vector under x' = F(x). In matrix notation one would write V' = RV. Repeated indices are always summed unless otherwise stated. Above we have defined ∇F and DF as alternate names for matrix R because many authors (like Spivak) use this notation. In Tensor (E.4.4) we show that this is in fact a "reverse dyadic notation". Often (DF)ab is written unbolded (DF)ab so then R = (DF) with the idea that a matrix like R is normally not bolded.

(a) Axis-Aligned Vectors and Tangent Base Vectors : The Kinematics Package

We gather here various facts derived in Chapter 2 which comprise our "kinematics package" for the transformation x' = F(x) . We cosmetically flip Picture A of (2.1.1) left to right.


(a) x' = F(x)    transformation    F: Rn → Rm with m ≥ n,  R = (DF)    (2.1.2)
    V' = R V    vector
    Rij ≡ (∂x'i/∂xj) = ∂jx'i    Sij ≡ (∂xi/∂x'j) = ∂'jxi

(b) e'i with (e'i)j = δij    axis-aligned basis vectors in x'-space (i = 1..m)    (2.5.2)
    ei = Se'i    tangent base vectors in x-space (i = 1..n)    (2.5.1)

(c) ui with (ui)j = δij    axis-aligned basis vectors in x-space (i = 1..n)
    u'i = Rui with (u'i)j = Rjk (ui)k    tangent base vectors in x'-space (i = 1..n)    (2.11.h.8)

(d) 1' = | e'i>    completeness in x'-space    completeness in x-space
    = Rij = Sji    = g'ij from (e)    // = Rij from (e)    // = Rij .

The identity operator in a Dirac space we then write as 1 for x-space and 1' for x'-space, as appear in the completeness statements of (10.6.a.1) item (d).

(b) What happens for a non-square tall R matrix?

In Chapter 2 and in Tensor it was assumed that x' = F(x) was an invertible mapping F: RN→RN . Now however we wish to consider the non-invertible mapping x' = F(x) where

F: Rn → Rm    F: x-space → x'-space    m > n,  x ∈ Rn,  x' ∈ Rm,  F(x) = x' .    (10.6.b.1)

In Rab = (∂x'a/∂xb) the row index a ranges 1 to m, while column index b ranges 1 to n. Thus the down-tilt R matrix is a "tall" non-square matrix having m rows and n columns with m > n.

As outlined in Section 10.2 and Fig (10.2.1), if we let the variable x exhaust some domain U within x-space, the mapping x' = F(x) generates a "surface" embedded within x'-space = Rm which has dimension n. We assume that the mapping F has appropriate properties so that this surface is a Manifold M. Thus, the mapping x' = F(x) is defined in effect for all x in Rn (or perhaps for a region U in Rn as in Fig (10.2.1)), and produces (as its image) the manifold M within Rm . The inverse mapping x = F-1(x') is then only defined for points x' on the manifold M. For such points, the mapping and its inverse are assumed one-to-one. This inverse mapping is a set of n equations which one can presumably write down. The equations represent x = F-1(x') only when x' lies on M. For other values of x', the set of equations still exists but no longer represents the inverse function x = F-1(x'). This point is hopefully clarified by some Examples.

Example 1: Let U be a square in R2 x-space with corners (-1,-1) to (1,1). We map this square into R3 using the following map x' = F(x):

x'1 = x1
x'2 = x2
x'3 = √(22 - (x1)2 - (x2)2)    x' = F(x)    (10.6.b.2)

The image in R3 x'-space is a partial upper hemispherical surface of radius 2 (see below). What is the inverse mapping x = F-1(x') ? One can take it to be the first two lines above,

x1 = x'1
x2 = x'2    x = F-1(x')    (10.6.b.3)

but the inverse mapping only applies to points x' on the hemisphere. The above two equations of course exist for points x' not on the hemisphere, but they only act as the inverse mapping for points on the hemisphere. Here is Maple code for Example 1. The transformation is first entered and plotted, xp = x' :

Maple then computes the "tall" R matrix, Rij ≡ (∂x'i/∂xj). The S matrix Sij = (∂xi/∂x'j) is computed by hand from (10.6.b.3) and is then entered into Maple. Maple then computes the matrix products RS and SR,

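The Maple products can be cross-checked independently. A sympy sketch, taking the radius-2 hemisphere map from (10.6.b.2) and the hand-computed S matrix from (10.6.b.3):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
F = sp.Matrix([x1, x2, sp.sqrt(2**2 - x1**2 - x2**2)])   # hemisphere map (10.6.b.2)
R = F.jacobian([x1, x2])                                  # the tall 3x2 R matrix
S = sp.Matrix([[1, 0, 0], [0, 1, 0]])                     # S from (10.6.b.3)

print(sp.simplify(S * R) == sp.eye(2))    # True:  SR = 1
print(sp.simplify(R * S) == sp.eye(3))    # False: RS != 1
```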

Notice that RS ≠ 1 while SR = 1.

Example 2: Let U be the same square as in Example 1, but the new mapping is this

x'1 = x1 + 2x2
x'2 = 2x1 + x2
x'3 = x1 + 3x2    x' = F(x) .    (10.6.b.4)

The image in R3 x'-space is a tilted plane passing through the origin. We reuse the above Maple code for this example, but don't display the Maple output. What is the inverse mapping x = F-1(x') ? If one solves the first two equations for x1 and x2 the result is

x1 = -1/3 x'1 + 2/3 x'2
x2 = 2/3 x'1 - 1/3 x'2    x = F-1(x')    (10.6.b.5)

and this then can be taken to be the inverse mapping x = F-1(x'). Inserting these expressions into the third equation gives

5/3 x'1 - 1/3 x'2 - x'3 = 0    (10.6.b.6)

which is the equation of the tilted image plane passing through the origin whose normal is (5/3,-1/3,-1). On the other hand, if one instead solves the second two equations in (10.6.b.4) one finds

x1 = 3/5 x'2 - 1/5 x'3
x2 = -1/5 x'2 + 2/5 x'3    x = F-1(x') .    (10.6.b.7)

Notice that this inverse mapping is different from (10.6.b.5). When these two expressions are inserted into the first equation of (10.6.b.4), one gets

x'1 - 1/5 x'2 - 3/5 x'3 = 0    (10.6.b.8)

Multiplication by 5/3 gives (10.6.b.6) so this is, of course, the equation for the same tilted plane. In this Example we find that the inverse equation set x = F-1(x') is not unique. If we work with the first and third equations in (10.6.b.4) we get a third set of inverse equations which we leave to the reader. By visual inspection, the R matrix computed from x' = F(x) (10.6.b.4) is this:

R = Rab = (∂x'a/∂xb) = ⎛ 1 2 ⎞
                       ⎜ 2 1 ⎟
                       ⎝ 1 3 ⎠    (10.6.b.9)

and is the "tall" R matrix for this example. For the two inverse transformations stated in (10.6.b.5) and (10.6.b.7) we compute an S matrix, again by inspection (Maple did the products on the right),

S = Sab = (∂xa/∂x'b) = ⎛ -1/3  2/3  0 ⎞        SR = ⎛ 1 0 ⎞
                       ⎝  2/3 -1/3  0 ⎠             ⎝ 0 1 ⎠

S = Sab = (∂xa/∂x'b) = ⎛ 0  3/5 -1/5 ⎞         SR = ⎛ 1 0 ⎞
                       ⎝ 0 -1/5  2/5 ⎠              ⎝ 0 1 ⎠    (10.6.b.10)

Thus we have found two different "left inverses" S of the tall matrix R. If we try out these S matrices on the right of R, we find

RS = ⎛ 1 2 ⎞ ⎛ -1/3  2/3  0 ⎞ = ⎛  1    0   0 ⎞    ≠ ⎛ 1 0 0 ⎞
     ⎜ 2 1 ⎟ ⎝  2/3 -1/3  0 ⎠   ⎜  0    1   0 ⎟      ⎜ 0 1 0 ⎟
     ⎝ 1 3 ⎠                    ⎝ 5/3 -1/3  0 ⎠      ⎝ 0 0 1 ⎠

RS = ⎛ 1 2 ⎞ ⎛ 0  3/5 -1/5 ⎞ = ⎛ 0 1/5 3/5 ⎞       ≠ ⎛ 1 0 0 ⎞
     ⎜ 2 1 ⎟ ⎝ 0 -1/5  2/5 ⎠   ⎜ 0  1   0  ⎟         ⎜ 0 1 0 ⎟
     ⎝ 1 3 ⎠                   ⎝ 0  0   1  ⎠         ⎝ 0 0 1 ⎠    (10.6.b.11)
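These products can be confirmed numerically. A numpy sketch using R from (10.6.b.9) and the two S matrices read off from (10.6.b.5) and (10.6.b.7):

```python
import numpy as np

R = np.array([[1., 2.], [2., 1.], [1., 3.]])               # (10.6.b.9)
S1 = np.array([[-1/3, 2/3, 0.], [2/3, -1/3, 0.]])          # from (10.6.b.5)
S2 = np.array([[0., 3/5, -1/5], [0., -1/5, 2/5]])          # from (10.6.b.7)

for S in (S1, S2):
    print(np.allclose(S @ R, np.eye(2)))      # True: each S is a left inverse
    print(np.allclose(R @ S, np.eye(3)))      # False: neither is a right inverse
```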

Example 2 serves then to illustrate that a tall R matrix might have multiple left inverses, but those left inverses are not also right inverses. It turns out that there are in fact no right inverses for a tall R, as shown in section (c) below. Before leaving this example, we comment on the "coordinate lines" in x-space using our first inverse solution (10.6.b.5),

x1 = -1/3 x'1 + 2/3 x'2
x2 = 2/3 x'1 - 1/3 x'2    x = F-1(x')    (10.6.b.5)

If we vary only x'1 (keeping the other two coordinates in x'-space fixed) both x1 and x2 vary, and not surprisingly they define a certain line in x-space, and this is the coordinate line in x-space for x'1 . If we instead vary only x'2, again both x1 and x2 vary and they define some other line in x-space, the x'2 coordinate line. If we vary only x'3 , then x1 and x2 do not vary and this coordinate line is just a point! Recall that the tangent base vectors en are tangent to the coordinate lines in x-space. As shown in (10.6.a.1) (e) one has (ej)i = Sij so the tangent base vectors are the columns of S, S = [e1, e2, e3].

Looking at S = ⎛ -1/3  2/3  0 ⎞ for our first inverse solution, we see that the first two tangent base
               ⎝  2/3 -1/3  0 ⎠

vectors are indeed reasonable tangents to coordinate lines in x-space. Since the third coordinate line is just a point, it can have no tangent base vector, and in fact e3 = (0,0) which "resolves" this problem.

(c) Some Linear Algebra for non-square matrices

The linear algebra for non-square matrices is a topic often omitted in linear algebra presentations. Here we consider only the special case of two matrices where each has the shape of the transpose of the other, and we cherry-pick a few relevant theorems. As shown below, non-square matrices never have two-sided inverses, so one talks only about the possibility of such a matrix having a "right inverse" or a "left inverse". Consider then the following matrix products where we assume m > n :

(10.6.c.1)

A nameless matrix rank theorem states the following :

Fact: rank(AB) ≤ min{rank(A), rank(B)} .    (10.6.c.2)

Consider first the upper part of Fig (10.6.c.1). S and R each have some rank ≤ n, since n is the smaller matrix dimension. The Fact then says rank(SR) ≤ n. Since SR is an n x n matrix, it could therefore have

full rank n, and then it is possible that one could have SR = 1. This says that it is possible for R to have a left inverse S, and for S to have a right inverse R. Another nameless theorem states that if R has full rank n then in fact it has at least one left inverse S, and if S is full rank it has at least one right inverse R. The theorem does not say how to compute these inverses, nor does it suggest how many inverses there might be (a non-trivial problem). So,

Fact:

tall R has full rank  ⇒  R has at least one left inverse S
wide S has full rank  ⇒  S has at least one right inverse R    (10.6.c.3)

In our Example 2 above, matrix R in (10.6.b.9) has full rank 2, so we know it has at least one left inverse S. We explicitly found two such left inverses S as shown in (10.6.b.10). Since each of these left inverses has R as a right inverse, we know (and confirm) that each S must have full rank 2. Thus, we know (and confirm) that two of the tangent base vectors en are linearly independent (these being columns of S). Now consider the lower part of Fig. (10.6.c.1). Fact (10.6.c.2) says rank(RS) ≤ n, but the matrix RS is m x m. Thus it cannot possibly have full rank m, so it can never be the m x m identity matrix (which would have rank m). We may then conclude that R has no right inverses and S has no left inverses: Fact:

tall R has no right inverses
wide S has no left inverses    (10.6.c.4)

Corollary: A non-square matrix cannot have a two-sided inverse.    (10.6.c.5)

If we take S = RT, then the two matrices on the right in the drawing are RTR and RRT. Yet another matrix rank theorem says,

Fact: rank(RRT) = rank(RTR) = rank(R) .    (10.6.c.6)

If R has full rank n, then the small matrix RTR has rank n and so is full rank, det(RTR) ≠ 0, and RTR is invertible. But the m x m larger matrix RRT having rank n must have det(RRT) = 0 and is not invertible.

Fact: If tall R has full rank n, then (RTR)-1 exists. For any tall R, (RRT)-1 does not exist.    (10.6.c.7)

With this in mind, another theorem says that if tall R is full rank, then we know one of its left inverses:

Fact: If tall R has full rank n, then one left inverse is given by S = (RTR)-1RT .    (10.6.c.8)

Proof: By the previous fact we know (RTR)-1 exists, so SR = [(RTR)-1RT]R = (RTR)-1(RTR) = 1 .

Fact: If wide S has full rank n, then one right inverse is given by R = ST(SST)-1 .    (10.6.c.8)

Proof: Reader exercise.
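For the Example 2 matrix R this left inverse is easy to compute numerically; note that for a full-rank tall matrix, (RTR)-1RT is exactly what numpy's Moore-Penrose pseudoinverse returns. A sketch:

```python
import numpy as np

R = np.array([[1., 2.], [2., 1.], [1., 3.]])    # full-rank tall R from Example 2
S = np.linalg.inv(R.T @ R) @ R.T                # the left inverse of (10.6.c.8)
print(np.allclose(S @ R, np.eye(2)))            # True
# for full-rank tall R this coincides with the Moore-Penrose pseudoinverse:
print(np.allclose(S, np.linalg.pinv(R)))        # True
```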

We mention in passing two other matrix theorems for arbitrary conforming matrices A,B,C:

Fact: (Sylvester's Inequality) rank(A) + rank(B) ≤ rank(AB) + n where n is the conforming dimension    (10.6.c.9)

Fact: (Frobenius Inequality) rank(AB) + rank(BC) ≤ rank(ABC) + rank(B)    (10.6.c.10)

(d) Implications for the Kinematics Package

The set of relations shown in (10.6.a.1) still stands for F: Rn → Rm with its tall R matrix, with the exception of the last item (i),

(i) S = R-1    R = S-1    RS = SR = 1    RRT = RTR = SST = STS = 1 .    (10.6.a.1)

This must be replaced by

(i) SR = 1    SST = RTR = 1    (10.6.d.1)

since RS ≠ 1 and two-sided inverses R-1 and S-1 do not exist for F: Rn → Rm with m > n.

A second implication is that certain items in the kinematics package are no longer unique. We have already seen that Sij is not unique, so anything depending on Sij is also not unique. Here is a list showing which objects are unique, and which are not:

Metric tensors
gij, gij    unique
g'ij    unique, since g'ij = RiaRjbgab
g'ij    not unique, since g'ij = RiaRjbgab = SaiSbjgab and Sij not unique

Transformation matrices
Rij = Sji    unique (tall R matrix from x' = F(x))
Rij = Sji    unique since Rij = gjaRia and both gja and Ria are unique
Rij = Sji    not unique
Rji = Sij    not unique, since Rij = g'ia Raj and g'ia not unique

Axis-aligned basis vectors
(uj)i    unique since (uj)i = gji
(uj)i    unique since (uj)i = gji
(uj)i    unique since (uj)i = gji
(uj)i    unique since (uj)i = gji

(e'j)i    unique since (e'j)i = g'ij (= δij)
(e'j)i    not unique since (e'j)i = g'ij
(e'j)i    unique since (e'j)i = g'ij
(e'j)i    unique since (e'j)i = g'ij (= δij)

Tangent base vectors
(ej)i    not unique since (ej)i = Rji
(ej)i    not unique since (ej)i = Rji
(ej)i    unique since (ej)i = Rji
(ej)i    unique since (ej)i = Rji

(u'j)i    unique since (u'j)i = Rij
(u'j)i    not unique since (u'j)i = Rij
(u'j)i    unique since (u'j)i = Rij
(u'j)i    not unique since (u'j)i = Rij    (10.6.d.2)

(e) Basis vectors for the Tangent Space at point x' on M

From (10.6.a.1) we select as a basis for x-space the set of n axis-aligned basis vectors ui,

{ui}    i = 1,2...n    basis for x-space
(ui)j = δij    components of these basis vectors in x-space .    (10.6.e.1)

These map into a set of n tangent base vectors u'i in x'-space,

u'i = R ui    or    |u'i> = R |ui>    i = 1,2...n    (2.5.1)
(u'i)j = Rja (ui)a = Rja δia = Rji    i = 1,2...n,  j = 1,2..m .    (10.6.e.2)

We know that u'i = R ui because this is the way any vector transforms: v' = R v. Since there are m basis vectors in x'-space, we define the rest of the u'i arbitrarily such that the m basis vectors {u'i} in Rm are linearly independent, so

u'i = as needed    i = n+1, n+2 .....m .    (10.6.e.3)

Note in (10.6.e.2) that (u'i)j = Rja(ui)a = Σa=1n Rja(ui)a is a "component sum equation", in contrast with the "vector sum equation" ei = Σj=1nRijuj appearing in (10.6.a.2). To summarize for u'i :

u'i = ⎧ R ui         i = 1 through n
      ⎨
      ⎩ as needed    i = n+1 through m    (10.6.e.4)

We show just below that the first n u'i span the tangent space Tx'M. Since the remaining u'i must be selected so that the full set of m u'i is a basis for x'-space, we know that the higher m-n u'i must span the perp space (Tx'M)⊥ of the tangent space, and this space is said to have codimension m-n within Rm. Based on (10.6.e.2) that Rji = (u'i)j, one concludes that the columns of R** are the contravariant basis vectors u'i which span Tx'M. Each of these u'i has m components and R** has m rows.

R** = [u'1, u'2 ....u'n] .    (2.5.9)    (10.6.e.5)

As long as R** has full rank n, the columns are linearly independent so the u'i form a (complete) basis.
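For Example 1 this can be checked concretely: the columns of R must be orthogonal to the radial vector x' = F(x), as tangent vectors to a sphere centered at the origin have to be. A sympy sketch, assuming the radius-2 hemisphere map of (10.6.b.2):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
F = sp.Matrix([x1, x2, sp.sqrt(4 - x1**2 - x2**2)])  # hemisphere map of Example 1
R = F.jacobian([x1, x2])                              # columns = u'_1, u'_2

# each tangent base vector u'_i is orthogonal to the radial vector x' = F(x):
for i in range(2):
    print(sp.simplify(F.dot(R[:, i])) == 0)   # True, True
```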

We now show that the first n tangent base vectors u'i do in fact span the tangent space Tx'M. Assume that, as x ranges over some portion of x-space, the mapping x' = F(x) describes a "smooth surface" M embedded in x'-space, hopefully a manifold or a piece thereof. If we start at some x and move to x + dx in x-space, we move from some point x' on M to some nearby point x' + dx' on M. By the definition of M, this dx' lies on the surface M and so is tangent to the surface M at x' and thus lies in the tangent space Tx'M of M at point x'. Applying R to each of the n axis-aligned differentials dxi = dxi(ui) in x-space (no i sum), we thereby generate a set of n differential vectors dx'i = Rdxi in x'-space which are in effect a set of short basis vectors which span the tangent space Tx'M. Since dx'i = dx'i(u'i), we may take the basis vectors {u'i, i=1,2..n} as spanning Tx'M. The upper u'i are orthogonal to M and span the perp space (Tx'M)⊥ as noted.

We know from the fact u'i • u'j = δij that the up-label (dual) vectors {u'i, i=1,2..n} also form a basis for the tangent space Tx'M. This conclusion can be reached as well by raising all i indices in the previous paragraph. In this case, the set {u'i, i=n+1,n+2..m} are then all orthogonal to the "surface" M. These last paragraphs and (10.6.e.5) have shown that:

Fact: The first n x'-space tangent base vectors u'i, which are the columns of full-rank R**, span the tangent space Tx'M at point x' on M, and this is true as well for the u'i .    (10.6.e.6)

10.7 The Pullback Operator R and properties of the Pullback Function F*

The Pullback Operator R

From (10.6.e.2), or just from the fact that vectors transform as v' = Rv, we know that

u'i = Rui    |u'i> = R |ui>    i = 1,2..n .    (2.5.1)    (10.7.1)

One can say that the n axis-aligned basis vectors ui in x-space are "pushed forward" by R to become the tangent-space-spanning vectors u'i in x'-space. Applying S to both sides and using (10.6.d.1) that SR = 1, one finds that

ui = S u'i    |ui> = S |u'i>    i = 1,2..n .    (10.7.2)

From the package (10.6.a.1) item (h) we know that S = RT and S = RT for the corresponding Dirac operators, so the above may be written,

ui = RT u'i    |ui> = RT |u'i>    i = 1,2..n .    (10.7.3)

Thus, while operator R "pushes forward" the |ui> to the |u'i>, the operator RT "pulls back" the |u'i> from x'-space into the |ui> in x-space, just reversing the first process. For the label-up u and e basis vectors one then has,

u'i = Rui    |u'i> = R |ui>    i = 1,2..n    push forward
ui = RT u'i    |ui> = RT |u'i>    i = 1,2..n    pull back

e'i = Rei    |e'i> = R |ei>    i = 1,2..n    push forward
ei = RT e'i    |ei> = RT |e'i>    i = 1,2..n    pull back .    (10.7.4)

Here is a picture, reminiscent of Fig (2.5.4) (but reversed left to right), showing the above activity just for the u1 and u'1 basis vectors,

(10.7.5)

In the dual space of bras (linear functionals) (10.7.4) becomes, according to (2.11.g.10),

(u'i)T = (ui)T RT    (ui)T = (u'i)T R

= Rij |uj>

= < αx'|R [ | v1> ⊗ | v2> ..... ⊗ | vk> ]    // definition of |v1,v2....vk>
= < αx'|[ | Rv1> ⊗ | Rv2> ..... ⊗ | Rvk> ]    // (5.6.17)
= < αx'| Rv1,Rv2....Rvk> = αx'(Rv1,Rv2....Rvk) .    (10.7.16)

The object αx'(Rv1,Rv2....Rvk) is a rank-k tensor function in Λ'kf(Rm) : the functional αx' lies in Λ'k(Rm) while the k vector arguments v'i = Rvi all lie in Rm . In contrast, the object [αx'R](v1,v2....vk) is a rank-k tensor function in Λkf(Rn): the functional [αx'R] lies in Λk(Rn) while the k vector arguments vi all lie in Rn. The functional [αx'R] is the pullback of the functional αx'. Equation (10.7.16) says that the pulled-back tensor function [αx'R] in Λkf when evaluated at arguments (v1,v2....vk) is equal in value to the unpulled-back tensor function αx' in Λ'kf evaluated at arguments (Rv1,Rv2....Rvk). These tensor functions are the objects that Spivak [1965] uses and he refers to them as k-tensors. Presentations which use only tensor functions regard (10.7.16) as the definition of a pullback [αx'R] of a differential k-form αx'.

The Pullback Function F*

The notation used above with the Dirac operator R acting to the left on a dual space vector is a bit clumsy, so one defines the following pullback function where 0 : F*( s1α' + s2β') = = = (1/A') ∫S'dA' B(x') • ^n' .

(10.10.10)
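The pullback rule (10.7.16) is straightforward to exercise numerically for a 2-form under a tall-R linear map: the pulled-back function [αx'R](v1,v2) = αx'(Rv1,Rv2) is again an antisymmetric bilinear function, with matrix RTAR. A sketch; the random antisymmetric matrix A representing αx' is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 3, 2
R = rng.random((m, n))                       # tall R of a map F: R^2 -> R^3
A = rng.random((m, m)); A = A - A.T          # alpha(u, w) = u @ A @ w, 2-form on R^3

def alpha(u, w):           # the x'-space 2-form (antisymmetric bilinear function)
    return u @ A @ w

def pullback(v1, v2):      # (10.7.16): [alpha R](v1, v2) = alpha(R v1, R v2)
    return alpha(R @ v1, R @ v2)

v1, v2 = rng.random(n), rng.random(n)
print(np.isclose(pullback(v1, v2), v1 @ (R.T @ A @ R) @ v2))   # True
print(np.isclose(pullback(v2, v1), -pullback(v1, v2)))         # True
```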

In spherical coordinates, A' = 4πR2, ^ n' = ^ r and dA' = R2sinθdθdφ. This area measure can be deduced by looking at a picture of spherical coordinates where dA' = (Rdθ)(Rsinθdφ) is a surface patch. Then, = (1/4π)

∫0





∫0

= (1/L') ∫C'ds' Bt'(x') = (1/L') ∫C'ds' B(x') • ^t' ,    where ds' = dx' • ^t' ,  ds' ^t' = dx' ,
= (1/L') ∫C'dx' • B(x') ,    (10.10.27)

so

= (1/L') ∫C'dx' • B(x') .    (10.10.30)


The ring is assumed centered at the origin of the x',y' plane so we use cylindrical coordinates with z' = 0, which then is just polar coordinates, so ^t ' = ^ θ , dx' = Rdθ ^ θ , ds' = dx' • ^t ' = Rdθ, and L' = 2πR. Then, = (1/2π)

∫0



in many equivalent ways),




= (1/L') ∫C'ds' T(x') = (1/L') ∫0a dx1 K(x) T(F(x))

| μ > ≡ Σ'J | dxJ> = Σ'J | dxj1> ⊗ | dxj2> ⊗ ... ⊗ | dxjk> = Σ'J | dxj1, dxj2 ... dxjk >    (10.11.11)

where the differential vectors are aligned with the axes of x-space Rn,

dxj ≡ dxj uj    or    | dxj> = dxj | uj> .    // no sum on j    (10.11.12)

Here dxj is a vector in Rn and | dxJ> is a vector in (Rn)k, called Vk in Chapter 5. Thus,

| μ > = Σ'J dxj1dxj2 ... dxjk | uj1, uj2 ... ujk > = Σ'J dxJ | uJ >   // multiindex notation, dxJ ≡ dxj1dxj2 ... dxjk

(10.11.13)

For example, for k = 1,2,3 in Rn = R3 the vector | μ> would be

| μ > = | dx1> + | dx2> + | dx3>   // k = 1
| μ > = | dx1, dx2> + | dx1, dx3> + | dx2, dx3>   // k = 2
| μ > = | dx1, dx2, dx3> .   // k = 3

(10.11.14)
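The ordered sums Σ'J above run over strictly increasing multi-indices J = (j1 < j2 < ... < jk). As a small illustrative sketch (the function name is ours, not the text's), such index sets can be enumerated with the standard library:

```python
from itertools import combinations

def ordered_multiindices(n, k):
    """All strictly increasing multi-indices J = (j1 < j2 < ... < jk) drawn
    from {1, ..., n} -- the index set of the primed sum Sigma'_J."""
    return list(combinations(range(1, n + 1), k))

# the n = 3, k = 2 case matches | mu > = |dx1,dx2> + |dx1,dx3> + |dx2,dx3>
pairs = ordered_multiindices(3, 2)
```

There are C(n,k) such multi-indices, e.g. C(3,2) = 3 terms in the k = 2 line above.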

We then define "the integral of the k-form αx' in x'-space" as follows,

" ∫S' αx' " ≡ < ∫S' αx' | R | μ > = < ∫S βx | μ > = ∫S < βx | μ >

(10.11.15)

where we end up with the integral of a certain tensor function over S. Next, write


∫S < βx | μ > = ∫S Σ'I fI(F(x)) ΣM RIM < λ^M | μ >   // (10.11.10)

   = ∫S Σ'I fI(F(x)) ΣM RIM Σ'J < λ^M | dxJ >   // (10.11.11)

   = ∫S Σ'I fI(F(x)) ΣM RIM Σ'J dxJ < λ^M | uJ > .   // (10.11.13)

(10.11.16)

In our Chapter 8 normalization for wedge products, we write

< λ^M | uJ > = (λm1 ^ λm2 ^ ... ^ λmk)(uj1, uj2, ... ujk) = (1/k!) det(δMJ) .   // (2.11.c.2), λi ≡ < ui |

Inserting this into (10.11.16) then gives

∫S < βx | μ > = ∫S Σ'I fI(F(x)) Σ'J dxJ [ΣM RIM det(δMJ)]   (10.11.19)

where we have shifted the M sum to the right. Now consider,

ΣM RIM [det(δMJ)] = ΣM RIM [ΣP (-1)S(P) δM,P(J)]   // (A.1.21)

   = ΣP (-1)S(P) ΣM RIM δM,P(J)   // reorder

   = ΣP (-1)S(P) RI,P(J)   // k matrix multiplications

   = det(RIJ) .   // (A.1.21)

(10.11.20)
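The identity (10.11.20), ΣM RIM det(δMJ) = det(RIJ), can be spot-checked numerically. The sketch below uses an arbitrary 3 x 4 matrix R and one choice of row and column multi-indices I, J with k = 2 (0-based indices); all names and values are illustrative.

```python
from itertools import permutations, product

def det(M):
    """Determinant by signed permutation expansion (adequate for small k)."""
    k = len(M)
    total = 0.0
    for perm in permutations(range(k)):
        # parity of the permutation = parity of its inversion count
        inv = sum(1 for a in range(k) for b in range(a + 1, k) if perm[a] > perm[b])
        term = (-1.0) ** inv
        for row in range(k):
            term *= M[row][perm[row]]
        total += term
    return total

# arbitrary illustrative "R matrix" and multi-indices
R = [[1.0, 2.0, -1.0, 0.5],
     [0.5, -1.0, 3.0, 2.0],
     [3.0, 0.25, 1.5, -2.0]]
n = 4            # range of each component of the full multi-index sum over M
I = (0, 2)       # chosen row multi-index, k = 2
J = (1, 3)       # chosen ordered column multi-index
k = len(I)

# left side: Sigma_M R_{i1 m1} R_{i2 m2} det(delta_{m_a, j_b}), M over {0..n-1}^k
lhs = 0.0
for M in product(range(n), repeat=k):
    delta = [[1.0 if M[a] == J[b] else 0.0 for b in range(k)] for a in range(k)]
    coeff = 1.0
    for a in range(k):
        coeff *= R[I[a]][M[a]]
    lhs += coeff * det(delta)

# right side: det of the k x k submatrix R_IJ (rows I, columns J)
rhs = det([[R[I[a]][J[b]] for b in range(k)] for a in range(k)])
```

Only the k! multi-indices M that are permutations of J contribute on the left, and each contributes with the sign of that permutation, which is exactly the permutation expansion of det(RIJ).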

Inserting this result into (10.11.19) gives

∫S < βx | μ > = ∫S Σ'I fI(F(x)) Σ'J [dxJ det(RIJ)]

   = ∫S Σ'I fI(F(x)) Σ'J det(RIJ) dxJ .

(10.11.21)


The final result then is

" ∫S' αx' " ≡ < ∫S' αx' | R | μ > = < ∫S βx | μ > = ∫S < βx | μ >
   = ∫S Σ'I fI(F(x)) Σ'J det(RIJ) dxj1dxj2 ... dxjk
   = ∫S Σ'J gJ(x) dxj1dxj2 ... dxjk .

(10.11.22)

This result is the same as (10.11.7), obtained there by making the "two definitions". The resulting tensor function turns out to be just a constant function, the real number produced by the ordinary multivariable calculus integration above. Our alternate approach lacks rigor since the integration is treated as a sum over points x' on a manifold, which really means that the Dirac space used above is some kind of fiber bundle space (the tangent bundle of Section 10.2). Moreover, the measure ket |μ> = Σ'J | dxJ> seems arbitrary, but it does manage to "sweep up" all contributions to the integration and we do get the correct result. The method does at least provide an alternative explanation of how the functional dx^J wedge product is replaced by the calculus product dxJ.

10.12 Integration of 1-forms

General Review of k-form integration

This section is presented in the x = φ(t) notation introduced in Section 10.9 and illustrated in hybrid Fig (10.9.3), which we replicate here,

(10.9.3)

The main result of Section 10.11 is this description of the integration of a k-form over a surface,


∫S' αx' = ∫S' [Σ'I fI(x') dx'i1 ^ dx'i2 ^ ... ^ dx'ik]   // αx' = Σ'I fI(x') dx'^I

   = ∫S [Σ'J gJ(x) dxj1 ^ dxj2 ^ ... ^ dxjk]   // first definition (pull back)

   ≡ ∫S [Σ'J gJ(x) dxj1dxj2 ... dxjk]   // second definition

where

gJ(x) = Σ'I fI(F(x)) det(RIJ)   and   x' = F(x) , R = (DF) .

(10.11.7)
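As a concrete check of gJ(x) = Σ'I fI(F(x)) det(RIJ), the sketch below takes the familiar polar-coordinate map F(r,θ) = (r cos θ, r sin θ) (our illustrative choice, not an example from the text) with the single 2-form dx'1 ^ dx'2, so f12 = 1 and the only pullback coefficient is det(DF), which should equal r: dx'1 ^ dx'2 pulls back to r dr ^ dθ.

```python
import math

def jacobian(F, t, h=1e-6):
    """Central-difference Jacobian R_ij = dF_i/dt_j of F: R^n -> R^m at point t."""
    m, n = len(F(t)), len(t)
    R = [[0.0] * n for _ in range(m)]
    for j in range(n):
        tp, tm = list(t), list(t)
        tp[j] += h
        tm[j] -= h
        Fp, Fm = F(tp), F(tm)
        for i in range(m):
            R[i][j] = (Fp[i] - Fm[i]) / (2.0 * h)
    return R

# hypothetical map x' = F(x): polar coordinates, x = (r, theta)
F = lambda x: (x[0] * math.cos(x[1]), x[0] * math.sin(x[1]))
r, theta = 1.7, 0.6
R = jacobian(F, (r, theta))

# with I = J = (1,2) and f_12 = 1 there is a single term: g_J = det(R_IJ) = det(DF)
gJ = R[0][0] * R[1][1] - R[0][1] * R[1][0]
```

Analytically det(DF) = cosθ (r cosθ) − (−r sinθ) sinθ = r, so gJ should come out close to 1.7 here.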

Using the x = φ(t) notation we rewrite the above (with some specialization) as,

∫φ αx ≡ ∫φ Σ'I fI(x) dxi1 ^ dxi2 ^ ... ^ dxik   // αx = Σ'I fI(x) dx^I

   = ∫[0,1]k Σ'J gJ(t) dtj1 ^ dtj2 ^ ... ^ dtjk   // first definition (pull back)

   ≡ (∫01 ∫01 ... ∫01) Σ'J gJ(t) dtj1dtj2 ... dtjk   // second definition

where

gJ(t) = Σ'I fI(φ(t)) det(RIJ)   and   x = φ(t) , R = (Dφ) .

(10.12.1)

Here the pulled-back integration region formerly called S is taken to be the unit cube in k dimensions, written above as [0,1]k and referred to as a k-cube. The pre-pullback integration region formerly called S' is here called φ, with the idea that this region is φ([0,1]k).

Note: A k-chain is a linear combination of k-cubes and is used by both Sjamaar (p 65) and Spivak (p 97) in their derivations of Stokes' Theorem. In fact, Spivak's entire Chapter 4, which includes his discussion of tensor products, wedge products and tensor functions, is entitled Integration on Chains.

Integration of 1-forms

We wish now to look in more detail at the integration of 1-forms. There is much repetition of statements below because the meaning of objects tends to quietly diffuse away as one proceeds. Consider this general 1-form in x-space Rm,

αx = Σi fi(x) xλi = Σi fi(x) dxi .

(10.12.2)

We wish to define a meaning for the integration of this 1-form αx over a piece of the curve x = φ(t),

∫φ αx = ∫φ Σi fi(x) dxi = integral of a 1-form over a piece of the curve φ in Rm .

(10.12.3)

The transformation x = φ(t) is a mapping φ : R1 → Rm. Variable t is often called "the parameter".


Comment: Officially it is the mapping φ which is "the curve", but one loosely refers to the image (trace) of this mapping in Rm as "the curve". The distinction is necessary because many mappings can have the same image curve, such as φ(t) and φ(t2), where the parameter is "re-speeded" (reparametrized). This picture shows the general respeeding idea :

(10.12.4)

Here the same red curve is the image of two different transformations x = φ(t) and x = ψ(t) with different domain intervals, and ψ(t) = φ(f(t)) where f(t) is a monotonic respeeding function. A special case would be [a,b] = [c,d] = [0,1], to which our example φ(t) and ψ(t) = φ(t2) would apply. Mappings φ and ψ are called smoothly equivalent curves, and ∫φ αx is the same for any two such curves (Buck p 386 Theorem 2 (i)). A similar but generalized reparametrization comment applies to integration of 2-forms and k-forms.

So imagine that we have a curved line hanging in Rm space: as t varies, perhaps from 0 to 1 in t-space, we move along the image curve in Rm. The problem is how to integrate a 1-form along this curve. We can define the calculational meaning of the above integral in two steps, each being a definition, as outlined in Section 10.11.

First definition:

∫φ αx ≡ ∫[0,1] φ*(αx) = the integral in t-space of the pullback of αx over the 1-cube [0,1]

(10.12.5)

On the left is an integral of the 1-form αx over a curve φ in Rm. On the right is an integral of a different 1-form φ*(αx) (the pullback of αx) over a 1-cube [0,1] in R1. Note that αx lies in xΛ1(Rm) while φ*(αx) lies in tΛ1(R). Since our usual 1-form pullback mapping is φ* : xΛ1(Rm) → tΛ1(Rn), we have n = 1 (see (10.7.18)). The "tall" m x n R-matrix for this problem is then an m x 1 matrix which is just a column vector of m elements ∂tφi,

Ri1 = (D(t)φ)i1 = ∂φi(t)/∂t .   // t1 ≡ t, the only coordinate in t-space

(10.12.6)


We then compute the pullback φ*(αx) of αx :

αx = Σi fi(x) xλi = Σi fi(x) dxi = f(x) • dx ,   dx ≡ (dx1, dx2, .... dxm)

φ*(αx) = Σi φ*(fi(x)) φ*(dxi)   // (10.9.5) 3

   = Σi fi(φ(t)) Σj=1n Rij dtj   // (10.9.5) 1 and 5

   = Σi fi(φ(t)) Ri1 dt1   // n = 1

   = Σi fi(φ(t)) [∂φi(t)/∂t] dt   // (10.12.6) and t1 = t

   = g(t) dt   (10.12.7)

where

g(t) ≡ Σi fi(φ(t)) [∂φi(t)/∂t] = Σi fi(φ(t)) ∂tφi(t) = f(φ(t)) • (∂tφ) .

(10.12.8)

The object φ*(αx) = g(t) dt is a 1-form in dual t-space tΛ1(R). Using the definition given above, one then has,

∫φ αx = ∫[0,1] φ*(αx) = ∫[0,1] g(t) dt .

(10.12.9)

Thus the integral of the 1-form αx over the curve φ in x-space is defined to be equal to the integral of the 1-form g(t) dt over a 1-cube in t-space. So far no regular calculus integrals have appeared. Second definition:

∫[0,1] g(t) tλ = ∫[0,1] g(t) dt ≡ ∫01 g(t) dt .

(10.12.10)

On the left is the integral of a 1-form on a 1-cube; on the right is an ordinary calculus integral of a function over the interval [0,1] of the real axis. It is this second definition that motivates giving the dual space basis vector tλ the cosmetic name dt. If one flips the "orientation" of the integration domain, so that [0,1] becomes [1,0], the result changes sign, and of course this fact agrees with the usual notion that

∫10 g(t) dt = - ∫01 g(t) dt .

We have then shown that the integral of a 1-form is described by,


∫φ αx = ∫φ f(x) • dx = ∫[0,1] g(t) dt

   = ∫01 f(φ(t)) • (∂tφ(t)) dt = ∫01 Σi fi(φ(t)) [∂φi(t)/∂t] dt .

(10.12.11)
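The final ordinary calculus integral above is easy to evaluate numerically. The sketch below integrates a hypothetical field f(x) = x (the gradient of |x|2/2, our illustrative choice) along the hypothetical curve φ(t) = (t, t2); for a gradient field the answer depends only on the endpoints, here (|φ(1)|2 − |φ(0)|2)/2 = 1.

```python
def integrate_1form(f, phi, dphi, N=2000):
    """Midpoint-rule value of  Int_phi alpha_x = Int_0^1 f(phi(t)) . phi'(t) dt."""
    h = 1.0 / N
    total = 0.0
    for i in range(N):
        t = (i + 0.5) * h
        # integrand is the dot product f(phi(t)) . dphi/dt
        total += sum(fc * vc for fc, vc in zip(f(phi(t)), dphi(t))) * h
    return total

# hypothetical data: f(x) = x along phi(t) = (t, t^2), so the integrand is t + 2t^3
val = integrate_1form(lambda x: x,
                      lambda t: (t, t * t),
                      lambda t: (1.0, 2.0 * t))
```

Note that only values of f on the image curve x = φ(t) enter the computation, exactly as remarked after (10.12.11).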

This result appears in Sjamaar Ch 4 Eq (4.1) with φ = c and m = n. Notice that the only locations where f(x) is "sensed" in this integral are points on the curve x = φ(t). Since dx = (∂tφ(t)) dt, the above can be written concisely as

∫φ αx = ∫φ f(x) • dx = ∫01 f(φ(t)) • dx   where dx = (∂tφ(t)) dt .

(10.12.12)

We redisplay the earlier Fig (10.9.3b) to illustrate the above discussion, where βt = g(t) dt :

(10.12.13)

Recall now our "no differential forms" integration done in (10.10.42),

L' = ∫0a dx Σi=13 Bi(F(x)) Ri1(x) .

// Bt means Btangent

(10.12.14)

In the x = φ(t) notation this reads, setting a = 1 and replacing 3 by m,

L = ∫01 dt Σi=1m Bi(φ(t)) (Dφ)i1(t) = ∫01 dt Σi=1m Bi(φ(t)) ∂tφi(t) = ∫01 B(φ(t)) • ∂tφ(t) dt

(10.12.15)


which is the same integral appearing in (10.12.11) with f = B. Therefore, we can interpret (10.12.15) as being the integral of the 1-form,

αx = Σi Bi(x) xλi = Σi Bi(x) dxi = B(x) • dx

(10.12.16)

and one then has

∫φ αx = ∫φ B(x) • dx = ∫01 B(φ(t)) • ∂tφ(t) dt = ∫01 B(φ(t)) • dx

(10.12.17)

where dx = (∂tφ(t)) dt. This integral is normally written ∫φ B(x) • dx showing again the motivation for the cosmetic functional notation dx. This is the "line integral of a vector field B over a curve φ ". Now return to (10.12.11),

∫φ αx = ∫01 f(φ(t)) • (∂tφ(t)) dt .

(10.12.11)

Suppose the vector field f(φ(t)) happens to be tangent to the curve φ for all values of t. In this case f(φ(t)) • (∂tφ(t)) = | f(φ(t)) | | (∂tφ(t)) |

(10.12.18)

since ∂tφ(t) is tangent to the curve at t. Note that

| ∂tφ(t) |2 = Σi=1m (∂tφi(t))2 = Σi=1m (Ri1)2

(10.12.19)

which we recognize as the K2 object of (10.10.41). Setting | f(φ(t)) | = T(φ(t)), we find that

∫φ αx = ∫01 T(φ(t)) K(t) dt

(10.12.20)

and this shows how the temperature integral of (10.10.42) can be fitted into the 1-form framework.

Example for Rm = R2: The "angle form" problem mentioned in (10.5.10).

(10.12.21)

In this problem we have specific functions f1 and f2, a specific range [0,2π] for the t-space domain, and a specific curve (a circle) x = φ(t) = (x1,x2).

αx = Σi=12 fi(x) xλi = f1(x) dx1 + f2(x) dx2 = - (x2/r2) dx1 + (x1/r2) dx2   where r2 ≡ (x1)2 + (x2)2

x1 = φ1(t) = cos t      ∂tφ1(t) = - sin t
x2 = φ2(t) = sin t      ∂tφ2(t) = cos t

t = [0,2π]   // t is the polar angle of the vector x = (x1,x2); x lies on the unit circle in x-space

r2 = (x1)2 + (x2)2 = cos2t + sin2t = 1

f1(φ(t)) = - (x2/r2) = - sin t
f2(φ(t)) = + (x1/r2) = cos t

αx = - sin t dx1 + cos t dx2

φ*(αx) = g(t) dt   // pullback of αx (10.12.7)

g(t) = Σi fi(φ(t)) [∂φi(t)/∂t] = f1(φ(t)) ∂tφ1(t) + f2(φ(t)) ∂tφ2(t) = (- sin t)(- sin t) + (cos t)(cos t) = 1

so φ*(αx) = 1 dt .
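The pullback computation g(t) = 1 can be confirmed numerically from the original ingredients fi and φ, without using the hand simplification; a minimal sketch (function names are ours):

```python
import math

# angle form ingredients as above: f1 = -x2/r^2, f2 = x1/r^2, phi(t) = (cos t, sin t)
def g(t):
    x1, x2 = math.cos(t), math.sin(t)
    r2 = x1 * x1 + x2 * x2
    f = (-x2 / r2, x1 / r2)
    dphi = (-math.sin(t), math.cos(t))          # d phi / dt
    return f[0] * dphi[0] + f[1] * dphi[1]      # g(t) = f_i(phi(t)) dphi^i/dt

# g(t) should be identically 1, so the midpoint sum over [0, 2pi] should give 2pi
N = 1000
h = 2.0 * math.pi / N
integral = sum(g((i + 0.5) * h) for i in range(N)) * h
```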

∫φ αx = ∫[0,2π] φ*(αx) = ∫[0,2π] g(t) dt = ∫02π dt = 2π .

So the integral of this particular 1-form αx around the unit circle gives the number 2π. In this example we are trying to "cover" a full circle with a single mapping x = φ(t), and the circle has a "seam" which maps back to both t = 0 and t = 2π, resulting in the 2π above. See comments below (10.5.9) concerning how this 1-form example provides a counterexample to the Poincaré Lemma and shows that αx is not exact.

Integration of 1-forms over more general regions of t-space

In the general mapping picture where φ : Rn → Rm one is allowed to have k-forms with k ≤ n, but we are usually interested in the case k = n since this makes the most "efficient" use of t-space on the left. But there is no reason not to consider k < n. Consider then this 1-form situation in the context φ : R2 → R3 :


(10.12.22)

Now the simple 1-cube in R1 t-space is replaced by a general curve U in R2, but we are still mapping a curve U to a curve V. We go through the steps above:

αx = Σi fi(x) xλi = Σi fi(x) dxi .   (10.12.2)

∫φ αx = ∫V Σi fi(x) dxi = integral of a 1-form over a piece of the curve V in Rm .   (10.12.3)

First definition:

∫V αx ≡ ∫U φ*(αx) = the integral in t-space of the pullback of αx over the curve U in R2

(10.12.23)

We then compute the pullback φ*(αx) of αx :

αx = Σi fi(x) xλi = Σi fi(x) dxi = f(x) • dx ,   dx ≡ (dx1, dx2, .... dxm)

φ*(αx) = Σi=1m φ*(fi(x)) φ*(dxi)   // (10.9.5) 3

   = Σi=1m fi(φ(t)) Σj=12 Rij dtj   // (10.9.5) 1 and 5

   = Σi Σj fi(φ(t)) [∂φi(t)/∂tj] dtj = Σj gj(t) dtj = g(t) • dt   (10.12.24)

where

gj(t) ≡ Σi fi(φ(t)) [∂φi(t)/∂tj] = Σi fi(φ(t)) ∂jφi(t) = f(φ(t)) • (∂jφ) .

(10.12.25)


Second definition:

∫U Σj gj(t) tλj = ∫U g(t) • dt ≡ ∫U g(t) • dt ,   // the right side an ordinary calculus line integral

(10.12.26)

Assembling the pieces,

∫V αx = ∫V f(x) • dx = ∫U φ*(αx) = ∫U g(t) • dt .

(10.12.27)

When one curve is mapped into another by x = φ(t), this result shows how to reduce the integral of the 1-form αx to a calculus line integral in t-space. In effect, the line integral ∫V f(x) • dx in x-space is replaced by the line integral ∫U g(t) • dt in t-space.

10.13 Integration of 2-forms

We wish now to look in more detail at the integration of 2-forms. The general k-form integration result is stated in (10.12.1). Once again, there is much repetition below intended to reinforce the meaning of various objects. For the moment we set m = 3 and consider this 2-form in x-space R3,

αx = Σ'I fI(x) xλ^I = Σ'I fI(x) dx^I = Σ1≤i1<i2≤3 fi1i2(x) dxi1 ^ dxi2 .

Appendix A: Permutation Support

(A.10.1) (A.10.4)

where for example

| 1,2,.....,k >k = |1>1 ⊗ |2>1 ⊗ ... ⊗ |k>1