
## Vector and Matrix Arithmetic

• scalar and cross products of two vectors: a scalar product is calculated simply by pressing *, assuming that there are two vectors of matching length on the stack:
'[1,2,3]
1:  [1, 2, 3]
'[4,5,6]
1:  [4, 5, 6]
*
1:  32
'1 * 4 + 5 * 2 + 6 * 3
1:  32
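
For comparison, the same dot product can be sketched in plain Python (illustrative only; Calc itself is an Emacs package and does not use Python):

```python
# Scalar (dot) product of two equal-length vectors:
# the sum of pairwise products.
def dot(u, v):
    assert len(u) == len(v), "vectors must have matching length"
    return sum(a * b for a, b in zip(u, v))

print(dot([1, 2, 3], [4, 5, 6]))  # 1*4 + 2*5 + 3*6 = 32
```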

A cross product makes sense for vectors in 3D space only, because a cross product is really an archaic notation for a skew-symmetric tensor product, the so-called exterior product of two 1-forms:

    (u \wedge v)_{ij} = u_i v_j - u_j v_i

A product like that can be mapped back onto a vector space in 3D only. However, this mapping is not exact, which is why vectors produced by a cross product are sometimes called pseudo-vectors: they behave unusually under reflections. A cross product in Calc is invoked by VC:
'[1,2,3]
1:  [1, 2, 3]
'[4,5,6]
1:  [4, 5, 6]
VC
1:  [-3, 6, -3]

or, if you prefer to use an algebraic notation, by cross:
'cross([1,2,3],[4,5,6])
1:  [-3, 6, -3]
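
The same computation can be sketched in plain Python (the function name cross is borrowed from Calc; the code is only an illustration):

```python
# Cross product, defined for 3D vectors only; the result is the
# "pseudo-vector" discussed above.
def cross(u, v):
    assert len(u) == len(v) == 3, "cross product needs 3D vectors"
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

print(cross([1, 2, 3], [4, 5, 6]))  # [-3, 6, -3]
```

Note that cross(v, u) yields the negated result: swapping the arguments flips every sign, which is the skew-symmetry mentioned above.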

• There are three matrix and vector norms that Calc will evaluate for you. These are:
Frobenius norm
which is defined as follows:

    \|A\|_F = \sqrt{\sum_{i,j} \bar{a}_{ij} a_{ij}}

where the bar indicates complex conjugation. This norm is calculated by function abs, key-binding A.
infinity norm
(row norm) which is defined a little differently for vectors and for matrices. For vectors it is

    \|x\|_\infty = \max_i |x_i|

for matrices:

    \|A\|_\infty = \max_i \sum_j |a_{ij}|

The norm is calculated by function rnorm, key-binding vn.
one norm
(column norm) which is defined, again, a little differently for vectors and for matrices. For vectors it is

    \|x\|_1 = \sum_i |x_i|

for matrices:

    \|A\|_1 = \max_j \sum_i |a_{ij}|

This norm is calculated by function cnorm, key-binding VN.
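
The matrix versions of the three norms can be sketched in plain Python directly from the definitions above (the names rnorm and cnorm are borrowed from Calc; only the matrix forms are shown):

```python
# Frobenius norm: square root of the sum of squared magnitudes.
def frobenius(M):
    return sum(abs(x) ** 2 for row in M for x in row) ** 0.5

# Infinity (row) norm: the largest sum of magnitudes over the rows.
def rnorm(M):
    return max(sum(abs(x) for x in row) for row in M)

# One (column) norm: the largest sum of magnitudes over the columns.
def cnorm(M):
    return max(sum(abs(row[j]) for row in M) for j in range(len(M[0])))

A = [[1, -2], [-3, 4]]
print(frobenius(A))  # sqrt(1 + 4 + 9 + 16) = sqrt(30)
print(rnorm(A))      # max(1+2, 3+4) = 7
print(cnorm(A))      # max(1+3, 2+4) = 6
```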

• There are some standard matrix operations that are very commonly used. Calc does
• conjugate transpose, and normal transpose
• matrix determinant
• matrix inversion
• trace
• LU decomposition.
• Conjugate Transpose is defined as follows:

    (A^\dagger)_{ij} = \bar{a}_{ji}

Here's a Calc example:
'[[1,(0,2),(3,-2)],[(-1,-2),2,(0,-1)],[(1,1),(2,3),3]]
1:  [ [    1,     (0, 2), (3, -2) ]
      [ (-1, -2),   2,    (0, -1) ]
      [  (1, 1),  (2, 3),    3    ] ]
VJ
1:  [ [    1,    (-1, 2), (1, -1) ]
      [ (0, -2),    2,    (2, -3) ]
      [ (3, 2),  (0, 1),     3    ] ]

Conjugate transpose is sometimes also called Hermitian conjugation. Matrices that are invariant with respect to Hermitian conjugation are called Hermitian matrices. The matrix stored in the variable sx is Hermitian:
sr var-sx
1:  [ [   0,    (0, -1) ]
      [ (0, 1),    0    ] ]
VJ
1:  [ [   0,    (0, -1) ]
      [ (0, 1),    0    ] ]

Hermitian matrices, and, more broadly, Hermitian operators, are identified with the observables of quantum mechanics. The VJ key-binding corresponds to function ctrn.
• A normal, i.e., not conjugate, transpose  is invoked by vt:
'[[1,(0,2),(3,-2)],[(-1,-2),2,(0,-1)],[(1,1),(2,3),3]]
1:  [ [    1,     (0, 2), (3, -2) ]
      [ (-1, -2),   2,    (0, -1) ]
      [  (1, 1),  (2, 3),    3    ] ]
vt
1:  [ [    1,    (-1, -2), (1, 1) ]
      [ (0, 2),     2,     (2, 3) ]
      [ (3, -2), (0, -1),    3    ] ]

The vt key-binding corresponds to function trn.
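
Both transposes can be sketched in plain Python using the built-in complex type (the names trn and ctrn are borrowed from Calc; Calc's (a, b) notation becomes a + bj here):

```python
# Normal transpose: rows become columns.
def trn(M):
    return [list(row) for row in zip(*M)]

# Conjugate (Hermitian) transpose: transpose, then conjugate each entry.
def ctrn(M):
    return [[x.conjugate() for x in row] for row in zip(*M)]

# The complex matrix from the Calc examples above.
A = [[1, 2j, 3 - 2j],
     [-1 - 2j, 2, -1j],
     [1 + 1j, 2 + 3j, 3]]
print(ctrn(A)[0])  # first row of the conjugate transpose: 1, -1+2j, 1-1j
```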
• Once you have a square matrix, you can compute its determinant, which is defined by:

    \det A = \epsilon_{i_1 i_2 \ldots i_n} A_{1 i_1} A_{2 i_2} \cdots A_{n i_n}

where n is the dimension of the matrix, and where \epsilon_{i_1 i_2 \ldots i_n} is the fully antisymmetric Levi-Civita symbol, which is +1 for an even permutation of (1, 2, \ldots, n), -1 for an odd permutation of (1, 2, \ldots, n), and 0 for all other combinations of index values. The above formula assumes the Einstein summation convention. For example:

    det([[1, 2], [3, 4]]) = 1·4 - 2·3 = -2

And in Calc:
'[[1,2][3,4]]
1:  [ [ 1, 2 ]
      [ 3, 4 ] ]
VD
1:  -2

Let us try a larger matrix:

    det([[1, 4, 3], [9, 5, 6], [7, 8, 2]])
      = 1·(5·2 - 6·8) - 4·(9·2 - 6·7) + 3·(9·8 - 5·7)
      = -38 + 96 + 111 = 169

And now the same in Calc:
'[[1,4,3][9,5,6][7,8,2]]
1:  [ [ 1, 4, 3 ]
      [ 9, 5, 6 ]
      [ 7, 8, 2 ] ]
VD
1:  169
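
The determinant can be sketched in plain Python by cofactor expansion along the first row (equivalent to the Levi-Civita formula above, and perfectly adequate for matrices this small):

```python
# Determinant by recursive cofactor expansion along the first row.
def det(M):
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        # minor: drop row 0 and column j
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

print(det([[1, 2], [3, 4]]))                   # -2
print(det([[1, 4, 3], [9, 5, 6], [7, 8, 2]]))  # 169
```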

• A characteristic (or eigen-) equation for a matrix A is:

    \det(A - x I) = 0

where I is the identity matrix and x is the unknown. There are usually several solutions, up to the dimension of the matrix. Try the characteristic equation for the matrix stored in sx:
sr var-sx
1:  [ [   0,    (0, -1) ]
      [ (0, 1),    0    ] ]
'[[x,0][0,x]]
1:  [ [ x, 0 ]
      [ 0, x ] ]
-
1:  [ [   -x,   (0, -1) ]
      [ (0, 1),   -x    ] ]
VD
1:  x^2 - 1

• Now press the backquote, `, and Calc will put you right into the editor, where you can edit the value at the top of the stack. The editing window will look as follows:
Calc Edit Mode.  Press M-# M-# or C-c C-c to finish, M-# x to cancel.
x^2 - 1

Replace x^2 - 1 with x^2 - 1 = 0:
Calc Edit Mode.  Press M-# M-# or C-c C-c to finish, M-# x to cancel.
x^2 - 1 = 0

and press C-c C-c. Once you get out of the editing window, press HaS to find all solutions  to this equation:
1:  x^2 - 1 = 0
HaS
Variable to solve for: x<ret>
1:  x = s1

The message Variable to solve for... will appear in the minibuffer.

s1 is a symbol that represents an independent arbitrary sign, i.e., a + or a -. So the answer really is:

    x = \pm 1

• Because the matrix stored in sx is an operator that describes an act of measuring the spin of a fermion in the x direction, what this result tells us is that the x component of a spin must be +1 or -1, but nothing in between or beyond, sic!
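
The result is easy to verify numerically; a quick Python sketch (writing Calc's complex entries (a, b) as a + bj) confirms that both signs satisfy the characteristic equation:

```python
# The 2x2 matrix from the example, and its characteristic determinant
# det(A - x*I), expanded directly for the 2x2 case.
A = [[0, -1j], [1j, 0]]

def char_poly(A, x):
    a, b = A[0][0] - x, A[0][1]
    c, d = A[1][0], A[1][1] - x
    return a * d - b * c  # ad - bc

print(char_poly(A, 1))   # prints 0j, i.e. zero: x = +1 solves the equation
print(char_poly(A, -1))  # prints 0j, i.e. zero: x = -1 solves the equation
```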
• The inverse: consider again the matrix we have  used to calculate the determinant.
'[[1,4,3][9,5,6][7,8,2]]
1:  [ [ 1, 4, 3 ]
      [ 9, 5, 6 ]
      [ 7, 8, 2 ] ]

Let us duplicate  this item on the stack, simply by pressing return:
<ret>
2:  [ [ 1, 4, 3 ]
      [ 9, 5, 6 ]
      [ 7, 8, 2 ] ]
1:  [ [ 1, 4, 3 ]
      [ 9, 5, 6 ]
      [ 7, 8, 2 ] ]

Now let us invert the last matrix on the stack. This is accomplished by typing &:
&
1:  [ [ -0.224852071006, 0.094674556213,  0.0532544378698 ]
      [  0.14201183432, -0.112426035503,  0.12426035503   ]
      [  0.218934911243, 0.118343195266, -0.183431952663  ] ]

Is this indeed the inverse? That's easy to check. Simply multiply the last two matrices on the stack:
*
1:  [ [ 1.,        -1e-12,     1e-12 ]
      [ 1e-11, 0.999999999998,  0.   ]
      [ 6e-12,     -1e-12,      1.   ] ]

This is almost an identity matrix, with an accuracy of about 10^-11.
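
Matrix inversion can be sketched in plain Python by Gauss-Jordan elimination with partial pivoting (an illustration of the idea only; Calc's own algorithm may differ):

```python
# Invert a square matrix by Gauss-Jordan elimination on [M | I].
def inverse(M):
    n = len(M)
    # augment M with the identity matrix on the right
    aug = [list(map(float, row)) + [float(i == j) for j in range(n)]
           for i, row in enumerate(M)]
    for col in range(n):
        # partial pivoting: move the largest entry onto the diagonal
        pivot = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[pivot] = aug[pivot], aug[col]
        p = aug[col][col]
        aug[col] = [x / p for x in aug[col]]
        for r in range(n):
            if r != col:
                f = aug[r][col]
                aug[r] = [x - f * y for x, y in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]  # the right half is now M^-1

A = [[1, 4, 3], [9, 5, 6], [7, 8, 2]]
Ainv = inverse(A)
print(Ainv[0])  # approximately [-0.2249, 0.0947, 0.0533], as in Calc
```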
• The trace of a matrix is a very simple operation. It is defined as:

    \mathrm{tr}\, A = \sum_i A_{ii}

If a matrix is of the A^i_j variety, i.e., if it represents a tensor, then this type of an operation has a geometrical meaning; it is called tensor contraction and is often written using Einstein's summation convention:

    \mathrm{tr}\, A = A^i{}_i

The VT key-binding evaluates a trace of a square matrix. For example:
'[[1,4,3][9,5,6][7,8,2]]
1:  [ [ 1, 4, 3 ]
      [ 9, 5, 6 ]
      [ 7, 8, 2 ] ]
VT
1:  8
'1+5+2
1:  8
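
A plain-Python sketch of the same operation is a one-liner over the diagonal:

```python
# Trace: the sum of the diagonal elements M[i][i].
def trace(M):
    return sum(M[i][i] for i in range(len(M)))

print(trace([[1, 4, 3], [9, 5, 6], [7, 8, 2]]))  # 1 + 5 + 2 = 8
```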

• One of the most important and useful operations on matrices is the so-called LU decomposition. The easiest way to describe what it does is to show how it works on an example:
'[[1,4,3][9,5,6][7,8,2]]
1:  [ [ 1, 4, 3 ]
      [ 9, 5, 6 ]
      [ 7, 8, 2 ] ]
VL
1:  [ [ [ 0, 0, 1 ]
        [ 1, 0, 0 ]
        [ 0, 1, 0 ] ],
      [ [       1,              0,        0 ]
        [ 0.777777777778,       1,        0 ]
        [ 0.111111111111, 0.837837837838, 1 ] ],
      [ [ 9,       5,             6        ]
        [ 0, 4.11111111111, -2.66666666667 ]
        [ 0,       0,       4.56756756757  ] ] ]

The answer is a vector of 3 matrices. The first one is called a permutation matrix, the second is called the L matrix and the last one is called the U matrix. The L matrix has zeros above the diagonal, and the U matrix has zeros below the diagonal.
• Now we are going to perform a few cumbersome stack manipulations.
• First we duplicate the item on top of the stack by typing <ret>.
• The next step is to extract the first item from the array with vr:
vr
Row number: 1
1:  [ [ 0, 0, 1 ]
      [ 1, 0, 0 ]
      [ 0, 1, 0 ] ]

• Now we exchange the top two elements on the stack by pressing <tab>, duplicate the last item on the stack by pressing <ret>, and extract the second matrix from the vector:
<tab>
<ret>
vr
Row number: 2
1:  [ [       1,              0,        0 ]
      [ 0.777777777778,       1,        0 ]
      [ 0.111111111111, 0.837837837838, 1 ] ]

• The next step is to swap the last two items on the stack again, with a <tab>, forgo duplication this time, and extract the third matrix:
<tab>
vr
Row number: 3
1:  [ [ 9,       5,             6        ]
      [ 0, 4.11111111111, -2.66666666667 ]
      [ 0,       0,       4.56756756757  ] ]

• In summary we now have on the stack, in this order:
1. The U matrix
2. The L matrix
3. The permutation matrix
• Simply type * twice, to multiply the three matrices by each other:
**
1:  [ [ 0.999999999999, 4., 3. ]
      [       9.,       5., 6. ]
      [       7.,       8., 2. ] ]
• This is our original matrix, recovered to the accuracy we have requested (we've been using default settings so far).
• In summary, the LU decomposition results in the following:

    A = P L U

where P is a permutation matrix, L is lower triangular with a unit diagonal, and U is upper triangular.

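The whole procedure can be sketched in plain Python: a Doolittle-style LU decomposition with partial pivoting, returning P, L, and U in the same convention as the Calc example, so that P·L·U reassembles A. This is only an illustration; Calc's own implementation may differ in its details:

```python
# LU decomposition with partial pivoting: returns P, L, U with
# L unit lower triangular, U upper triangular, and A = P * L * U.
def lu(A):
    n = len(A)
    U = [list(map(float, row)) for row in A]
    L = [[0.0] * n for _ in range(n)]
    perm = list(range(n))
    for k in range(n - 1):
        # choose the largest remaining entry in column k as the pivot
        pivot = max(range(k, n), key=lambda r: abs(U[r][k]))
        U[k], U[pivot] = U[pivot], U[k]
        perm[k], perm[pivot] = perm[pivot], perm[k]
        # keep the already-computed columns of L in step with the swap
        L[k][:k], L[pivot][:k] = L[pivot][:k], L[k][:k]
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]
            L[i][k] = m
            U[i] = [U[i][j] - m * U[k][j] for j in range(n)]
    for i in range(n):
        L[i][i] = 1.0
    # build the permutation matrix that undoes the row exchanges
    P = [[0.0] * n for _ in range(n)]
    for i in range(n):
        P[perm[i]][i] = 1.0
    return P, L, U

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[1, 4, 3], [9, 5, 6], [7, 8, 2]]
P, L, U = lu(A)
print(matmul(P, matmul(L, U)))  # recovers A, up to roundoff
```

For this A the sketch reproduces the same P, L, and U as the Calc transcript above (e.g. L[1][0] = 7/9 = 0.7777...).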
Zdzislaw Meglicki
2001-02-26