Matrix Computation

 


Matrix is in fact one of the foundation classes of Noobeed.  Its original design is oriented toward photogrammetry, image processing, and GIS.  However, since matrix mathematics has proved very useful in solving scientific problems in general, Noobeed exposes its natural capabilities as fully as possible.  At the moment almost all basic matrix operations are included in Noobeed, including the image-processing functionality that applies when a matrix is treated as an image.  The following is a summary of the built-in member functions of the Matrix object; a short combined sketch follows the list.

  • basic matrix operations, such as inversion, transpose, multiplication, addition, subtraction, reduced row-echelon form, eigenvalues and eigenvectors, rank, determinant, LU decomposition, etc.
  • extraction and assignment from and to any portion of a matrix.
  • extraction and assignment from and to a row, a column, or a group of rows and columns.
  • reset, linearly generated, initialize, diagonal, identity matrix
  • concatenation left, right, up, down
  • flip, rotate in any direction and by any arbitrary angle, swap rows, swap columns
  • scientific and trigonometric functions (sin, cos, tan, asin, acos, atan, atan2, abs, ln, log, random, etc.)
  • vector products, cross, dot, norm, angle
  • statistics, min, max, mean, standard deviation, variance, covariance, correlation, percentile.
  • sorting by row, column, or all.
  • convolution and predefined filters (mean, median, Gaussian, mode)
  • matrix comparison (min, max)
  • graphic drawing on a matrix: line, polyline, text, circle, ellipse, rectangle, square, symbol, filled polygon
  • load and read from TIF, BMP, ASCII, and generic binary formats, with an option to skip a header.
  • image processing (boundary, polyline finding, flood polygon, binarize, reclass, reassign, edge detection, number of connected pixels, Fast Fourier Transform and Inverse Fast Fourier Transform, lookup table, image stretching, anaglyph image generation)
  • conversion to RGB and RGBAUTO, including some 10 predefined colormaps; user-defined colormaps are also supported.
  • virtually loaded matrices (read directly from a file as needed), capable of handling matrices of unlimited size.
  • find data in a matrix with any combination of logical operations, e.g. find(A.>=78 & A <= B)
  • 8 primitive data types available, namely complex, double, float, integer, short integer, unsigned character, and boolean.  All types are convertible to each other, as well as to vectors and vectors of points (2D, 3D, with/without ID).
  • ability to assign a value to null data and to turn null data on and off
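
As a quick taste of the interactive syntax, the sketch below chains a few of these operations at the prompt.  It is only a minimal illustration and deliberately uses nothing beyond the calls demonstrated in the examples that follow (matrix literals, tsp(), inv(), det(), and getcol()).

->a = [1 2 ; 3 4]

->n = a.tsp()*a

->n.inv()

->n.det()

->c = a.getcol(0)

Here "n" is the product of the transpose of "a" with "a"; calling inv() and det() on it prints the result under "ans =", exactly as in Examples 1 and 2 below, and getcol(0) extracts the first column of "a" as a column matrix.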

Example 1  Compute the reduced row echelon form of a matrix

->a = [ 1 2 3; 4 5 6; 7 8 9]
->b = a.rref()
->b

ans =

no of row    : 3
no of column : 3

0:    1.00000    0.00000   -1.00000
1:    0.00000    1.00000    2.00000
2:    0.00000    0.00000    0.00000



->
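
For reference, the same result can be reproduced by hand with standard Gauss-Jordan elimination (ordinary matrix algebra, not Noobeed output):

$$
\begin{pmatrix} 1 & 2 & 3\\ 4 & 5 & 6\\ 7 & 8 & 9 \end{pmatrix}
\;\xrightarrow{\;R_2 - 4R_1,\; R_3 - 7R_1\;}\;
\begin{pmatrix} 1 & 2 & 3\\ 0 & -3 & -6\\ 0 & -6 & -12 \end{pmatrix}
\;\xrightarrow{\;-\tfrac{1}{3}R_2,\; R_3 + 6R_2,\; R_1 - 2R_2\;}\;
\begin{pmatrix} 1 & 0 & -1\\ 0 & 1 & 2\\ 0 & 0 & 0 \end{pmatrix}
$$

The zero row shows that this matrix has rank 2 and is singular, which is why the reduced row-echelon form has only two pivots.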

Example 2  Inverse matrix & determinant

->a = [1 2 ; 3 4]
->print a

no of row    : 2
no of column : 2

0:    1.00000    2.00000
1:    3.00000    4.00000

->a.inv()

ans =

no of row    : 2
no of column : 2

0:    -2.00000    1.00000
1:     1.50000    -0.50000

->a.det()

ans = -2.00000

->
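
These values agree with the closed-form expressions for a 2-by-2 matrix (a hand check, not part of the Noobeed session):

$$
\det\begin{pmatrix} 1 & 2\\ 3 & 4 \end{pmatrix} = 1\cdot 4 - 2\cdot 3 = -2,
\qquad
\begin{pmatrix} 1 & 2\\ 3 & 4 \end{pmatrix}^{-1}
= \frac{1}{-2}\begin{pmatrix} 4 & -2\\ -3 & 1 \end{pmatrix}
= \begin{pmatrix} -2 & 1\\ 1.5 & -0.5 \end{pmatrix}.
$$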

 

Example 3 

The following is an example of using Matrix operations to solve a linear regression problem.  Here we have the following data set, in which the first column is x and the second column is y.

 1.00000    12.19802
 2.00000    14.01881
 3.00000    16.12763
 4.00000    18.05701
 5.00000    20.09303
 6.00000    22.03744
 7.00000    24.17206
 8.00000    26.18877
 9.00000    28.00460
10.00000    30.07630
11.00000    32.05227
12.00000    34.00503
13.00000    36.04334
14.00000    38.12287
15.00000    40.14812
16.00000    42.03936
17.00000    44.08916
18.00000    46.19562
19.00000    48.12926
20.00000    50.00753

The relation between x and y is y = ax + b, where a and b are unknown parameters to be solved for.  Suppose the data is stored in a file named "data_xy.txt".

To read the data into a matrix, we do the following:

->XY = Matrix()

->XY.loadasc("data_xy.txt")

Now we want to extract the x coordinates and the y coordinates and store them in separate matrices.  We do the following.

->X = XY.getcol(0)

->Y = XY.getcol(1)

We are going to solve this problem by the Least Squares Adjustment technique.  The first step is to form the so-called design matrix, "A", which is nothing but the following.

 1.00000    1
 2.00000    1
 3.00000    1
 4.00000    1
 5.00000    1
 6.00000    1
 7.00000    1
 8.00000    1
 9.00000    1
10.00000    1
11.00000    1
12.00000    1
13.00000    1
14.00000    1
15.00000    1
16.00000    1
17.00000    1
18.00000    1
19.00000    1
20.00000    1

And we will have the so-called observation equation, written in matrix form as follows.

Y = A x
 

where Y is the observation vector and x is the unknown parameter vector, [a    b]'.  The reader should note the difference between the matrix "X" and the vector "x".
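
With 20 equations and only 2 unknowns the system is over-determined, so "x" is estimated in the least-squares sense.  The formula used in the script below is the standard normal-equations solution (stated here for completeness):

$$
\hat{x} \;=\; \arg\min_{x}\,\lVert A x - Y \rVert^{2}
\quad\Longrightarrow\quad
A'A\,\hat{x} = A'Y
\quad\Longrightarrow\quad
\hat{x} = (A'A)^{-1}A'Y .
$$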

To create the matrix "A", we do the following.

->A = X.concat(Matrix(20,1,1.000))

The above instruction takes the matrix "X" and concatenates it with a column matrix of 1.00.

Now, to solve for the values of vector "x", we have to calculate x = (A'A)^-1 (A'Y), as follows.

->x = (A.tsp()*A).inv()*A.tsp()*Y

->print x

 no of row : 2
 no of column : 1

0:     1.99925
1:    10.09815

Therefore, the value of a = 1.99925 and b = 10.09815.  The linear regression model ends up as the following formula:

y   = 1.99925 x + 10.09815
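
As a quick sanity check (plain arithmetic, not Noobeed output): at x = 1 the model gives y = 1.99925 + 10.09815 = 12.0974 against the observed 12.19802, and at x = 20 it gives y = 39.98500 + 10.09815 = 50.08315 against the observed 50.00753, so the fitted line reproduces the data to within roughly 0.1.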

