ANAGLYPH
Purpose
To make an anaglyph image. The left photo of the model becomes the left stereo image and the right photo becomes the right stereo image. They are overlaid on top of each other, with the gray scale of the left image assigned to shades of red and that of the right image to shades of cyan. Anaglyph glasses, red-cyan or red-green, are needed to view the image.
This function requires that the number of rows of the left and the right photo be the same.
The right photo is then placed on top of the left photo so that the left edges of the two images touch each other. This is called the initial position. The right photo is then moved to the left or to the right from the initial position. The amount of shift is specified by the user and must be in the same unit as the photo coordinate system. By adjusting the amount of shift, the user can control the strength of the 3D perception.
The easiest way to determine the amount of shift is to measure the column number, on both the left and the right image, of the same point at which the base level of the 3D image is to be set. Subtract the right column number from the left column number to obtain the amount of shift in pixels, then multiply the number of pixels by the photo resolution to get the amount of shift in millimeters.
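As an illustration of the idea (a Python/NumPy sketch, not Noobeed code), the left gray-scale image feeds the red channel while the shifted right image feeds the green and blue channels, which together appear cyan. The column numbers and the 0.015 mm/pixel resolution below are hypothetical values used only to show the shift arithmetic.

    import numpy as np

    def make_anaglyph(left_gray, right_gray, shift_px):
        """Compose an anaglyph from two gray-scale arrays of the same shape."""
        rows, cols = left_gray.shape
        rgb = np.zeros((rows, cols, 3), dtype=np.uint8)
        shifted = np.roll(right_gray, shift_px, axis=1)  # + = right, - = left (edge wrap ignored)
        rgb[:, :, 0] = left_gray                         # red   <- left image
        rgb[:, :, 1] = shifted                           # green <- right image
        rgb[:, :, 2] = shifted                           # blue  <- right image
        return rgb

    # Shift determination as described above (hypothetical measurements):
    col_left, col_right, resolution = 1250, 1170, 0.015  # resolution in mm per pixel
    shift_px = col_left - col_right                      # 80 pixels
    shift_mm = shift_px * resolution                     # 1.2 mm, in photo coordinate units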
Class
Model
Usage
{Image_rgb} ret = object.ANAGLYPH({double} argm1)
argm1 = amount of shift, in photo coordinate unit, of the right image
(positive = shift to the right, negative = shift to the left)
Example:
->Img_color = M.anaglyph(80) -> |
See also (class function)
angphtif
ANGPHTIF
Purpose
This function works almost exactly the same as the function "anaglyph", except that, rather than creating an Image_rgb object, it stores the resulting anaglyph image in TIF format in a file specified by the user.
Class
Model
Usage
{void} object.ANGPHTIF({int} argm1, {String} argm2)
argm1 = amount of shift, in photo coordinate unit, of the right image
(positive = shift to the right, negative = shift to the left)
argm2 = TIF image file name
Example:
->M.angphtif(80, "my_3D_image") -> |
See also (class function)
anaglyph
AO
Purpose
To perform Absolute Orientation (AO) using a 3D conformal transformation. The function requires at least 3 ground control points whose x y z coordinates are known in both the model coordinate system and the ground coordinate system. See details in Understanding Coordinate System.
In most circumstances, it is necessary to perform Relative Orientation (RO) prior to AO. RO parameters are used to determine the model coordinates of corresponding points on the left and right photo.
The resulting Absolute Orientation Parameters (AOP) are used to update AO_para, a vector containing the AOP of the calling model.
Moreover, an ASCII text file is automatically generated after the AO is done. Also, a matrix containing the AOP and all related information is returned to the user. This matrix is actually the source of the information in the report. By comparing the returned matrix to the report, the user can easily figure out which part of the matrix holds which output information.
The computation of this function does not involve image data; therefore, it is possible to load the Model in virtual mode, using function "vload". This can reduce computing time tremendously, especially when the left and right photos are very large.
Unlike the RO function, which automatically updates the EOPs of the left and right photo, the AO function comes with a No Update flag, the last argument, which forces the function not to update the EOPs.
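The 3D conformal transformation behind AO maps model coordinates to ground coordinates with seven parameters: one scale factor, three rotation angles and three translations. The Python sketch below only applies such a transformation; the least-squares estimation of the seven parameters from the control points, which is what AO actually performs, is not shown, and the omega-phi-kappa rotation convention is an assumption.

    import numpy as np

    def rot(omega, phi, kappa):
        """Rotation matrix from three angles in radians (omega-phi-kappa order)."""
        co, so = np.cos(omega), np.sin(omega)
        cp, sp = np.cos(phi), np.sin(phi)
        ck, sk = np.cos(kappa), np.sin(kappa)
        Rx = np.array([[1, 0, 0], [0, co, so], [0, -so, co]])
        Ry = np.array([[cp, 0, -sp], [0, 1, 0], [sp, 0, cp]])
        Rz = np.array([[ck, sk, 0], [-sk, ck, 0], [0, 0, 1]])
        return Rz @ Ry @ Rx

    def conformal_3d(model_xyz, scale, omega, phi, kappa, tx, ty, tz):
        """Apply the 7-parameter similarity transformation: ground = s * R * model + t."""
        return scale * rot(omega, phi, kappa) @ np.asarray(model_xyz, float) + np.array([tx, ty, tz])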
Class
Model
Usage
{Matrix} ret = object.AO({VecIdPt3D} argm1, {VecIdPt3D} argm2, [{String} argm3, {int} argm4])
argm1 = list of model coordinates with ID
argm2 = list of ground control coordinates with ID
argm3 = output file name for the report (default = "RPT_AO.txt")
argm4 = No Update flag for the EOPs; 0 = update, any other value = do not update (default = 0)
Example:
->Mtmp = mod.AO(vec_model,
vec_gcp, "my_AO_report") -> |
See also (class function)
EO, RO
AO_PARA
Purpose
To report the current Absolute Orientation Parameters of the model.
Class
Model
Usage
{Vector} ret = object.AO_PARA()
Example:
->vec_para = mod.ao_para() -> |
See also (class function)
ro_para
BASE
Purpose
To report the length of the base of the model. The model base vector consists of 3 components in the x y z directions, namely bx, by and bz.
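As a minimal Python sketch, the reported length is simply the magnitude of the base vector. The component values are hypothetical; note that they would equal the model coordinates of the right projection center, since the left projection center is the model origin (see function "RO").

    import math

    bx, by, bz = 2.344, -0.200, -0.045       # hypothetical base components
    base = math.sqrt(bx**2 + by**2 + bz**2)  # about 2.353, in model coordinate units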
Class
Model
Usage
{double} ret = object.BASE()
Example:
->base = mod.base() -> |
See also (class function)
ro_para, ao_para
BX
Purpose
To report the current value of bx of the model. Bx is one of the three base components; it can be set by the user, or a default value is assigned when a Model object is created. The other two, by and bz, are dependent parameters and are determined when performing RO.
Class
Model
Usage
{double} ret = object.BX()
Example:
->base = mod.bx() -> |
See also (class function)
ro_para, ao_para
DEM
Purpose
To automatically generate a Digital Elevation Model (DEM) from a stereo-model.
This function calls the function "matching" of the matching class, where area-based matching is performed.
This function returns points that lie on the terrain surface in object space, in 3D. If the model has already established an exterior orientation, the results are spot heights of a DEM; otherwise they are in the model coordinate system.
The model must be a normalized model, that is, each photo is resampled to the epipolar geometry, and the two photos must have the same number of rows.
Several critical parameters are needed by the function, for example the window size, search distance, interval step, etc. They must be expressed in the unit of the photo coordinate system, e.g. millimeters.
In general, the area-based method works fine in non-urban areas. In urban areas, this function can have trouble if the photo scale is large, because of high relief displacement and image occlusion.
The search distance is a function of the height variation of the terrain. If set too small, matched points might not be found. If set too large, computation time becomes excessive. The relation between a change in Px (x-parallax) and a change in Z (elevation) is given below.
dPx = (B c / Z^2) dZ
where dPx = change in x-parallax due to a change in elevation (dZ)
B = air base
c = focal length
Z = flying height
For example, with c = 150 mm, 60% overlap, a photo scale of 1:3750 and dZ = 30 meters, the approximate dPx is about 5 millimeters.
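A short numeric check of this example is given below (Python). The 23 cm frame format used to derive the air base from the 60% overlap is an assumption, since it is not stated in the text.

    c     = 0.150                        # focal length, m
    scale = 1.0 / 3750.0                 # photo scale
    Z     = c / scale                    # flying height above ground: 562.5 m
    B     = (1.0 - 0.60) * 0.23 / scale  # air base for 60% overlap of a 23 cm frame: 345 m
    dZ    = 30.0                         # elevation change, m
    dPx   = B * c / Z**2 * dZ            # about 0.0049 m, i.e. roughly 5 mm of x-parallax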
When searching for a corresponding point on the right photo, the function first sets the center of the window at an approximate location, then searches to the left and to the right by the amount given by the search distance.
The window size depends very much on the detail appearing in the individual scene. In general, it should be big enough to include unique features, or texture, so that the correlation function can work reliably. A bigger window is more reliable but may lose localization quality.
The interval step is the interval on the left photo at which the center of a window is placed and the corresponding point on the right photo is searched for. It is basically the distance between DEM grid points, expressed, however, in image space.
A minimum correlation value should be given, or the function will use a default value (0.6), to discard wrongly matched points. Points whose maximum correlation coefficient (a value between -1 and 1) is less than the given minimum are considered wrong matches. They are then assigned the value "na", not applicable, which is set by the command "set na".
If requested, a file containing the correlation results of all matched points can be stored for later investigation. It is a Matrix object of the same size as the matrix of the left photo.
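The core of the area-based match can be pictured with the Python/NumPy sketch below (not Noobeed's internal code): a window around a left-photo pixel is compared, along the same row of the right photo, with candidate windows inside the search distance, and the candidate with the highest normalized correlation coefficient is accepted, provided it exceeds the minimum value. Window size and search distance are in pixels here, whereas the DEM function takes them in photo coordinate units.

    import numpy as np

    def ncc(a, b):
        """Normalized cross-correlation coefficient of two equal-size windows (-1 to 1)."""
        a = a - a.mean()
        b = b - b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        return float((a * b).sum() / denom) if denom > 0 else 0.0

    def match_point(left, right, row, col, half_win, search, min_cor=0.6, na=-9999.0):
        """Find the right-photo column (same row) that best matches (row, col) on the left."""
        tmpl = left[row - half_win:row + half_win + 1,
                    col - half_win:col + half_win + 1].astype(float)
        best_cor, best_col = -1.0, na
        for dc in range(-search, search + 1):
            c0 = col + dc
            cand = right[row - half_win:row + half_win + 1,
                         c0 - half_win:c0 + half_win + 1].astype(float)
            if cand.shape != tmpl.shape:      # candidate window falls off the image
                continue
            r = ncc(tmpl, cand)
            if r > best_cor:
                best_cor, best_col = r, c0
        if best_cor < min_cor:                # weak matches rejected, like the "na" assignment
            return na, best_cor
        return best_col, best_cor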
Class
Model
Usage
{VecPt3D} ret = object.DEM({double} argm1, {double} argm2, {double} argm3, [{double} argm4, {String} argm5])
argm1 = interval step (in photo coordinate unit)
argm2 = window size (in photo coordinate unit)
argm3 = search distance (in photo coordinate unit)
argm4 = minimum correlation coefficient value (default = 0.6)
argm5 = file name to store correlation results
(If not given, there will be no file created)
Example:
->resolution = 0.015 ->Vec_DEM_point = M.dem(step, winsize, search, min_cor, "Correlation_result") -> |
See also (class function)
normalize
EO
Purpose
To perform Exterior Orientation (EO) for a Model, using bundle adjustment. The function requires at least 3 ground control points, with x y z coordinates, and their corresponding photo coordinates on the left and right photo. In fact, the coordinates measured on the photographs can be input as row and column numbers, x y image rectangular coordinates, or x y photo coordinates. See details in Understanding Coordinate System.
The function assumes that row and column numbers given as input data are measured on the photos of the calling model. If not, they must be converted to x y image rectangular coordinates or x y photo coordinates. Sometimes the user might measure row and column numbers of points on the original full-size image, while the model contains only portions, sub-images, of the original photos.
In most circumstances, it is necessary to perform Interior Orientation (IO) of each photo in the model prior to EO. IO parameters are used to transform row and column numbers or x y image rectangular coordinates to x y photo coordinates.
The resulting Exterior Orientation Parameters (EOP) are used to update the EOPs of the left and the right photo of the model.
Moreover, an ASCII text file is automatically generated after the EO is done. Also, a matrix containing the EOPs and all related information is returned to the user. This matrix is actually the source of the information in the report. By comparing the returned matrix to the report, the user can easily figure out which part of the matrix holds which output information.
The computation of this function does not involve image data; therefore, it is possible to load the photos in virtual mode, using function "vload". This can reduce computing time tremendously, especially when the photos are very large.
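The bundle adjustment is built on the collinearity condition, which maps a ground point through a photo's EOPs and focal length to photo coordinates. A minimal Python sketch of evaluating that condition for one point is given below (not Noobeed code); the omega-phi-kappa rotation convention and the principal-point offsets x0, y0 are assumptions, and the least-squares solution of the EOPs themselves is not shown.

    import numpy as np

    def rot(omega, phi, kappa):
        """Ground-to-photo rotation matrix, omega-phi-kappa order (angles in radians)."""
        co, so = np.cos(omega), np.sin(omega)
        cp, sp = np.cos(phi), np.sin(phi)
        ck, sk = np.cos(kappa), np.sin(kappa)
        Rx = np.array([[1, 0, 0], [0, co, so], [0, -so, co]])
        Ry = np.array([[cp, 0, -sp], [0, 1, 0], [sp, 0, cp]])
        Rz = np.array([[ck, sk, 0], [-sk, ck, 0], [0, 0, 1]])
        return Rz @ Ry @ Rx

    def collinearity(ground_pt, eop, c, x0=0.0, y0=0.0):
        """Photo coordinates (x, y) of ground point (X, Y, Z) for one photo's EOPs."""
        X, Y, Z = ground_pt
        Xc, Yc, Zc, omega, phi, kappa = eop   # projection center and rotation angles
        u, v, w = rot(omega, phi, kappa) @ np.array([X - Xc, Y - Yc, Z - Zc])
        return x0 - c * u / w, y0 - c * v / w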
Class
Model
Usage
{Matrix} ret = object.EO({VecIdPt2D} argm1, {VecIdPt2D} argm2, {VecIdPt3D} argm3, [{String} argm4, {int} argm5])
argm1 = list of coordinates, measured on the left photo
argm2 = list of coordinates, measured on the right photo
argm3 = list of ground control coordinates
argm4 = output file name for the report (default = "RPT_EO.txt")
argm5 = type of the measured coordinates on the photo,
must be 0 (row and column number), 1 (image rectangular coordinate), or 2 (photo coordinate)
(default = 0 (row and column number))
Example:
->Mtmp = mod.EO(vec_left, vec_right,
vec_gcp, "my_EO_report", 0) -> |
See also (class function)
AO, RO
GCP2XMYMZM
Purpose
To convert from a ground control point coordinate, x y z, to a model coordinate.
Class
Model
Usage
{Pt3D} ret = object.GCP2XMYMZM({double} argm1, {double} argm2, {double} argm3)
argm1 = x ground control coordinate
argm2 = y ground control coordinate
argm3 = z ground control coordinate
Example:
->pt_model = mod.gcp2xmymzm(600, 700, 100) -> |
See also (class function)
xmymzm2gcp
ID
Purpose
To report an ID of the model.
Class
Model
Usage
{int} ret = object.ID()
Example:
->id = mod.id() -> |
See also (class function)
INIT
Purpose
To initialize a Model object.
Class
Model
Usage
{void} object.INIT({Photo} argm1, {Photo} argm2)
argm1 = left photo
argm2 = right photo
Example:
->mod = Model() ->mod.init(ph1, ph2) -> |
See also (class function)
LOAD
Purpose
To load a Model object saved in Noobeed format. Unlike Matrix, Image, or Photo, only one file is needed, and it is a documentation file that basically tells the model where on the disk the left and the right Photo objects are. The documentation file is an ASCII file. The files for the left and right Photo objects must exist so that the model can load its data successfully. Therefore, in all, five files are needed to load a model: one documentation file and four files for the two Photo objects, two per photo.
If the file name extension is omitted, the function adds a default extension to the file. The default extension of the documentation file is ".txt" and that of a data file is ".raw".
Unless a path name is given in the file names, the function searches for the files in the current working directory, defined by the command "set path".
The detailed structure of the documentation file is described in function "save".
Class
Model
Usage
{void} object.LOAD({String} argm1)
argm1 = file name for the documentation file (default extension is ".txt")
Example:
->M = model() ->M.load("my_model") -> |
See also (class function)
save
LPHOTO
Purpose
To return a Photo object of the left photo of the model.
Class
Model
Usage
{Photo} ret = object.LPHOTO()
Example:
->ph1 = mod.lphoto() -> |
See also (class function)
rphoto
NAME
Purpose
To return the name of the model.
Class
Model
Usage
{String} ret = object.NAME()
Example:
->model_name = mod.name() -> |
See also (class function)
id
NORMALIZE
Purpose
To create a normalized model. A normalized model consists of two photos, the left and the right, both of which are resampled so that corresponding points on the left and the right photo lie on the same row. In other words, a point on the left photo will be at the same row number as its corresponding point on the right photo. This is the so-called epipolar geometry, and it is very important in generating a 3D stereogram for stereoscopic viewing. Another benefit is in the area of matching, for surface reconstruction, where the search space is restricted to a 1D search rather than a 2D search.
We can think of the normalized image as a NEW image of the object-space scene taken by a NEW ideal camera that has no distortion and whose PP is exactly at coordinate 0,0. Therefore, a new camera will be assigned to the normalized images.
The new image will have its own photo coordinate system, the normalized coordinate system, which is embedded in the new image via the LL and UR point coordinates.
The remaining problem is how to link the normalized coordinates to ground coordinates. This is done by modifying the EOP of the original image so that they can be used with the normalized coordinates. Inside Noobeed, the new EOP of the normalized image are calculated and stored in the new image.
Only in the ORIGINAL Model environment can the normalized coordinates be traced back and forth to the photo coordinates of the original camera. Once the normalized image is saved and reloaded as an individual photo, this relation information is lost.
Class
Model
Usage
{Model} ret = object.NORMALIZE({String} argm1)
argm1 = resample method, must be "nearest" or "bilinear" or "bicubic"
(default = "nearest")
Example:
->model_new =
mod.normalize("bicubic") -> |
See also (class function)
norm_left, norm_right
NORM_LEFT
Purpose
To create a normalized photo of the left photo of the model.
This function is in fact a subset of function "normalize"; hence it only generates the normalized photo of the left image.
Class
Model
Usage
{Photo} ret = object.NORM_LEFT({String} argm1)
argm1 = resample method, must be "nearest" or "bilinear" or "bicubic"
(default = "nearest")
Example:
->ph_left_new =
mod.norm_left("bicubic") -> |
See also (class function)
normalize, norm_right
NORM_RIGHT
Purpose
To create a normalized photo of the right photo of the model.
This function is in fact a subset of function "normalize"; hence it only generates the normalized photo of the right image.
Class
Model
Usage
{Photo} ret = object.NORM_RIGHT({String} argm1)
argm1 = resample method, must be "nearest" or "bilinear" or "bicubic"
(default = "nearest")
Example:
->ph_right_new =
mod.norm_right("bicubic") -> |
See also (class function)
normalize, norm_left
RC2GCP
Purpose
To convert row and column numbers of a point on the left photo and of its corresponding point on the right photo to an x y z ground control coordinate. Please note that the arguments of the function are of type double, which allows sub-pixel row and column numbers as input.
This function requires that both photos in the model have EOPs. The computation algorithm uses the least-squares adjustment technique.
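The least-squares step can be pictured as intersecting the two image rays in object space. The Python sketch below shows only that general idea; Noobeed's actual adjustment may differ in detail, and the rays are assumed to have been derived already from the measured row and column numbers through each photo's IO and EOPs.

    import numpy as np

    def intersect_rays(c1, d1, c2, d2):
        """Least-squares intersection of rays p = c1 + t1*d1 and p = c2 + t2*d2."""
        c1, d1, c2, d2 = (np.asarray(v, dtype=float) for v in (c1, d1, c2, d2))
        A = np.column_stack((d1, -d2))                  # solve [d1 -d2] t = c2 - c1
        t, *_ = np.linalg.lstsq(A, c2 - c1, rcond=None)
        p1 = c1 + t[0] * d1                             # closest point on the left ray
        p2 = c2 + t[1] * d2                             # closest point on the right ray
        return (p1 + p2) / 2.0                          # midpoint = estimated ground point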
Class
Model
Usage
{Pt3D} ret = object.RC2GCP({double} argm1, {double} argm2, {double} argm3, {double} argm4)
argm1 = row number on the left photo (in pixel unit)
argm2 = column number on the left photo (in pixel unit)
argm3 = row number on the right photo (in pixel unit)
argm4 = column number on the right photo (in pixel unit)
Example:
->pt_xyz = mod.rc2gcp(12.544,
22.986, 90.256, 23.004) -> |
See also (class function)
gcp2rc (class function of Photo)
RC2XMYMZM
Purpose
To convert row and column numbers of a point on the left photo and of its corresponding point on the right photo to an x y z model coordinate. Please note that the arguments of the function are of type double, which allows sub-pixel row and column numbers as input.
This function requires that the RO of the model has been done. The computation algorithm uses the least-squares adjustment technique.
Class
Model
Usage
{Pt3D} ret = object.RC2XMYMZM({double} argm1, {double} argm2, {double} argm3, {double} argm4)
argm1 = row number on the left photo (in pixel unit)
argm2 = column number on the left photo (in pixel unit)
argm3 = row number on the right photo (in pixel unit)
argm4 = column number on the right photo (in pixel unit)
Example:
->pt_xyz =
mod.rc2xmymzm(12.544, 22.986, 90.256, 23.004) -> |
See also (class function)
gcp2rc (class function of Photo) - a model coordinate can be thought of as a type of ground control coordinate
RO
Purpose
To perform Relative Orientation (RO) for a Model, using bundle adjustment. This function is very similar to function "EO", except that it does not require ground control points. The function requires that at least 5 tie points, points appearing on both images, be measured. In fact, the coordinates measured on the photographs can be input as row and column numbers, x y image rectangular coordinates, or x y photo coordinates. See details in Understanding Coordinate System.
The function assumes that row and column numbers given as input data are measured on the photos of the calling model. If not, they must be converted to x y image rectangular coordinates or x y photo coordinates. Sometimes the user might measure row and column numbers of points on the original full-size image, while the model contains only portions, sub-images, of the original photos.
In most circumstances, it is necessary to perform Interior Orientation (IO) of each photo in the model prior to RO. IO parameters are used to transform row and column numbers or x y image rectangular coordinates to x y photo coordinates.
The RO function assumes that the origin of the model coordinate system is at the left projection center, with coordinates 0,0,0. The left orientation angles, namely omega, phi and kappa, are kept at zero. Lastly, the bx component of the model base is assigned the value specified in the model definition; otherwise a default value of bx is used. Together, 7 parameters are held fixed, which leaves 5 parameters to be solved by the RO function.
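In other words, of the twelve EOPs of the two photos, seven are fixed by this datum choice and five remain to be solved, as summarized in the sketch below (illustration only, hypothetical names):

    fixed = {
        "XL": 0.0, "YL": 0.0, "ZL": 0.0,                  # left projection center = model origin
        "omega_L": 0.0, "phi_L": 0.0, "kappa_L": 0.0,     # left rotation angles held at zero
        "bx": "user-specified or default",                # x-component of the model base
    }
    solved = ["by", "bz", "omega_R", "phi_R", "kappa_R"]  # the five RO unknowns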
The resulting Relative Orientation Parameters (ROP) are used to update the EOPs of the left and the right photo of the model.
Moreover, an ASCII text file is automatically generated after the RO is done. Also, a matrix containing the ROP and all related information is returned to the user. This matrix is actually the source of the information in the report. By comparing the returned matrix to the report, the user can easily figure out which part of the matrix holds which output information.
The computation of this function does not involve image data; therefore, it is possible to load the photos in virtual mode, using function "vload". This can reduce computing time tremendously, especially when the photos are very large.
Please be informed that this function resets all the previous EOP values of the left and right photos and replaces them with those from the RO calculation.
At the end of the RO calculation, Noobeed prints the adjusted model coordinates, together with the two projection centers, to the file named by the "set Fout" command. These data are appended to the existing data in the file each time RO is done. The first line of the data is the model ID, and the IDs of the two projection centers are the IDs of the left and the right photo of the stereo model. If independent model AT is required, these data can be fed directly as input to AT processing. An example of model coordinate data is given below. Please note that the first two entries are the model coordinates of the left and right projection centers.
101102
101    0.000   0.000   0.000
102    2.344  -0.200  -0.045
8     -3.498  -3.153 -10.984
9      6.889  -3.683 -11.206
16     8.172   6.765 -11.324
10201 -2.722   4.937  -7.640
10202  1.058   4.658  -7.719
10203  6.454  10.659 -11.313
10204  3.908  10.570 -11.231
10205 -2.960   2.062  -7.640
10206  0.759   0.878  -7.689
10207  6.113   3.121 -11.225
10209 -4.512  -3.848 -10.957
10210 -4.567  -9.040 -10.932
10211 -0.597  -4.428 -11.045
10212  2.045  -9.854 -11.061
10215  6.419   6.910 -11.299
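If these blocks need to be read back, for example to prepare independent model AT input, a minimal reader might look like the Python sketch below. It assumes the one-line-per-point "ID X Y Z" layout shown above; the exact layout written to the "set Fout" file should be verified against real output.

    def read_model_block(lines):
        """Parse one block: first line = model ID, then one 'ID X Y Z' line per point."""
        model_id = lines[0].split()[0]
        points = {}
        for line in lines[1:]:
            fields = line.split()
            if len(fields) == 4:
                pid = fields[0]
                x, y, z = (float(v) for v in fields[1:])
                points[pid] = (x, y, z)
        return model_id, points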
Class
Model
Usage
{Matrix} ret = object.RO({VecIdPt2D} argm1, {VecIdPt2D} argm2, [{int} argm3, {String} argm4, {int} argm5])
argm1 = list of coordinates of tie points with ID, measured on the left photo
argm2 = list of coordinates of tie points with ID, measured on the right photo
argm3 = flag flightline direction, with respect to the left image (default = 0)
0 = flightline points toward the right side of the image
1 = flightline points toward the upper side of the image
2 = flightline points toward the left side of the image
3 = flightline points toward the lower side of the image
argm4 = output file name for the report (default = "RPT_RO.txt")
argm5 = type of the measured coordinates on the photo,
must be 0 (row and column number), 1 (image rectangular coordinate), or 2 (photo coordinate)
(default = 0 (row and column number))
Example:
->Mtmp = mod.RO(vec_left,
vec_right, 3, "my_RO_report", 0) -> |
See also (class function)
AO, EO
RO_PARA
Purpose
To report the current Relative Orientation Parameters of the model.
Class
Model
Usage
{Vector} ret = object.RO_PARA()
Example:
->vec_para = mod.ro_para() -> |
See also (class function)
ao_para
RPHOTO
Purpose
To return a Photo object of the right photo of the model.
Class
Model
Usage
{Photo} ret = object.RPHOTO()
Example:
->ph1 = mod.rphoto() -> |
See also (class function)
lphoto
SAVE
Purpose
To save in Noobeed format. The function creates five files: one for documentation, two for the left photo and two for the right photo.
The documentation file will have an extension ".txt". It is actually an ASCII file and must have exactly 8 lines (not including comment lines), as below.
(example documentation file listing not reproduced here)
In the example above, the file is saved under the name "temp". Noobeed uses the given filename and adds the letter "L" or "R" in front of it to save the left and the right photo of the model. Apparently, this creates redundant data if the data of the left and the right photo already exist on disk. An alternative is to save only the documentation file of the model, using function "savedoc", and then edit the documentation file to point to the existing photo files. The user may also need to edit the EOP values in the existing photo documentation files.
If a file name extension is not given, the function assumes an extension of ".txt".
If a path name is not included in the specified file name, the function saves to the current data path, set by command "set path".
Class
Model
Usage
{void} object.SAVE({String} argm1)
argm1 = documentation file name for model
Example:
->mod.save("temp") -> |
See also (class function)
load
SAVEDOC
Purpose
To save only the information of a Model to a documentation file. The documentation file will have an extension ".txt". It is actually an ASCII file and must have exactly 8 lines; see details in function "save".
Class
Model
Usage
{void} object.SAVEDOC({String} argm1)
argm1 = output file name (default extension = ".txt")
Example:
->mod.savedoc("temp") -> |
See also (class function)
save