# Fontaine tutorial


## Description

In this tutorial we go further into command details, processing, results, products, etc. You will learn how to process a dataset of circular viewpoints around an object. It is also a good way to compare the old Malt pipeline with the new PIMs pipeline, especially for image-based geometry.

You can find this dataset at http://micmac.ensg.eu/data/fontaine_dataset.zip
Once you have downloaded it, unzip the ".zip" archive.

## Presentation

The folder contains 30 JPG files. No separate information about the camera is provided with the dataset: the 30 images were all taken with a Canon 70D camera and an 18 mm lens, and the camera stores metadata (EXIF data) in every picture. If you look at the properties of each picture, you will find the 18 mm lens mentioned, along with the settings chosen when the dataset was shot (aperture, exposure time, ...).
There are 5 parts in this dataset:

• Four parts contain four different points of view of the fountain. Each part was shot as a cross: one master image at the center and 4 images around it (above, below, left and right of the master image).
• One part contains the remaining images (IMG.*JPG). These are link images: they will allow the 4 other parts to be tied together, and they won't be used to generate the dense point clouds.

We will process this dataset with image geometry: we choose one master image, and a depth map is computed from it. One master image cannot cover the whole object over 360 degrees, so to model the 3D object we need to compute several depth maps in image geometry in order to reconstruct the whole fountain. In this exercise we will compute 4 depth maps: one per part.
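The image-geometry idea above (a depth map expressed in the master image's frame) can be sketched as a simple pinhole back-projection. This is only an illustration: the focal length, principal point and depth values below are made-up numbers, not taken from this dataset or from MicMac's internals.

```python
# Sketch: back-project a depth map expressed in the master image's
# geometry into 3D points in the camera frame (simple pinhole model).
# f, cx, cy and the depths are made-up illustration values.

def backproject(depth_map, f, cx, cy):
    """Turn per-pixel depths into 3D points in the camera frame."""
    points = []
    for v, row in enumerate(depth_map):
        for u, z in enumerate(row):
            if z is None:          # masked pixel: no depth available
                continue
            x = (u - cx) * z / f   # pinhole model: X = (u - cx) * Z / f
            y = (v - cy) * z / f
            points.append((x, y, z))
    return points

depth = [[2.0, 2.0], [None, 3.0]]   # tiny 2x2 depth map, one masked pixel
pts = backproject(depth, f=1.0, cx=0.5, cy=0.5)
print(len(pts))  # 3 valid pixels become 3 points
```

One depth map like this only covers what the master image sees, which is why the tutorial computes four of them.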

## Tutorial

### 1. Tie-points search

The tie-point search should be run on all the images at once, so that the 4 parts are linked with each other. The orientation will be in an arbitrary system, but the same system will be kept for all the images. The 4 depth maps (as well as the 4 point clouds) will therefore be in the same coordinate system.

First, we need to run the tie-points search :

mm3d Tapioca MulScale ".*JPG" 500 2500

The MulScale mode first searches for tie points on sub-sampled images: here, 500 pixels on the largest side instead of the 5472 pixels of the original images. This reveals which images share tie points, so the tie-point search at the higher resolution (here 2500) only runs on the relevant sets of images.
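The two-pass strategy MulScale uses can be sketched as follows. The matcher here is a stand-in placeholder, not MicMac's actual code; only the pair-selection logic is illustrated.

```python
# Sketch of the MulScale idea: a cheap pass on sub-sampled images decides
# which image pairs share tie points, and only those pairs are matched
# again at the higher working resolution.

def mulscale(low_res_matches, high_res_match):
    """low_res_matches: {(img_a, img_b): n_tie_points} from the cheap pass."""
    selected = [pair for pair, n in low_res_matches.items() if n > 0]
    # The expensive pass runs only on the pairs that showed tie points.
    return {pair: high_res_match(*pair) for pair in selected}

cheap = {("A.JPG", "B.JPG"): 120, ("A.JPG", "C.JPG"): 0, ("B.JPG", "C.JPG"): 45}
fine = mulscale(cheap, lambda a, b: f"{a}-{b} matched at 2500 px")
print(sorted(fine))  # the pair with no low-res tie points is skipped
```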

### 2. Internal Orientation+Relative Orientation

We now want to determine the positions of the cameras relative to each other, as well as the calibration of the camera used:

mm3d Tapas RadialStd ".*JPG" Out=Fontaine

In Tapas we have to choose a calibration model. RadialStd is the model generally used for standard cameras. The Out option sets the export name (here Fontaine). The calibration is determined directly from the images that will be used for the 3D reconstruction. In some cases it can be worth shooting a different scene with more depth and texture, computing the camera calibration from those pictures, and passing it as an input (with the InCal option) to the Tapas run on the object's images. In the command prompt we can monitor the residuals as the computation proceeds. At the last step we can see that the image residuals are below half a pixel for all the images. We should also check the number of tie points and the percentage of points kept ("99.8258 of 38466": 99.8% of the 38466 computed tie points were kept).
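The two sanity checks described above can be expressed numerically. All the figures below are made-up illustrations (only the "99.8258 of 38466" pair comes from the tutorial's example run):

```python
import math

# Sketch of the two health checks on a bundle adjustment: the RMS
# reprojection residual should stay under half a pixel, and the share
# of kept tie points should be high.

def rms(residuals):
    """Root-mean-square of per-point reprojection errors, in pixels."""
    return math.sqrt(sum(r * r for r in residuals) / len(residuals))

per_point_errors = [0.3, 0.5, 0.2, 0.4]    # made-up pixel errors
print(rms(per_point_errors))                # ~0.37 px, below the 0.5 px target

kept, total = 38399, 38466                  # like "99.8258 of 38466"
print(round(100 * kept / total, 4))         # percentage of tie points kept
```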

### 3. Visualize Relative Orientation


The AperiCloud command generates a 3D point cloud containing all the tie points obtained with Tapioca, together with the camera positions from the orientation computed by Tapas.

mm3d AperiCloud ".*JPG" Fontaine

The result of this command can be viewed, for example, with the Meshlab software. We can see that the 4 parts of 5 images around the fountain are connected to each other thanks to the link images.

### 4. 3D Reconstruction of one part


Now we will work on one part at a time. The link images won't be used any more: their role was to set all the parts in the same system. We start with the first part. Its master image is AIMG_2470.JPG, on which we need to draw a mask to limit the correlation zone.

mm3d SaisieMasqQT AIMG_2470.JPG

It can be useful to check that everything was saved by verifying that the file AIMG_2470_Masq.xml was created. The content of AIMG_2470_Masq.tif (a binary image) can also be checked.

We can now calculate the dense correlation with the image geometry pattern :

mm3d Malt GeomImage "A.*JPG" Fontaine Master=AIMG_2470.JPG ZoomF=2

We give an image pattern that contains the master image as well as the secondary images, and we set the master image with the Master option. The ZoomF option defines the last level of the image pyramid that will be used: the computation is not done on full-resolution images. Malt starts matching on heavily sub-sampled images, then increases the image size until it reaches the level-2 sub-sampling.
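The coarse-to-fine schedule implied by ZoomF=2 can be sketched as a sequence of de-zoom factors that halve at each step and stop at ZoomF instead of reaching full resolution. The starting level here is a guess for illustration, not Malt's actual default:

```python
# Sketch of the matching schedule implied by ZoomF: the de-zoom factor
# halves at each pyramid step and stops at ZoomF (1 would be full
# resolution). The starting factor of 32 is an assumed example value.

def dezoom_schedule(start=32, zoom_f=2):
    levels, z = [], start
    while z >= zoom_f:
        levels.append(z)
        z //= 2
    return levels

print(dezoom_schedule())  # [32, 16, 8, 4, 2]: full resolution (1) is never reached
```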

While the process runs, you can check that the correlation is working by looking at the files MM-Malt-Img-AIMG_2470/Correl_STD-MALT_Num_#.tif; each file corresponds to one level of the image pyramid. These files contain correlation scores: white means a very good matching score, and the darker the gray, the worse the matching went.
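The score behind those gray levels is a correlation between image patches. A minimal sketch, assuming a normalized cross-correlation mapped linearly to gray values (the mapping constants are illustrative, not MicMac's exact convention):

```python
import math

# Sketch: normalized cross-correlation between two patches, mapped to a
# gray value read the way the Correl_* images are: white (255) = perfect
# match, black (0) = worst. The [-1, 1] -> [0, 255] mapping is assumed.

def ncc(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

def to_gray(score):
    return round(255 * (score + 1) / 2)   # correlation in [-1, 1] -> [0, 255]

patch = [10, 20, 30, 40]
print(to_gray(ncc(patch, patch)))        # 255: identical patches, white
print(to_gray(ncc(patch, patch[::-1])))  # 0: anti-correlated patches, black
```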

### 5. Visualize 3D products results

• From the depth map previously computed, we can generate more 3D products, after moving into the Malt output folder:
cd MM-Malt-Img-AIMG_2470/
• Create a shaded relief image:
mm3d GrShade Z_Num7_DeZoom2_STD-MALT.tif ModeOmbre=IgnE Mask=AutoMask_STD-MALT_Num_6.tif
• Create a hypsometric colors image:
mm3d to8Bits Z_Num7_DeZoom2_STD-MALT.tif Circ=1 Coul=1 Mask=AutoMask_STD-MALT_Num_6.tif

The file Z_Num7_DeZoom2_STD-MALT_8Bits.tif can be opened in an image viewer.

• Create a 3D points cloud :
mm3d Nuage2Ply NuageImProf_STD-MALT_Etape_7.xml Attr=../AIMG_2470.JPG RatioAttrCarte=2

The Nuage2Ply command takes the depth map as an input and converts it into a point cloud. The cloud is colorized with the image given in the Attr option. The setting RatioAttrCarte=2 indicates that our point cloud is at half the resolution of the original image, because of the ZoomF=2 option. The .ply file generated in the folder MM-Malt-Img-AIMG_2470 can be viewed in Meshlab, for example.
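What Nuage2Ply produces is essentially an ASCII PLY file of colored points. The toy sketch below writes such a file and shows the RatioAttrCarte=2 idea: the attribute image is sampled at twice the depth-map coordinates, since the cloud is at half the image resolution. Geometry and colors are made-up values.

```python
# Sketch: write a minimal ASCII PLY of colored points, and pick colors
# from a full-resolution attribute image for a half-resolution cloud
# (the RatioAttrCarte=2 idea). All values are toy data.

def write_ply(path, points, colors):
    with open(path, "w") as f:
        f.write("ply\nformat ascii 1.0\n")
        f.write(f"element vertex {len(points)}\n")
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write("property uchar red\nproperty uchar green\nproperty uchar blue\n")
        f.write("end_header\n")
        for (x, y, z), (r, g, b) in zip(points, colors):
            f.write(f"{x} {y} {z} {r} {g} {b}\n")

ratio = 2                                        # like RatioAttrCarte=2
image = [[(255, 0, 0)] * 4 for _ in range(4)]    # 4x4 all-red attribute image
cloud_px = [(0, 0), (1, 1)]                      # pixels of the half-res depth map
colors = [image[v * ratio][u * ratio] for (u, v) in cloud_px]
write_ply("toy.ply", [(0.0, 0.0, 2.0), (0.1, 0.1, 2.1)], colors)
```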

###### Go further: cleaning point clouds
After applying a mask to your image, the cloud can be more or less rough at the edges. A first solution is to clean the cloud directly in Meshlab or CloudCompare with their editing tools. Another solution is to draw a new mask on the shaded relief image, where the rough zones are visible:
mm3d SaisieMasqQT MM-Malt-Img-AIMG_2470/Z_Num7_DeZoom2_STD-MALTShade.tif Out=MasqCorrelA.tif

Then, use this mask when converting the depth map into points cloud :

mm3d Nuage2Ply MM-Malt-Img-AIMG_2470/NuageImProf_STD-MALT_Etape_7.xml Attr=AIMG_2470.JPG RatioAttrCarte=2 Mask=MasqCorrelA.tif

### 6. Automatic method


We will work here with the same dataset of 30 images, taken with the Canon 70D and an 18 mm lens. For this exercise we make no distinction between the points of view. The set-up computation is the same as in the previous exercise: we reuse the Ori-Fontaine/ results from the Tapas command, as well as the AperiCloud_Fontaine.ply file produced by the AperiCloud command we ran before. The reconstruction mode we will use is still based on image geometry, but MicMac will itself choose the master images and their associated images, from a 3D mask that we draw on the point cloud AperiCloud_Fontaine.ply.

We will start to draw a 3D mask, from the AperiCloud_Fontaine.ply file.

mm3d SaisieMasqQT AperiCloud_Fontaine.ply

Then, we can run the 3D reconstruction with the C3DC command :

mm3d C3DC BigMac "(A|B|C|D).*JPG" Fontaine Masq3D=AperiCloud_Fontaine_selectionInfo.xml

This command runs the depth-map computations, converts each 3D point cloud into a ply file and merges all the ply files. The result, C3DC_BigMac.ply, can be viewed in Meshlab.
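The final merge step can be sketched as a simple concatenation of per-part point lists, which only makes sense because all the parts share the coordinate system fixed earlier by Tapioca and Tapas. The point coordinates below are toy values:

```python
# Sketch of the merge performed at the end of C3DC: each depth map
# yields its own point list, and the lists are concatenated into one
# cloud. Valid only because all parts share one coordinate system.

def merge_clouds(clouds):
    merged = []
    for cloud in clouds:
        merged.extend(cloud)
    return merged

part_a = [(0.0, 0.0, 1.0)]                       # toy cloud from one part
part_b = [(1.0, 0.0, 1.2), (1.0, 1.0, 1.1)]      # toy cloud from another part
print(len(merge_clouds([part_a, part_b])))       # 3 points in the merged cloud
```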

### 7. 3D Reconstruction

NB: the tools we will use now aren't completely finished yet, so please report any errors to the developers.
Once you have a 3D point cloud, you can compute a 3D textured model. First, you have to create a mesh. The tool to create the mesh is TiPunch :

###### Go further: triangulation

To compute a mesh, MicMac performs a Delaunay triangulation and then a Poisson 3D surface reconstruction.
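The defining property a Delaunay triangulation enforces is that no point lies strictly inside any triangle's circumcircle. A minimal sketch of that test, using the standard determinant predicate (the sign convention assumes the triangle a, b, c is counter-clockwise):

```python
# Sketch: the empty-circumcircle test at the heart of Delaunay
# triangulation. Returns True if d is strictly inside the circumcircle
# of the counter-clockwise triangle a, b, c.

def in_circumcircle(a, b, c, d):
    m = [[a[0]-d[0], a[1]-d[1], (a[0]-d[0])**2 + (a[1]-d[1])**2],
         [b[0]-d[0], b[1]-d[1], (b[0]-d[0])**2 + (b[1]-d[1])**2],
         [c[0]-d[0], c[1]-d[1], (c[0]-d[0])**2 + (c[1]-d[1])**2]]
    det = (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
         - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
         + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))
    return det > 0

tri = ((0, 0), (1, 0), (0, 1))             # ccw triangle, circumcenter (0.5, 0.5)
print(in_circumcircle(*tri, (0.5, 0.5)))   # True: circumcenter is inside
print(in_circumcircle(*tri, (2, 2)))       # False: far outside the circle
```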