Fontaine tutorial

Download

Once you have downloaded it, you have to unzip the ".zip" archive.
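
For example, on a Unix-like system (the archive name below is a placeholder; use the actual file name):
unzip Fontaine.zip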

Description

30 JPG files. The dataset does not ship with any separate data about the camera that was used. The shooting set contains 30 images, all taken with a Canon 70D camera and an 18 mm lens. The camera saves metadata for all the pictures (EXIF data): if you look at the properties of each picture, the 18 mm lens is mentioned, along with the settings chosen when this dataset was produced (aperture, exposure time, ...).
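
The EXIF metadata can also be read from the command line with a tool such as exiftool (assuming it is installed; the image name is just an example):
exiftool IMG_0001.JPG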

There are 5 parts in this dataset:

  • 4 parts correspond to 4 different points of view of the fountain. Each part was shot as a cross of viewpoints: one master image at the center and 4 images around it (above, below, to the left and to the right of the master image).
  • 1 part contains the other images (IMG.*JPG). These are link images: they will be used to tie the 4 other parts together, but not to generate the dense point clouds.

We will process this dataset in image geometry, which means that we will choose a master image and compute a depth map from it. A single master image cannot cover the whole object over 360 degrees, so to model the complete 3D object we need to compute a few more depth maps in image geometry in order to get the whole 3D fountain. In this exercise, we will compute 4 depth maps: one for each part.
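
For reference, a depth map in image geometry is typically computed with the Malt tool in GeomImage mode. A sketch of such a call, where the image pattern and master name are placeholders for one part of the dataset and SU is the orientation computed below:
mm3d Malt GeomImage "<pattern of one part>" SU Master=<master image>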

Tie-points search

The tie points for all the images should be computed simultaneously so that the 4 parts are linked with each other. The orientation will be in an arbitrary system, but the same system will be kept for all the images. The 4 depth maps generated (as well as the 4 point clouds) will therefore be in the same coordinate system.

First, we need to run the tie-points search:
mm3d Tapioca MulScale ".*JPG" 500 2500
The MulScale mode first searches for similar points on sub-sampled images; on this dataset, at 500 pixels on the longest side instead of the 5472 pixels of the original images. This first pass identifies which images share tie points, so that the higher-resolution search (here at 2500 pixels) only runs on the relevant pairs of images.
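
The tie points are written into a Homol directory at the root of the working folder; you can check that it was created with, for example:
ls Homol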

Internal Orientation + Relative Orientation

We now want to determine the positions of the cameras relative to each other, but also the calibration of the camera used:
mm3d Tapas RadialStd ".*JPG" Out=SU
To do that, we need to choose a calibration model in Tapas. Here, RadialStd is the model generally used for standard cameras. The Out argument sets the name of the output orientation (here SU, for Setting Up). The calibration will be determined directly from the images that will be used for the 3D reconstruction. In some cases, it can be interesting to choose another scene with more depth and texture, take some pictures of it to compute the camera calibration, and pass that calibration as an input (with the InCal argument) to the Tapas run on the images of the object.
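
As a sketch of that two-step approach, assuming a separate set of calibration images named Calib*.JPG (hypothetical file names):
mm3d Tapas RadialStd "Calib.*JPG" Out=CalibInit
mm3d Tapas RadialStd ".*JPG" InCal=CalibInit Out=SU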

In the command prompt, we can monitor the residuals as the computation progresses. At the last step, we can see that the image residuals are lower than half a pixel for all the images. We should also check the number of tie points, as well as the percentage of points kept ("99.8258 of 38466": 99.8% of the 38466 computed tie points were kept).

Visualize Relative Orientation

The AperiCloud command generates a 3D point cloud containing all the tie points obtained with Tapioca, together with the positions of the cameras computed by Tapas.
mm3d AperiCloud ".*JPG" SU
The result of this command can be viewed, for example, with the MeshLab software. We can then see that the 4 parts of 5 images around the fountain are connected to each other thanks to the link images.
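
The cloud is written as a PLY file (typically named AperiCloud_SU.ply, after the orientation). Assuming MeshLab is installed and available on the command line, it can be opened with:
meshlab AperiCloud_SU.ply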