Fontaine tutorial
Version of 4 February 2016 at 15:28
Download
- The dataset can be downloaded directly from:
http://micmac.ensg.eu/fontaine_dataset.zip
Once you have downloaded it, unzip the ".zip" archive.
Description
30 JPG files. This dataset contains no separate file describing the camera that was used. The shooting set contains 30 images, all taken with a Canon 70D camera and an 18 mm lens. The camera saves metadata for every picture (EXIF data): if you look at the properties of each picture, you will find the 18 mm lens mentioned, along with the various settings chosen when this dataset was produced (aperture, exposure time, ...).
There are 5 parts in this dataset :
- 4 parts correspond to 4 different points of view of the fountain. Each part was shot in a cross pattern: one master image at the center and 4 images around it (above, below, left and right of the master image).
- 1 part contains the remaining images (IMG.*JPG). These are link images: they serve to tie the 4 other parts together, and they won't be used to generate the dense point clouds.
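The patterns passed to the MicMac commands below (such as ".*JPG" or "IMG.*JPG") are regular expressions matched against file names. A small sketch of how they select subsets of the dataset; the non-link file names here are hypothetical, only the IMG prefix of the link images comes from the description above:

```python
import re

# Hypothetical listing: the "IMG" files are the link images (per the
# dataset description); the other names are made up for illustration.
files = ["Face1_001.JPG", "Face1_002.JPG", "IMG_0001.JPG", "IMG_0002.JPG"]

# ".*JPG" (used by Tapioca and Tapas) matches every JPG file.
all_images = [f for f in files if re.match(r".*JPG", f)]

# "IMG.*JPG" selects only the link images.
link_images = [f for f in files if re.match(r"IMG.*JPG", f)]

print(all_images)
print(link_images)
```

This is why the orientation steps run on ".*JPG" (every image, including the link images), while the dense-matching steps later restrict themselves to one part at a time.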
We will process this dataset in image geometry, which means we will choose a master image and compute a depth map from it. A single master image can't cover the whole object over 360 degrees, so to model this 3D object we need to compute several depth maps in image geometry in order to obtain the whole 3D fountain. In this exercise, we will compute 4 depth maps: one for each part.
Tie-points search
All the images should be oriented simultaneously, so that the 4 parts are consistent with each other. The orientation will be computed in an arbitrary system, but the same system is kept for all the images. Therefore, the 4 depth maps generated (as well as the 4 point clouds) will all be in the same coordinate system.
First, we need to run the tie-points search :
mm3d Tapioca MulScale ".*JPG" 500 2500
The MulScale mode first searches for tie points on sub-sampled images: on this dataset, images are reduced to 500 pixels on the longest side instead of the 5472 pixels of the originals. This quick low-resolution pass identifies which image pairs actually share tie points, so that the full tie-point search at a higher resolution (here 2500 pixels) is only run on those overlapping pairs.
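The two numbers passed to Tapioca MulScale are target sizes in pixels for the longest image side. A small sketch of the resulting image sizes; the 5472-pixel long side comes from the text above, while the 3648-pixel short side is an assumption based on the usual 3:2 sensor ratio of the Canon 70D:

```python
# Tapioca MulScale arguments: quick pass at 500 px, full pass at 2500 px
# (both sizes refer to the longest image side).
ORIG_LONG_SIDE = 5472   # from the dataset description
ORIG_SHORT_SIDE = 3648  # assumed 3:2 aspect ratio (Canon 70D)

def subsampled_size(target_long_side):
    """Image size once the longest side is scaled down to target_long_side."""
    scale = target_long_side / ORIG_LONG_SIDE
    return target_long_side, round(ORIG_SHORT_SIDE * scale)

print(subsampled_size(500))   # size used for the overlap-detection pass
print(subsampled_size(2500))  # size used for the real tie-point search
```

The first pass is roughly 120 times cheaper in pixel count than working on the full-resolution originals, which is what makes the pair-detection step affordable.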
2 Internal Orientation + Relative Orientation
We now want to determine the positions of the cameras relative to each other, and also the calibration of the camera used:
mm3d Tapas RadialStd ".*JPG" Out=SU
For that, in Tapas we need to choose the calibration model. Here, RadialStd is the model generally used for standard cameras. The Out parameter sets the name of the output orientation (here SU, for "Setting Up").
The calibration will be determined directly from the images used for the 3D reconstruction. In some cases it can be worthwhile to shoot another site with more depth and texture, use those pictures to compute the camera calibration, and then pass that calibration as an input (with the InCal option) to the Tapas run on the images of the object.
In the command prompt, we can monitor the residuals as the computation runs. At the last step, we can see that the image residuals for all the images are below half a pixel. We should also look at the number of tie points, as well as the percentage of points kept ("99.8258 of 38466": 99.8% of the 38466 computed tie points were kept).
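The "99.8258 of 38466" line from the Tapas log can be decoded with a little arithmetic (both values taken from the text above):

```python
# Tapas log line "99.8258 of 38466": percentage of tie points kept
# out of the total number of computed points.
percent_kept = 99.8258
total_points = 38466

kept = round(total_points * percent_kept / 100)
discarded = total_points - kept
print(f"{kept} tie points kept, {discarded} discarded")
```

A high keep percentage like this one, together with sub-half-pixel residuals, is a good sign that the orientation converged cleanly.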