A Photogrammetry Primer

Photogrammetry is the technique of making measurements from photographs to recover the surface points of objects and terrain. It is as old as photography itself and has been in use since the 19th century.

In 3D work, photogrammetry is a powerful yet accessible way to acquire the geometry and textures of many kinds of objects and environments without expensive, sophisticated scanning hardware. In short, it is the poor man’s tool set for scanning objects and terrain with nothing but images, instead of laser scanners that cost a fortune.

Even though it uses only images to recover surface points, it is a complicated process that requires software to take care of all the calculations, and the operator needs a range of technical skills to achieve the final result. The reward is a 3D mesh usable in any kind of 3D application: games, virtual reality walkthroughs, documentation, rapid prototyping, replication of cultural heritage, and 3D printing, to name a few.

In this article I will introduce the major steps, from choosing the object, through reconstructing the geometry, up to the final application. The major steps are as follows:

  • Preparations
  • Image Acquisition
  • Image Registration
  • Point Cloud Reconstruction
  • Mesh Reconstruction
  • Mesh Optimization and Repair
  • Texturing
  • Export
  • Publishing on a 3D Service

Considering the Object

For the sake of simplicity choose a simple, flat object that has some depth but no undercuts or protruding parts. To learn the process and get a gentle introduction to possible issues, it is important to start with an easy object like this rather than with the Neptune Fountain or the Statue of Liberty.

Make sure the object is clean. Any feature that doesn’t belong to the object will leave a mark in the resulting geometry and the generated texture. If allowed, clean it with a soft brush or use a blower to remove dust. Sculptures outdoors often collect lots of little disturbances that need to be removed to get a proper scan.

Capturing: Taking the Photos

For an object like this rosette, 60 to 70 photos around it are usually sufficient to capture most details. However, the more photos you take, the more detail the resulting geometry will contain. If the object is mobile, it can be put on a turntable; if it lies horizontally, you can walk around it to cover each perspective. (Since the rosette is on a wall, I use a rotating rig on a tripod.)
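Spacing the shots evenly around the object is simple arithmetic. A minimal sketch (the photo counts are just the figures mentioned above, not a fixed rule):

```python
def turntable_angles(num_photos: int) -> list[float]:
    """Evenly spaced turntable angles (in degrees) for one ring of photos."""
    step = 360.0 / num_photos
    return [round(i * step, 2) for i in range(num_photos)]

# 60 photos around the rosette means one shot every 6 degrees
angles = turntable_angles(60)
print(angles[:4])  # [0.0, 6.0, 12.0, 18.0]
```

For taller objects you would repeat such a ring at two or three camera heights, which is one reason the count can climb toward 70 photos and beyond.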

Lighting: Take the photos on a cloudy day, or choose a time when the location lies in shadow for at least an hour. Direct sunlight, the object’s own shadows, and other objects’ shadows that cross the object and change while you take the photos will lead to inconsistent image content and cause registration issues later during processing. Avoid drastic light changes during dawn or dusk.

Equipment: Use a tripod if you need long exposures (longer than 1/60th of a second). For the lens, take a wide-angle lens with a focal length of approximately 35mm on a full-frame camera, or the 35mm equivalent if you are using a camera with a crop sensor. Important: don’t mix photos with different focal lengths; the software cannot calibrate them.
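Finding the 35mm equivalent on a crop-sensor body is a single multiplication by the sensor’s crop factor. A minimal sketch (the 1.5× factor assumed here is typical for APS-C; Canon APS-C uses about 1.6×):

```python
def equivalent_focal_length(real_focal_mm: float, crop_factor: float) -> float:
    """Full-frame equivalent focal length for a crop-sensor camera."""
    return real_focal_mm * crop_factor

# To match ~35mm on full frame with an APS-C body (crop factor ~1.5),
# a roughly 23mm lens is the right choice:
print(equivalent_focal_length(23, 1.5))  # 34.5
```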

Exposure: Use a low ISO value, i.e. between 100 and 800. More importantly, set the aperture to an f-stop that offers a wide depth of field; usually f/8 or, better, f/11 will do. You will see why a tripod is mandatory with these settings. The image needs to be as bright as possible so that the darkest shadow still shows some detail. Since overexposure due to direct sunlight is unlikely here, slightly overexpose each photo (turn off any automatic exposure). If the exposure allows you to take the photos hand-held, make sure to vary the distance of the camera to the center of the object slightly from image to image. This will help later when the software aligns the photos.
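Why the tripod becomes mandatory follows directly from the exposure equation log2(N²/t) = EV100 + log2(ISO/100). A minimal sketch (the scene brightness of EV 12 is an assumed typical value for an overcast day, not a measurement from this shoot):

```python
import math

def shutter_time(f_stop: float, scene_ev100: float, iso: int) -> float:
    """Shutter time t (seconds) from log2(N^2 / t) = EV100 + log2(ISO / 100)."""
    return f_stop ** 2 / 2 ** (scene_ev100 + math.log2(iso / 100))

# Overcast scene around EV 12: at f/11 and ISO 100 you need about 1/34 s,
# already slower than the 1/60 s hand-held limit mentioned above.
t = shutter_time(11, 12, 100)
print(round(1 / t))  # ≈ 34
```

Stopping down further or shooting in deeper shade pushes the shutter time even longer, which is exactly the situation where the tripod earns its keep.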

Composition: To get the most out of your camera’s resolution, try to fill the frame with the object. Don’t frame too close to the edges, since this will distort the object and could reveal the lens’s weaknesses in the corners.

Shooting in RAW: Finally, consider shooting in RAW if possible. This opens up a whole lot of possibilities when post-processing the photos. Keep in mind that the software has only the photos to work from and needs as much information as it can get, in as consistent a form as possible. Varying exposures, inconsistent white balance, or photos where part of the object is slightly out of focus are hard to register and process. Shooting in RAW gives you the opportunity to normalize all the nuances or drastic changes in lighting that the object encountered during the shoot (mind the decreasing light and changing colors during dusk).

Developing the Photos

  • If you chose to shoot in RAW, you can develop the photos to JPG files without compression (Quality=100%).
  • Find one white balance that works for all photos.
  • Expose the photos so they all show the same luminosity. If you took them during changing lighting conditions, alter each photo’s exposure until they match.
  • The same applies if the colors changed on location: adjust the white balance for each photo.
  • Sharpen the photos a bit more than you would for a normal photo. The software will thank you when it looks for features to align the photos.
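The exposure-matching step above can be expressed as a small calculation: measure each photo’s mean luminance in the RAW converter, then apply a per-photo compensation in stops that brings them all to a common target. A minimal sketch (the luminance values are hypothetical examples, not from a real shoot):

```python
import math

def exposure_offsets(mean_luminances: list[float]) -> list[float]:
    """Per-photo exposure compensation (in stops) that brings every photo
    up to the same average brightness as the brightest one."""
    target = max(mean_luminances)
    return [round(math.log2(target / m), 2) for m in mean_luminances]

# Mean luminance dropped as the light faded during the shoot:
print(exposure_offsets([120.0, 110.0, 95.0, 60.0]))
# [0.0, 0.13, 0.34, 1.0]  <- the last photo needs a full extra stop
```

In practice you would type these offsets into the RAW converter’s exposure slider rather than touch the pixels yourself; the point is that the correction is a simple log2 ratio, not guesswork.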


In this step you’ll begin to process the “scan” in a piece of software called Reality Capture. There are other specialized applications you can use (Agisoft Metashape, formerly called PhotoScan, or Autodesk 123D Catch), but Reality Capture is so incredibly fast you won’t use any other once you have tasted it.

The processing in all these applications is the same:

  • First you load the images
  • Aligning the images (also called registration). During this process the software looks for features in each photo that also appear in the neighbouring images. The result of this feature detection and registration is the camera position of every image and a cloud of tie points
  • Reconstruction: Calculating a model from the images and points
  • Unwrapping: Creating a texture atlas
  • Texturing: Rendering a texture from the images that is projected onto the point cloud / mesh using the texture atlas
  • Exporting: Rendering a mesh of triangles, the geometry, that can be used in any 3D application
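The shared pipeline above can be sketched as a fixed sequence of stages. This is purely illustrative scaffolding; the step names are placeholders and do not correspond to any real application’s API:

```python
# Generic photogrammetry pipeline, one stage per bullet above.
PIPELINE = [
    "load_images",    # read the developed JPGs
    "align_images",   # feature detection + registration -> cameras, tie points
    "reconstruct",    # model from the registered images and points
    "unwrap",         # create the texture atlas (UVs)
    "texture",        # project image content into the atlas
    "export",         # write the triangle mesh for other 3D applications
]

def run_pipeline(steps=PIPELINE) -> str:
    """Run each stage in order; real tools let you pause between stages
    to inspect and correct the intermediate results."""
    for step in steps:
        print(f"running: {step}")
    return "mesh + texture"

run_pipeline()
```

The order matters: every stage consumes the previous stage’s output, which is why a registration problem has to be fixed before reconstruction, not after.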

Loading the Photos


Aligning the Photos

Adjusting the Ground Plane


Ground plane not well aligned


Ground plane should be aligned well to x, y, and z axis

Setting up the Reconstruction Region


Restricting the y-width of the reconstruction region


Reconstruction region adjusted as close to the requested part of the object as possible

Creating a Mesh: Reconstruction in Normal Detail


Reconstruction of a 3D model in normal detail with smooth shading

Unwrapping UV


A checker pattern indicates that the UVs have been unwrapped and a texture atlas has been created



Created texture from some of the photos


Textured geometry


Filtering out Unnecessary Parts

Repairing the Mesh




Geometry imported as a tool in ZBrush


Texture applied correctly (see the thumbnail on the right)


How not to do it ;-)
