While working on the National Geographic Special “Sunken Treasures of the Nile,” we were very interested in how to represent the terrain of the surrounding area. Unfortunately, no DEMs (Digital Elevation Models) existed for this remote area, and we were unable to charter an aircraft because the site lies in designated military airspace. The only method available to us for re-creating this kind of terrain was Kite Aerial Photography (KAP) combined with photogrammetry. From this simple aerial platform and a DSLR we were able to capture 57 million 3D points of the terrain.
While we were waiting for permission from the Egyptian government to fly a kite over the site, Mark Eakle of Insight Digital and Xenexus assembled a rig that let us hang a camera beneath the kite to take aerial photos. He then raised the kite to nearly 200 feet and fired just over 300 shots of the desired landscape via a radio shutter release. Since this shoot, revisions of his rig have added motors that allow control over the exact orientation of the camera. Even from these unstructured photographs we were able to extract an excellent terrain model with an enormous amount of detail. Below you can see a 3D point-cloud representation of this terrain. In addition to the 3D data, we were also able to color the 3D points from the photographs to help us visualize the scene.
This segment of the project was largely a technology test to see how much data we could extract from an unstructured image set like this. There are two holes in the model where we had fewer than three photos of the same area, and several others where the terrain was too steep for straight-down shots to produce a complete reconstruction. Both issues could be addressed with more control over camera orientation via remote-control motors and a live view of what the camera is shooting.
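The three-photo requirement can be checked before a flight is even over. As a minimal sketch (with entirely hypothetical footprint data), the idea is to count how many camera footprints cover each cell of a ground grid; cells seen by fewer than three photos are the ones likely to become holes in the reconstruction:

```python
import numpy as np

# Each footprint is a ground rectangle (xmin, ymin, xmax, ymax) in metres,
# e.g. estimated from the kite's position and the camera's field of view.
# These values are made up for illustration.
footprints = [
    (0, 0, 40, 30),
    (10, 5, 50, 35),
    (20, 10, 60, 40),
    (55, 30, 95, 60),  # an isolated shot: its cells get too few views
]

cell = 5.0                      # grid resolution in metres
xs = np.arange(0, 100, cell)    # cell lower-left corners
ys = np.arange(0, 60, cell)
coverage = np.zeros((len(ys), len(xs)), dtype=int)

# Accumulate how many photos fully contain each grid cell.
for xmin, ymin, xmax, ymax in footprints:
    ix = (xs >= xmin) & (xs + cell <= xmax)
    iy = (ys >= ymin) & (ys + cell <= ymax)
    coverage[np.ix_(iy, ix)] += 1

# Cells with fewer than 3 overlapping photos cannot be reliably triangulated.
holes = np.argwhere(coverage < 3)
print(f"{len(holes)} of {coverage.size} cells have fewer than 3 views")
```

With live camera orientation control, a map like this could be refreshed mid-flight to direct the next shots at under-covered cells.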
Merging Land and Air
After processing, we noticed that the KAP-acquired point cloud contained an underground quarry that we had also photographed for photogrammetry from the ground. We decided to see if we could merge the two point clouds using software from Alice Labs. In the following two videos, the lighter-colored points that appear near the end of the sequence come from the ground-based photogrammetry. You will need red-and-blue anaglyph glasses to view the stereoscopic 3D rendered video on the right.
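The actual merge was done in Alice Labs' software, but the core alignment step can be sketched independently. Given a handful of corresponding points identified in both clouds (hypothetical data below), the Kabsch algorithm recovers the rigid rotation and translation that maps the ground-based cloud into the KAP cloud's coordinate frame:

```python
import numpy as np

def kabsch(src, dst):
    """Best-fit rigid transform (R, t) such that dst ≈ src @ R.T + t."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    u, _, vt = np.linalg.svd(src_c.T @ dst_c)
    # Guard against a reflection: force det(R) = +1.
    d = np.sign(np.linalg.det(vt.T @ u.T))
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = dst.mean(axis=0) - src.mean(axis=0) @ r.T
    return r, t

# Hypothetical correspondences: the same quarry features picked out in the
# ground-based cloud (src) and in the KAP cloud (dst).
rng = np.random.default_rng(0)
src = rng.random((5, 3))
angle = np.deg2rad(30)
true_r = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
dst = src @ true_r.T + np.array([2.0, -1.0, 0.5])

r, t = kabsch(src, dst)
# Once aligned, the two clouds can simply be concatenated.
merged = np.vstack([src @ r.T + t, dst])
```

With noisy, unmatched clouds an iterative method such as ICP would refine this estimate, but exact correspondences make the closed-form solution sufficient here.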
These point clouds were created from low-resolution images 1536 to 2048 pixels wide, 1/16th the pixel count of the acquired images. Processing at higher resolution would increase the density and quality of the point clouds; however, the clouds we are getting right now are substantial enough to use automated methods for generating meshes (more on this later). The next step in the process is to create textures for the geometry. We are not only able to extract good textures, but we can extract them at gigapixel resolution. You can read more about this process in an upcoming post, “Photogrammetric Gigapixel Images.”
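The "1/16th" figure refers to pixel count: halving each image dimension twice (a factor of 4 per axis) leaves a sixteenth of the pixels. A quick back-of-the-envelope sketch, assuming a hypothetical 8192 × 6144 native frame consistent with the 2048-pixel working width:

```python
# Assumed native resolution (not stated in the post) chosen so that a 4x
# per-axis downsample yields the 2048 x 1536 working images described above.
full_w, full_h = 8192, 6144
work_w, work_h = full_w // 4, full_h // 4

ratio = (work_w * work_h) / (full_w * full_h)
print(ratio)  # 1/4 width x 1/4 height = 1/16 of the pixels

# Dense matching produces at most roughly one 3D point per pixel, so full
# resolution raises the ceiling on the 57-million-point cloud by ~16x.
points_low = 57_000_000
points_ceiling = int(points_low / ratio)
print(points_ceiling)
```

This is only an upper bound; actual density also depends on texture, overlap, and matching quality.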