Public Lab Research note


Using Photosynth to automatically stitch images from kite-mapping

by espero | September 13, 2014 21:23 | 107 views | 1 comments | #11141

What I wanted to do

I want to easily stitch together images taken during kite mapping.

My attempt and results

I used Microsoft's Photosynth. See https://photosynth.net/view.aspx?cid=e623beae-5692-4fe1-aa03-0aea2b27f50a for the result Photosynth produced automatically from 34 images taken during a kite-mapping run. It appears to have placed all 34 images correctly without any human input.
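Photosynth's matching pipeline is proprietary, but the first step any automatic stitcher performs — estimating how two overlapping photos are offset from one another — can be sketched in a few lines. The snippet below is my own illustration (not Photosynth's actual method): it uses phase correlation on two synthetic overlapping frames cut from one random texture, standing in for consecutive kite photos.

```python
import numpy as np

# Two synthetic "aerial" frames cut from one scene, overlapping by 200 px.
rng = np.random.default_rng(0)
scene = rng.random((400, 600))
img_a = scene[:, :400]
img_b = scene[:, 200:]  # same scene, shifted 200 px to the right

# Phase correlation: the normalized cross-power spectrum of two shifted
# images has an inverse FFT that peaks at the shift between them.
fa = np.fft.fft2(img_a)
fb = np.fft.fft2(img_b)
cross = fa * np.conj(fb)
cross /= np.abs(cross) + 1e-12
corr = np.fft.ifft2(cross).real
dy, dx = np.unravel_index(np.argmax(corr), corr.shape)

# Peaks past the array midpoint correspond to negative shifts; wrap them.
if dx > corr.shape[1] // 2:
    dx -= corr.shape[1]
if dy > corr.shape[0] // 2:
    dy -= corr.shape[0]
print(dx, dy)  # should recover the 200 px horizontal offset
```

A full stitcher repeats this pairwise alignment (usually with feature matching rather than plain correlation, so it can handle rotation and perspective) across all photos and then blends them into one mosaic, which is presumably roughly what Photosynth did with the 34 kite images.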

The result can be shared with Microsoft's Bing maps if desired.

Questions and next steps

Photosynth also allows geotagging of a resulting image, orienting it relative to Microsoft Bing maps. I do not yet know how to use this feature proficiently.

I do not know whether it is possible to export the results to formats other than Photosynth's internal format.

Why I'm interested

I'm interested because automatic image stitching promises considerable time savings over placing and aligning kite photos by hand.

Related work

Photosynth is mentioned in http://www.publiclab.org/notes/mathew/02-25-2014/3d-data-from-image-sets-autostitching

Thanks

I am grateful to Cindy Regalado and Ted Fjallman for introducing us so enjoyably to kite mapping.


1 Comment

This is amazing technology that has great potential for application to kite mapping photos. Photosynth.net has been home to two types of image presentation for a few years: the "Photosynths" like the one you have made, and more traditional stitched images made in Microsoft ICE (Windows only, I think). ICE can make reasonably good stitched maps from nadir aerial photos, although there is no ground control, so the scaling and georectification can vary throughout the map (example). Overlapping oblique aerial photos can also be stitched into "curved" panoramas which can be viewed at Photosynth.net in a smoothly panning viewer (example).

Unlike stitched images, Photosynths compute the location of the camera for each photograph from information in the overlaps among photos. When you navigate a Photosynth, the viewer takes you to the camera's location, so you appear to fly over the subject. But you don't see all of the photos at once, and they are not stitched together into one image.

Recently, Microsoft has added a new type of analysis and presentation (https://photosynth.net/preview/). Not only are the camera locations computed for each photo, but the geometry of the subject is determined. This is the "structure from motion" approach which produces a 3D model of the subject with your photos draped over the model. Photosynth has added a new viewer for navigating these 3D models. I have not tried it with aerial photos, but I think I will.


