Photosynth (Seadragon)

The only place for all Non Celestia Discussion/Stuff
Topic author
LordFerret M
Posts: 737
Joined: 24.08.2006
Age: 68
With us: 18 years 1 month
Location: NJ USA

Photosynth (Seadragon)

Post #1 by LordFerret » 15.06.2007, 03:47

If you've not seen this, it's worth a look.

(Demonstrated by Microsoft's Blaise Aguera y Arcas, this video is about 7 minutes in duration.)
Photosynth


"Using photos of oft-snapped subjects (like Notre Dame) scraped from around the Web, Photosynth (based on Seadragon technology) creates breathtaking multidimensional spaces with zoom and navigation features that outstrip all expectation."

Now imagine every image ever taken of space applied in this manner. 8O :idea:

glcanon
Posts: 11
Joined: 24.08.2007
With us: 17 years 1 month
Location: Houston

Post #2 by glcanon » 24.08.2007, 05:31

The video is about 7 minutes 42 seconds long, but it's one of the most impressive and amazing displays of future technology I've ever seen. I can't wait for the future.

Topic author
LordFerret M
Posts: 737
Joined: 24.08.2006
Age: 68
With us: 18 years 1 month
Location: NJ USA

Post #3 by LordFerret » 26.08.2007, 06:38

I believe that future is already here. It's likely the new Google Sky uses similar software technology... from its description, it sounds exactly like Seadragon in functionality.

mk
Posts: 19
Joined: 28.11.2003
With us: 20 years 10 months
Location: Warsaw, Poland

Post #4 by mk » 26.08.2007, 13:46

LordFerret wrote:I believe that future is already here. It's likely the new Google Sky uses similar software technology... from its description, it sounds exactly like Seadragon in functionality.

No, it's something completely different... Google Sky is just a flat map/image (well, many images from various sources stitched together) put on a 3D sphere (the idea is similar to the old Google Earth, or even Celestia with VT). Photosynth takes many images and processes them automatically to create 3D objects (buildings, landscapes). If we are talking about space photos, then the result may be the same or very similar (since the photos are all taken from one point - Earth and its orbit); Photosynth works best for 'smaller' objects (with images taken from many different points/angles).
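
A rough way to picture the difference, in a few lines of Python (purely illustrative; the photo fields below are invented, not Photosynth's actual data format): Google Sky or Celestia keep one fixed sphere and just drape imagery onto it, while Photosynth recovers a separate 3D pose for every photo.

import math

def map_point_on_sphere(lon_deg, lat_deg, radius=1.0):
    # Google Sky / Celestia-with-VT idea: the geometry is always the same
    # sphere; only the texture (the stitched flat map) changes.
    lon, lat = math.radians(lon_deg), math.radians(lat_deg)
    return (radius * math.cos(lat) * math.cos(lon),
            radius * math.cos(lat) * math.sin(lon),
            radius * math.sin(lat))

# Photosynth-style: no shared model at all; each photo gets its own
# recovered camera pose (position + viewing direction) in one 3D space.
# (File names and field names here are made up for illustration.)
registered_photos = [
    {"file": "notre_dame_001.jpg", "position": (12.3, 4.1, -0.8), "direction": (0.0, -1.0, 0.0)},
    {"file": "notre_dame_002.jpg", "position": (11.9, 3.8, -1.1), "direction": (0.1, -0.9, -0.4)},
]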
mk

selden
Developer
Posts: 10190
Joined: 04.09.2002
With us: 22 years
Location: NY, USA

Post #5 by selden » 26.08.2007, 14:50

mk wrote:Photosynth takes many images and processes them automatically to create 3D objects (buildings, landscapes). If we are talking about space photos, then the result may be the same or very similar (since the photos are all taken from one point - Earth and its orbit); Photosynth works best for 'smaller' objects (with images taken from many different points/angles).


Well, maybe that's how it's described, but the examples I've seen do not create 3D objects. The tours of the shuttle and its launch pad just blend together pictures taken from different locations. It doesn't create 3D objects that you can look at from any angle: you can only see the pictures that actually were taken. If the photographs were carefully planned and the camera(s) properly positioned, you can use Photosynth to seamlessly select the sequence of photographs that makes it seem your viewpoint is moving around an object.
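
The navigation described above can be pictured as nothing more than a nearest-pose lookup over the registered photos: move the requested viewpoint, re-run the lookup, and you always land on a real photograph, which is why you can't orbit to an angle nobody shot. A minimal, purely illustrative sketch (the scoring and the field names are my own assumptions, not Photosynth's actual algorithm):

import math

def pick_photo(viewpoint, view_dir, photos, angle_weight=2.0):
    # Score each registered photo by how far its camera sits from the
    # requested viewpoint and how much its viewing direction differs,
    # then return the best match. 'photos' entries carry hypothetical
    # 'position' (x, y, z) and 'direction' (unit view vector) fields.
    def score(p):
        dist = math.dist(p["position"], viewpoint)
        dot = sum(a * b for a, b in zip(p["direction"], view_dir))
        angle = math.acos(max(-1.0, min(1.0, dot)))
        return dist + angle_weight * angle
    return min(photos, key=score)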
Selden

ElChristou
Developer
Posts: 3776
Joined: 04.02.2005
With us: 19 years 7 months

Post #6 by ElChristou » 26.08.2007, 15:30

Yep, not really new stuff; VR photos have existed for a decade or more... It's just a question of presentation here...

There have been several attempts (projects) to extract 3D models from multiple photos of the same object, but so far I've never seen anything really stunning...

ElChristou
Developer
Posts: 3776
Joined: 04.02.2005
With us: 19 years 7 months

Post #7 by ElChristou » 26.08.2007, 16:12

Btw, concerning the beginning of the presentation, Apple does the same... ...but with video! 8O

http://www.youtube.com/watch?v=pyd8O-2mkgk

Topic author
LordFerret M
Posts: 737
Joined: 24.08.2006
Age: 68
With us: 18 years 1 month
Location: NJ USA

Post #8 by LordFerret » 27.08.2007, 01:19

mk,

If you go back and re-watch the Photosynth video and listen to the guy describe how Seadragon functions, you'll see it exactly matches the description of how Google Sky works... we've all seen the ads and TV news about it, yes?

Google Sky starts out with you on Earth viewing the sky, allowing you to select a region of space to zoom into... and its zoom function calls upon existing images of the region at various levels of zoom. That is exactly what Seadragon does, as demonstrated in the last segment of the video with Notre Dame.
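
In other words, the imagery is pre-cut into a pyramid of tiles, and the viewer only ever fetches the few tiles that cover the current view at roughly the current resolution. A rough sketch of that idea in Python (the 256-pixel tile size and the level numbering are assumptions, not Seadragon's or Google Sky's actual scheme):

import math

def tile_to_fetch(px, py, zoom, full_width, full_height, tile_size=256):
    # The top pyramid level holds the full resolution; each level below
    # it halves the previous one. Given a point (px, py) in full-resolution
    # pixels and a zoom factor (1.0 = full detail, 0.01 = whole-image
    # overview), pick the coarsest level that still has enough resolution
    # and return (level, column, row) of the tile to fetch.
    max_level = math.ceil(math.log2(max(full_width, full_height) / tile_size))
    level = max(0, min(max_level, max_level + math.ceil(math.log2(max(zoom, 1e-9)))))
    scale = 2 ** (max_level - level)   # full-resolution pixels per pixel at this level
    return level, int(px / scale) // tile_size, int(py / scale) // tile_size

Zooming in just raises the requested zoom factor, so sharper tiles are downloaded only for the region actually on screen.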

mk
Posts: 19
Joined: 28.11.2003
With us: 20 years 10 months
Location: Warsaw, Poland

Post #9 by mk » 27.08.2007, 12:01

selden wrote:Well, maybe that's how it's described, but the examples I've seen do not create 3D objects. The tours of the shuttle and its launch pad just blend together pictures taken from different locations. It doesn't create 3D objects that you can look at from any angle: you can only see the pictures that actually were taken. If the photographs were carefully planned and the camera(s) properly positioned, you can use Photosynth to seamlessly select the sequence of photographs that makes it seem your viewpoint is moving around an object.
Well, maybe "object" wasn't the best word. It creates a 3D cloud of photos(points)... Even "model" would be better :)
But photos are positioned in 3D space automatically by the program itself. There is no external model used to put all those pics.
Whereas in Google Sky (as well as in Google Earth, World Wind, and Celestia) we have:
1. an image (divided into tiles) - let's say a BMNG VT for Celestia made from an 86400x43200 image
2. a 3D model/object - a sphere or ellipsoid
These two things are separate (a rough sketch of the tile indexing follows below).
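
To make that concrete, here is a minimal sketch of Celestia-style virtual texture indexing as I understand it (level k splitting the map into 2^(k+1) columns by 2^k rows; the 1024-pixel tile size is an assumption):

def vt_tile(lon_deg, lat_deg, level):
    # Level k splits the flat equirectangular map into 2**(k+1) columns
    # and 2**k rows; the tile holding a given longitude/latitude is just
    # arithmetic on the flat map - the sphere itself never changes.
    cols, rows = 2 ** (level + 1), 2 ** level
    u = (lon_deg + 180.0) / 360.0     # 0..1 across the map, west to east
    v = (90.0 - lat_deg) / 180.0      # 0..1 down the map, north to south
    return min(cols - 1, int(u * cols)), min(rows - 1, int(v * rows))

def levels_needed(full_width=86400, tile_size=1024):
    # Smallest level whose total width (2**(level+1) * tile_size pixels)
    # reaches the source width, e.g. the 86400x43200 BMNG image above.
    level = 0
    while 2 ** (level + 1) * tile_size < full_width:
        level += 1
    return level   # 6 for an 86400-pixel-wide source and 1024-pixel tiles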

LordFerret wrote:Google Sky starts out with you on Earth viewing the sky, allowing you to select a region of space to zoom into... and its zoom function calls upon existing images of the region at various levels of zoom. That is exactly what Seadragon does, as demonstrated in the last segment of the video with Notre Dame.
Yes, the final result is the same or very similar (at least when we are talking about zooming); the difference is in how all these things are created... As I said, Photosynth does it all automatically. Also, space (or Earth) isn't the best example for showing the differences (our "models" are very simple - just spheres); complicated constructions (like Notre Dame) are much better.

BTW
ElChristou wrote:There have been several attempts (projects) to extract 3D models from multiple photos of the same object, but so far I've never seen anything really stunning...
I think that Virtual Earth uses 3D buildings (whole cities) created automatically from aerial photos, and IMHO it doesn't look bad. (The bad thing is that their 3D plugin doesn't work for me at all :( )
mk

Topic author
LordFerret M
Posts: 737
Joined: 24.08.2006
Age: 68
With us: 18 years 1 month
Location: NJ USA

Post #10 by LordFerret » 27.08.2007, 22:36

mk wrote:BTW
ElChristou wrote:There have been several attempts (projects) to extract 3D models from multiple photos of the same object, but so far I've never seen anything really stunning...
I think that Virtual Earth uses 3D buildings (whole cities) created automatically from aerial photos, and IMHO it doesn't look bad. (The bad thing is that their 3D plugin doesn't work for me at all :( )

I'm not positive, but I think X-Plane does something similar, though it relies on laser altimeter readings... see the last picture on their homepage (warning: several large images), and check the About page -
http://www.x-plane.com/default.html

