If you've not seen this, it's worth a look.
(Demonstrated by Microsoft's Blaise Aguera y Arcas, this video is about 7 minutes in duration.)
Photosynth
"Using photos of oft-snapped subjects (like Notre Dame) scraped from around the Web, Photosynth (based on Seadragon technology) creates breathtaking multidimensional spaces with zoom and navigation features that outstrip all expectation."
Now imagine every image ever taken of space applied in this manner.
-
Topic author LordFerret
- Posts: 737
- Joined: 24.08.2006
- Age: 68
- With us: 18 years 3 months
- Location: NJ USA
LordFerret wrote: I believe that future is already here. It's likely the new Google Sky uses a similar software technology... from its description, it sounds exactly like Seadragon in functionality.
No, it's something completely different... Google Sky is just a flat map/image (well, many images from various sources stitched together) put on a 3D sphere (the idea is similar to the old Google Earth, or even Celestia with VT). Photosynth takes many images and processes them automatically to create 3D objects (buildings, landscapes). If we are talking about space photos, then the result may be the same or very similar (since the photos are all taken from one point - Earth and its orbit); Photosynth works best for 'smaller' objects (with images taken from many different points/angles).
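To make "processes them automatically" a bit more concrete, here is a minimal two-view sketch with OpenCV: match features between two overlapping photos, recover the relative camera pose, and triangulate a sparse 3D point cloud. This only illustrates the general technique, not Photosynth's actual pipeline; the file names and the camera matrix K are placeholder assumptions.

```python
# Minimal two-view reconstruction sketch (an illustration of the general
# technique, NOT Photosynth's actual pipeline). It matches features between
# two overlapping photos, recovers the relative camera pose, and triangulates
# a sparse 3D point cloud. File names and the camera matrix K are placeholders.
import cv2
import numpy as np

img1 = cv2.imread("photo_a.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input photos
img2 = cv2.imread("photo_b.jpg", cv2.IMREAD_GRAYSCALE)

# Assumed pinhole intrinsics; real values would come from EXIF data or calibration.
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 480.0],
              [   0.0,    0.0,   1.0]])

# Detect and describe local features in both images (SIFT needs OpenCV >= 4.4).
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Keep only unambiguous matches (Lowe's ratio test).
matcher = cv2.BFMatcher()
good = []
for pair in matcher.knnMatch(des1, des2, k=2):
    if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
        good.append(pair[0])

pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# Relative pose of camera 2 with respect to camera 1, from the essential matrix.
E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)

# Triangulate the matched points into 3D (homogeneous -> Euclidean).
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
cloud = (pts4d[:3] / pts4d[3]).T

print(cloud.shape[0], "3D points recovered")
```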
mk
mk wrote: Photosynth takes many images and processes them automatically to create 3D objects (buildings, landscapes). If we are talking about space photos, then the result may be the same or very similar (since the photos are all taken from one point - Earth and its orbit); Photosynth works best for 'smaller' objects (with images taken from many different points/angles).
Well, maybe that's how it's described, but the examples I've seen do not create 3D objects. The tours of the shuttle and its launch pad just blend together pictures taken from different locations. It doesn't create 3D objects that you can look at from any angle: you can only see the pictures that were actually taken. If the photographs were carefully planned and the camera(s) properly positioned, you can use Photosynth to seamlessly select a sequence of photographs which makes it seem that your viewpoint is moving around an object.
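To make that concrete: the display can only ever show photos that really exist, so "moving around" an object amounts to repeatedly jumping to the real photo taken closest to where you want to stand. Here is a toy sketch of that idea; the camera positions and viewpoint path are made-up numbers, nothing taken from Photosynth.

```python
# Toy illustration: the viewer can only show photos that actually exist, so
# "moving around" an object means repeatedly picking the real photo whose
# camera position is closest to the requested viewpoint. All numbers here are
# invented, not taken from Photosynth.
import numpy as np

# Positions (x, y, z) from which the available photos were actually taken.
photo_positions = np.array([
    [10.0,  0.0, 0.0],
    [ 7.0,  7.0, 0.0],
    [ 0.0, 10.0, 0.0],
    [-7.0,  7.0, 0.0],
])

def nearest_photo(viewpoint):
    """Index of the existing photo taken closest to the requested viewpoint."""
    distances = np.linalg.norm(photo_positions - viewpoint, axis=1)
    return int(np.argmin(distances))

# Sweep the desired viewpoint along an arc around the object: the displayed
# photo changes in discrete jumps, because nothing exists between real photos.
for angle in np.linspace(0.0, np.pi / 2, 6):
    viewpoint = np.array([10.0 * np.cos(angle), 10.0 * np.sin(angle), 0.0])
    print(f"viewpoint at {np.degrees(angle):5.1f} deg -> show photo {nearest_photo(viewpoint)}")
```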
Selden
-
- Developer
- Posts: 3776
- Joined: 04.02.2005
- With us: 19 years 9 months
Btw, concerning the presentation's beginning, Apple does the same... but with video!
http://www.youtube.com/watch?v=pyd8O-2mkgk
-
Topic author LordFerret
- Posts: 737
- Joined: 24.08.2006
- Age: 68
- With us: 18 years 3 months
- Location: NJ USA
mk,
If you go back and re-watch the Photosynth video and listen to the presenter describe how Seadragon functions, you'll see it exactly matches the description of how Google Sky works... we've all seen the ads and TV news about it, yes?
Google Sky starts out with you on Earth viewing the sky, allowing you to select a region of space to zoom into... and its zoom function calls upon existing images of the region at various levels of zoom. That's exactly what Seadragon does, as demonstrated in the last segment of the video with Notre Dame.
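The "existing images at various levels of zoom" idea is, at heart, a tile pyramid: level 0 is a single tile covering the whole image, every level doubles the resolution, and the viewer only fetches the tiles covering the current view. A rough sketch of that generic scheme follows - it is not Google Sky's or Seadragon's actual code, and the 256-pixel tile size is an assumption.

```python
# Generic tile-pyramid sketch (not Google Sky's or Seadragon's actual code).
# Level 0 is a single tile covering the whole image; level n is a 2^n x 2^n
# grid of fixed-size tiles. The viewer picks a level fine enough for the
# screen and fetches only the tiles covering the visible region.
import math

TILE_SIZE = 256  # pixels per tile edge (an assumption, not a published value)

def level_for_zoom(full_width_px, screen_width_px, fraction_visible):
    """Coarsest level whose pixels are at least as fine as the screen needs."""
    needed_px = screen_width_px / fraction_visible  # full-image pixels needed across the screen
    level = math.ceil(math.log2(max(needed_px / TILE_SIZE, 1.0)))
    max_level = math.ceil(math.log2(full_width_px / TILE_SIZE))
    return min(level, max_level)

def tiles_for_view(level, center_u, center_v, fraction_visible):
    """(column, row) indices of the tiles covering a square view centred at (u, v) in [0, 1]^2."""
    n = 2 ** level
    half = fraction_visible / 2.0
    c0, c1 = int((center_u - half) * n), int((center_u + half) * n)
    r0, r1 = int((center_v - half) * n), int((center_v + half) * n)
    return [(c, r) for r in range(max(r0, 0), min(r1, n - 1) + 1)
                   for c in range(max(c0, 0), min(c1, n - 1) + 1)]

# Zooming in on one spot: finer levels are used, yet only the tiles covering
# the visible region (a small fraction of the full-resolution image) are fetched.
for fraction in (1.0, 0.25, 0.05):
    lvl = level_for_zoom(full_width_px=86400, screen_width_px=1024, fraction_visible=fraction)
    tiles = tiles_for_view(lvl, center_u=0.3, center_v=0.6, fraction_visible=fraction)
    print(f"viewing {fraction:.0%} of the image -> level {lvl}, {len(tiles)} tiles fetched")
```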
Well, maybe "object" wasn't the best word. It creates a 3D cloud of photos(points)... Even "model" would be betterselden wrote:Well, maybe that's how it's described, but the examples I've seen do not create 3D objects. The tours of the shuttle and its launch pad just blend together pictures taken from different locations. It doesn't create 3D objects that you can look at from any angle: you only can see the pictures that actually were taken. If the photographs were carefully planned and the camera(s) properly positioned, you can use Photosynth to seamlessly select the sequence of photographs which make it seem that your viewpoint is moving around an object.
But the photos are positioned in 3D space automatically by the program itself. There is no external model used to place all those pics.
Whereas in Google Sky (as well as in Google Earth, World Wind, Celestia) we have:
1. an image (divided into tiles) - say, Celestia's BMNG VT made from an 86400x43200 image
2. 3D model/object - sphere or ellipsoid
These two things are separate.
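A rough sketch of that separation, assuming an equirectangular layout and made-up tile settings (not Celestia's real VT parameters): the tile lookup and the sphere geometry are computed completely independently of each other.

```python
# Sketch of the point that the image tiles and the 3D model are separate
# things: for a given latitude/longitude we can work out, independently,
# (a) which tile of an equirectangular virtual texture covers it and
# (b) the point on the sphere it maps to. The tile size and level-0 layout
# are assumptions, not Celestia's actual VT settings.
import math

TILE_SIZE = 512                  # assumed pixels per tile edge
FULL_W, FULL_H = 86400, 43200    # the BMNG image size mentioned above

def vt_tile(lat_deg, lon_deg, level):
    """Tile (column, row) containing (lat, lon) at the given detail level.
    Level 0 is assumed to be a 2x1 grid; each level doubles both counts."""
    cols, rows = 2 ** (level + 1), 2 ** level
    u = (lon_deg + 180.0) / 360.0    # 0..1 across the equirectangular map
    v = (90.0 - lat_deg) / 180.0     # 0..1 from north pole to south pole
    return min(int(u * cols), cols - 1), min(int(v * rows), rows - 1)

def sphere_point(lat_deg, lon_deg, radius=1.0):
    """Point on the sphere model itself; it knows nothing about the tiles."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    return (radius * math.cos(lat) * math.cos(lon),
            radius * math.cos(lat) * math.sin(lon),
            radius * math.sin(lat))

lat, lon = 48.853, 2.349   # roughly Notre Dame, Paris
print("tile at level 5:", vt_tile(lat, lon, 5))
print("point on sphere:", sphere_point(lat, lon))
```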
LordFerret wrote: Google Sky starts out with you on Earth viewing the sky, allowing you to select a region of space to zoom into... and its zoom function calls upon existing images of the region at various levels of zoom. That's exactly what Seadragon does, as demonstrated in the last segment of the video with Notre Dame.

Yes, the final result is the same or very similar (at least when we are talking about zooming); the difference is in how all these things are created... As I said, Photosynth does it all automatically. Also, space (or Earth) isn't the best example for showing the differences (our "models" are very simple - just spheres); complicated constructions (like Notre Dame) are much better.
ElChristou wrote: There are several attempts (projects) to extract 3D models from several photos of the same object, but so far I've never seen anything really stunning...

BTW, I think Virtual Earth uses 3D buildings (whole cities) created automatically from aerial photos, and IMHO it doesn't look bad. (The bad thing is that their 3D plugin doesn't work for me at all.)
mk
-
Topic author LordFerret
- Posts: 737
- Joined: 24.08.2006
- Age: 68
- With us: 18 years 3 months
- Location: NJ USA
mk wrote: BTW, I think Virtual Earth uses 3D buildings (whole cities) created automatically from aerial photos, and IMHO it doesn't look bad. (The bad thing is that their 3D plugin doesn't work for me at all.)
I'm not positive, but I think X-Plane does something similar, though it relies on laser altimeter readings... see the last picture on their homepage (warning: several large images), and check the About page -
http://www.x-plane.com/default.html
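For illustration only (this is not X-Plane's actual scenery format): the general idea of turning altimeter-style elevation samples into 3D terrain is just a height grid converted into a triangle mesh. The heights and grid spacing below are invented.

```python
# Minimal sketch (not X-Plane's scenery format) of turning altimeter-style
# elevation samples into a 3D terrain mesh: a regular grid of heights becomes
# vertices, and each grid cell becomes two triangles. The elevation values and
# grid spacing below are made up.
import numpy as np

# Heights in metres on a regular grid (rows = north-south, cols = east-west).
heights = np.array([
    [12.0, 14.0, 15.0, 13.0],
    [11.0, 16.0, 18.0, 14.0],
    [10.0, 13.0, 15.0, 12.0],
])
cell = 30.0  # assumed ground spacing between samples, in metres

rows, cols = heights.shape
# One vertex per sample: (x east, y north, z up).
vertices = np.array([[c * cell, r * cell, heights[r, c]]
                     for r in range(rows) for c in range(cols)])

# Two triangles per grid cell, indexing into the vertex list.
triangles = []
for r in range(rows - 1):
    for c in range(cols - 1):
        i = r * cols + c
        triangles.append((i, i + 1, i + cols))             # upper triangle of the cell
        triangles.append((i + 1, i + cols + 1, i + cols))  # lower triangle of the cell

print(len(vertices), "vertices,", len(triangles), "triangles")
```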