
managing large scale opengl environments

Posted: 14.07.2004, 23:52
by guest - michael
i'm not a registered user, so first i'm just testing whether or not i have posting rights. -michael

Posted: 15.07.2004, 00:08
by RND
you can still ask your question rather than wasting everyone's time. Registering doesn't take that long either :)

Posted: 15.07.2004, 00:20
by guest - michael
salutations,

essentially i'm attempting to render infinitely large environments via opengl.

a more specific instance is rendering the below data set at "true" scale:

http://www.ngdc.noaa.gov/mgg/image/2minrelief.html

i say true, but i mean only that the finished result should appear to be the appropriate scale.

to be more explicit, that data is being used to drive a dynamic spherical simulation of the planet earth.

precision problems eventually arise, resulting in visible seams as numbers such as 0.000001 begin to span 2 pixels in screen space.

the system utilizes tiled multiresolution dynamic meshes; each mesh is scaled and oriented in local space for maximal numerical stability and manageability.

what i've tried to do to alleviate the seam problem is to reverse the imprecision, so to speak. rather than transforming the meshes into camera space, i'm transforming the camera into mesh space. what this does is introduce the imprecision maximally when the camera is far from the mesh, and yield optimal precision when the camera is near the mesh, which is exactly what is desirable: when the camera is distant from the mesh, the slight imprecision is negligible.
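to illustrate with some made-up numbers (python here just for the arithmetic; the mesh centre, vertex, and camera positions below are hypothetical), the trick is that the large offsets cancel in 64-bit before anything is rounded down to 32-bit:

```python
# a quick numeric sketch of why transforming the camera into mesh space
# preserves precision near the mesh: the large offsets cancel in 64-bit,
# before anything is rounded to 32-bit.
import struct

def f32(x):
    """Round a python float (64-bit) to the nearest 32-bit float."""
    return struct.unpack('f', struct.pack('f', x))[0]

mesh_origin  = 6371000.0   # mesh centre, far from the world origin
vertex_local = 0.2         # a vertex 0.2 units from the mesh centre
camera_world = 6371000.1   # camera 0.1 units from the mesh centre
# exact camera-relative position of the vertex: 0.2 - 0.1 = 0.1

# naive route: build 32-bit world-space coordinates, then subtract the camera
vertex_world = f32(f32(mesh_origin) + f32(vertex_local))
naive = f32(vertex_world - f32(camera_world))    # the 0.1 is wiped out

# camera-into-mesh-space route: subtract the large numbers in 64-bit first,
# so only small, mesh-local quantities ever get rounded to 32-bit
camera_local = camera_world - mesh_origin        # 64-bit subtraction
relative = f32(f32(vertex_local) - f32(camera_local))

print(naive, relative)   # naive is far off; relative is ~0.1 as it should be
```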

i feel like i've come upon the holy grail of large scale rendering, but there is one major hitch. because the meshes are scaled to a false scale (that is, no matter how small the actual mesh is in world space, they are all of equal size in local space), transforming the camera into each mesh's local space results in an inconsistent depth buffer: the progressively smaller meshes produce greater or lesser depth values, depending on how you think about it. also, the meshes are centered at x0y0z0 in local space, so there is not only a scaling of depth but a sort of pushing as well.

my impression is there might be a way to solve this problem using glDepthRange(), but its mechanism is not entirely clear to me, due to limited technical resources on hand.

my thought is it should be possible to correct the depth inconsistencies by using some scale and z offset component to augment the depth range.

for instance: glDepthRange(offset - scale * 0.5, offset + scale * 0.5);

my scale component seems to be successful, but i've yet to come across a formulation for solving the offset component so that the result is identical to the depth buffer which would result from a traditional approach.

essentially i would like some equations defining how the depth values are computed, including the glDepthRange state, and any other advice anyone can bring to the table.
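as far as i can piece together, for a standard glFrustum/gluPerspective projection the final depth value comes out like this (python sketch just to state the math; function and variable names are my own):

```python
def window_depth(ze, zn, zf, dr_near=0.0, dr_far=1.0):
    """Window-space depth of a point ze units in front of the camera,
    for a standard perspective projection with near/far planes zn/zf
    and glDepthRange(dr_near, dr_far)."""
    # projection matrix + perspective divide give NDC depth in [-1, 1]:
    #   z_ndc = (zf + zn)/(zf - zn) - 2*zf*zn / ((zf - zn) * ze)
    z_ndc = (zf + zn) / (zf - zn) - (2.0 * zf * zn) / ((zf - zn) * ze)
    # glDepthRange then maps [-1, 1] linearly onto [dr_near, dr_far]
    return z_ndc * (dr_far - dr_near) / 2.0 + (dr_far + dr_near) / 2.0

# sanity checks: the near plane lands on dr_near, the far plane on dr_far
print(window_depth(1.0, 1.0, 1000.0))              # ~0.0 at the near plane
print(window_depth(1000.0, 1.0, 1000.0))           # ~1.0 at the far plane
print(window_depth(1.0, 1.0, 1000.0, 0.25, 0.75))  # ~0.25 with a narrowed range
```

note the 1/ze term: depth is hyperbolic in eye-space distance, which is the "exponential" falloff of depth precision i mentioned.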

i will try to bring more to the table later, but i think this is enough information for now.

forgive me, i tend to be not so successful at finding information on the internet, i stumbled across this forum, and thought it might be a start.

sincerely,

michael

Posted: 15.07.2004, 00:37
by Guest
i forgot to add that there is also likely an exponential component, so it might be necessary to modify the near and far planes for each mesh as well, in order to align the exponential falloff of depth precision.
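one encouraging property here (a quick check in python, using my own sketch of the standard projection, so treat it as such): the NDC depth is invariant if the viewing distance and the near/far planes are all scaled by the same factor. so if each mesh's local space is just the world uniformly scaled, setting that mesh's near/far planes to the scaled values should reproduce the traditional depth values exactly, with no offset term needed:

```python
def ndc_depth(ze, zn, zf):
    # NDC depth (in [-1, 1]) for a point ze units in front of the camera,
    # standard perspective projection with near/far planes zn/zf
    return (zf + zn) / (zf - zn) - (2.0 * zf * zn) / ((zf - zn) * ze)

# scaling the viewing distance AND the near/far planes by the same factor s
# leaves the depth value unchanged -- the s terms cancel algebraically
for s in (1e-6, 1e-3, 1e3):
    for ze in (2.0, 50.0, 900.0):
        a = ndc_depth(ze, 1.0, 1000.0)
        b = ndc_depth(s * ze, s * 1.0, s * 1000.0)
        assert abs(a - b) < 1e-9
```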

if i can work this out the results would be spectacular, but i'm afraid i'm just trading spatial seams for depth seams. converting the system entirely to 64-bit would probably solve the problem as well, but it would double a lot of the memory requirements, both in primary and video memory, and i'm assuming that 64-bit instructions on 32-bit machines require significantly more cycles.

sincerely,

michael

Posted: 15.07.2004, 03:24
by guest - michael
i've had some time to do some more work with this technique.

i feel like i'm going in the right direction. as far as seams are concerned it seems to be perfect. right now i'm on the surface of a planet at an amazing scale, maybe a few times larger than earth, 36000 units from the center, with zero seams, using only 32-bit spatial data and 64-bit transforms. before working with this system, numbers like 0.000001 would project to 2 pixels at this scale.

if someone from your community can help me work out the depth problem i would be happy to share the details of the technique.

i won't be giving up though; i'm sure i will stare at this problem until i'm unequivocally sure it is impossible with opengl.

there is no problem with the depth buffer using traditional transforms. it's just the nature of the false depth system that causes discontinuities in the depth buffer.

it may simply be a matter of modulating the near and far planes in order to correct the depth buffer seams.

anyhow, i'm going to spend some more time trying to understand the opengl depth buffer system. i found one or two urls here and there that said they had equations, but they were images rather than text, and the references were defunct.

if someone could point me to these specifications i would appreciate it.

sincerely,

michael

Posted: 15.07.2004, 07:18
by guest - michael
hmmm, no advice here it seems.

i would really like to figure this out. the depth buffer calculations and pseudodepth are really not material i'm very comfortable with.

i think what i will do in the meantime is render everything normally, then use some test to determine whether a mesh is likely to create seams; if so, turn off the depth mask, set the depth test to something near 1, and use the alternative rendering method to render the borders of the mesh. this will still likely yield the occasional artifact where some protrusion might not be properly occluded within a seam, but it should be much less noticeable for the time being.

in the meantime i will continue to pursue a means of yielding proper depth ranges for this technique, a technique which i feel could be of great value to people building large scale geometric visualization systems.

sincerely,

michael

Posted: 15.07.2004, 11:13
by selden
Michael,

Unfortunately, this is primarily a Celestia users' Web site, not an OpenGL users' development forum. I suspect there's only one person here who knows enough about OpenGL and the functionality that's actually available to be able to answer your questions authoritatively, and he's quite busy.

My personal guess is that you'll have to subdivide your models, so that the individual pieces can be scaled to fit.

Posted: 02.08.2004, 02:50
by Paul
Hi Michael,

I suspect that another reason you didn't get many replies is that the 'Celestia Development' forum is rather quiet, judging by the paltry number of posts made since I last visited about three months ago. Celestia's ongoing development has slowed greatly in the past two years, possibly because it has caught up with the rate at which new (and relevant) astronomical data becomes available (and because of a general unwillingness to fill the rest of Celestia's mostly empty universe with anything we don't have concrete data on).

What you are doing with camera transformation into mesh space has been done in a couple of other apps I have seen, and sounds equivalent to what Celestia does with its camera coordinates. I'm surprised that nobody has commented on how (or whether) Celestia uses the depth buffer (but then as I said, this forum is pretty much dead).
As to the depth buffer issue, I have been investigating something similar myself and have concluded that there are two courses of action:

1. Don't use the depth buffer - sort the primitives instead (painter's algorithm). If your terrain mesh data is arranged into a grid you can render it in sorted order reasonably efficiently, based simply on the camera position and orientation, although it would get slower and more complicated depending on whether you want to render other objects on the terrain.

2. Group primitives into regions that can use the same camera (world) transform, and clear the depth buffer between rendering the regions. This may result in worse performance than option 1, but that would depend on the hardware. And to be honest I haven't tried it, so I don't know if there's something preventing it from being possible.
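To sketch option 1: for grid tiles, sorting the tile centres farthest-first by distance from the camera is enough to establish a painter's-algorithm draw order (Python sketch; the function and data names are invented, not from any particular engine):

```python
def back_to_front(tile_centers, cam_pos):
    """Sort grid-tile centres farthest-first for a painter's-algorithm pass."""
    def dist2(center):
        # squared distance is enough for ordering; avoids the sqrt
        return sum((c - p) ** 2 for c, p in zip(center, cam_pos))
    return sorted(tile_centers, key=dist2, reverse=True)

tiles = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (5.0, 0.0, 0.0)]
order = back_to_front(tiles, (0.0, 0.0, 1.0))
print(order)   # -> [(10.0, 0.0, 0.0), (5.0, 0.0, 0.0), (0.0, 0.0, 0.0)]
```

Each tile would then be drawn in that order with depth testing disabled; per-tile sorting only works cleanly when the tiles themselves don't interpenetrate.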

Anyway, I hope that helps a bit (and you've come back to read it).

Cheers,
Paul