Salutations,
Essentially, I'm attempting to render infinitely large environments via OpenGL.
A more specific instance is rendering the data set below at "true" scale:
http://www.ngdc.noaa.gov/mgg/image/2minrelief.html
I say true, but I mean only that the finished result should appear to be the appropriate scale.
To be more explicit, that data is being used to drive a dynamic spherical simulation of the planet Earth.
Precision problems eventually arise, resulting in visible seams as differences on the order of 0.000001 begin to span two pixels in screen space.
The system uses tiled, multiresolution dynamic meshes; each mesh is scaled and oriented in its own local space for maximum numerical stability and manageability.
What I've tried to do to alleviate the seam problem is to reverse the imprecision, so to speak: rather than transforming the meshes into camera space, I transform the camera into mesh space. This introduces the imprecision maximally when the camera is far from the mesh, and yields optimal precision when the camera is near the mesh, which is exactly what is desirable; when the camera is distant from the mesh, the slight imprecision is negligible.
I feel like I've come upon the holy grail of large-scale rendering, but there is one major hitch. Because the meshes are scaled to a false scale (no matter how small a mesh actually is in world space, all meshes are the same size in local space), transforming the camera into mesh-local space produces an inconsistent depth buffer: the progressively smaller meshes yield greater or lesser depth values, depending on how you think about it. The meshes are also centered at (0,0,0) in local space, so in addition to the scaling of depth there is a sort of pushing as well.
My impression is there might be a way to solve this problem using glDepthRange(), but its mechanism is not entirely clear to me due to the limited technical resources on hand.
My thought is it should be possible to correct the depth inconsistencies by using some scale and z-offset component to augment the depth range.
For instance: glDepthRange(offset - scale*0.5, offset + scale*0.5);
My scale component seems to be successful, but I've yet to come across a formulation for the offset component such that the result is identical to the depth buffer a traditional approach would produce.
Essentially, I would like some equations defining how depth values are computed, including the effect of the glDepthRange state, plus any other advice anyone can bring to the table.
I will try to bring more to the table later, but I think this is enough information for now.
Forgive me; I tend not to be very successful at finding information on the internet. I stumbled across this forum and thought it might be a start.
Sincerely,
Michael