Hi!
I have Linux, Celestia 1.4.1, and the ATI 8.22 drivers (fglrx). When I run Celestia I get:
Initializing ARB vertex programs . . .
Loading ARB vertex program: shaders/diffuse_arb.vp
Loading ARB vertex program: shaders/specular_arb.vp
Loading ARB vertex program: shaders/haze_arb.vp
Loading ARB vertex program: shaders/bumpdiffuse_arb.vp
Loading ARB vertex program: shaders/bumphaze_arb.vp
Loading ARB vertex program: shaders/shadowtex_arb.vp
Loading ARB vertex program: shaders/diffuse_texoff_arb.vp
Loading ARB vertex program: shaders/rings_arb.vp
Loading ARB vertex program: shaders/ringshadow_arb.vp
Loading ARB vertex program: shaders/night_arb.vp
Loading ARB vertex program: shaders/glossmap_arb.vp
Loading ARB vertex program: shaders/diffuse2_arb.vp
Loading ARB vertex program: shaders/haze2_arb.vp
Loading ARB vertex program: shaders/diffuse_texoff2_arb.vp
Loading ARB vertex program: shaders/specular2_arb.vp
Loading ARB vertex program: shaders/night2_arb.vp
Loading ARB vertex program: shaders/ell_galaxy_arb.vp
All ARB vertex programs loaded successfully.
render path: 3
Segmentation fault
When I uncommented "IgnoreGLExtensions..." I then got an error:
render path 1
Segmentation fault
Thanks
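(For reference: the "IgnoreGLExtensions" directive mentioned above lives in celestia.cfg. A sketch of what the uncommented entry typically looks like; the extension name shown is only an example and may differ in your config:)
Code:
IgnoreGLExtensions [ "GL_ARB_vertex_program" ]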
Linux and ATI drivers 8.22
lumiwa wrote: Now I have ATI drivers 8.23 and the same problem. I cannot run Celestia.
I have 8.24 ATI drivers on a Radeon 9200 Pro and I get more or less the same error
Code:
giacomo@ubuntu:~$ celestia
Initializing ARB vertex programs . . .
Loading ARB vertex program: shaders/diffuse_arb.vp
Loading ARB vertex program: shaders/specular_arb.vp
Loading ARB vertex program: shaders/haze_arb.vp
Loading ARB vertex program: shaders/bumpdiffuse_arb.vp
Error in vertex program shaders/bumpdiffuse_arb.vp, line 10: Error on line 12: malformed declaration (hint: '0.5')
render path: 1
Segmentation fault
I suppose something is wrong with the shaders.
Hope you can fix it soon; I'm lost without Celestia.
t00fri (Developer) replied:
giacomolg wrote: I have 8.24 ATI drivers on a Radeon 9200 Pro and I get more or less the same error.
lumiwa wrote: Now I have ATI drivers 8.23 and the same problem. I cannot run Celestia.
But that's an old story, really. It's not a CELESTIA bug but a familiar ATI driver bug in their proprietary drivers. I have a Radeon 9200SE in one of my desktops and know much of this segfault history. Since you did not tell us what OS you are using, I have to stop here with specific advice.
Anyhow, my DELL laptop performs nicely with Celestia 1.4.1 under /Windows/ with some fairly recent ATI driver (I don't recall its number now) and a Radeon Mobility card.
In Linux, however, the proprietary fglrx drivers always segfault with the 9200SE, while the ATI driver that ships with recent X.org X servers works perfectly fine.
giacomolg wrote: Hope you can fix it soon; I'm lost without Celestia.
Definitely NOT, since it's an ATI bug.
Bye Fridger
Thanks for answering
Is there a way to disable shaders in Celestia?
Problem:
--------
Celestia 1.4.1: segfault upon startup under Linux using ATI Binary drivers.
System info:
------------
Linux 2.6.16-gentoo-r7 #6 PREEMPT Sun May 14 17:51:46 EEST 2006 i686 Intel(R) Pentium(R) M processor 1500MHz GNU/Linux
01:00.0 VGA compatible controller: ATI Technologies Inc Radeon R250 Lf [FireGL 9000] (rev 02)
ati-drivers-8.24.8
Segmentation fault in:
----------------------
celestia-1.4.1/src/celengine/render.cpp, line 7115
glDrawArrays(GL_QUADS, 0, nStars * 4);
Possible reason:
----------------
Too deep recursion in some function in the ATI binary file atiogl_a_dri.so
Workaround:
-----------
Disable star rendering.
For example in GNOME:
start gconf-editor and uncheck apps->celestia->render->stars
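(Equivalently from a terminal, a sketch assuming the gconf key path quoted above:)
Code:
gconftool-2 --type bool --set /apps/celestia/render/stars false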
Quick fix:
----------
celestia-1.4.1/src/celengine/render.cpp, line 162
starVertexBuffer = new StarVertexBuffer(64); //instead of 2048
Suggested fix:
--------------
The buffer capacity should be a configurable parameter
(e.g. placed in /usr/share/celestia/celestia.cfg)
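(A minimal sketch of what such a configurable parameter might look like around render.cpp line 162; the "config" object and its "starVertexBufferSize" field are hypothetical here, not an existing Celestia setting:)
Code:
// Sketch only -- "starVertexBufferSize" would be read from celestia.cfg;
// 2048 is the value currently hard-coded in render.cpp.
unsigned int bufferSize = 2048;
if (config != NULL && config->starVertexBufferSize > 0)
    bufferSize = config->starVertexBufferSize;
starVertexBuffer = new StarVertexBuffer(bufferSize);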
chris (Site Admin) replied:
giavoltor wrote: Quick fix:
----------
celestia-1.4.1/src/celengine/render.cpp, line 162
starVertexBuffer = new StarVertexBuffer(64); //instead of 2048
Suggested fix:
--------------
The buffer capacity should be a configurable parameter
(e.g. placed in /usr/share/celestia/celestia.cfg)
Decreasing the size of the star vertex buffer could significantly hurt performance when many stars are being rendered. In general, batching primitives is very important for attaining good performance with 3D graphics hardware. I'm reluctant to add a hackish workaround such as a config file parameter because of a broken driver. However, if the problem is widespread enough, I might consider it.
--Chris
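(To illustrate the batching point with a sketch built around the draw call quoted in the bug report above; nothing beyond that one line is Celestia's actual code:)
Code:
// Batched: one API call submits all star quads (the call at render.cpp line 7115).
glDrawArrays(GL_QUADS, 0, nStars * 4);

// Unbatched: one call per star draws the same vertices but multiplies
// the per-call driver overhead, which is what hurts performance.
for (int i = 0; i < nStars; ++i)
    glDrawArrays(GL_QUADS, i * 4, 4);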
chris wrote: Decreasing the size of the star vertex buffer could significantly hurt performance when many stars are being rendered. In general, batching primitives is very important for attaining good performance with 3D graphics hardware. I'm reluctant to add a hackish workaround such as a config file parameter because of a broken driver.
I would definitely appreciate it if the buffer size were made configurable in some way. That would spare me from patching Celestia every time a new update comes out. If the default value stays at 2048, 99.9% of Celestia users will not notice any difference at all, but for the remaining 0.1% of us who run Linux and are stuck with an ATI card for some reason (for example, I cannot replace the Mobility Radeon 9000 that comes in my IBM ThinkPad T41), this parameter will make all the difference between having an application that does not want to run and enjoying a great space simulator.
As for performance considerations: the ATI binary Linux driver (if/when it works) provides drastically better performance than the X.org one. Although the open-source driver works, I can only achieve a top frame rate of 5 (five) frames per second while looking at Earth with the Milky Way as a backdrop and Auto Magnitude set to maximum. With the proprietary drivers, and after decreasing the buffer size to 64, Celestia no longer segfaults and achieves at least 20 FPS for the same scene. In both cases the scene is rendered with the textures bundled with the source tar.gz, i.e. without any add-ons.
Additionally, some performance is probably gained by the fact that compiling Celestia from source on Linux allows processor-specific and other optimizations; in my case, I compiled it explicitly for the Pentium M in my notebook.
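(For example, a processor-specific build along these lines; the flags are illustrative and assume the standard configure/make build of the 1.4.1 source tarball:)
Code:
CFLAGS="-O2 -march=pentium-m" CXXFLAGS="-O2 -march=pentium-m" ./configure
make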
t00fri (Developer) replied:
Since I have been studying this Radeon 9200SE ATI driver problem for a while on one of my computers, I have also run a number of benchmarks comparing (well-configured and /recent/) X.org drivers against the latest ATI drivers. Note that the latest X.org drivers have improved a LOT. I get fps differences of around a factor of <=1.5, but certainly NOT a factor of 4! It could also be that your CPU is slow... Mine is 2.8 GHz with 1 GB RAM.
My benchmarks also included other OpenGL applications, and the overall statement remains true there, too.
Bye Fridger