Looks like the driver limits the max FPS to the refresh rate (Hz) of your display.
I have installed the fglrx driver for my ATI Radeon HD 4670 and I get very low frame rates in fgl_glxgears. Can somebody help me with this?
Code:
fgl_glxgears:
--------------
234 frames in 5.0 seconds = 46.800 FPS
300 frames in 5.0 seconds = 60.000 FPS
300 frames in 5.0 seconds = 60.000 FPS
300 frames in 5.0 seconds = 60.000 FPS

My glxinfo is:
---------------
name of display: :0.0
display: :0  screen: 0
direct rendering: Yes
server glx vendor string: SGI
server glx version string: 1.2
server glx extensions:
    GLX_ARB_multisample, GLX_EXT_import_context, GLX_EXT_texture_from_pixmap,
    GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_OML_swap_method,
    GLX_SGIS_multisample, GLX_SGIX_fbconfig, GLX_SGIX_visual_select_group
client glx vendor string: SGI
client glx version string: 1.4
client glx extensions:
    GLX_ARB_create_context, GLX_ARB_get_proc_address, GLX_ARB_multisample,
    GLX_EXT_import_context, GLX_EXT_visual_info, GLX_EXT_visual_rating,
    GLX_MESA_allocate_memory, GLX_MESA_swap_control, GLX_MESA_swap_frame_usage,
    GLX_NV_swap_group, GLX_OML_swap_method, GLX_OML_sync_control,
    GLX_SGI_make_current_read, GLX_SGI_swap_control, GLX_SGI_video_sync,
    GLX_SGIS_multisample, GLX_SGIX_fbconfig, GLX_SGIX_pbuffer,
    GLX_SGIX_swap_barrier, GLX_SGIX_swap_group, GLX_SGIX_visual_select_group,
    GLX_EXT_texture_from_pixmap
GLX version: 1.2
GLX extensions:
    GLX_ARB_create_context, GLX_ARB_get_proc_address, GLX_ARB_multisample,
    GLX_EXT_import_context, GLX_EXT_visual_info, GLX_EXT_visual_rating,
    GLX_MESA_swap_control, GLX_NV_swap_group, GLX_OML_swap_method,
    GLX_SGI_video_sync, GLX_SGIS_multisample, GLX_SGIX_fbconfig,
    GLX_SGIX_swap_barrier, GLX_SGIX_swap_group, GLX_SGIX_visual_select_group
OpenGL vendor string: ATI Technologies Inc.
OpenGL renderer string: ATI Radeon HD 4670
OpenGL version string: 2.1.8494 Release
OpenGL shading language version string: 1.20
OpenGL extensions:
    GL_AMDX_vertex_shader_tessellator, GL_AMD_performance_monitor,
    GL_AMD_texture_texture4, GL_ARB_color_buffer_float,
    GL_ARB_depth_buffer_float, GL_ARB_depth_texture, GL_ARB_draw_buffers,
    GL_ARB_draw_instanced, GL_ARB_fragment_program,
    GL_ARB_fragment_program_shadow, GL_ARB_fragment_shader,
    GL_ARB_framebuffer_object, GL_ARB_framebuffer_sRGB,
    GL_ARB_half_float_pixel, GL_ARB_half_float_vertex,
    GL_ARB_instanced_arrays, GL_ARB_map_buffer_range, GL_ARB_multisample,
    GL_ARB_multitexture, GL_ARB_occlusion_query, GL_ARB_pixel_buffer_object,
    GL_ARB_point_parameters, GL_ARB_point_sprite, GL_ARB_shader_objects,
    GL_ARB_shader_texture_lod, GL_ARB_shading_language_100, GL_ARB_shadow,
    GL_ARB_shadow_ambient, GL_ARB_texture_border_clamp,
    GL_ARB_texture_compression, GL_ARB_texture_compression_rgtc,
    GL_ARB_texture_cube_map, GL_ARB_texture_env_add,
    GL_ARB_texture_env_combine, GL_ARB_texture_env_crossbar,
    GL_ARB_texture_env_dot3, GL_ARB_texture_float,
    GL_ARB_texture_mirrored_repeat, GL_ARB_texture_non_power_of_two,
    GL_ARB_texture_rectangle, GL_ARB_texture_rg, GL_ARB_transpose_matrix,
    GL_ARB_vertex_array_object, GL_ARB_vertex_buffer_object,
    GL_ARB_vertex_program, GL_ARB_vertex_shader, GL_ARB_window_pos,
    GL_ATI_draw_buffers, GL_ATI_envmap_bumpmap, GL_ATI_fragment_shader,
    GL_ATI_meminfo, GL_ATI_separate_stencil, GL_ATI_texture_compression_3dc,
    GL_ATI_texture_env_combine3, GL_ATI_texture_float, GL_EXT_abgr,
    GL_EXT_bgra, GL_EXT_bindable_uniform, GL_EXT_blend_color,
    GL_EXT_blend_equation_separate, GL_EXT_blend_func_separate,
    GL_EXT_blend_minmax, GL_EXT_blend_subtract, GL_EXT_compiled_vertex_array,
    GL_EXT_copy_texture, GL_EXT_draw_buffers2, GL_EXT_draw_range_elements,
    GL_EXT_fog_coord, GL_EXT_framebuffer_blit,
    GL_EXT_framebuffer_multisample, GL_EXT_framebuffer_object,
    GL_EXT_framebuffer_sRGB, GL_EXT_gpu_program_parameters,
    GL_EXT_gpu_shader4, GL_EXT_multi_draw_arrays,
    GL_EXT_packed_depth_stencil, GL_EXT_packed_float, GL_EXT_packed_pixels,
    GL_EXT_point_parameters, GL_EXT_rescale_normal, GL_EXT_secondary_color,
    GL_EXT_separate_specular_color, GL_EXT_shadow_funcs, GL_EXT_stencil_wrap,
    GL_EXT_subtexture, GL_EXT_texgen_reflection, GL_EXT_texture3D,
    GL_EXT_texture_array, GL_EXT_texture_compression_latc,
    GL_EXT_texture_compression_rgtc, GL_EXT_texture_compression_s3tc,
    GL_EXT_texture_cube_map, GL_EXT_texture_edge_clamp,
    GL_EXT_texture_env_add, GL_EXT_texture_env_combine,
    GL_EXT_texture_env_dot3, GL_EXT_texture_filter_anisotropic,
    GL_EXT_texture_integer, GL_EXT_texture_lod_bias,
    GL_EXT_texture_mirror_clamp, GL_EXT_texture_object,
    GL_EXT_texture_rectangle, GL_EXT_texture_sRGB,
    GL_EXT_texture_shared_exponent, GL_EXT_transform_feedback,
    GL_EXT_vertex_array, GL_KTX_buffer_region, GL_NV_blend_square,
    GL_NV_conditional_render, GL_NV_copy_depth_to_color,
    GL_NV_texgen_reflection, GL_SGIS_generate_mipmap,
    GL_SGIS_texture_edge_clamp, GL_SGIS_texture_lod, GL_WIN_swap_hint,
    WGL_EXT_swap_control
.......
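The steady "300 frames in 5.0 seconds = 60.000 FPS" readings above are just simple arithmetic, and the fact that they land exactly on 60 is the telltale sign of a 60 Hz panel with vsync on. A quick sketch of the math (nothing driver-specific here):

```python
# glxgears counts frames over roughly 5-second intervals and divides.
frames, seconds = 300, 5.0
fps = frames / seconds
print(fps)        # 60.0 -> pinned exactly at the panel's 60 Hz refresh

# The first interval includes startup overhead, hence the lower reading:
print(234 / 5.0)  # 46.8
```

When the reported FPS sits exactly on the refresh rate interval after interval, the renderer is waiting for vblank rather than running flat out.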
Last edited by nascentmind; 03-23-2009 at 02:28 AM.
... glxgears is not a benchmark.
do a quick search for the terms: glxgears bridgman site:phoronix.com/forums
and you'll get posts like this : http://www.phoronix.com/forums/showp...16&postcount=4
or this one: http://www.phoronix.com/forums/showp...7&postcount=41

The glxgears program is so far removed from a typical application that it's barely worth running other than making sure you have some kind of 3D support. I don't think your glxgears numbers sound unreasonable.
Right. The glxgears program uses only fixed-function graphics (no shaders) and draws shaded triangles with no textures. Between shaders and texture units, that's maybe 80% of a modern GPU sitting idle. Vertex shaders get used to emulate the fixed-function transform and lighting, but that's about it. The only block that works hard is the ROP/RBE, i.e. the part that handles the depth compare (Z-buffer) and writes pixels into video memory.
Even worse, since older chips used relatively more of their silicon area for ROP/RBE than new chips, it's not unusual for an old GPU to outperform a new GPU on glxgears.
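To put "most of the GPU sitting idle" in rough numbers: here is a back-of-the-envelope estimate of the pixel throughput a high fgl_glxgears score actually represents. The 30% window-coverage figure and the multi-gigapixel fill rate are assumptions for illustration, not numbers from this thread:

```python
# Rough estimate of how little pixel throughput glxgears demands.
# Assumptions (illustrative, not measured): ~30% of the window is
# covered by gear pixels, and a card of this era has a theoretical
# fill rate on the order of several Gpix/s.
window_px = 300 * 300       # default glxgears window size
coverage = 0.30             # assumed fraction of the window drawn
fps = 2604                  # a vsync-off fgl_glxgears-class score

pixels_per_second = window_px * coverage * fps
print(f"{pixels_per_second / 1e6:.0f} Mpix/s")  # ~70 Mpix/s

assumed_fill_rate = 4e9     # assumed ~4 Gpix/s theoretical ROP rate
print(f"ROP utilisation: {pixels_per_second / assumed_fill_rate:.1%}")
```

Even a "huge" glxgears number implies tens of megapixels per second, a few percent of what the ROPs can do, which is why the score says so little about real 3D performance.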
Next, do a search for the terms: glxgears is not a benchmark
and you'll get results like this: http://wiki.cchtml.com/index.php/Glx...ot_a_Benchmark
and this one: http://qa-rockstar.livejournal.com/7869.html

glxgears is an OpenGL program that reports FPS (frames per second) numbers. However, it is a very limited 'test'. Unlike most modern 3D games, glxgears:
* has an extremely low vertex/polygon count
* does no texturing at all
* only simple, flat shading is used (except for simple smooth shading inside the hole of each gear)
* all vertex data is stored in a display list, so almost nothing passes between the host CPU and the video card once rendering has started. This mostly implies the test is limited by video card fill rate. But see the next point.
* the default window size is 300x300, a large part of which is not even rendered into, so it's not even a good fill rate test
* the entire render step consists of only 21 OpenGL function calls, of which only 6 are unique. This is not a very good OpenGL API stress test. Something like glean would be better.
So to summarize, glxgears only tests a small part of what you typically see in a 3D game. You could have glxgears FPS performance increase, but your 3D game performance decrease. Likewise, you could have glxgears performance decrease and your 3D game performance increase.
Take the advice of the people writing the drivers. Don't use glxgears to determine the speed of your rendering.

One interesting fact came out of yesterday's Intel KMS Test Day. Everyone noticed that glxgears is much slower under KMS/DRI2 than it was before (e.g. in Fedora 9 or 10). A typical result was ~1000FPS in F10, but only ~440FPS in Rawhide. So what's changed?
Here's the deal, as best as I understand it (thanks to krh/halfline/ajax for trying to explain it to me): glxgears is rendering an insanely simple scene - so simple that the actual 3D rendering time is basically zero. So the only thing glxgears really tests is the performance of glXSwapBuffers() - basically, how fast we can push render buffers into the card. This operation is slower with DRI2, but - roughly speaking - unless it was an order of magnitude slower (e.g. glxgears drops from 1000FPS to under 100FPS) it wouldn't make any real difference.
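The FPS drop sounds dramatic until you convert it into per-frame time, which is the quantity that actually adds up in a real application. A quick worked example using the numbers quoted above:

```python
# Convert the KMS Test Day glxgears numbers into per-frame times.
t_old = 1000.0 / 1000   # ms per frame at 1000 FPS (Fedora 10)
t_new = 1000.0 / 440    # ms per frame at 440 FPS (Rawhide, DRI2)
extra = t_new - t_old   # extra buffer-swap cost per frame
print(f"extra swap cost per frame: {extra:.2f} ms")  # ~1.27 ms

# Apply that same fixed overhead to a real game running at 30 FPS
# (33.3 ms/frame): the effect is barely measurable.
game_fps = 1000.0 / (1000.0 / 30 + extra)
print(f"a 30 FPS game becomes: {game_fps:.1f} FPS")  # ~28.9 FPS
```

A 56% glxgears "regression" is a bit over a millisecond of extra glXSwapBuffers() cost, which costs a real game only a frame or two per second.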
So if you're going to be comparing "3D performance" - please, don't bother with glxgears. ajax has suggested a couple of things that might make better benchmarks:
* The mesa-demos package has a few useful things - teapot in particular is nicely simple.
* sierpinski3d and glblur from xscreensaver-gl-extras also work well.
* extremetuxracer is, in my opinion, the most fun way to benchmark 3D.
If you see major (100% or more) performance differences between KMS and non-KMS in those apps, it's probably worth investigating further.
I understand that glxgears is not a benchmarking tool. I had done some tests and it was running at 1000 FPS, and then suddenly, after some days (I'm not sure whether it was some updates or removing the effects from KDE), the frame rate dropped. Even Amarok's visualizations become sluggish once I keep at least 10 visualizations open. The whole of KDE also feels sluggish, with or without effects. And if I open glxgears and glxheads, CPU usage jumps to 100% on one core of my quad core.
Nille: How did you figure out that the driver is limiting the frame rate because of vertical sync?
I tried Unigine and I am getting 15 fps. Something is seriously wrong.
I don't remember the details but my recollection is that the Unigine demo is *very* demanding and did result in some pretty slow frame rates. It depends on the specific demo though... a lot of people are reporting frame rates lower than yours (albeit with slower cards) on the Tropics demo, for example.
What resolution are you running at, and which demo ?
Here are the benchmarks.
Code:
Binary: Linux 32bit GCC 4.1.2 Release Oct 29 2008
Operating system: Linux 2.6.27-14-generic x86_64
CPU model: Intel(R) Core(TM)2 Quad CPU Q8200 @ 2.33GHz
CPU flags: 2333MHz MMX SSE SSE2 SSE3 HT
GPU model: ATI Radeon HD 4670 2.1.8494 Release 512Mb

Settings
Render: opengl
Mode: 1024x768 6xAA fullscreen
Shaders: high
Textures: high
Filter: trilinear
Anisotropy: 16x
Occlusion: enabled
Reflection: enabled
Refraction: enabled
Volumetric: enabled
Regarding fgl_glxgears: it looks like you have vsync enabled, so it's limiting FPS to the refresh rate of the panel, 60 Hz = 60 FPS.
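With vsync on, the driver waits for the next vertical blank after every frame, so the achievable frame rate snaps to the refresh rate divided by a whole number of intervals. A small sketch of that quantization (illustrative model, not driver code; the function name is made up):

```python
import math

def vsync_capped_fps(render_seconds, refresh_hz=60):
    """FPS when each frame waits for the next vblank before display.

    A frame that misses a refresh interval has to wait for the next
    one, so FPS snaps to refresh_hz / n for some integer n >= 1.
    """
    intervals = max(1, math.ceil(render_seconds * refresh_hz))
    return refresh_hz / intervals

print(vsync_capped_fps(0.001))  # trivial scene like glxgears -> 60.0
print(vsync_capped_fps(0.020))  # 20 ms frame misses a vblank  -> 30.0
print(vsync_capped_fps(0.040))  # 40 ms frame                  -> 20.0
```

This is why a vsynced glxgears pins at exactly 60 FPS no matter how fast the GPU is, and why heavier scenes tend to step down to 30 or 20 rather than degrading smoothly.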
which is what I get with the 4850 mobile + P8600, 64-bit Ubuntu 8.10 + Catalyst 9.2.
Code:
aurelius:~$ fgl_glxgears
Using GLX_SGIX_pbuffer
13021 frames in 5.0 seconds = 2604.200 FPS
14333 frames in 5.0 seconds = 2866.600 FPS
14182 frames in 5.0 seconds = 2836.400 FPS
I briefly ran the Unigine Tropics demo at 1280x1024, but haven't had a chance to let it run to completion and save the data. From observation, I saw a low of 6 FPS with lots of trees and a light source, and a max of 60 FPS. I left everything at default values. I'd hazard that the average would have been around 30 FPS, but I'll try to run the complete demo as a benchmark later. (I noticed that when I exited, it failed to reset the desktop resolution back to 1680x1050... annoying.)
Last edited by cutterjohn; 03-25-2009 at 09:15 PM.
I disabled it using the ATI Catalyst Control Center. Also, is vsync important for LCDs? I have a Dell 21-inch LCD screen (Dell S2209W).
Code:
Using GLX_SGIX_pbuffer
2771 frames in 5.0 seconds = 554.200 FPS
2555 frames in 5.0 seconds = 511.000 FPS
2772 frames in 5.0 seconds = 554.400 FPS
Last edited by nascentmind; 03-24-2009 at 05:24 PM.