Chapter 11. Specifying OpenGL Environment Variable Settings

Full scene antialiasing

Antialiasing is a technique used to smooth the edges of objects in a scene to reduce the jagged "stairstep" effect that sometimes appears. By setting the appropriate environment variable, you can enable full-scene antialiasing in any OpenGL application on GPUs that support it.

Several antialiasing methods are available and you can select between them by setting the __GL_FSAA_MODE environment variable appropriately. Note that increasing the number of samples taken during FSAA rendering may decrease performance.

To see the available values for __GL_FSAA_MODE along with their descriptions, run:

    nvidia-settings --query=fsaa --verbose

The __GL_FSAA_MODE environment variable uses the same integer values that are used to configure FSAA through nvidia-settings and the NV-CONTROL X extension. In other words, these two commands are equivalent:

    export __GL_FSAA_MODE=5

    nvidia-settings --assign FSAA=5

Note that there are three FSAA-related configuration attributes (FSAA, FSAAAppControlled and FSAAAppEnhanced) which together determine how a GL application will behave. If FSAAAppControlled is 1, the FSAA value specified through nvidia-settings will be ignored in favor of what the application requests through FBConfig selection. If FSAAAppControlled is 0 but FSAAAppEnhanced is 1, then the FSAA value specified through nvidia-settings will only be applied if the application selected a multisample FBConfig.

Therefore, to be completely correct, the nvidia-settings command line to unconditionally assign FSAA should be:

    nvidia-settings --assign FSAA=5 --assign FSAAAppControlled=0 --assign FSAAAppEnhanced=0
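
The exported value applies to every OpenGL application subsequently started from that shell. To enable FSAA for a single application only, the variable can instead be set on its command line; in this sketch, myglapp is a hypothetical application binary:

    __GL_FSAA_MODE=5 ./myglapp

Set this way, the variable affects only that process and its children, leaving other applications started from the same shell untouched.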

Anisotropic texture filtering

Automatic anisotropic texture filtering can be enabled by setting the environment variable __GL_LOG_MAX_ANISO. The possible values are:

__GL_LOG_MAX_ANISO Filtering Type
0 No anisotropic filtering
1 2x anisotropic filtering
2 4x anisotropic filtering
3 8x anisotropic filtering
4 16x anisotropic filtering

4x and greater are only available on GeForce3 or newer GPUs; 16x is only available on GeForce 6800 or newer GPUs.
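
For example, to request 4x anisotropic filtering for OpenGL applications started from the current shell (a sketch using the values from the table above; glxgears simply stands in for any OpenGL application):

    export __GL_LOG_MAX_ANISO=2
    glxgears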

Vblank syncing

Setting the environment variable __GL_SYNC_TO_VBLANK to a non-zero value will force glXSwapBuffers to sync to your monitor's vertical refresh (perform a swap only during the vertical blanking period).

When using __GL_SYNC_TO_VBLANK with TwinView, OpenGL can only sync to one of the display devices; this may cause tearing corruption on the display device to which OpenGL is not syncing. You can use the environment variable __GL_SYNC_DISPLAY_DEVICE to specify to which display device OpenGL should sync. You should set this environment variable to the name of a display device; for example "CRT-1". Look for the line "Connected display device(s):" in your X log file for a list of the display devices present and their names. You may also find it useful to review Chapter 13, Configuring TwinView, and the section on Ensuring Identical Mode Timings in Chapter 19, Programming Modes.
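
As a sketch, the following forces vblank syncing and directs it at a particular display device in a TwinView configuration; the device name "CRT-1" is only an example, so substitute a name reported in your own X log file:

    export __GL_SYNC_TO_VBLANK=1
    export __GL_SYNC_DISPLAY_DEVICE="CRT-1"
    glxgears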

Controlling the sorting of OpenGL FBConfigs

The NVIDIA GLX implementation sorts FBConfigs returned by glXChooseFBConfig() as described in the GLX specification. To disable this behavior, set __GL_SORT_FBCONFIGS to 0 (zero); FBConfigs will then be returned in the order they were received from the X server. To examine the order in which FBConfigs are returned by the X server, run:

    nvidia-settings --glxinfo

This option may be useful to work around problems in which applications pick an unexpected FBConfig.
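
For example, to run a single application with the driver's FBConfig sorting disabled (a sketch; myglapp is a hypothetical application binary):

    __GL_SORT_FBCONFIGS=0 ./myglapp

The nvidia-settings --glxinfo command shown above can be used beforehand to inspect the server's ordering and decide whether disabling the sort is likely to help.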

OpenGL yield behavior

There are several cases where the NVIDIA OpenGL driver needs to wait for external state to change before continuing. To avoid consuming too much CPU time in these cases, the driver will sometimes yield so the kernel can schedule other processes to run while the driver waits. For example, when waiting for free space in a command buffer, if the free space has not become available after a certain number of iterations, the driver will yield before it continues to loop.

By default, the driver calls sched_yield() to do this. However, this can cause the calling process to be scheduled out for a relatively long period of time if there are other, same-priority processes competing for time on the CPU. One example of this is when an OpenGL-based composite manager is moving and repainting a window and the X server is trying to update the window as it moves, which are both CPU-intensive operations.

You can use the __GL_YIELD environment variable to work around these scheduling problems. This variable allows the user to specify what the driver should do when it wants to yield. The possible values are:

__GL_YIELD Behavior
<unset> By default, OpenGL will call sched_yield() to yield.
"NOTHING" OpenGL will never yield.
"USLEEP" OpenGL will call usleep(0) to yield.

Controlling which OpenGL FBConfigs are available

The NVIDIA GLX implementation will hide FBConfigs that are associated with a 32-bit ARGB visual when the XLIB_SKIP_ARGB_VISUALS environment variable is defined. This matches the behavior of libX11, which will hide those visuals from XGetVisualInfo and XMatchVisualInfo. This environment variable is useful when applications are confused by the presence of these FBConfigs.
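
For example, to hide the 32-bit ARGB FBConfigs and visuals from a single application that is confused by them (a sketch; myglapp is a hypothetical application binary, and only the presence of the variable matters, not its value):

    XLIB_SKIP_ARGB_VISUALS=1 ./myglapp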

Using Unofficial GLX protocol

By default, the NVIDIA GLX implementation will not expose GLX protocol for GL commands if the protocol is not considered complete. Protocol could be considered incomplete for a number of reasons. The implementation could still be under development and contain known bugs, or the protocol specification itself could be under development or going through review. If users would like to test the client-side portion of such protocol when using indirect rendering, they can set the __GL_ALLOW_UNOFFICIAL_PROTOCOL environment variable to a non-zero value before starting their GLX application. When an NVIDIA GLX server is used, the related X Config option AllowUnofficialGLXProtocol will need to be set as well to enable support in the server.
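
As a sketch, enabling this on a system with an NVIDIA GLX server involves both sides. The server-side option would be added to the X configuration file (its exact section placement is an assumption here; see the driver's description of X config options):

    Option "AllowUnofficialGLXProtocol" "true"

After restarting the X server, the client-side variable can then be set when launching the application, which must be using indirect rendering for GLX protocol to be generated at all (myglapp is a hypothetical application binary):

    __GL_ALLOW_UNOFFICIAL_PROTOCOL=1 ./myglapp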