Optimized GPU Memory

The Optimized GPU Memory option optimizes GPU memory use for Standard-type tools. If you disable this option, the system instead pre-allocates GPU memory for Legacy-type tools. This option is enabled by default.

Memory pre-allocation affects performance, so consider the tool types in your stream before changing this setting.

Note: Do not disable Optimized GPU Memory when you train both Standard-type and Legacy-type tools within a stream, because disabling it causes significantly slower processing regardless of the number of GPUs you use.

Configuring Optimized GPU Memory in Cognex Deep Learning Studio

You can enable or disable Optimized GPU Memory in the following ways:

  • In the launch menu, select Options, then check or uncheck the Optimized GPU Memory checkbox. For more information, see Launch VisionPro Deep Learning.

  • In the toolbar, use the Optimized for Standard tools toggle (shown in green or red).

  • In the Compute Devices window, use the Optimized GPU Memory Setting toggle. To open this window, go to the Help menu.

When you disable the option, GPU memory pre-allocation becomes available, and you can set the amount of reserved memory to suit your application. Pre-allocation provides the greatest performance improvement for applications that process small images.

The following table shows the corresponding user interface states: screenshots of the Toolbar and of the Compute Devices window, in both the Enable: Optimization for Standard Tools and Disable: Optimization for Legacy Tools states.

Configuring Optimized GPU Memory through API or Command Line

You can enable or disable this option using the API or command line arguments. For example:

  • NET API: control.OptimizedGPUMemory(2.5*1024*1024*1024ul);

  • C API: vidi_optimized_gpu_memory(2.5*1024*1024*1024);