GPU Memory Optimization

With the GPU Memory Optimization option, the system preallocates GPU memory for tool optimization. This feature provides significant speed improvement with Windows Display Driver Model (WDDM) drivers and with Tesla Compute Cluster (TCC) drivers.

GPU Memory Optimization is enabled by default, with a default allocation of 2 GB. You can modify these settings either from the Options entry of the VisionPro Deep Learning Launch menu, or by selecting Compute Devices from the Help menu. For more information, see Launch VisionPro Deep Learning.

However, the amount of reserved memory must be chosen carefully for each application. The largest performance gains are seen in applications that process small images.

  • If you turn on this option, the system preallocates GPU memory for optimization. Turn on this option when using Focused mode tools to speed up training and processing.

  • If you turn off this option, the system stops preallocating GPU memory. Turn off this option when training High Detail tools, because preallocation slows their training.

Note: Enabling Optimized GPU Memory (Help - Compute Devices) is not recommended when training both High Detail and Focused mode tools within a single stream. Using multiple modes in the same stream while Optimized GPU Memory is enabled can significantly slow processing, regardless of the number of GPUs used.

You can deactivate the feature or change the allocation settings via the API or through command line arguments.

For example, in the .NET API, call control.OptimizedGPUMemory(2.5*1024*1024*1024ul); in the C API, call vidi_optimized_gpu_memory(2.5*1024*1024*1024);