VisionPro Deep Learning Licenses

The type of license on the Security Key determines the functionality and performance level of the VisionPro Deep Learning tools in both runtime and training operation. For more information about the VisionPro Deep Learning licenses, please consult your Cognex sales representative.

License Description
Base

The base-level license is not optimized for fast performance. The application runs on the CPU only; GPUs are not supported. In addition, the tools operate only in inference mode, and you cannot retrain them while the application is deployed.

Standard

The standard-level license supports a single GPU; multiple GPUs are not supported. This license level also includes the Parameter Search utility. The standard license is available in the following configuration options:

  • A 1-year training license, which is used for application development.
  • A runtime license, with training capability, for deployed applications. With this option, the tools can operate in inference mode, and you can retrain the tools while the application is deployed.
  • A runtime-only license, without access to the training capability. With this option, the tools can only operate in inference mode.

Advanced

The advanced-level license is for high-speed inspection applications, high-resolution inspection applications, or both. This license level also includes the Parameter Search utility and the Deep Learning Client/Server functionality. The advanced license is available in the following configuration options:

  • A 1-year training license, which is used for application development, and supports up to four GPUs.
  • A runtime license, with training capability, for deployed applications. With this option, the tools can operate in inference mode, and you can retrain the tools while the application is deployed. This option supports up to two GPUs during inference operation.
  • A runtime-only license, without access to the training capability. With this option, the tools can only operate in inference mode. This option supports up to two GPUs during inference operation.