Multiple GPUs for VisionPro Deep Learning Service
The client/server functionality allows up to eight client PCs to make use of a single server for both training and processing. This allows the clients to share the use of one or more GPUs installed on the server. With regard to using multiple GPUs, the VisionPro Deep Learning server provides exactly the same capabilities and configuration options as the VisionPro Deep Learning GUI running locally.

Note: Training is the process by which your tool, a neural network, learns features (pixels) based on the labels you made. For example, a tool will learn the defect/normal pixels in each image based on the defect/normal labels you drew. The goal of training is to learn enough to give the correct inspection result of whether an unseen image is defective or not. The keys to training are ensuring that you include all possible variations within your training set and that your images are accurately labeled. Training times vary by the application, the tool setup, and the GPU in the PC being used to train the network.
- To use multiple GPUs, they must all be of the same type and have the same amount of memory.
- The maximum number of GPUs that can be used by a server is 4.
- Individual clients can request a specific GPU device when processing a stream.
- A single VisionPro Deep Learning server can be configured to provide a training service (to support clients running the VisionPro Deep Learning GUI), a run-time service (to support clients processing images through streams using the run-time interface), or both training and run-time services. All requests are serialized in a single FIFO (First In, First Out) queue and processed in order, using the first available GPU device.
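The FIFO dispatch described above can be sketched conceptually: all requests enter one queue, and each GPU worker pulls the next request as soon as it becomes free, so work always lands on the first available device. This is a minimal illustration of the queueing behavior, not the actual VisionPro Deep Learning implementation.

```python
import queue
import threading

def serve_fifo(requests, num_gpus):
    """Process requests in FIFO order across num_gpus workers.

    Returns a list of (request, gpu_id) pairs in completion order.
    """
    q = queue.Queue()
    for r in requests:
        q.put(r)

    results = []
    lock = threading.Lock()

    def worker(gpu_id):
        # Each worker models one GPU device pulling from the shared queue.
        while True:
            try:
                r = q.get_nowait()
            except queue.Empty:
                return
            # A real server would run training or inference on this GPU here.
            with lock:
                results.append((r, gpu_id))
            q.task_done()

    threads = [threading.Thread(target=worker, args=(g,)) for g in range(num_gpus)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

if __name__ == "__main__":
    done = serve_fifo([f"req-{i}" for i in range(8)], num_gpus=4)
    print(len(done))
```

Because the queue is shared, no GPU sits idle while requests are waiting, which matches the "first available GPU device" behavior described above.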
The GPU Mode on the server is set when the VisionPro Deep Learning Service is started, and command-line arguments can be used to specify the GPU Mode.
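As a hypothetical illustration of passing a GPU mode at service startup, the sketch below parses command-line style arguments. The flag names (`--gpu-mode`, `--gpu-count`) and mode values are invented for this example; the actual VisionPro Deep Learning Service defines its own arguments, which are not shown here.

```python
import argparse

def parse_gpu_mode(argv):
    # Hypothetical launcher arguments; not the real service's flag names.
    parser = argparse.ArgumentParser(description="Sketch of a service launcher")
    parser.add_argument(
        "--gpu-mode",
        choices=["single", "multiple"],
        default="single",
        help="How the service allocates GPU devices at startup",
    )
    parser.add_argument(
        "--gpu-count",
        type=int,
        default=1,
        help="Number of GPUs to use (a server supports a maximum of 4)",
    )
    return parser.parse_args(argv)

if __name__ == "__main__":
    args = parse_gpu_mode(["--gpu-mode", "multiple", "--gpu-count", "4"])
    print(args.gpu_mode, args.gpu_count)
```

The point is simply that the mode is fixed at startup: once the process has parsed its arguments and launched, clients cannot change the GPU Mode without restarting the service.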