Green Classify

You can use the Green Classify tool to identify and classify an object or the entire scene in an image. The tool assigns a tag to each image and uses the tag to sort images into classes. The tag is represented by a label, and each label has a percentage that shows how confident the tool is in the assigned class.
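The per-label confidence percentage works the way most image classifiers do: the network produces a raw score per class, and the scores are normalized so they sum to 100%. The following is a generic sketch of that idea, not the VisionPro Deep Learning API; the class names and scores are hypothetical.

```python
# Illustrative sketch only: this is NOT the VisionPro Deep Learning API.
# It shows how a classifier typically turns raw network outputs (logits)
# into the per-label confidence percentages described above (softmax).
import math

def confidences(logits: dict) -> dict:
    """Convert raw class scores into percentages that sum to 100."""
    m = max(logits.values())  # subtract the max for numerical stability
    exps = {label: math.exp(v - m) for label, v in logits.items()}
    total = sum(exps.values())
    return {label: 100.0 * e / total for label, e in exps.items()}

# Hypothetical raw scores for three welding-seam classes:
scores = {"good_seam": 4.2, "porous_seam": 1.1, "cracked_seam": -0.5}
result = confidences(scores)
best = max(result, key=result.get)  # the assigned tag is the highest-confidence label
```

The tag the tool assigns corresponds to the label with the highest confidence; the remaining percentages indicate how close the other classes came.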

The Green Classify tool can perform tasks such as:

  • Classifying objects or scenes

  • Separating different classes based on a collection of labeled images

  • Identifying products based on their packaging

  • Classifying the quality of welding seams

  • Separating acceptable or unacceptable anomalies

You can use the Green Classify tool flexibly as part of a tool chain.

Tool Types

When you know that you need a Green Classify tool to solve your machine vision problem, you must choose the type of the tool you want to train. You can do so by setting the Type parameter in the Tool Parameters sidebar.

The Green Classify tool is available in the following types:

  • Standard type for the most accurate classification.

  • Legacy type for compatibility with Focused mode tools from earlier VisionPro Deep Learning software versions. Legacy type tools are faster than Standard type tools but less accurate.

The different tool types correspond to different types of neural network models. If you want more accurate results at the expense of increased training and processing times, use the Standard type. Standard type tools examine the entire image equally, while Legacy type tools use Feature Sampling, which makes them selective, focusing on the parts of the image with useful information. Due to this focus, the tool can miss information, especially when the image has important details everywhere.

Note: Training is the process in which your tool, which is a neural network, learns about the features (pixels) based on the labels you made. For example, a tool learns the defect/normal pixels in each image based on the defect/normal labels you drew. The goal of training is for the tool to learn enough to give the correct inspection result for whether an unseen image is defective or not. The key to training is to ensure that you include all possible variations within your training set, and that your images are accurately labeled. Training times vary by the application, the tool setup, and the GPU in the PC used to train the network.
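The contrast between examining the entire image and Feature Sampling can be sketched abstractly. The following is a conceptual illustration, not product code: whole-image processing touches every pixel, while region sampling only touches the pixels inside sampled regions, so details outside those regions can be missed. The region sizes and counts here are made up.

```python
# Conceptual sketch only, not VisionPro Deep Learning code. It contrasts
# whole-image processing (Standard type) with feature sampling (Legacy type).
import random

def process_whole_image(image):
    """Standard-style: every pixel contributes equally."""
    return [px for row in image for px in row]

def process_sampled(image, region_size=2, n_regions=2, seed=0):
    """Legacy-style: only pixels inside sampled regions are examined;
    information outside the sampled regions is never seen."""
    rng = random.Random(seed)
    h, w = len(image), len(image[0])
    pixels = []
    for _ in range(n_regions):
        top = rng.randrange(h - region_size + 1)
        left = rng.randrange(w - region_size + 1)
        for r in range(top, top + region_size):
            pixels.extend(image[r][left:left + region_size])
    return pixels

# A hypothetical 4x4 image:
img = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
```

With two 2x2 regions, the sampled path sees at most 8 of the 16 pixels, which is why Legacy type tools are faster but can miss important details spread across the image.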

Note: The parameters in the Tool Parameters sidebar depend on the tool type you select.

Standard Type

The Standard type tool is an improved version of the Legacy type. The Standard type produces more accurate results, but the training and processing times are longer. Standard type tools do not support multiple labels per image.

The Standard type tool supports the following features in its different modes:

Mode        NVIDIA TensorRT Speed Optimization   Outlier Score   Heat Map
Fast        Yes                                  Yes             Yes
Few Shot    Yes                                  No              Yes
Accurate    Yes                                  Yes             Yes
Robust      No                                   Yes             Yes

Modes of the Standard Type

Mode refers to the subtype of the neural network model, which affects the time required for training and processing. Select the mode with the Mode parameter before training. The available modes are Fast, Few Shot, Accurate, and Robust.

Note: Each VisionPro Deep Learning tool is a neural network. A neural network mimics the way biological neurons work in the human brain. The neural network consists of interconnected layers of artificial neurons, called nodes. Neural networks excel at tasks like image classification and pattern recognition.

Standard Type Features and Benefits

The Standard type provides the most accurate classification, supports NVIDIA TensorRT speed optimization in most modes, and produces outlier scores and heat maps, as shown in the table above.

Legacy Type

The Legacy type tool is a less advanced version of the Standard type tool: it trains and processes faster, but its results are less accurate. The Legacy type supports multiple labels per image. You can enable this feature with the Exclusive parameter.

The Legacy type tool samples pixels with a feature sampler that is tied to a sampling region. You define the sampling region with the sampling parameters in the Tool Parameters sidebar. If a sampling region does not include any defect pixels, then the network should produce no response.
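The expected behavior of a sampling region can be modeled simply: a region produces a response only if it contains at least one defect pixel. The following is an illustrative sketch, not product code; the mask layout and region coordinates are hypothetical.

```python
# Illustrative sketch, not VisionPro Deep Learning code. It models the
# expectation stated above: a sampling region that contains no defect
# pixels should produce no response from the network.

def region_response(defect_mask, top, left, height, width):
    """Return True (a response) only if the sampling region contains
    at least one defect pixel; otherwise the network stays silent."""
    return any(
        defect_mask[r][c]
        for r in range(top, top + height)
        for c in range(left, left + width)
    )

# A hypothetical 4x4 defect mask with one defect pixel at row 1, column 2:
mask = [
    [0, 0, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 0],
    [0, 0, 0, 0],
]
```

A sampling region covering the top half of this mask includes the defect pixel and produces a response, while a region covering the bottom half does not.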