API Changelog
This topic lists the API changes in previous releases of VisionPro Deep Learning.
VisionPro Deep Learning 3.2
GPU Clock Stabilizer
First, try adjusting the GPU clock settings to prevent the GPU clock from dropping. Change 'Power management mode' in NVIDIA Control Panel > Manage 3D settings to 'Prefer maximum performance'. If the GPU clock still cannot be maintained, consider also using the new GPU Clock Stabilizer feature.
This feature maintains a stable GPU clock in a deployment environment and can be helpful in multi-GPU scenarios or in situations where clock instability leads to reduced processing speed or intermittent spikes in processing time.
For more details, see the example code below.
- This feature is disabled by default. You can enable or disable it through the API.
- It should be used after Compute Device initialization. Attempting to use it before initialization causes the C API to return a value other than VIDI_SUCCESS, while the .NET API throws an exception.
...
/** @brief disable stabilize feature */
#define VIDI_STABILIZE_OFF 0
/** @brief enable stabilize feature on GPUs */
#define VIDI_STABILIZE_GPU 1
/** @brief stabilize the compute devices
* @param mode mode of operation
* @return 0 if passed, otherwise an error_code that can be used with vidi_get_error_message()
*
* @see VIDI_STABILIZE_GPU, VIDI_STABILIZE_OFF
* This method must be called after the compute devices have been initialized.
*/
VIDI_DLLEXPORT VIDI_UINT vidi_stabilize_compute_device(VIDI_INT mode);
///////////////////////////////////////////////////////
// example
///////////////////////////////////////////////////////
// Turn the stabilizer on
vidi_stabilize_compute_device(VIDI_STABILIZE_GPU);
// Turn the stabilizer off
vidi_stabilize_compute_device(VIDI_STABILIZE_OFF);
...
...
/// <summary>
/// Offers the possibility to stabilize the compute devices when the control is created, without unloading the vidi dll.
/// example: mode = "StabilizeMode.Off"
/// mode = "StabilizeMode.GPU"
/// This can only be called after the compute devices have been initialized.
/// </summary>
void StabilizeComputeDevices(Enum mode);
///////////////////////////////////////////////////////
// example
///////////////////////////////////////////////////////
// Turns on
control.StabilizeComputeDevices(StabilizeMode.GPU);
// Turns off
control.StabilizeComputeDevices(StabilizeMode.Off);
...
vidi_initialize2() is Deprecated
The vidi_initialize2() function has been deprecated.
Starting with VisionPro Deep Learning 3.2.0, you can load and use both the training API and the runtime API simultaneously. Although the vidi_initialize2() API still exists, it no longer performs the intended operation.
VIDI_DLLEXPORT VIDI_UINT vidi_initialize2(VIDI_INT compute_mode, VIDI_STRING compute_devices, VIDI_INT cuda_load_mode);
VisionPro Deep Learning 3.0
Among the new features introduced in VisionPro Deep Learning 3.0, some are provided only through the API because they enhance runtime functionality.
Classification Batch Processing for Runtime API
Processing multiple views at once through batch processing is now available for Green Classify High Detail. Using the following Runtime APIs can speed up Green Classify High Detail processing via the API.
The steps to use the classification batch processing API are summarized as follows.
1. Set the batch size and prepare the images to process. The maximum batch size depends on the currently available GPU memory.
2. Create a sample.
3. Add the prepared images to the sample.
4. Run batch processing.
5. Check the processing results.
For more details on using the batch processing API, see the example code below.
...
// The maximum batch size depends on the available GPU memory size.
// Let's suppose the maximum available batch size is 12.
constexpr int batch_size = 12;
// You need to set "runtime_parameters/batch_size" to the batch size you chose above.
// "workspace": the name of your workspace
// "default": the name of the stream in your workspace
// "Classify": the name of the Green High Detail tool in your stream.
string batch_size_str = std::to_string(batch_size);
vidi_runtime_tool_set_parameter("workspace", "default", "Classify", "runtime_parameters/batch_size", batch_size_str.c_str());
...
// You also need to prepare the same number of images as the batch size
vector<VIDI_IMAGE> images(batch_size);
for (int i = 0; i < batch_size; i++)
{
...
status = vidi_load_image(image_path.c_str(), &images[i]);
...
}
...
// Create a sample for batch processing. Note that a sample is the basic unit of a processing task.
// "my_sample": the name of the sample you create.
status = vidi_runtime_create_batched_sample("workspace", "default", "my_sample");
...
// add the prepared images to the sample
for (int i = 0; i < batch_size; i++)
{
...
status = vidi_runtime_batched_sample_add_image("workspace", "default", "my_sample", &images[i]);
...
}
...
// execute batch processing
status = vidi_runtime_batched_sample_process("workspace", "default", "Classify", "my_sample", "");
...
// get result of batch processing
status = vidi_runtime_get_batched_sample("workspace", "default", "my_sample", &result_buffer);
...
...
// The maximum batch size depends on the available GPU memory size.
// Let's suppose the maximum available batch size is 12.
int batch_size = 12;
...
// You also need to prepare the same number of images as the batch size.
// The paths list should contain the paths of the images.
List<IImage> imgs = new List<IImage>();
for (int i = 0; i < batch_size; ++i)
imgs.Add(new LibraryImage(paths[i]));
// Create a sample for batch processing. Note that a sample is the basic unit of a processing task.
// "Classify": the name of the Green High Detail tool in your stream.
using (IBatchedSample sample = stream.CreateBatchedSample())
{
// add images to sample
for (int i = 0; i < batch_size; ++i)
sample.AddImage(imgs[i]);
ITool greenTool = stream.Tools["Classify"];
// set batch size parameter
var param = greenTool.ParametersBase as ViDi2.Runtime.IGreenHighDetailParameters;
param.BatchSize = batch_size;
// run processing
sample.Process(greenTool);
// get processing result of batch
for (int i = 0; i < batch_size; ++i)
{
var tags = sample.Tags(greenTool, i);
foreach (var tag in tags)
Console.WriteLine($"tag: name={tag.Key}, score={tag.Value}");
}
}
NVIDIA TensorRT Support for Runtime API
NVIDIA TensorRT is an NVIDIA SDK that boosts the inference speed of deep learning applications. It optimizes a deep neural network for the specific NVIDIA GPU model in use. Starting with VisionPro Deep Learning 3.0, the Runtime API supports TensorRT to boost processing speed at runtime. TensorRT for the Runtime API is supported only for the Red Analyze High Detail and Green Classify High Detail runtime environments. To use TensorRT with the Runtime API, go through the following steps.
1. In the training environment, create a workspace, a stream, and a High Detail tool, and train the tool so it is ready to be deployed in the runtime environment. The training environment can be either the VisionPro Deep Learning GUI or the API. For more details on the training and runtime environments, see Environments.
2. Export a runtime workspace that includes the trained High Detail tool. You can also create a runtime workspace in the GUI (Workspace > Export Runtime Workspace). For more details on the runtime workspace, see Runtime Workspace.
3. Deploy the runtime workspace and its High Detail tool on the device on your front line. For more details on runtime deployment, see Runtime Workspace and Runtime Deployment.
4. Run the TensorRT Optimization API to optimize the High Detail tools in your runtime workspace with TensorRT, and save your runtime workspace. The optimization normally takes under 10 minutes to complete. After the first run of optimization, the tool's neural network is optimized for the specific NVIDIA GPU model you use, and from this point you can process as many images as you want at a faster speed.
C++ Example of TensorRT Optimization API for Runtime (Red High Detail and Green High Detail)
...
// Open the given workspace.
// "workspace": the name of your runtime workspace
// "..\\..\\resources\\runtime\\Green High-detail Tool.vrws": the path to the runtime workspace file
status = vidi_runtime_open_workspace_from_file("workspace", "..\\..\\resources\\runtime\\Green High-detail Tool.vrws");
...
// optimize the High-detail tool
// "default": the name of the stream in your workspace
// "Classify": the name of the Green High Detail tool in your stream.
clog << "Start optimization. It will take a few minutes." << endl;
status = vidi_runtime_tool_convert_trt("workspace", "default", "Classify", 0);
...
// save runtime workspace with the optimized tool
string save_path = "..\\..\\resources\\runtime\\Green High-detail Tool optimized.vrws";
status = vidi_runtime_save_workspace("workspace", save_path.c_str());
clog << "The workspace with the optimized tool is saved at " << save_path.c_str() << endl;
...
C# Example of TensorRT Optimization API for Runtime (Red High Detail and Green High Detail)
...
// Open a runtime workspace from file.
// The path to this file is relative to the example root folder
// and assumes the resource archive was extracted there.
// "workspace": the name of your runtime workspace
ViDi2.Runtime.IWorkspace workspace = control.Workspaces.Add("workspace", "..\\..\\..\\..\\resources\\runtime\\Green High-detail Tool.vrws");
// Store a reference to the stream 'default'
IStream stream = workspace.Streams["default"];
// "Classify": the name of the Green High Detail tool in your stream.
var greenTool = stream.Tools["Classify"] as ViDi2.Runtime.IGreenTool;
// Optimizes the High-Detail tool with TensorRT. It takes some time to optimize the tool.
greenTool.OptimizeTensorRT(0);
// Save runtime workspace with the optimized tool.
// You can use this workspace to process with the optimized tool.
string savePath = "..\\..\\..\\..\\resources\\runtime\\Green High-detail Tool Optimized.vrws";
workspace.Save(savePath);
Console.Write($"The workspace with the optimized tool is saved at {savePath}.");
...
If you want to use Classification Batch Processing for the Runtime API together with TensorRT-optimized processing, you need to set the batch size before calling the optimization.
C++ Example of Setting Batch Size Before Calling TensorRT Optimization (Green High Detail)
// Optimize with batch_size=16
// The 5th (width) and 6th (height) parameters of vidi_runtime_tool_convert_trt() are the target width and height for optimization.
// If width=0 or height=0 is provided, the trained size is used, so we recommend width=0, height=0 in the normal case.
int gpu_index = 0;
int batch_size = 16;
vidi_runtime_tool_convert_trt("workspace", "default", "Analyze", gpu_index, 0, 0, batch_size);
C# Example of Setting Batch Size Before Calling TensorRT Optimization (Green High Detail)
// Optimize with batchSize=16
// The 2nd (width) and 3rd (height) parameters of OptimizeTensorRT() are the target width and height for optimization.
// If width=0 or height=0 is provided, the trained size is used, so we recommend width=0, height=0 in the normal case.
int gpuIndex = 0;
int batchSize = 16;
greenTool.OptimizeTensorRT(gpuIndex, 0, 0, batchSize);
Directory to C++ Full Example Codes:
C:\ProgramData\Cognex\VisionPro Deep Learning\3.0\Examples\c++\Example.Runtime.OptimizeHDTool
C:\ProgramData\Cognex\VisionPro Deep Learning\3.0\Examples\c++\Example.Runtime.HDGreen.Batched
Directory to C# Full Example Codes:
C:\ProgramData\Cognex\VisionPro Deep Learning\3.0\Examples\c#\Example.Runtime.OptimizeHDTool.Console
C:\ProgramData\Cognex\VisionPro Deep Learning\3.0\Examples\c#\Example.Runtime.HDGreen.Batched.Console
5. Load your runtime workspace in your application and deploy it on the device to process images with the TensorRT-optimized tool. To do this, you need to set a parameter before processing.
C++ Example of Setting a Parameter to Use the TensorRT-Optimized Model for Processing (Red High Detail and Green High Detail)
// If you want to process with the optimized tool, you have to set "runtime_parameters/process_with_trt" to "true".
// Setting this option to true will trigger TensorRT processing.
status = vidi_runtime_tool_set_parameter("workspace", "default", "Analyze", "runtime_parameters/process_with_trt", "true");
...
status = vidi_runtime_sample_process("workspace", "default", "Analyze", "my_sample", "");
C# Example of Setting a Parameter to Use the TensorRT-Optimized Model for Processing (Red High Detail and Green High Detail)
// If you want to process with the optimized tool, you have to set ProcessTensorRT to true.
// Setting this option to true will trigger TensorRT processing.
var hdParam = hdTool.ParametersBase as ViDi2.Runtime.IToolParametersHighDetail;
hdParam.ProcessTensorRT = true;
...
sample.Process(hdTool);
See the example code below to process images with a runtime workspace.
Directory to C# Example Codes:
-
C:/ProgramData/Cognex/VisionPro Deep Learning/3.0/Examples/c#/Example.Runtime.HDGreen.Console
-
C:/ProgramData/Cognex/VisionPro Deep Learning/3.0/Examples/c#/Example.Runtime.HDRed.Console
Directory to C++ Example Codes:
-
C:/ProgramData/Cognex/VisionPro Deep Learning/3.0/Examples/c++/Example.Runtime.HDGreen
-
C:/ProgramData/Cognex/VisionPro Deep Learning/3.0/Examples/c++/Example.Runtime.HDRed
Note: The TensorRT Optimization API (vidi_runtime_tool_convert_trt, .OptimizeTensorRT) must be called again after your NVIDIA GPU model changes, because the optimization is specific to the NVIDIA GPU model. If you change the NVIDIA GPU in your device, repeat Steps 4 and 5.
VisionPro Deep Learning 2.1.1
In VisionPro Deep Learning 2.1.1, a new processing parameter, max_scan_iterations, is added only in the API for Blue Locate and Blue Read. This parameter limits the maximum number of iterations for image scanning during processing; providing a fixed number for this parameter speeds up processing.
You can get the value of this parameter by:
- vidi_runtime_get_parameter
- vidi_runtime_tool_get_parameter
You can change the value of this parameter by:
- vidi_runtime_set_parameter
- vidi_runtime_tool_set_parameter
The parameter path for max_scan_iterations is:
- processing/blue/max_scan_iterations
Examples
This topic introduces code examples for the max_scan_iterations parameter in the C and .NET APIs.
| Symbol | Definition |
|---|---|
| WORKSPACE | The name of your workspace. |
| STREAM | The name of the stream in your workspace. |
| TOOL | The name of the tool in the stream. |
| 40 | The value of the parameter. |
C API
VIDI_UINT status = vidi_runtime_tool_set_parameter("WORKSPACE", "STREAM", "TOOL", "processing/blue/max_scan_iterations", "40");
.NET API
libraryAccess.SetToolParameter("WORKSPACE", "STREAM", "TOOL", "processing/blue/max_scan_iterations", "40"); // libraryAccess is ILibraryAccess
VisionPro Deep Learning 2.1
As High Detail Quick modes are added in VisionPro Deep Learning 2.1, there are some changes to the API as well.
Green Classify High Detail Quick Training - C API
Green High Detail Quick Training with C API
// Green high-detail-quick mode
...
param_ss << "workspaces/" << workspace_name << "/streams/" << stream_name << "/tools/" << tool_name << "/tool_type";
status = vidi_training_set_parameter(param_ss.str().c_str(), "green_quick");
...
To train Green Classify High Detail Quick, set the tool_type parameter to "green_quick" with vidi_training_set_parameter.
Example parameter path:
- workspaces/WORKSPACE_NAME/streams/STREAM_NAME/tools/TOOL_NAME/tool_type
| Symbol | Definition |
|---|---|
| WORKSPACE_NAME | The name of your workspace. |
| STREAM_NAME | The name of the stream in the workspace. |
| TOOL_NAME | The name of the tool in the stream. |
To configure the Tool Parameters for Green Classify High Detail Quick, the parameter path must also be set accordingly. For example, if you want to change Epoch Count in Training Parameters, set the parameter path as:
- workspaces/WORKSPACE_NAME/streams/STREAM_NAME/tools/TOOL_NAME/training_parameters/count_epochs
| Symbol | Definition |
|---|---|
| WORKSPACE_NAME | The name of your workspace. |
| STREAM_NAME | The name of the stream in the workspace. |
| TOOL_NAME | The name of the tool in the stream. |
See C:\ProgramData\Cognex\VisionPro Deep Learning\2.1\Examples\c++\Example.Training.HDGreen\example_training_hdgreen.cpp for more detailed examples of training C API code.
Green Classify High Detail Quick Training - .NET API
Green High Detail Quick Training with .NET API
...
// Green high-detail-quick mode
hdGreenTool.Database.Tool.Type = ToolType.GreenQuickHighDetail;
...
To train Green Classify High Detail Quick, ViDi2.Training.ITool.Type should be set to ToolType.GreenQuickHighDetail.
To configure the Tool Parameters for Green Classify High Detail Quick, ViDi2.Training.IGreenHighDetailParameters should be set. For example, set the value of CountEpochs on IGreenHighDetailParameters.
See C:\ProgramData\Cognex\VisionPro Deep Learning\2.1\Examples\c#\Example.Training.HDGreen.Console\Program.cs for more detailed examples of training .NET API code.
Green Classify High Detail Quick Processing - C API
The way to process Green Classify High Detail Quick with the API is the same as for Green Classify Focused and Green Classify High Detail. See C:\ProgramData\Cognex\VisionPro Deep Learning\2.1\Examples\c++\Example.Runtime.HDGreen\example_runtime_hdgreen.cpp for the details.
Green Classify High Detail Quick Processing - .NET API
The way to process Green Classify High Detail Quick with the API is the same as for Green Classify Focused and Green Classify High Detail. See C:\ProgramData\Cognex\VisionPro Deep Learning\2.1\Examples\c#\Example.Runtime.HDGreen.Console\Program.cs for the details.
Red Analyze High Detail Quick Training - C API
Red High Detail Quick Training with C API
...
// Red high-detail quick mode
param_ss << "workspaces/" << workspace_name << "/streams/" << stream_name << "/tools/" << tool_name << "/tool_type";
status = vidi_training_set_parameter(param_ss.str().c_str(), "red_quick");
...
To train Red Analyze High Detail Quick, set the tool_type parameter to "red_quick" with vidi_training_set_parameter.
Example parameter path:
- workspaces/WORKSPACE_NAME/streams/STREAM_NAME/tools/TOOL_NAME/tool_type
| Symbol | Definition |
|---|---|
| WORKSPACE_NAME | The name of your workspace. |
| STREAM_NAME | The name of the stream in the workspace. |
| TOOL_NAME | The name of the tool in the stream. |
To configure the Tool Parameters for Red Analyze High Detail Quick, the parameter path must also be set accordingly. For example, if you want to change Epoch Count in Training Parameters, set the parameter path as:
- workspaces/WORKSPACE_NAME/streams/STREAM_NAME/tools/TOOL_NAME/training_parameters/count_epochs
| Symbol | Definition |
|---|---|
| WORKSPACE_NAME | The name of your workspace. |
| STREAM_NAME | The name of the stream in the workspace. |
| TOOL_NAME | The name of the tool in the stream. |
See C:\ProgramData\Cognex\VisionPro Deep Learning\2.1\Examples\c++\Example.Training.HDRed\example_training_hdred.cpp for more detailed examples of training C API code.
Red Analyze High Detail Quick Training - .NET API
Red High Detail Quick Training with .NET API
...
// Red high-detail quick mode
hdRedTool.Database.Tool.Type = ToolType.RedQuickHighDetail;
...
To train Red Analyze High Detail Quick, ViDi2.Training.ITool.Type should be set to ToolType.RedQuickHighDetail.
To configure the Tool Parameters for Red Analyze High Detail Quick, ViDi2.Training.IRedHighDetailParameters should be set. For example, set the value of CountEpochs on IRedHighDetailParameters.
See C:\ProgramData\Cognex\VisionPro Deep Learning\2.1\Examples\c#\Example.Training.HDRed.Console\Program.cs for more detailed examples of training .NET API code.
Red Analyze High Detail Quick Processing - C API
The way to process Red Analyze High Detail Quick with the API is the same as for Red Analyze Focused Supervised and Red Analyze High Detail. See C:\ProgramData\Cognex\VisionPro Deep Learning\2.1\Examples\c++\Example.Runtime.Red\example_runtime_red.cpp for the details.
Red Analyze High Detail Quick Processing - .NET API
The way to process Red Analyze High Detail Quick with the API is the same as for Red Analyze Focused Supervised and Red Analyze High Detail. See C:\ProgramData\Cognex\VisionPro Deep Learning\2.1\Examples\c#\Example.Runtime.HDRed.Console\Program.cs for the details.