Using AlignPlus 2D Hand-Eye Calibration (Cognex VisionPro)

AlignPlus 2D hand-eye calibration allows you to calibrate images obtained with your cameras to the stage that holds the object to be inspected and whose position moves relative to your cameras. The images you obtain may exhibit lens distortion and perspective distortion, and your motion system input may be offset from the position the stage actually moves to, as well as carry systematic errors. AlignPlus hand-eye calibration allows you to view and inspect features of objects in an undistorted manner (with physically correct length units) and with placement in the native (Home2D) coordinate space of the motion system. That is, it shows you the object to be inspected in its real physical appearance, and it tells you where that object is (in the Home2D coordinate space).

Only an API is available for this tool; there is no edit control. This topic describes the main classes for this tool.

For the theory of AlignPlus 2D hand-eye calibration, see the standalone AlignPlus 2D Hand-Eye Calibration Concepts document. You can build your own application based on the AlignPlus hand-eye sample application available at http://www.cognex.com/Support/VisionPro/

Train Time Class Overview

The following classes and their members are used during the train time of AlignPlus hand-eye calibration.

Calibration Plate Feature Extraction

A checkerboard with data matrix codes (or with an L-shaped fiducial mark) is used as the calibration plate. The goal of checkerboard feature extraction is to:

  1. Detect the checker image points (vertices) on the checkerboard.
  2. Read the data matrix codes (or L-shaped fiducial mark) to get the label information for the vertices.
  3. Construct correspondence pairs between the detected image points and the model points (the corresponding physical positions in the checkerboard physical coordinate space).

The Execute() method of the CogCalibFeatureExtractorCheckerboard class performs these tasks on a given collection of training images, which can come from multiple cameras. The CogImageCollectionMCamerasNPoses class holds the training image collection for all cameras and calibration poses. The Execute() method returns the feature extraction results, including the correspondence pairs, as type CogCalibFeatureExtractorResults.
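The extraction step described above can be sketched as follows. This is an illustrative sketch only: the class and method names come from this topic, but the constructor usage, the exact Execute() parameter list, and the AcquireTrainingImages() helper are assumptions; consult the VisionPro API reference for the real signatures.

```csharp
// Sketch only -- names beyond the documented classes/methods are assumptions.
var extractor = new CogCalibFeatureExtractorCheckerboard();

// Training images for all cameras and all calibration poses.
// AcquireTrainingImages() is a hypothetical application-side helper.
CogImageCollectionMCamerasNPoses trainingImages = AcquireTrainingImages();

// Detect checker vertices, decode the data matrix codes (or fiducial),
// and build image-point / model-point correspondence pairs per view.
CogCalibFeatureExtractorResults features = extractor.Execute(trainingImages);
```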

Generation of Calibration Results

The CogHandEyeCalibrator class performs the hand-eye calibration between one or more cameras and the motion stage. Its Execute() method uses the calibration plate feature extraction results and the UncorrectedHome2DFromStage2D poses to return the calibration results of type CogHandEyeCalibrationResults, which is the result set for all cameras. CogHandEyeCalibrationResults holds one instance of CogHandEyeCalibrationResult per camera, each holding the calibration result for a single camera. Among others, CogHandEyeCalibrationResult provides the following methods that return calibration results:

  • ConvertHome2DFromStage2DToUncorrectedHome2DFromStage2D - Returns the UncorrectedHome2DFromStage2D pose corresponding to the input Home2DFromStage2D.
  • ConvertUncorrectedHome2DFromStage2DToHome2DFromStage2D - Returns the Home2DFromStage2D transform corresponding to the input UncorrectedHome2DFromStage2D pose.
  • GetEstimatedHome2DFromStage2DPoses - Returns an array of estimated Home2DFromStage2D poses.
  • GetHome2DFromStationaryCamera2D - Returns a copy of the Home2DFromStationaryCamera2D transform. This is a rigid transform that may also flip handedness. This is the placement pose of the camera. This is only valid if MovingCamera is false.
  • GetHome2DFromStationaryPlate2D - Returns a copy of the Home2DFromStationaryPlate2D transform. This is a rigid transform that may also flip handedness. This is the placement pose of the plate. This is only valid if MovingCamera is true.
  • GetMotionXAxisHome2D - Gets the X and Y coordinates of the vector that describes the X axis of motion in Home2D. This specifies the magnitude of X unit travel. The direction of X unit travel is accurate by definition.
  • GetMotionYAxisHome2D - Gets the X and Y coordinates of the vector that describes the Y axis of motion in Home2D. This specifies both the direction and magnitude of Y unit travel. The direction should be close to +90 degrees.
  • GetRaw2DFromCamera2D - Returns the transform that maps coordinates in the camera coordinate system (Camera2D) to the image coordinate system (Raw2D).
  • GetRaw2DFromHome2D - Returns a copy of the Raw2DFromHome2D transform for the specified UncorrectedHome2DFromStage2D. When MovingCamera is false, the UncorrectedHome2DFromStage2D argument is ignored.
  • GetStage2DFromMovingCamera2D - Returns a copy of the Stage2DFromMovingCamera2D transform. This is a rigid transform that may also flip handedness. This is the placement pose of the camera. This is only valid if MovingCamera is true.
  • GetStage2DFromMovingPlate2D - Returns a copy of the Stage2DFromMovingPlate2D transform. This is a rigid transform that may also flip handedness. This is the placement pose of the plate. This is only valid if MovingCamera is false.
  • Various methods returning residual information.
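Putting the calibration step together might look like the sketch below. The Execute() argument list, the indexing into the result set, and the MovingCamera property access are assumptions inferred from the descriptions above; check the VisionPro API reference for the precise signatures.

```csharp
// Sketch only -- parameter lists and indexing are assumptions.
var calibrator = new CogHandEyeCalibrator();

// features:   CogCalibFeatureExtractorResults from the extraction step.
// stagePoses: the UncorrectedHome2DFromStage2D pose reported for each view.
CogHandEyeCalibrationResults results = calibrator.Execute(features, stagePoses);

// One CogHandEyeCalibrationResult per camera (index access is an assumption).
CogHandEyeCalibrationResult camera0 = results[0];

// For a stationary camera, query its placement pose in Home2D.
if (!camera0.MovingCamera)
{
    var home2DFromCamera2D = camera0.GetHome2DFromStationaryCamera2D();
}
```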

Run Time Class Overview

The following classes and their members are used during the run time of AlignPlus hand-eye calibration.

Image Correction

The trained CogCalibImageCorrector class, with one instance per camera, performs image correction on a run-time image using the calibration results. (You can also perform image correction on a train-time image.) During image correction, the distortions of each input image are corrected, and the Home2D coordinate space of the motion system is added to the image's coordinate space tree to generate the output image, on which you can perform measurements.

The calibration results and the calibration image are used to train each CogCalibImageCorrector instance. Training is performed as the Train() method of the CogCalibImageCorrector instance is invoked. The Execute() method of the CogCalibImageCorrector instance performs the actual image correction on the run-time image you provide.
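The train-then-correct flow can be sketched as follows. The topic states only that Train() uses the calibration results and the calibration image and that Execute() corrects the run-time image; the exact parameter lists and the ICogImage return type shown here are assumptions.

```csharp
// Sketch only -- exact signatures are assumptions.
var corrector = new CogCalibImageCorrector();

// Train once per camera with that camera's calibration result and
// calibration image.
corrector.Train(cameraResult, calibrationImage);

// At run time, correct each acquired image. The output image carries
// the Home2D coordinate space in its coordinate space tree, so
// measurements on it are in physical units.
ICogImage correctedImage = corrector.Execute(runtimeImage);
```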

Commanding the Motion Stage

Once you have performed your position measurements on the corrected run-time images and you know the physical movement your motion stage must perform to align the object to be inspected, you must command the motion stage using the calibration results. In particular, you can use the ConvertHome2DFromStage2DToUncorrectedHome2DFromStage2D method to calculate the motion to be commanded so that your motion stage performs the desired movement.
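For example, the conversion from a desired physical pose to a commandable stage pose might be sketched like this. The variable names and the controller-side call are hypothetical; only the conversion method itself is documented above.

```csharp
// Sketch only -- 'desiredHome2DFromStage2D' is the stage pose (in Home2D)
// that would align the object, computed from measurements on the
// corrected image. The conversion maps it back to the raw, uncorrected
// stage coordinates that the motion controller actually accepts.
var commandPose = cameraResult
    .ConvertHome2DFromStage2DToUncorrectedHome2DFromStage2D(desiredHome2DFromStage2D);

// Send commandPose (X, Y, Theta) to the motion controller.
// MoveStage() is a hypothetical, controller-specific call, not part of
// the AlignPlus API.
MoveStage(commandPose);
```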

Motion Stage Validation

The CogMotionStageValidator class performs motion stage validation prior to performing a hand-eye calibration of the stage in a machine vision system with one or more cameras. The purpose of this class is to verify that the stage moves to its commanded poses (X, Y, Theta), and to characterize certain types of systematic errors in the observed motion.

Similar to the hand-eye calibration usage model, this tool uses correspondence data extracted from all views (images) of all cameras, along with the UncorrectedHome2DFromStage2D pose associated with each view. The tool validates the motion stage for all sets of UncorrectedHome2DFromStage2D poses that contain motion corresponding to the metrics requested in the input parameters.

The Execute() method of a CogMotionStageValidator instance returns the validation results of type CogMotionStageValidationResult, including the requested metrics.
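A validation run might be sketched as follows. The Execute() argument list mirrors the description above (correspondence data per view plus each view's UncorrectedHome2DFromStage2D pose), but the actual signature and any result properties are assumptions.

```csharp
// Sketch only -- argument list and result access are assumptions.
var validator = new CogMotionStageValidator();

// features:   correspondence data extracted from all views of all cameras.
// stagePoses: the UncorrectedHome2DFromStage2D pose for each view.
CogMotionStageValidationResult validation =
    validator.Execute(features, stagePoses);

// Inspect the requested metrics to verify that the stage reaches its
// commanded (X, Y, Theta) poses before performing hand-eye calibration.
```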

For details on motion stage validation, see the description in the CogMotionStageValidator class.