Computes the affine transforms between images from telecentric cameras (that is, cameras fitted with telecentric lenses). Specifically, this method computes the affine transforms from the image of the base camera, specified via baseCameraIndex, to the images of all the other cameras specified in raw2DFromPhys3Ds.
The affine transforms are computed using the cameras' calibration information (raw2DFromPhys3Ds) and a 3D plane in Phys3D (planarFeaturePhys3D). The specified 3D plane should be the plane coincident with the planar part, or with a planar feature on the part. This method assumes that all the telecentric cameras are viewing the same planar part or the same planar feature on the part.

Assembly: Cognex.VisionPro3D (in Cognex.VisionPro3D.dll) Version: 79.0.0.0
Parameters
- raw2DFromPhys3Ds
- Type: System.Collections.Generic.List&lt;Cog3DCameraCalibration&gt;
A list of Cog3DCameraCalibrations, indexed by camera (one calibration per camera). May not be null. Note that all the camera calibrations must be relative to the same Phys3D coordinate space.
- planarFeaturePhys3D
- Type: Cognex.VisionPro3D.Cog3DPlane
The feature's 3D plane in Phys3D. May not be null.
- baseCameraIndex
- Type: System.Int32
The index of the base camera.
Return Value
Type: System.Collections.Generic.List&lt;CogTransform2DLinear&gt;
The List of CogTransform2DLinears representing the affine transforms between the telecentric cameras' images. The List will have the same size as raw2DFromPhys3Ds. The List element with index baseCameraIndex will be the identity transform.
| Exception | Condition |
|---|---|
| ArgumentNullException | If raw2DFromPhys3Ds or planarFeaturePhys3D is null, or any item in raw2DFromPhys3Ds is null. |
| ArgumentException | If any of the following conditions are true: |
When pattern searching for a feature in each camera's acquired image, the 3D accuracy achieved by triangulating 3D points from the 2D image pattern origin positions in the different images is highly dependent on the accuracy of the image pattern origins. This method provides an accurate way to take an image pattern in the base camera's image and then use the output 2D affine transforms to map that pattern, and/or its origin, to the other cameras. This technique assumes that all the cameras are viewing the same pattern of interest.
Specifically, this method
- Facilitates taking an image pattern (set of pixels in Raw2D space) acquired by the base camera and mapping that image pattern to the Raw2D image spaces of all the other cameras. The mapping can be achieved using the VisionPro CogAffineTransformTool.
- Facilitates taking a 2D point (e.g., the image pattern origin) in the Raw2D space of the base camera and mapping the 2D point to the Raw2D image spaces of all the other cameras.
- Computes the affine transforms by projecting points from the base camera's Raw2D space to the specified 3D plane (planarFeaturePhys3D), and then projecting the points from the specified 3D plane to the other cameras' Raw2D space.
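The projection described in the last bullet can be sketched numerically. The following is a conceptual NumPy illustration, not the VisionPro API: a telecentric camera is modeled as an affine map from Phys3D to Raw2D (pixel = P·X + t, with no perspective term), a base-camera pixel is back-projected along the camera's constant viewing direction onto the plane, re-projected into a second camera, and a 2D affine transform is fit to the resulting correspondences. All camera matrices and point values below are made up for illustration.

```python
import numpy as np

def backproject_to_plane(P, t, pixel, n, d):
    """Intersect the telecentric viewing ray through `pixel` with the
    plane n . X = d. The ray direction is the null space of the 2x3
    projection matrix P (the same direction for every pixel)."""
    # Any 3D point projecting to `pixel`: min-norm solution of P X = pixel - t
    X0, *_ = np.linalg.lstsq(P, pixel - t, rcond=None)
    # Ray direction = 1D null space of P, from the SVD
    dvec = np.linalg.svd(P)[2][-1]
    s = (d - n @ X0) / (n @ dvec)
    return X0 + s * dvec

def fit_affine(src, dst):
    """Least-squares 2x3 affine transform mapping 2D src points to dst."""
    H = np.column_stack([src, np.ones(len(src))])
    A, *_ = np.linalg.lstsq(H, dst, rcond=None)
    return A.T  # 2x3: [linear part | translation]

# Two made-up telecentric calibrations sharing one Phys3D space
cam0 = (np.array([[120., 0., 10.], [0., 118., 5.]]), np.array([320., 240.]))
cam1 = (np.array([[115., 8., 0.], [3., 119., 12.]]), np.array([300., 260.]))
n, d = np.array([0., 0., 1.]), 2.0   # the planar feature: z = 2 in Phys3D

# Map sample base-camera pixels through the plane into the other camera
pix0 = np.array([[100., 100.], [400., 120.], [250., 350.]])
pts3d = np.array([backproject_to_plane(*cam0, p, n, d) for p in pix0])
pix1 = np.array([cam1[0] @ X + cam1[1] for X in pts3d])

# Plays the role of one element of the returned List
affine_0_to_1 = fit_affine(pix0, pix1)
```

Because a telecentric projection is affine and the back-projection onto a plane is also affine, the composed Raw2D-to-Raw2D map is exactly affine, so three non-collinear points determine it.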
The 3D plane of the part, or of a planar feature of the part (planarFeaturePhys3D), is defined in the 3D physical space (Phys3D). This must be the same Phys3D space as that of the camera calibrations.
If planarFeaturePhys3D is parallel to the part (or to a planar feature of the part) but the actual feature does not lie precisely in the specified 3D plane, then the translation component of the returned affine transforms will be inaccurate. However, all other components of the returned affine transforms, such as aspect ratio, rotation, and skew, will still be accurate.
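This translation-only sensitivity is a consequence of the telecentric (orthographic) projection model and can be checked numerically. The sketch below (NumPy, not the VisionPro API; all calibration values are illustrative) computes the image-to-image affine for two parallel planes at different heights and compares the results: the linear parts agree, while the translations differ.

```python
import numpy as np

# Two made-up telecentric calibrations: pixel = P @ X + t in a shared Phys3D
P0, t0 = np.array([[120., 0., 10.], [0., 118., 5.]]), np.array([320., 240.])
P1, t1 = np.array([[115., 8., 0.], [3., 119., 12.]]), np.array([300., 260.])

def affine_between(d):
    """Affine transform from camera-0 pixels to camera-1 pixels via z = d."""
    pix0 = np.array([[0., 0.], [1., 0.], [0., 1.]])  # 3 reference pixels
    out = []
    for p in pix0:
        X0, *_ = np.linalg.lstsq(P0, p - t0, rcond=None)  # a point on the ray
        dvec = np.linalg.svd(P0)[2][-1]                   # telecentric ray direction
        X = X0 + (d - X0[2]) / dvec[2] * dvec             # intersect plane z = d
        out.append(P1 @ X + t1)
    H = np.column_stack([pix0, np.ones(3)])
    A, *_ = np.linalg.lstsq(H, np.array(out), rcond=None)
    return A.T  # 2x3: [linear part | translation]

A_near, A_far = affine_between(2.0), affine_between(7.0)
# Moving the plane parallel to itself shifts every back-projected point by
# the same constant 3D offset, so only the translation column changes:
print(np.allclose(A_near[:, :2], A_far[:, :2]))  # True  (linear parts match)
print(np.allclose(A_near[:, 2], A_far[:, 2]))    # False (translations differ)
```

Intuitively, for a telecentric camera every pixel's viewing ray has the same direction, so a parallel plane offset moves all intersection points by one shared 3D vector, which the second camera's affine projection turns into a pure 2D translation.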