Cog3DPoseEstimatorUsingCrsp2D3Ds.EstimatePoseUsingInlierCrsp2D3Ds Method (Cognex VisionPro)
This method performs robust 3D pose estimation for one part instance, based on the specified unified correspondences (crsp2D3DsUnified) and the specified camera calibrations, from one or more cameras/views. Robust pose estimation parameters control the estimation, and the returned result includes information about feature outliers.

Namespace: Cognex.VisionPro3D
Assembly: Cognex.VisionPro3D (in Cognex.VisionPro3D.dll) Version: 65.1.0.0
Syntax

public Cog3DPoseEstimatorUsingCrsp2D3DsResult EstimatePoseUsingInlierCrsp2D3Ds(
	List<Cog3DCameraCalibration> raw2DFromPhys3Ds,
	List<Cog3DCrsp2D3D> crsp2D3DsUnified,
	int partInstanceIndex,
	Cog3DRobustPoseEstimationParametersSimple robustPoseEstimationParamsSimple
)

Parameters

raw2DFromPhys3Ds
Type: System.Collections.Generic.List<Cog3DCameraCalibration>
A List of camera calibrations. One calibration per camera/view. The size of this list specifies the number of cameras/views. May not be null.
crsp2D3DsUnified
Type: System.Collections.Generic.List<Cog3DCrsp2D3D>

A List of crsp2D3Ds. May not be null. Note that each item has the camera index, the index of the part instance, the 2D feature(s), the corresponding 3D feature index, and the sub-feature type.

Note that when there are multiple part instances, the PartInstanceIndex values in crsp2D3DsUnified must be unified across all cameras by calling the part corresponder (Cog3DPartCorresponderUsingCrsp2D3Ds.Execute). As a result, for the same 3D part instance, the same PartInstanceIndex is used for the corresponding crsp2D3D items across all cameras.

If there is only one part instance and the PartInstanceIndex is 0 for all items of crsp2D3DsUnified, then there is no need to call the part corresponder.

Note that if crsp2D3DsUnified[i].FeatureRaw2D is null or empty, then crsp2D3DsUnified[i] is ignored during pose estimation.
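A minimal sketch of building one crsp2D3DsUnified entry for the single-instance case. The property names are those used elsewhere on this page; the object-initializer pattern, the Cog3DVect2 constructor arguments, and the index values are illustrative assumptions, not verified API details.

```csharp
using System.Collections.Generic;
using Cognex.VisionPro3D;

// Sketch: one 2D-3D correspondence for a point feature observed by camera 0.
// Property names are from this page; settability and constructors are assumed.
var crsp = new Cog3DCrsp2D3D
{
    CameraIndex = 0,                   // camera/view that observed the 2D feature
    PartInstanceIndex = 0,             // single part instance: 0, no corresponder needed
    FeatureModel3DIndex = 2,           // index into FeaturesModel3D (hypothetical value)
    Subfeature = Cog3DSubfeatureConstants.Point0,  // required for a Cog3DVect3 model feature
    FeatureRaw2D = new Cog3DVect2(412.5, 307.8)    // observed 2D point (assumed ctor)
};
var crsp2D3DsUnified = new List<Cog3DCrsp2D3D> { crsp };
```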

partInstanceIndex
Type: System.Int32
This parameter specifies the part instance for which to run the pose estimation. If crsp2D3DsUnified[i].PartInstanceIndex is not equal to partInstanceIndex, then that crsp2D3DsUnified element is not used in the pose estimation.
robustPoseEstimationParamsSimple
Type: Cognex.VisionPro3D.Cog3DRobustPoseEstimationParametersSimple
Simple parameters to specify the behavior of the robust pose estimation. May not be null.

Return Value

Type: Cog3DPoseEstimatorUsingCrsp2D3DsResult
The pose estimation result (Cog3DPoseEstimatorUsingCrsp2D3DsResult). A pose estimation result is always returned unless an exception is thrown. The result contains a list of pose results (GetPoseResults) and the list of crsp2D3D indices of the outliers (GetIndicesOfOutlierCrsp2D3Ds). The pose in each pose result maps features from Model3D space to Phys3D space. A list of pose results is returned in order to handle the situation where multiple pose estimations fit the available features. The list of pose results contains:
  • 0 pose results, when no pose was found that meets the estimation parameters;
  • 1 pose result, when a single pose met the estimation parameters;
  • more than 1 pose result, when multiple poses satisfied the estimation parameters.

Note that the returned object of Cog3DPoseEstimatorUsingCrsp2D3DsResult has properties PartInstanceIndex and Message:

PartInstanceIndex is the same as the input argument partInstanceIndex, and can be used to correspond pose estimation results and part instances.

Message is null if the pose results (GetPoseResults) in the returned object are not empty; otherwise, Message contains diagnostic information about why the pose results in the returned object are empty.
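The return-value contract above can be handled as in the following sketch; the estimator instance and its inputs (estimator, raw2DFromPhys3Ds, crsp2D3DsUnified, robustParams) are assumed to be set up already, and only members named on this page are called.

```csharp
using System;

// Sketch: invoke the method and branch on the three possible result shapes.
Cog3DPoseEstimatorUsingCrsp2D3DsResult result =
    estimator.EstimatePoseUsingInlierCrsp2D3Ds(
        raw2DFromPhys3Ds,   // one Cog3DCameraCalibration per camera/view
        crsp2D3DsUnified,   // unified 2D-3D correspondences
        0,                  // partInstanceIndex
        robustParams);      // Cog3DRobustPoseEstimationParametersSimple

var poseResults = result.GetPoseResults();
if (poseResults.Count == 0)
{
    // Message is non-null only when the pose list is empty.
    Console.WriteLine("No pose found: " + result.Message);
}
else if (poseResults.Count > 1)
{
    // Ambiguous solution; consider adding more feature correspondences.
}

// Indices (into crsp2D3DsUnified) of correspondences rejected as outliers.
var outlierIndices = result.GetIndicesOfOutlierCrsp2D3Ds();
```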

Exceptions

ArgumentNullException: thrown if any of the arguments (except partInstanceIndex) is null, if any input argument contains a null item, or if FeaturesModel3D[crsp2D3DsUnified[i].FeatureModel3DIndex] is null.
ArgumentException: thrown in any of the following cases:
  • If raw2DFromPhys3Ds.Count == 0;
  • If raw2DFromPhys3Ds.Count is 1 and raw2DFromPhys3Ds[0].IsTelecentric is true;
  • If partInstanceIndex is less than 0;
  • If crsp2D3DsUnified[i].CameraIndex is not in the range [0, raw2DFromPhys3Ds.Count - 1];
  • If crsp2D3DsUnified[i].FeatureModel3DType is not Cog3DVect3, Cog3DLine, Cog3DLineSeg, Cog3DCylinder, or Cog3DCircle;
  • If crsp2D3DsUnified[i].FeatureModel3DType is Cog3DVect3, but crsp2D3DsUnified[i].Subfeature is not Cog3DSubfeatureConstants.Point0;
  • If crsp2D3DsUnified[i].FeatureModel3DType is Cog3DLine or Cog3DLineSeg, but crsp2D3DsUnified[i].Subfeature is not Cog3DSubfeatureConstants.StraightEdge0;
  • If crsp2D3DsUnified[i].FeatureModel3DType is Cog3DCircle, but crsp2D3DsUnified[i].Subfeature is not Cog3DSubfeatureConstants.CircleEdge0 or Cog3DSubfeatureConstants.Point0;
  • If crsp2D3DsUnified[i].FeatureModel3DType is Cog3DCylinder, but crsp2D3DsUnified[i].Subfeature is not Cog3DSubfeatureConstants.OccludingEdge0, .OccludingEdge1, .CircularEdge0, or .CircularEdge1;
  • If crsp2D3DsUnified[i].FeatureRaw2D is not null and its type is neither Cog3DVect2 nor Cog3DVect2Collection;
  • If crsp2D3DsUnified[i].FeatureModel3DType is Cog3DVect3, crsp2D3DsUnified[i].FeatureRaw2D has type Cog3DVect2Collection, and its size is greater than 1;
  • If crsp2D3DsUnified[i].FeatureModel3DIndex is not in the range [0, FeaturesModel3D.Count - 1];
  • If crsp2D3DsUnified[i].FeatureModel3DType is not the same as the type of FeaturesModel3D[crsp2D3DsUnified[i].FeatureModel3DIndex].
Remarks

Note that this method handles a single part instance. Therefore, all 2D features with the specified partInstanceIndex in crsp2D3DsUnified must correspond to the same part instance.

Note that if 3D line segment model features are used and some 3D line segments are shorter than their actual lengths, the method may return result(s) with larger residuals. Dilating (lengthening) the 3D line segments may avoid this problem.

Note that this method might return multiple different poses with similar fitting residuals. If the returned result has a list of multiple pose results, it means there is not enough information in crsp2D3DsUnified to determine a unique pose for the specified part instance. Such pose ambiguities can be avoided by adding more feature correspondences.

The following is a list of situations that satisfy the requirements for estimating a pose. Note that under some of these situations there may be multiple poses with equivalent fitting residuals:

  • There are 3 or more point correspondences;
  • There are 3 or more intersection points among 3D model lines or line segments;
  • There are 3 or more model circles;
  • There are 2 or more non-parallel model cylinders;
  • The combination of number of point correspondences, number of intersection points among 3D model lines or line segments, and number of model circles, is 3 or more;
  • There are 2 or more non-parallel model lines or line segments observed simultaneously by two or more cameras;
  • There are 2 or more model circles observed simultaneously by two or more cameras;
  • There are one or more model points, and one or more model circles observed simultaneously by two or more cameras;
  • There are one or more model points, and one or more model lines or line segments observed simultaneously by two or more cameras;
  • There are one or more model points, and one or more model cylinders observed simultaneously by two or more cameras;
  • There are one or more model lines or line segments, and one or more model circles observed simultaneously by two or more cameras;
  • There are one or more model cylinders, and one or more model circles observed simultaneously by two or more cameras;
  • There are one or more model cylinders, and one or more model lines or line segments (not parallel to the cylinder's axis) observed simultaneously by two or more cameras.

Note that the returned result might have an empty list of pose results if no pose can be found that satisfies the requirements specified by the robust pose estimation parameters.
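Because the method handles one part instance per call, a multi-instance scene is typically processed by looping over instance indices after unifying them with the part corresponder. A sketch, assuming the instance count (numPartInstances) is known from that earlier step and the other inputs are already set up:

```csharp
using System;

// Sketch: one pose estimation per part instance.
for (int partInstanceIndex = 0; partInstanceIndex < numPartInstances; partInstanceIndex++)
{
    var result = estimator.EstimatePoseUsingInlierCrsp2D3Ds(
        raw2DFromPhys3Ds, crsp2D3DsUnified, partInstanceIndex, robustParams);

    // result.PartInstanceIndex echoes the input index, so each result can be
    // matched back to its part instance.
    if (result.GetPoseResults().Count == 0)
        Console.WriteLine($"Instance {result.PartInstanceIndex}: {result.Message}");
}
```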

See Also