I'm having trouble undistorting points on an image taken with a calibrated camera using the Python bindings for OpenCV. The undistorted points have entirely different coordinates than the original points detected in the image.
Here's the offending call:
undistorted = cv2.undistortPoints(image_points, camera_matrix, distortion_coefficients)
where `image_points` is a numpy array of detected chessboard corners returned by `cv2.findChessboardCorners` and reshaped to match the dimensional requirements of `cv2.undistortPoints`, and `camera_matrix` and `distortion_coefficients` were returned by `cv2.calibrateCamera`.
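For context, here's roughly how those inputs are produced. This is only a sketch of the pipeline: the file name, board size, and single-view calibration below are placeholders, not my exact code (my real calibration uses multiple views).

```python
import cv2
import numpy as np

pattern_size = (9, 6)  # interior corners of the chessboard (placeholder)

img = cv2.imread("calibration_image.png")  # placeholder file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Detect the chessboard corners and reshape to the (N, 1, 2) float32 layout
# that cv2.undistortPoints expects.
found, corners = cv2.findChessboardCorners(gray, pattern_size)
image_points = corners.reshape(-1, 1, 2).astype(np.float32)

# Planar object points (z = 0) for the board; one set per calibration view.
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)

rms, camera_matrix, distortion_coefficients, rvecs, tvecs = cv2.calibrateCamera(
    [objp], [image_points], gray.shape[::-1], None, None)

# The call that produces the unexpected values:
undistorted = cv2.undistortPoints(image_points, camera_matrix, distortion_coefficients)
```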
`camera_matrix` and `distortion_coefficients` seem to me to be okay, and so does `image_points`. Nevertheless, `undistorted` seems to have no relationship to `image_points`. Here's a summary of the values:
>>> image_points
array([[[ 186.95303345, 163.25502014]],[[ 209.54478455, 164.62690735]],[[ 232.26443481, 166.10734558]],..., [[ 339.03695679, 385.97784424]],[[ 339.20108032, 400.38635254]],[[ 339.13067627, 415.30780029]]], dtype=float32)
>>> undistorted
array([[[-0.19536583, -0.07900728]],[[-0.16608481, -0.0772614 ]],[[-0.13660771, -0.07537176]],..., [[ 0.00228534, 0.21044853]],[[ 0.00249786, 0.22910291]],[[ 0.00240568, 0.24841554]]], dtype=float32)
>>> camera_matrix
array([[ 767.56947802, 0. , 337.27849576],[ 0. , 767.56947802, 224.04766824],[ 0. , 0. , 1. ]])
>>> distortion_coefficients
array([[ 0.06993424, -0.32645465, 0. , 0. , -0.04310827]])
I'm working from reference C code, and everything matches until I make that call. What's going wrong?