Calibration Best Practices
Accurate calibration is of key importance for performance in most machine and computer vision tasks. The following lists our best practices, which we have found through extensive experimentation and theoretical considerations.
- Choose a calibration target of the right size. It must be large enough to properly constrain the parameters. Ideally, it should cover at least half of the total image area when seen fronto-parallel in the camera image.
- Perform calibration at the approximate working distance (WD) of the final application. The camera should be focused at this distance, and lens focus and aperture must not be changed during or after calibration.
- The target should have a high feature count. Using fine patterns is preferable, but detection robustness can suffer at some point. Our recommendation is to use fine patterns for cameras above 3 MPx, provided the lighting is controlled and good.
- Collect images from different areas and tilts. Move the target to fully cover the image area and aim for even coverage. Lens distortion can be determined properly from fronto-parallel images, but focal length and principal point estimation depend on observing foreshortening. Include both fronto-parallel images and images with the board tilted up to +/- 45 degrees in both the horizontal and vertical directions. Tilting more is usually not a good idea, because feature localization accuracy suffers and bias can be introduced.
- Use good lighting. This is often overlooked, but it is hugely important. The calibration target should preferably be diffusely lit by means of controlled photography lighting. Strong point light sources cause uneven illumination, can make detection fail, and make poor use of the camera's dynamic range. Shadows have the same effect.
- Have enough observations. Typically, calibration should be performed on at least 6 observations (images) of the calibration target. If a higher-order camera or distortion model is used, more observations are beneficial.
- Consider using uniquely coded targets such as ChArUco boards. These targets let you gather observations from the very edges of the camera sensor and lens, which constrains the distortion parameters very well. In addition, they allow you to collect data even when some of the feature points do not satisfy the other requirements (a minimal calibration sketch follows after this list).
- Calibration accuracy depends on the calibration target used. Use laser- or inkjet-printed targets only for validation and testing.
- Mount the calibration target and camera properly. To minimize warping and bending of larger targets, mount them vertically or lay them flat on a rigid support. In these cases, consider moving the camera instead of the target. Use a quality tripod and avoid touching the camera during acquisition.
- Remove bad observations. Carefully inspect the reprojection errors, both per view and per feature. If any of them stand out as outliers, exclude them and recalibrate (see the second sketch after this list).
- Obtaining a low reprojection error does not equal a good camera calibration; it merely indicates that the provided data/evidence can be described by the model used. This could be due to overfitting. Parameter uncertainties indicate how well the chosen camera model is constrained.
- Analyze individual reprojection errors. Their direction and magnitude should not correlate with position, i.e. they should point chaotically in all directions. Calib.io's Camera Calibrator software provides powerful visualizations for investigating reprojection errors.
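As a concrete illustration of the ChArUco recommendation above, here is a minimal calibration sketch. It assumes the opencv-contrib-python package with the pre-4.7 `cv2.aruco` interface (function names differ in newer releases); the board geometry, dictionary, and image paths are placeholders that must match your physical target.

```python
import glob
import cv2

# Board geometry must match the physical target (values below are placeholders).
aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_5X5_100)
board = cv2.aruco.CharucoBoard_create(12, 9, 0.030, 0.022, aruco_dict)  # squares X/Y, sizes in metres

all_corners, all_ids, image_size = [], [], None
for path in glob.glob("calib_images/*.png"):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    image_size = gray.shape[::-1]
    marker_corners, marker_ids, _ = cv2.aruco.detectMarkers(gray, aruco_dict)
    if marker_ids is None or len(marker_ids) < 4:
        continue  # too few markers detected in this view
    # Interpolate the ChArUco (chessboard) corners from the detected markers.
    n, ch_corners, ch_ids = cv2.aruco.interpolateCornersCharuco(
        marker_corners, marker_ids, gray, board)
    if ch_ids is not None and n > 10:
        all_corners.append(ch_corners)
        all_ids.append(ch_ids)

# Calibrate; partially visible boards still contribute because every corner is uniquely coded.
rms, K, dist, rvecs, tvecs = cv2.aruco.calibrateCameraCharuco(
    all_corners, all_ids, board, image_size, None, None)
print("Overall RMS reprojection error:", rms)
print("Camera matrix:\n", K)
```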
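The advice on removing bad observations and inspecting reprojection errors can also be scripted. The sketch below assumes the usual OpenCV chessboard workflow, i.e. that `objpoints`, `imgpoints`, `image_size` and the outputs of a previous `cv2.calibrateCamera` call (`K`, `dist`, `rvecs`, `tvecs`) are already in scope; the threshold used here is just one possible outlier rule.

```python
import numpy as np
import cv2

def per_view_rms(objpoints, imgpoints, rvecs, tvecs, K, dist):
    # RMS reprojection error of each view, computed by re-projecting the board points.
    errors = []
    for obj, img, rvec, tvec in zip(objpoints, imgpoints, rvecs, tvecs):
        proj, _ = cv2.projectPoints(obj, rvec, tvec, K, dist)
        residuals = (proj - img).reshape(-1, 2)
        errors.append(float(np.sqrt(np.mean(np.sum(residuals ** 2, axis=1)))))
    return np.array(errors)

view_rms = per_view_rms(objpoints, imgpoints, rvecs, tvecs, K, dist)

# Flag views whose error is far above the rest and recalibrate without them.
threshold = np.median(view_rms) + 3.0 * np.std(view_rms)
keep = view_rms < threshold
print("Dropping views:", np.where(~keep)[0])

obj_keep = [o for o, k in zip(objpoints, keep) if k]
img_keep = [i for i, k in zip(imgpoints, keep) if k]
rms2, K2, dist2, _, _ = cv2.calibrateCamera(obj_keep, img_keep, image_size, None, None)
print("RMS after removing outlier views:", rms2)
```

Plotting the same per-feature residuals as arrows over the image is what reveals the position-correlated patterns mentioned in the last bullet.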
Following these practices should ensure the most accurate and precise calibration possible.
Any questions, comments or other insights? Please post them below.
Do you need to take images at different distances for good range coverage as well?
@Brian: It sounds like you are working with a camera that can do some light internal image processing (e.g. sharpening filters). These alter the raw image to make it appear sharper. Generally, we would recommend turning off all image processing during calibration. Depending on the exact implementation, these algorithms could bias the found locations of image saddle points or circles. However, the effects could very well be negligible.
First off I would like to thank your team for making such great calibration targets! My question deals with digital camera settings. I am using a fixed focal length camera and basically following the guidelines that you have set out in this document. Provided that you are getting a "good image", do settings like sharpness, gamma, white balance, etc., impact the camera characteristics estimated by camera calibration? If so, do you have any recommendations for such settings?
I keep going back and forth on whether the sharpness setting impacts the detection of the chessboard corner intersections.
@Min-An Chao, yes you are correct that the pinhole camera model is ideal in the sense that it is focus free. Only with very small apertures do standard cameras begin to exhibit pinhole-like characteristics. Unfortunately, in most cases we are required to open the aperture in order to collect more light, and as a result we deviate from the pinhole model. However, in these cases we can assume pinhole-like characteristics where the image is in focus, and we perform a calibration at that focusing distance (fixed focus) in order to estimate the pinhole model in that condition. By re-focusing we are effectively changing the distance between lens and sensor, which affects the focal length directly. The change in focal length is quite small, but for precision applications it can put a measurement setup out of spec. For ideal optics the principal point (cx, cy) will theoretically not change. But in the real world, it might.
To visually see the change in focal length caused by focusing, put your camera in manual focusing mode, close the aperture as much as you can, and use a lot of light to illuminate a scene. Now, manually adjust the focus dial and observe how the image zooms in and out.
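To put a rough number on that effect, here is an idealized thin-lens sketch with assumed values (an 8 mm lens refocused between 0.5 m and 2.0 m); real lenses deviate from the thin-lens model, but the order of magnitude is representative.

```python
# Thin-lens sketch (idealized): refocusing changes the lens-to-sensor distance s_i,
# which is what the calibrated (pixel) focal length is proportional to.
f_mm = 8.0  # assumed nominal focal length

def image_distance(f, s_o):
    # 1/f = 1/s_o + 1/s_i  ->  s_i = f * s_o / (s_o - f)
    return f * s_o / (s_o - f)

s_i_near = image_distance(f_mm, 500.0)   # focused at 0.5 m -> ~8.13 mm
s_i_far = image_distance(f_mm, 2000.0)   # focused at 2.0 m -> ~8.03 mm
print(f"Relative change in effective focal length: {(s_i_near / s_i_far - 1) * 100:.2f} %")
# ~1.2 % here; small, but enough to matter in precision metrology.
```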
Hello,
Thanks for the great guidelines and the informative Q&A section here. I'd like to follow up on @Tim's question, since the possibility of calibrating an autofocus camera has bothered me several times. Because the pinhole camera model does not have a focus problem, when applying this model to describe a camera matrix, fine-tuning the distance between the image sensor and the lens assembly should not change the pinhole focal length f, and should not change the origin point (cx, cy) projected on the image plane. Then what would autofocus affect or harm in the calibration results? Personally, I did the calibration for one camera where I can set manual focus digitally and precisely, repeated the calibration with precisely controlled motorized stages, and could not find significant differences in the camera matrix and distortion parameters with opencv scripts. Could you shed some light here? Thanks!
Min-An