What is the correct way to Normalize corresponding points before estimation of Fundamental matrix in OpenCV C++?

I am trying to implement, by hand, a function that estimates the fundamental matrix from corresponding points (based on the similarity between two images). The corresponding points are obtained after ORB feature detection, extraction, matching, and a ratio test.
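For reference, a minimal sketch of that matching step (the 0.75 ratio threshold and the function name are illustrative assumptions, not from the original pipeline):

#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>
#include <utility>
#include <vector>

// Detect ORB features in both images, match them, and keep only matches
// that pass Lowe's ratio test. Returns the corresponding point sets.
std::pair<std::vector<cv::Point2f>, std::vector<cv::Point2f>>
getCorrespondences(const cv::Mat& imgA, const cv::Mat& imgB) {
    cv::Ptr<cv::ORB> orb = cv::ORB::create();
    std::vector<cv::KeyPoint> kpA, kpB;
    cv::Mat descA, descB;
    orb->detectAndCompute(imgA, cv::noArray(), kpA, descA);
    orb->detectAndCompute(imgB, cv::noArray(), kpB, descB);

    // Hamming distance is the appropriate norm for binary ORB descriptors
    cv::BFMatcher matcher(cv::NORM_HAMMING);
    std::vector<std::vector<cv::DMatch>> knnMatches;
    matcher.knnMatch(descA, descB, knnMatches, 2);

    std::vector<cv::Point2f> ptsA, ptsB;
    for (const auto& m : knnMatches) {
        // Ratio test: keep a match only if it is clearly better than the
        // second-best candidate (0.75 is a common, assumed threshold)
        if (m.size() == 2 && m[0].distance < 0.75f * m[1].distance) {
            ptsA.push_back(kpA[m[0].queryIdx].pt);
            ptsB.push_back(kpB[m[0].trainIdx].pt);
        }
    }
    return {ptsA, ptsB};
}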

There is a lot of good literature on this topic. However, none of it seems to give proper pseudocode for the procedure. I went through various chapters of the Multiple View Geometry book, as well as many online sources.

This source seems to give a formula for carrying out the normalization, and I followed the formula on page 13 of that source.

Based on this formula, I created the following algorithm. I am not sure whether I am doing it the right way, though!

Normalization.hpp

#include <opencv2/core.hpp>
#include <tuple>
#include <vector>

class Normalization {
    typedef std::vector<cv::Point2f> intercepts;
    typedef std::vector<cv::Mat> matVec;
public:
    Normalization() {}
    ~Normalization() {}

    void makeAverage(intercepts pointsVec);
    std::tuple<cv::Mat, cv::Mat> normalize(intercepts pointsVec);
    matVec getNormalizedPoints(intercepts pointsVec);

private:
    double xAvg = 0;
    double yAvg = 0;
    double count = 0;
    matVec normalizedPts;
    double distance = 0;
    matVec matVecData;
    cv::Mat forwardTransform;
    cv::Mat reverseTransform;
};

Normalization.cpp

#include "Normalization.hpp"
#include <cmath>

typedef std::vector<cv::Point2f> intercepts;
typedef std::vector<cv::Mat> matVec;

/*******
*@brief  : The makeAverage function receives the input 2D coordinates (x, y)
*          and creates the average of x and y
*@params : The input parameter is a set of all matches (x, y pairs) in image A
************/
void Normalization::makeAverage(intercepts pointsVec) {
    count = pointsVec.size();
    for (auto& member : pointsVec) {
        xAvg = xAvg + member.x;
        yAvg = yAvg + member.y;
    }
    xAvg = xAvg / count;
    yAvg = yAvg / count;
}

/*******
*@brief  : The normalize function accesses the averages calculated in the
*          previous step and calculates the forward and inverse transformation
*          matrices
*@params : The input to this function is a vector of corresponding points in the given image
*@return : The returned data is a tuple of forward and inverse transformation matrices
*************/
std::tuple<cv::Mat, cv::Mat> Normalization::normalize(intercepts pointsVec) {
    for (auto& member : pointsVec) {
        // Accumulate the distance for every point
        distance += ((1 / (count * std::sqrt(2))) *
                     (std::sqrt(std::pow((member.x - xAvg), 2)
                                + std::pow((member.y - yAvg), 2))));
    }
    forwardTransform = (cv::Mat_<double>(3, 3) << (1 / distance),
                        0, -(xAvg / distance), 0, (1 / distance),
                        -(yAvg / distance), 0, 0, 1);
    reverseTransform = (cv::Mat_<double>(3, 3) << distance, 0, xAvg,
                        0, distance, yAvg, 0, 0, 1);
    return std::make_tuple(forwardTransform, reverseTransform);
}

/*******
*@brief  : The getNormalizedPoints function transforms the raw image coordinates into
*          transformed coordinates using the forwardTransform matrix estimated in the previous step
*@params : The input to this function is a vector of corresponding points in the given image
*@return : The returned data is a vector of transformed coordinates
*************/
matVec Normalization::getNormalizedPoints(intercepts pointsVec) {
    cv::Mat triplet;
    for (auto& member : pointsVec) {
        triplet = (cv::Mat_<double>(3, 1) << member.x, member.y, 1);
        matVecData.emplace_back(forwardTransform * triplet);
    }
    return matVecData;
}
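To show how I intend to wire the class together, here is a hypothetical driver; the eight-point solve itself is elided and F_hat is only a placeholder. The last line undoes the normalization: if x_hat = T_A * x and x_hat' = T_B * x' with x_hat'^T * F_hat * x_hat = 0, then x'^T * (T_B^T * F_hat * T_A) * x = 0.

#include "Normalization.hpp"

// Hypothetical driver: normalize both point sets independently, estimate F
// on the normalized coordinates, then map the result back to pixel coordinates.
cv::Mat estimateFundamental(std::vector<cv::Point2f> ptsA,
                            std::vector<cv::Point2f> ptsB) {
    Normalization normA, normB;
    normA.makeAverage(ptsA);
    normB.makeAverage(ptsB);
    auto [T_A, T_A_inv] = normA.normalize(ptsA);
    auto [T_B, T_B_inv] = normB.normalize(ptsB);
    std::vector<cv::Mat> nPtsA = normA.getNormalizedPoints(ptsA);
    std::vector<cv::Mat> nPtsB = normB.getNormalizedPoints(ptsB);

    // ... build and solve the 8-point system on nPtsA/nPtsB ...
    cv::Mat F_hat = cv::Mat::eye(3, 3, CV_64F); // placeholder for the estimate

    // Undo the normalization: x'^T (T_B^T F_hat T_A) x = 0
    return T_B.t() * F_hat * T_A;
}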

Is this the right way to do it? Are there any other methods of normalization?

I think you can do it your own way, but in "Multiple View Geometry in Computer Vision" Hartley and Zisserman recommend isotropic scaling (p. 107):

Isotropic scaling. As a first step of normalization, the coordinates in each image are translated (by a different translation for each image) so as to bring the centroid of the set of all points to the origin. The coordinates are also scaled so that on the average a point x is of the form x = (x, y, w)^T, with each of x, y and w having the same average magnitude. Rather than choose different scale factors for each coordinate direction, an isotropic scaling factor is chosen so that the x and y coordinates of a point are scaled equally. To this end, we choose to scale the coordinates so that the average distance of a point x from the origin is equal to √2. This means that the "average" point is equal to (1, 1, 1)^T. In summary, the transformation is as follows:
(i) The points are translated so that their centroid is at the origin.
(ii) The points are then scaled so that the average distance from the origin is equal to √2.
(iii) This transformation is applied to each of the two images independently.

They state that this is important for the direct linear transformation (DLT), but even more so for the computation of a fundamental matrix, which is what you want to do. The algorithm you chose normalizes the point coordinates towards (1, 1, 1)^T, but the scaling should be applied so that the average distance from the origin is equal to √2.
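Concretely, with (x̄, ȳ) the centroid of the points and d̄ their mean distance from it, the normalizing transform built in the code below is

$$
T = \begin{pmatrix}
\frac{\sqrt{2}}{\bar d} & 0 & -\frac{\sqrt{2}\,\bar x}{\bar d} \\
0 & \frac{\sqrt{2}}{\bar d} & -\frac{\sqrt{2}\,\bar y}{\bar d} \\
0 & 0 & 1
\end{pmatrix},
\qquad
\bar d = \frac{1}{N}\sum_{i=1}^{N}\sqrt{(x_i-\bar x)^2+(y_i-\bar y)^2}.
$$

Applying T to homogeneous points (x, y, 1)^T moves the centroid to the origin and scales the mean distance from it to exactly √2.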

Here is some code for this kind of normalization. The averaging step stays the same:

#include <opencv2/core.hpp>
#include <cmath>
#include <numeric>
#include <vector>

std::vector<cv::Mat> normalize(std::vector<cv::Point2d> pointsVec) {
    // Averaging
    double count = (double) pointsVec.size();
    double xAvg = 0;
    double yAvg = 0;
    for (auto& member : pointsVec) {
        xAvg = xAvg + member.x;
        yAvg = yAvg + member.y;
    }
    xAvg = xAvg / count;
    yAvg = yAvg / count;

    // Normalization
    std::vector<cv::Mat> points3d;
    std::vector<double> distances;
    for (auto& member : pointsVec) {
        double distance = (std::sqrt(std::pow((member.x - xAvg), 2)
                                     + std::pow((member.y - yAvg), 2)));
        distances.push_back(distance);
    }
    double xy_norm = std::accumulate(distances.begin(), distances.end(), 0.0) / distances.size();

    // Create a matrix transforming the points into having mean (0,0)
    // and mean distance to the center equal to sqrt(2)
    cv::Mat_<double> Normalization_matrix(3, 3);
    double diagonal_element = sqrt(2) / xy_norm;
    double element_13 = -sqrt(2) * xAvg / xy_norm;
    double element_23 = -sqrt(2) * yAvg / xy_norm;
    Normalization_matrix << diagonal_element, 0, element_13,
                            0, diagonal_element, element_23,
                            0, 0, 1;

    // Multiply the original points with the normalization matrix
    for (auto& member : pointsVec) {
        cv::Mat triplet = (cv::Mat_<double>(3, 1) << member.x, member.y, 1);
        points3d.emplace_back(Normalization_matrix * triplet);
    }
    return points3d;
}
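Usage would look roughly like this, where ptsA and ptsB are assumed to be the matched cv::Point2d sets from the two images, normalized independently as step (iii) of the quoted procedure requires. A useful sanity check is that the normalized points end up with their centroid at the origin and a mean distance of √2 from it. (For comparison, as far as I know OpenCV's cv::findFundamentalMat also normalizes the points internally for its 8-point solver.)

std::vector<cv::Mat> nPtsA = normalize(ptsA);
std::vector<cv::Mat> nPtsB = normalize(ptsB);

// Sanity check on one image: mean distance from the origin ~ sqrt(2)
double meanDist = 0;
for (const auto& p : nPtsA) {
    double x = p.at<double>(0, 0);
    double y = p.at<double>(1, 0);
    meanDist += std::sqrt(x * x + y * y);
}
meanDist /= (double) nPtsA.size();
// meanDist should now be close to 1.4142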