OpenCV Learning Notes (4)

This post walks through an implementation of the Harris corner detection algorithm, covering image loading, grayscale conversion, parameter setup, corner detection, and visualization of the results. It then compares the Shi-Tomasi and FAST detectors to show how different algorithms perform on corner detection.


1. Harris Corner Detection

#include <iostream>
#include <numeric>
#include <opencv2/core.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/features2d.hpp>

using namespace std;

void cornernessHarris()
{
// load image from file
	cv::Mat img1;
	img1 = cv::imread("E:/传感器融合/SensorFusion/所有代码/所有代码/3_Camera/04_Tracking Image Features/gradient_filtering/images/img1.png");

	// convert image to grayscale
	cv::Mat imgGray1;
	cv::cvtColor(img1, imgGray1, cv::COLOR_BGR2GRAY);

	// Detector parameters
	int blockSize = 2; // for every pixel, a blockSize × blockSize neighborhood is considered
	int apertureSize = 3; // aperture parameter for Sobel operator (must be odd)
	double k = 0.04; // Harris parameter (see equation for details)

	// Detect Harris corners and normalize output
	cv::Mat dst, dst_norm, dst_norm_scaled;
	dst = cv::Mat::zeros(imgGray1.size(), CV_32FC1);
	cv::cornerHarris(imgGray1, dst, blockSize, apertureSize, k, cv::BORDER_DEFAULT);
	cv::normalize(dst, dst_norm, 0, 255, cv::NORM_MINMAX, CV_32FC1, cv::Mat());
	cv::convertScaleAbs(dst_norm, dst_norm_scaled);

	// visualize results
	string windowName1 = "Harris Corner Detector Response Matrix";
	cv::namedWindow(windowName1, 4);
	cv::imshow(windowName1, dst_norm_scaled);
	cv::waitKey(0);
}

int main()
{
	cornernessHarris();
}

Notes:
1. cv::cornerHarris(imgGray1, dst, blockSize, apertureSize, k, cv::BORDER_DEFAULT);
Main parameters of cornerHarris:
imgGray1: input single-channel image
dst: image that stores the Harris response, same size as the input image
blockSize: size of the neighborhood considered for each pixel
apertureSize: aperture size of the Sobel operator
k: Harris free parameter, typically 0.04 to 0.06; a smaller value is usually chosen
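
For reference (the code comment only says "see equation for details"), k enters the standard Harris corner response, computed from the 2×2 structure matrix M accumulated over each blockSize × blockSize neighborhood:

$$R = \det(M) - k \cdot \big(\operatorname{trace}(M)\big)^{2}$$

A large positive R indicates a corner, a strongly negative R an edge, and |R| close to zero a flat region; cornerHarris stores R for each pixel in dst.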

2. cv::normalize(dst, dst_norm, 0, 255, cv::NORM_MINMAX, CV_32FC1, cv::Mat());
Purpose: normalizes the data. The raw values in the Harris response map are very small, so they are rescaled (normalized) to a convenient range for further processing.
Signature: void cv::normalize(InputArray src, InputOutputArray dst, double alpha=1, double beta=0, int norm_type=NORM_L2, int dtype=-1, InputArray mask=noArray())
src: input array;
dst: output array, same size as the input;
alpha: the norm value to normalize to, or the lower bound of the target range when range normalization is used;
beta: the upper bound of the target range; only used for range normalization;
norm_type: the normalization type (which formula is applied);
dtype: if negative, the output has the same size, depth, and number of channels as the input; otherwise the output differs from the input only in depth, which is given by dtype;
mask: optional operation mask; when set, only the selected region of interest is processed.
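
A minimal, self-contained sketch (with hypothetical values) showing what NORM_MINMAX with alpha = 0 and beta = 255 does:

#include <iostream>
#include <opencv2/core.hpp>

int main()
{
	// small float matrix standing in for a Harris response map (hypothetical values)
	cv::Mat src = (cv::Mat_<float>(1, 4) << 0.001f, 0.002f, 0.010f, 0.005f);
	cv::Mat dst;

	// min-max normalization: the smallest value maps to 0, the largest to 255
	cv::normalize(src, dst, 0, 255, cv::NORM_MINMAX, CV_32FC1, cv::Mat());

	std::cout << dst << std::endl; // [0, 28.33..., 255, 113.33...]
	return 0;
}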

3. cv::convertScaleAbs(dst_norm, dst_norm_scaled);
Purpose: an image-enhancement step; here it is used only to make the corner response easier to see with the naked eye. Another common approach is to threshold the response before displaying it.
Signature: void cv::convertScaleAbs(
	cv::InputArray src,   // input array
	cv::OutputArray dst,  // output array
	double alpha = 1.0,   // scale factor
	double beta = 0.0     // offset added to the scaled values
);
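
A minimal sketch (hypothetical values): convertScaleAbs computes dst = saturate_cast<uchar>(|alpha * src + beta|) element-wise, so it also clamps the result to the 8-bit range:

#include <iostream>
#include <opencv2/core.hpp>

int main()
{
	cv::Mat src = (cv::Mat_<float>(1, 3) << -120.0f, 100.0f, 400.0f); // hypothetical values
	cv::Mat dst;

	// take |1.0 * src + 0.0|, then saturate to 8-bit unsigned
	cv::convertScaleAbs(src, dst, 1.0, 0.0);

	std::cout << dst << std::endl; // [120, 100, 255]
	return 0;
}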

2. Corner Localization

// Look for prominent corners and instantiate keypoints
	vector<cv::KeyPoint> keypoints;
	int minResponse = 100;   // minimum value for a corner in the 8bit scaled response matrix
	double maxOverlap = 0.0; // max. permissible overlap between two features in %, used during non-maxima suppression
	for (size_t j = 0; j < dst_norm.rows; j++)
	{
		for (size_t i = 0; i < dst_norm.cols; i++)
		{
			int response = (int)dst_norm.at<float>(j, i);
			if (response > minResponse)
			{ // only store points above a threshold

				cv::KeyPoint newKeyPoint;
				newKeyPoint.pt = cv::Point2f(i, j);  // keypoint coordinates
				newKeyPoint.size = 2 * apertureSize; // diameter of the keypoint neighbourhood (here twice the Sobel aperture size)
				newKeyPoint.response = response;     // response strength

				// perform non-maximum suppression (NMS) in local neighbourhood around new key point
				bool bOverlap = false;
				for (auto it = keypoints.begin(); it != keypoints.end(); ++it)
				{
					double kptOverlap = cv::KeyPoint::overlap(newKeyPoint, *it); // compute the overlap between the two keypoints
					if (kptOverlap > maxOverlap)
					{
						bOverlap = true;
						if (newKeyPoint.response > (*it).response)
						{                      // if overlap is >t AND response is higher for new kpt
							*it = newKeyPoint; // replace old key point with new one
							break;             // quit loop over keypoints
						}
					}
				}
				if (!bOverlap)
				{                                     // only add new key point if no overlap has been found in previous NMS
					keypoints.push_back(newKeyPoint); // store new keypoint in dynamic list
				}
			}
		} // eof loop over cols
	}     // eof loop over rows

	// visualize keypoints
	string windowName = "Harris Corner Detection Results";
	cv::namedWindow(windowName, 5);
	cv::Mat visImage = dst_norm_scaled.clone();
	cv::drawKeypoints(dst_norm_scaled, keypoints, visImage, cv::Scalar::all(-1), cv::DrawMatchesFlags::DRAW_RICH_KEYPOINTS);
	cv::imshow(windowName, visImage);
	cv::waitKey(0);
}
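
The non-maximum suppression above relies on cv::KeyPoint::overlap, which treats each keypoint as a circle of diameter size and returns the ratio between the area of the two circles' intersection and the area of their union (1 for identical keypoints, 0 for disjoint ones). A minimal sketch:

#include <iostream>
#include <opencv2/core.hpp>

int main()
{
	cv::KeyPoint a(cv::Point2f(10.f, 10.f), 6.f); // centre (10,10), diameter 6
	cv::KeyPoint b(cv::Point2f(10.f, 10.f), 6.f); // identical keypoint -> overlap = 1
	cv::KeyPoint c(cv::Point2f(30.f, 10.f), 6.f); // far away           -> overlap = 0

	std::cout << cv::KeyPoint::overlap(a, b) << " " << cv::KeyPoint::overlap(a, c) << std::endl; // 1 0
	return 0;
}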

Note: see the source-code walkthrough of the KeyPoint class

Detailed explanation of keypoint detection and drawing
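
For quick reference, the principal members of cv::KeyPoint (paraphrased from the OpenCV core headers, not the complete class definition) are:

class KeyPoint
{
public:
	cv::Point2f pt; // coordinates of the keypoint
	float size;     // diameter of the meaningful keypoint neighbourhood
	float angle;    // computed orientation of the keypoint (-1 if not applicable)
	float response; // response strength; can be used to sort or filter keypoints
	int octave;     // pyramid octave (layer) on which the keypoint was detected
	int class_id;   // object id, used to group keypoints belonging to the same object
};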

3. Classic Corner Detection Algorithms
The Harris detector, along with several other “classics”, belongs to a group of traditional detectors which aim at maximizing detection accuracy. In this group, computational complexity is not a primary concern. The following list shows a number of popular classic detectors:

1988 Harris Corner Detector (Harris, Stephens)
1996 Good Features to Track (Shi, Tomasi)
1999 Scale Invariant Feature Transform (Lowe)
2006 Speeded Up Robust Features (Bay, Tuytelaars, Van Gool)
In recent years, a number of faster detectors have been developed which aim at real-time applications on smartphones and other portable devices. The following list shows the most popular detectors belonging to this group:

2006 Features from Accelerated Segment Test (FAST) (Rosten, Drummond)
2010 Binary Robust Independent Elementary Features (BRIEF) (Calonder, et al.)
2011 Oriented FAST and Rotated BRIEF (ORB) (Rublee et al.)
2011 Binary Robust Invariant Scalable Keypoints (BRISK) (Leutenegger, Chli, Siegwart)
2012 Fast Retina Keypoint (FREAK) (Alahi, Ortiz, Vandergheynst)
2012 KAZE (Alcantarilla, Bartoli, Davison)
Two of these algorithms are shown in the example below:

#include <iostream>
#include <numeric>
#include <opencv2/core.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/features2d.hpp>

using namespace std;

void detKeypoints1()
{
	// load image from file and convert to grayscale
	cv::Mat imgGray;
	cv::Mat img = cv::imread("E:/传感器融合/SensorFusion/所有代码/所有代码/3_Camera/04_Tracking Image Features/gradient_filtering/images/img1.png");
	string windowName1 = "Original Image";
	cv::namedWindow(windowName1, 2);
	imshow(windowName1, img);
	cv::cvtColor(img, imgGray, cv::COLOR_BGR2GRAY);

	// Shi-Tomasi detector
	int blockSize = 6;       //  size of a block for computing a derivative covariation matrix over each pixel neighborhood
	double maxOverlap = 0.0; // max. permissible overlap between two features in %
	double minDistance = (1.0 - maxOverlap) * blockSize;
	int maxCorners = img.rows * img.cols / max(1.0, minDistance); // max. num. of keypoints
	double qualityLevel = 0.01;                                   // minimal accepted quality of image corners
	double k = 0.04;
	bool useHarris = false;

	vector<cv::KeyPoint> kptsShiTomasi;
	vector<cv::Point2f> corners;
	double t = (double)cv::getTickCount();
	cv::goodFeaturesToTrack(imgGray, corners, maxCorners, qualityLevel, minDistance, cv::Mat(), blockSize, useHarris, k);
	t = ((double)cv::getTickCount() - t) / cv::getTickFrequency();
	cout << "Shi-Tomasi with n= " << corners.size() << " keypoints in " << 1000 * t / 1.0 << " ms" << endl;

	for (auto it = corners.begin(); it != corners.end(); ++it)
	{ // add corners to result vector

		cv::KeyPoint newKeyPoint;
		newKeyPoint.pt = cv::Point2f((*it).x, (*it).y);
		newKeyPoint.size = blockSize;
		kptsShiTomasi.push_back(newKeyPoint);
	}

	// visualize results
	cv::Mat visImage = img.clone();
	cv::drawKeypoints(img, kptsShiTomasi, visImage, cv::Scalar::all(-1), cv::DrawMatchesFlags::DRAW_RICH_KEYPOINTS);
	string windowName = "Shi-Tomasi Results";
	cv::namedWindow(windowName, 1);
	imshow(windowName, visImage);
	cv::waitKey(0);
	 //STUDENT CODE
	int threshold = 30;                                                              // difference between intensity of the central pixel and pixels of a circle around this pixel
	bool bNMS = true;                                                                // perform non-maxima suppression on keypoints
	//cv::FastFeatureDetector:: type = cv::FastFeatureDetector::TYPE_9_16; // TYPE_9_16, TYPE_7_12, TYPE_5_8
	cv::Ptr<cv::FeatureDetector> detector = cv::FastFeatureDetector::create(threshold, bNMS, cv::FastFeatureDetector::TYPE_9_16);

	vector<cv::KeyPoint> kptsFAST;
	t = (double)cv::getTickCount();
	detector->detect(imgGray, kptsFAST);
	t = ((double)cv::getTickCount() - t) / cv::getTickFrequency();
	cout << "FAST with n= " << kptsFAST.size() << " keypoints in " << 1000 * t / 1.0 << " ms" << endl;

	visImage = img.clone();
	cv::drawKeypoints(img, kptsFAST, visImage, cv::Scalar::all(-1), cv::DrawMatchesFlags::DRAW_RICH_KEYPOINTS);
	windowName = "FAST Results";
	cv::namedWindow(windowName, 2);
	imshow(windowName, visImage);
	cv::waitKey(0);

	// EOF STUDENT CODE
}

int main()
{
	detKeypoints1();
}
