# question about Drawing Epipolar Curve

## question about Drawing Epipolar Curve

Dear Friends,

I am working with the cvFindFundamentalMat function to find corresponding match points in panorama images. Since each panorama is a snapshot of a 360-degree scene, the epipolar constraint traces a curve rather than a straight line, and I am trying to draw that epipolar curve. However, I have faced two problems.

First, in some image pairs the curve does not pass through the corresponding point chosen in the other image, while it gives good results for other pairs. (I suspect this is either distortion in the images or a bug in my code!)

Second, the curve is too thick and has a weird shape. My friend says that the area covered by this curve should be measured in pixels, while I am drawing it in terms of distance units. If that is true, how can I correct my code?

Could you kindly look at my code and see whether I have gone wrong somewhere, or whether I can improve it in some way? I have included some pictures so that you can see the results of my code:

http://i39.tinypic.com/2ujpo9t.jpg (bad results)
http://i41.tinypic.com/ht8s42.jpg (good results)

Thank you in advance; your help is highly appreciated.

H. Peikari

PS: First I find the fundamental matrix with RANSAC from a set of matched points. Then, by clicking in image 1, I want to show the corresponding epipolar curve in the other image.
Here is the code (the mouse callback function):

```c
float EPS = 0.3f;

void on_mouse(int event, int xx, int yy, int flags, void* param)
{
    switch (event)
    {
    case CV_EVENT_LBUTTONDOWN:
    {
        float u1, u2, v1, v2, z1, z2, result;
        float x, y, z, angle, l1, l2, l3, E1, E2;

        cvCircle(im1, cvPoint(xx, yy), 5, cvScalar(255, 0, 255), 1);

        /* Map the clicked pixel to a direction on the unit sphere. */
        angle = (float)((float)xx * 2.0 * M_PI / (float)im2->width);
        v1 = y = sin(angle);
        z1 = z = (float)(-((float)yy / (float)im2->height * 2.0 - 1));
        u1 = x = -cos(angle);

        for (int i = 0; i < im2->width; i++)
        {
            for (int j = 0; j < im2->height; j++)
            {
                angle = (float)((float)i * 2.0 * M_PI / (float)im2->width);
                v2 = y = sin(angle);
                z2 = z = (float)(-((float)j / (float)im2->height * 2.0 - 1));
                u2 = x = -cos(angle);

                /* epipolar constraint x1^T * F * x2 */
                result = (float)(u1*u2*cvmGet(F,0,0) + v1*u2*cvmGet(F,1,0) + z1*u2*cvmGet(F,2,0)
                               + u1*v2*cvmGet(F,0,1) + v1*v2*cvmGet(F,1,1) + z1*v2*cvmGet(F,2,1)
                               + u1*z2*cvmGet(F,0,2) + v1*z2*cvmGet(F,1,2) + z1*z2*cvmGet(F,2,2));

                /* squared norms of the two epipolar lines, for normalization */
                l1 = (float)pow(u1*cvmGet(F,0,0) + v1*cvmGet(F,0,1) + z1*cvmGet(F,0,2), 2);
                l2 = (float)pow(u1*cvmGet(F,1,0) + v1*cvmGet(F,1,1) + z1*cvmGet(F,1,2), 2);
                l3 = (float)pow(u1*cvmGet(F,2,0) + v1*cvmGet(F,2,1) + z1*cvmGet(F,2,2), 2);
                E1 = l1 + l2 + l3;

                l1 = (float)pow(u2*cvmGet(F,0,0) + v2*cvmGet(F,1,0) + z2*cvmGet(F,2,0), 2);
                l2 = (float)pow(u2*cvmGet(F,0,1) + v2*cvmGet(F,1,1) + z2*cvmGet(F,2,1), 2);
                l3 = (float)pow(u2*cvmGet(F,0,2) + v2*cvmGet(F,1,2) + z2*cvmGet(F,2,2), 2);
                E2 = l1 + l2 + l3;

                result = result * (sqrt(1/E1) + sqrt(1/E2));  /* normalize */

                /* fabsf, not abs: abs() takes an int, so abs(result) truncated
                   every |result| < 1 to 0 and therefore matched far too many
                   pixels -- one likely cause of the thick curve. */
                if (fabsf(result) <= EPS)
                {
                    Ipoint p;
                    p.x = i;
                    p.y = j;
                    drawPoint(im2, p);
                }
            }
        }
        cvShowImage("corresponding point", im1);
        cvShowImage("Epipolar Curve", im2);
        /* Do not release im1/im2 here: the callback can fire again, and a
           second click would then operate on freed images. */
    }
    }
}
```

## feature tracking using SURF

Hi all,

I'm trying to track feature points in the images of a freely moving camera using SURF (cvExtractSURF()). To lower its computational cost, I'm using ROIs. That is:

1. I find N features in a frame using cvExtractSURF().
2. From the next frame, I create N fixed-size ROI images, each surrounding one of the features.
3. I find M(i) features in the i-th ROI image.
4. Among the M(i) features, I find the correspondence of the i-th feature by the SSD of the descriptor values.

If the ROI size was not too large, the processing time was acceptable for real-time applications, but the matching performance was much poorer than expected. In practice, the performance was not good regardless of the ROI size. I thought SURF would be better than other texture-based matching algorithms because it is robust to affine transformations, but its performance was poorer than simple block matching and not robust to affine transformations.

So, I wonder whether anybody has used cvExtractSURF() for feature tracking. Was it successful? If so, could you share a piece of code or some comments?

Sincerely,
Hanhoon

P.S. My main purpose is to find a rough but fast feature-point tracking algorithm that is robust to affine (perspective, if possible) camera motion, so I don't want to use a wide-baseline matching or dynamic template matching algorithm. Can anybody recommend an algorithm for this purpose?

## Re: feature tracking using SURF

Well, you have SIFT, but its implementations are slower than SURF and it is not included in OpenCV. You might also look at your matching algorithm and try to tune it to get better results.

--- In [hidden email], "Hanhoon Park" wrote:
> I'm trying to track feature points in the images of a freely moving camera using SURF (cvExtractSURF()). [...]
> So, I wonder if anybody has used cvExtractSURF() for feature tracking. Was it successful?