
how to use the ORB descriptor

15 messages

how to use the ORB descriptor

kutawei@yahoo.cn
Hi, has anybody found out how to use the new ORB descriptor? Since it contains rotation information, I need an example showing how to match two images with ORB. Another question: is LSH really suitable for matching BRIEF or ORB? I thought nearest-neighbour search is the problem in Hamming space. Should I just sit and wait for OpenCV 2.3.1 to come out?


Re: how to use the ORB descriptor

nghiaho12
I've mucked around with ORB a bit. It has the same C++ interface as SIFT/SURF. One way to use it is:

cv::ORB orb;
cv::Mat grey1, desc1;
cv::Mat grey2, desc2;
vector<cv::KeyPoint> kp1, kp2;

...
// assume greyscale images are loaded into grey1 and grey2

orb(grey1, cv::Mat(), kp1, desc1);
orb(grey2, cv::Mat(), kp2, desc2);

I like to use the GPU BruteForceMatcher class to do nearest neighbour matching, like so:

cv::gpu::GpuMat gpu_desc1(desc1);
cv::gpu::GpuMat gpu_desc2(desc2);
cv::gpu::GpuMat gpu_ret_idx, gpu_ret_dist, gpu_all_dist; // outputs, filled by knnMatch
cv::Mat ret_idx, ret_dist;

// Note: ORB descriptors are binary, so a Hamming distance is the natural
// metric; L2 treats them as float vectors.
cv::gpu::BruteForceMatcher_GPU< cv::L2<float> > gpu_matcher;

gpu_matcher.knnMatch(gpu_desc1, gpu_desc2, gpu_ret_idx, gpu_ret_dist, gpu_all_dist, 2);
gpu_ret_idx.download(ret_idx);
gpu_ret_dist.download(ret_dist);

float ratio = 0.7f; // SIFT-style ratio test
for(int i = 0; i < ret_idx.rows; i++) {
  if(ret_dist.at<float>(i,0) < ret_dist.at<float>(i,1)*ratio) {
     // we got a match!
  }
}

ORB has a fixed number of features per image, so you will get exactly the same number of features in one image as the other. It is much faster than SURF/SIFT but I found it not as robust as the aforementioned features.




Re: how to use the ORB descriptor

Julius Adorf
In reply to this post by kutawei@yahoo.cn


Yeah, LSH will be introduced with OpenCV 2.3.1, according to http://opencv.willowgarage.com/wiki/OpenCV%20Change%20Logs

AFAIK, the LSH implementation will be migrated from the Robot Operating System package rbrief in the stack object_recognition_experimental over to OpenCV 2.3.1.

Have you checked the development version of OpenCV? Is LSH already included there?

Some of my experiments with ORB and LSH showed that LSH is way faster (25-50 times in my application) than brute-force matching. I think it's worth a try.

Julius




Re: how to use the ORB descriptor

kutawei@yahoo.cn
In reply to this post by nghiaho12
Thank you, nghiaho12. I was wrongly using the uniform OrbDescriptorExtractor interface, which does not work; I should use the ORB class directly. How fast is the GPU matching you get? Since we are targeting a mobile platform, we won't be able to use GPU acceleration.




Re: how to use the ORB descriptor

kutawei@yahoo.cn
In reply to this post by Julius Adorf
Thank you, Julius, but I can't find the LSH implementation in ROS; it seems it was deleted. If you have the code, can you share it? Otherwise we will have to wait for the OpenCV 2.3.1 release.




Re: how to use the ORB descriptor

nghiaho12
In reply to this post by kutawei@yahoo.cn
The GPU matching is quite fast: on a 1300x1300 image using SURF features (about 2000-4000 features per image), it takes about 200-300 ms on a GeForce GTX 280. If you can't use the GPU, have a look at OpenCV's FLANN matcher. It's also fast, but compromises accuracy to achieve that speed. I tried FLANN with 4 randomised KD-trees and 64 searches max, and it runs only a bit slower than the GPU matcher.




Re: how to use the ORB descriptor

Julius Adorf
In reply to this post by kutawei@yahoo.cn
The ROS package rbrief is contained in the ROS stack object_recognition_experimental (be warned: as the name already tells you, everything therein is very experimental):

https://code.ros.org/trac/wg-ros-pkg/browser/branches/trunk_diamondback/stacks/object_recognition_experimental/rbrief/src/lsh.cpp




Re: how to use the ORB descriptor

kutawei@yahoo.cn
Thank you again, Julius. I got it.




Re: how to use the ORB descriptor

Julius Adorf
You're welcome. In case you're watching the OpenCV trunk, please notify me in this thread as soon as LSH is integrated; I'm still interested in it as well.

Julius




Re: how to use the ORB descriptor

kutawei@yahoo.cn
In reply to this post by kutawei@yahoo.cn
Hi, Julius. I tested the ORB descriptor in OpenCV 2.3 with the LSH matcher from ROS that you pointed me to. I was confused to find that LSH was about 5 times slower than the BruteForce matcher; I want to know how you got the faster result. I use VS2010, so I only modified the original lsh.cpp slightly (__builtin_popcountll -> __popcnt) to get it to compile, but I don't think that's the reason. Did I do something wrong? My test program is based on the OpenCV example brief_match_test.cpp. Strangely, I can't get LSH to run in debug mode either. Could you please help me? Thanks in advance.



Re: how to use the ORB descriptor

Julius Adorf
Hi,

Weird. I did not use LSH directly. There is an experimental package http://www.ros.org/wiki/tod_detecting which I used together with ORB, in particular the file https://code.ros.org/svn/wg-ros-pkg/branches/trunk_diamondback/stacks/object_recognition/tod_detecting/src/Matcher.cpp

I am afraid that's all I can tell you right now, because I did not use it directly.

Julius




Re: how to use the ORB descriptor

kutawei@yahoo.cn
In reply to this post by Julius Adorf
Hi, Julius. Did you perhaps miss my last post? I want to know how you experimented with the LSH matcher from ROS. Thank you. By the way, OpenCV still hasn't integrated LSH.




Re: how to use the ORB descriptor

Julius Adorf
Hi,

Possibly - your last post was about LSH seeming slower than brute-force search, right? I am afraid I can't provide further help with this issue. I *did* use LSH, but only indirectly via https://code.ros.org/svn/wg-ros-pkg/branches/trunk_diamondback/stacks/object_recognition/tod_detecting/src/Matcher.cpp .

Julius





Re: how to use the ORB descriptor

kutawei@yahoo.cn
Yes, that's my problem. I got it faster after modifying the LshMatcher.setDimensions() parameters, but the speed-up is still not enough. I thought maybe my test set of features is too small (just 500). Can you tell me how many features you used when you reached the conclusion that LSH is 25-50 times faster than brute-force matching? And I have to correct myself: the OpenCV trunk already has LSH (you can find lsh_index.h and lsh_table.h in the flann module), but there is no wrapping LshMatcher class nor any related example yet. Thank you, Julius, you've helped me a lot.




Re: how to use the ORB descriptor

naheiya
This post has NOT been accepted by the mailing list yet.
In reply to this post by nghiaho12
Hi, can you help me?



#include "stdafx.h"
#include <highgui.h>
#include <cv.h>
#include <iostream>
#include <vector>
// NOTE: cv::gpu::GpuMat and friends are declared in this header; while it
// stays commented out, every cv::gpu name is undeclared.
//#include "C:\opencv2.3.1\opencv\build\include\opencv2\gpu\gpu.hpp"

// basic multiple-view geometry, single stereo camera calibration,
// object pose estimation, stereo correspondence, 3D reconstruction
#pragma comment(lib,"opencv_calib3d231.lib")
#pragma comment(lib,"opencv_calib3d231d.lib")
#pragma comment(lib,"opencv_contrib231.lib")
#pragma comment(lib,"opencv_contrib231d.lib")
// basic data structures, including the all-important Mat, and other core modules
#pragma comment(lib,"opencv_core231.lib")
#pragma comment(lib,"opencv_core231d.lib")
// salient feature detection, description, and matching
#pragma comment(lib,"opencv_features2d231.lib")
#pragma comment(lib,"opencv_features2d231d.lib")
// Fast Library for Approximate Nearest Neighbors (FLANN)
#pragma comment(lib,"opencv_flann231.lib")
#pragma comment(lib,"opencv_flann231d.lib")
// GPU-accelerated versions of OpenCV algorithms
#pragma comment(lib,"opencv_gpu231.lib")
#pragma comment(lib,"opencv_gpu231d.lib")
// video capture, image/video codecs, GUI interface
#pragma comment(lib,"opencv_highgui231.lib")
#pragma comment(lib,"opencv_highgui231d.lib")
// linear and non-linear image filtering, geometric transforms,
// colour space conversion, histogram processing, etc.
#pragma comment(lib,"opencv_imgproc231.lib")
#pragma comment(lib,"opencv_imgproc231d.lib")
// deprecated code kept for backward compatibility
#pragma comment(lib,"opencv_legacy231.lib")
#pragma comment(lib,"opencv_legacy231d.lib")
// machine learning module (SVM, decision trees, boosting, etc.)
#pragma comment(lib,"opencv_ml231.lib")
#pragma comment(lib,"opencv_ml231d.lib")
// object detection and pre-trained classifiers (faces, eyes, people, cars, etc.)
#pragma comment(lib,"opencv_objdetect231.lib")
#pragma comment(lib,"opencv_objdetect231d.lib")
#pragma comment(lib,"opencv_ts231.lib")
#pragma comment(lib,"opencv_ts231d.lib")
// motion estimation, background subtraction, object tracking
#pragma comment(lib,"opencv_video231.lib")
#pragma comment(lib,"opencv_video231d.lib")



using namespace cv;
using namespace std;

int _tmain(int argc, _TCHAR* argv[])
{
    cv::ORB orb;
    cv::Mat grey1, desc1;
    cv::Mat grey2, desc2;
    vector<cv::KeyPoint> kp1, kp2;

    // ORB expects single-channel (greyscale) input images
    grey1 = imread("C:\\lenahui1.jpg", 0);
    grey2 = imread("C:\\lenahui2.jpg", 0);

    orb(grey1, cv::Mat(), kp1, desc1);
    orb(grey2, cv::Mat(), kp2, desc2);

    cv::gpu::GpuMat gpu_desc1(desc1);
    cv::gpu::GpuMat gpu_desc2(desc2);
    cv::gpu::GpuMat gpu_ret_idx, gpu_ret_dist, gpu_all_dist; // outputs of knnMatch
    cv::Mat ret_idx, ret_dist;

    cv::gpu::BruteForceMatcher_GPU< cv::L2<float> > gpu_matcher;
    gpu_matcher.knnMatch(gpu_desc1, gpu_desc2, gpu_ret_idx, gpu_ret_dist,
            gpu_all_dist, 2);
    gpu_ret_idx.download(ret_idx);
    gpu_ret_dist.download(ret_dist);

    float ratio = 0.7f; // SIFT-style ratio test
    for(int i = 0; i < ret_idx.rows; i++)
    {
        if(ret_dist.at<float>(i,0) < ret_dist.at<float>(i,1)*ratio)
        {
            // we got a match!
        }
    }

    return 0;
}
 


I have already configured OpenCV, but it always says:

d:\练习\orb\orb\orb.cpp(152): error C3083: "gpu": the symbol to the left of "::" must be a type
1>d:\练习\orb\orb\orb.cpp(152): error C2039: "GpuMat": is not a member of "cv"
1>d:\练习\orb\orb\orb.cpp(152): error C3861: "GpuMat": identifier not found
1>d:\练习\orb\orb\orb.cpp(152): error C3861: "gpu_desc1": identifier not found
1>d:\练习\orb\orb\orb.cpp(153): error C3083: "gpu": the symbol to the left of "::" must be a type   ...and so on
I don't know why. Please help me. Thanks!

And another question: where is the problem in the code below? It is also ORB code, but something in it is wrong and I can't find it. Can you help me with this too?


using namespace cv;
using namespace std;

int _tmain(int argc, _TCHAR* argv[])
{
    // ORB expects single-channel (greyscale) input images
    Mat img_1 = imread("C:\\image\\img_1.jpg", 0);
    Mat img_2 = imread("C:\\image\\img_2.jpg", 0);
    if (!img_1.data || !img_2.data)
    {
        cout << "error reading images " << endl;
        return -1;
    }

    ORB orb;
    vector<KeyPoint> keyPoints_1, keyPoints_2;
    Mat descriptors_1, descriptors_2;

    orb(img_1, Mat(), keyPoints_1, descriptors_1);
    orb(img_2, Mat(), keyPoints_2, descriptors_2);

    BruteForceMatcher<HammingLUT> matcher;
    vector<DMatch> matches;
    matcher.match(descriptors_1, descriptors_2, matches);

    double max_dist = 0; double min_dist = 100;
    //-- Quick calculation of max and min distances between keypoints
    for( int i = 0; i < descriptors_1.rows; i++ )
    {
        double dist = matches[i].distance;
        if( dist < min_dist ) min_dist = dist;
        if( dist > max_dist ) max_dist = dist;
    }
    printf("-- Max dist : %f \n", max_dist );
    printf("-- Min dist : %f \n", min_dist );

    //-- Draw only "good" matches (i.e. whose distance is less than 0.6*max_dist )
    //-- PS.- radiusMatch can also be used here.
    std::vector< DMatch > good_matches;
    for( int i = 0; i < descriptors_1.rows; i++ )
    {
        if( matches[i].distance < 0.6*max_dist )
        {
            good_matches.push_back( matches[i] );
        }
    }

    Mat img_matches;
    drawMatches(img_1, keyPoints_1, img_2, keyPoints_2,
            good_matches, img_matches, Scalar::all(-1), Scalar::all(-1),
            vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);

    // localize the object
    std::vector<Point2f> obj;
    std::vector<Point2f> scene;
    for (size_t i = 0; i < good_matches.size(); ++i)
    {
        // get the keypoints from the good matches
        obj.push_back(keyPoints_1[ good_matches[i].queryIdx ].pt);
        scene.push_back(keyPoints_2[ good_matches[i].trainIdx ].pt);
    }
    Mat H = findHomography( obj, scene, CV_RANSAC );

    // get the corners from the image_1
    std::vector<Point2f> obj_corners(4);
    obj_corners[0] = cvPoint( 0, 0 );
    obj_corners[1] = cvPoint( img_1.cols, 0 );
    obj_corners[2] = cvPoint( img_1.cols, img_1.rows );
    obj_corners[3] = cvPoint( 0, img_1.rows );
    std::vector<Point2f> scene_corners(4);
    perspectiveTransform( obj_corners, scene_corners, H );

    // draw lines between the corners (the mapped object in the scene - image_2)
    line( img_matches, scene_corners[0] + Point2f( img_1.cols, 0), scene_corners[1] + Point2f( img_1.cols, 0), Scalar(0,255,0) );
    line( img_matches, scene_corners[1] + Point2f( img_1.cols, 0), scene_corners[2] + Point2f( img_1.cols, 0), Scalar(0,255,0) );
    line( img_matches, scene_corners[2] + Point2f( img_1.cols, 0), scene_corners[3] + Point2f( img_1.cols, 0), Scalar(0,255,0) );
    line( img_matches, scene_corners[3] + Point2f( img_1.cols, 0), scene_corners[0] + Point2f( img_1.cols, 0), Scalar(0,255,0) );

    namedWindow("Match", CV_WINDOW_AUTOSIZE);
    imshow("Match", img_matches);
    cvWaitKey(0);

    return 0;
}
I am not good at English and I am a beginner with OpenCV. Please forgive me, and thank you!
