Object Tracking in OpenCV (Right or Wrong?)

Viewed 233 times

Hello everyone. I am working on a computer vision project here at the company, and I wanted to ask the more experienced among you whether I am training an object detector correctly.

I read and studied a lot about the subject, tested several frameworks, and settled on OpenCV. The problem is that the training results in my shell prompt look different from the examples I see in tutorials and videos online.

I am training with 80 positive images (80x54) and 160 negative images, following these steps:

1-Create Samples

C:\OPENCV\v3.3.1\build\x64\vc14\bin\opencv_createsamples.exe -info C:\Trainner002\pos\pos.txt -vec C:\Trainner002\data\vector_rg.vec -bg C:\Trainner002\neg\infofile.txt -num 80 -w 80 -h 54
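For reference, `opencv_createsamples` expects the `-info` file to list one annotated image per line (path, number of objects, then one `x y w h` box per object), and the `-bg` file to list one background image path per line. A minimal sketch of both files (the file paths shown are hypothetical examples, not from the question):

```text
# pos.txt — one line per positive image: path  count  x y w h
pos/rg_001.jpg 1 0 0 80 54
pos/rg_002.jpg 1 10 5 80 54

# infofile.txt — one background image path per line
neg/bg_001.jpg
neg/bg_002.jpg
```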

2-Train Cascade

C:\OPENCV\v3.3.1\build\x64\vc14\bin\opencv_traincascade.exe -data C:\Trainner002\cascade -vec C:\Trainner002\data\vector_rg.vec -bg C:\Trainner002\neg\infofile.txt -numPos 80 -numNeg 160 -mode ALL -numStages 25 -w 80 -h 54 -featureType LBP

3 - Training output in the shell prompt

(screenshot of the shell-prompt training output)

My training shows only one line per stage, while in many object detection training tutorials I have seen, there are several lines per stage.

Here is an example similar to mine; in fact, it is the example I am following, and my result is different:

http://coding-robin.de/2013/07/22/train-your-own-opencv-haar-classifier.html

Notice his training output: it is similar to mine, but each of his tables shows several rows, while mine shows only one row per table. Even so, my training generates all the files correctly with this configuration.

Am I doing this right? Could it be the OpenCV version? Or are the settings wrong in my training?

I look forward to your replies!

  • The tutorial you are following already explains this, right after the "results": "Each row represents a feature that is being trained and contains some output about its HitRatio and FalseAlarm ratio. If a training stage only selects a few features (e.g. N = 2) then its possible something is wrong with your training data." In other words: 1) those rows are not results; they are the (Haar) features found to be useful for classifying the data;

    2) If you are getting only one of them, it is likely that "there is something wrong" with your data. I would guess the images are hardly representative of the object in the real world or, more likely, that they have too little variation. Have you captured images in different orientations, with different backgrounds, etc.? Try to improve your input data for training.
  • I see, thank you for the comment and the help! The objects are actually documents, more specifically the back side of the RG (Brazilian ID card). I have several RGs in the input data at 100px by 64px, because if I increase the size a little I run out of memory. I am training with 119 RG faces against 236 negative images. Do you think I should enlarge the positive images, say to 110px by 640px?

  • Honestly, I don't know how to answer that. This kind of process is trial and error. Do you have RG images in several different rotations? That is what I meant by variability: different orientations and so on, not necessarily more images, although having more images does help.

  • Another thing: if you want to detect RGs, aren't there easier (and faster) approaches? Have you tried thresholding? I may be wrong, but depending on what you want to do with the RGs (and on how you can capture the images), you may not even need Haar.

  • I haven't actually tried thresholding; what would that be exactly? Just to check that we have the same thing in mind: I believe thresholding would be taking the image, converting it to grayscale, and then applying a threshold (binarization) in order to detect the rectangle of the RG? As for the other points: I receive scanned images, which can contain the RG front and back, or even the RG together with the CPF and other documents, and I'm keeping them horizontal, you know. All of them come scanned against the white background of the sulfite sheet or the white background of the scanner lid.

  • In your opinion, what would be more practical, or even more accurate, for document recognition? Here at the company we studied the possibility of using TensorFlow or OpenCV for this project. The idea was that, with object recognition, the detected document would be well framed for a crop once recognized; obviously we would also rotate it to horizontal after recognition and pass it on to the second analysis stage, which is OCR. But first we need this first step, you know.

  • We need more or less this: https://stackoverflow.com/questions/42056592/how-to-detect-document-from-a-picture-in-opencv It's a pity it didn't work here. I managed to run that example, but here the background is white, you know, and the largest rectangle detected ends up being the edges of the larger sheet. In that example the face is on a colored background, which made detecting the object easier. We need this for RG, CPF and, in short, our pilot is the RG.

No answers

