How to identify heat regions in thermal images?

I’m developing a project where I need to process photos taken with a thermal camera. The idea is to try to identify fires in these images. I would like to know which techniques I can use for this purpose, or any topic or reference that could help me.

3 answers

22

Although I have decided to answer this question, I believe that in computer vision and image processing questions the asker should show how they have tried to solve the problem: list or describe the techniques they have applied, or show the code they wrote. That makes the question specific and avoids the risk of it being closed.

To keep this post from getting too long, this answer assumes the reader has a basic understanding of digital image processing.

The image below was acquired by a thermal imaging device being used by firefighters during a training exercise.

The capture device has a sensor that indicates the approximate temperature of the central region of the image. The green bar on the right side of the image shows that the temperature of the red region is above 900 degrees. We can therefore assume that hotter regions in the image have stronger colors.

The algorithm I describe here uses the following processing pipeline:

  • Thresholding
  • Contour detection

The problem can be solved by applying a threshold to isolate pixels of a certain color tone, in this case red. This operation produces the following binary image:

Next, identifying the contours of the objects in the binary image makes it possible to locate the hottest areas. The result of this operation is an array with the contours of the objects:

(image: contours of the detected hot regions)

This looks exactly like what you’re looking for. We can also use this data to highlight, in the original image, the area detected by the algorithm:

This approach is a simple way of solving the problem. Note that to get a more robust solution you should test it with a larger set of images, and be prepared to make adjustments and improve it.

There is a programming library that implements the techniques mentioned above: OpenCV, the largest open-source computer vision library. It is cross-platform and has APIs for C, C++, Java and Python.

Good luck!

  • 2

Ah, you can use OpenCV with Node.js too! :) https://github.com/peterbraden/node-opencv

10

@karlphillip has already provided an excellent answer, but I wanted to complement it with some remarks and suggestions, and the comment field was too small, so I decided to add my own answer.

The idea of using thresholding makes perfect sense, especially for examples like the one in Karl’s answer. However, the asker (@user6357) did not provide even a single picture from their problem domain, so it is difficult to give really concrete suggestions. I unfortunately agree with Karl that the question is rather poorly posed. But regardless of its quality, I think the knowledge collected here can help the community, so I preferred to contribute rather than simply flag the question.

Assuming the images captured by the camera do not have a background as clearly distinguishable from the red as the example in that answer, thresholding alone may not give satisfactory results.

With that in mind, I can imagine two alternatives:

1. Combine thresholding (or any other segmentation method) with a preprocessing step that considers only the areas of the image where there is movement.

If you have control over capturing images from the camera (something you also didn’t mention in the question) and can take two images in sequence, you can compare them to find the differences: a simple subtraction of the pixel values between the two images produces a new image whose pixels are non-zero wherever movement occurred. You mention "fires" in your question, and flickering flames naturally produce movement. If that is the case, it should be possible to pre-segment the image to only the areas with movement and then apply the pipeline Karl suggested.

There are other solutions with more complex classifiers that combine movement with texture analysis for fire detection. That article is a great example that may be useful to you.

2. Use a more complex classifier.

Suppose you have no image sequence, that is, only a single image on which to perform the detection, and that background variation makes thresholding insufficient. In that case, you can try more robust classifiers.

Since you are using C++ (and OpenCV has already been suggested, which is a fantastic library), I suggest using a Cascade Classifier. You will need to train your own classifier with positive examples (where there are fires) and negative examples (where there are none), and this tutorial is very good for that (it detects bananas in images, but the principle is the same: just use the right examples, hehehe).

This classifier works in a very clever way: during training it "learns", from the sample images, the light-intensity values of different types of features (Haar-like features) that indicate whether there is a chance the object of interest exists in a given area of the image. Since these features are easily scaled, it is easy to search the image for the same object at different sizes (scales), and in practice that is what the algorithm does: it searches windows of gradually increasing size until the classifier reports the occurrence of an object in one of the search windows.

The features that OpenCV uses in its existing implementations are the ones in the following image, which I believe should be sufficient for your type of problem.

(image: the Haar-like feature templates used by OpenCV)

The description on the OpenCV website is pretty good, but you don’t have to fully understand the idea behind this classifier to use it. The quality of detection will depend only on the quality of the examples you provide. In the tutorial I cited, the banana examples are all horizontal, so the resulting classifier may not be very robust for photos of upright bananas. Similar variations can influence your result, so be aware that you may have to retrain with more examples.

  • 1

Wow, nice. Thanks for sharing your ideas with us! +1

2

I have no experience on the subject, but I know that OpenCV is widely used for image processing. In the case of a thermal camera, I imagine hotter-than-normal areas are highlighted in the images with distinctive colors, so processing the colors present in the images with OpenCV seems like an option.

Here is a reference on using OpenCV for fire and smoke detection (in this case it seems to work even with ordinary cameras): FIRE AND SMOKE DETECTION BASED ON LOW COST CAMERA
