Happy New Year everyone!
So: I’m working on synthetic data generation for object detection tasks. In short, we want to use Blender renderings to train a neural network to detect objects in images. For this we need two things: renderings containing the objects of interest, and annotations describing each object’s position in the image (the bounding boxes).
I have a working script that loads models and produces renderings and annotations just fine.
Now I’d like to add more information to the annotations. Mainly, I’d like to know if an object is fully visible or partially occluded by another one. For example, in the image below, the blue bunny is partially occluded by the orange one.
Furthermore, I’d like to know how much occlusion is going on. That is, for each bunny, the percentage of its surface that is visible.
In order to solve the problem, I thought of using the index map obtained by giving each object a different IndexOB (an idea taken from this). Following that link I can get this image:
If you squint, you can see that each bunny has a different color.
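For context, this is roughly how I assign the indices. The only part that touches Blender is commented out; the index-assignment helper itself is plain Python (the `"bunny"` name filter and the `"ViewLayer"` name are assumptions about my scene, not anything from the link):

```python
def assign_indices(names):
    """Give each object a unique pass index for the IndexOB pass.

    Index 0 is the background in the IndexOB pass, so numbering starts at 1.
    Sorting makes the mapping deterministic across runs.
    """
    return {name: i for i, name in enumerate(sorted(names), start=1)}

# Inside Blender this would be applied along these lines:
# import bpy
# names = [o.name for o in bpy.data.objects if o.name.startswith("bunny")]
# for name, idx in assign_indices(names).items():
#     bpy.data.objects[name].pass_index = idx
# bpy.context.scene.view_layers["ViewLayer"].use_pass_object_index = True
```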
Now my questions are:

1. How do I get very different colors in the map (as in the link)? This would be useful for debugging.
2. Is there a way of quantifying the amount of occluded area automatically?
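For question 1, one idea I had (just a sketch, done outside Blender rather than in the compositor) is to postprocess the IndexOB pass and map each index to a well-separated hue, e.g. with golden-ratio hue spacing. This assumes the pass has already been loaded as a 2D integer array; `index_map` is my name for it, not anything Blender-specific:

```python
import colorsys
import numpy as np

def colorize_indices(index_map):
    """Map each object index to a visually distinct RGB color.

    index_map: 2D integer array (0 = background, as in the IndexOB pass).
    Returns an (H, W, 3) uint8 image for easy debugging.
    """
    out = np.zeros(index_map.shape + (3,), dtype=np.uint8)
    for idx in np.unique(index_map):
        if idx == 0:
            continue  # leave the background black
        # golden-ratio spacing keeps consecutive indices far apart on the hue wheel
        hue = (idx * 0.618033988749895) % 1.0
        r, g, b = colorsys.hsv_to_rgb(hue, 0.9, 1.0)
        out[index_map == idx] = (int(r * 255), int(g * 255), int(b * 255))
    return out
```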
For point 2, my solution would be to do one rendering with all the objects (like the one above), and then one additional rendering per object, with only that object visible. Finally I’d compare the full rendering against each single-object rendering and compute the visible area for each object. This seems a bit too complicated, though.
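The comparison step I have in mind would look something like this (a sketch under the assumption that the full-scene IndexOB pass and a boolean mask from each single-object render have been loaded as 2D arrays; the names are mine):

```python
import numpy as np

def visibility_pct(full_index_map, solo_mask, obj_index):
    """Fraction of an object's projected area still visible in the full scene.

    full_index_map: 2D int array, IndexOB pass of the full render.
    solo_mask: 2D bool array, pixels the object covers when rendered alone.
    obj_index: the pass index assigned to that object.
    """
    total = solo_mask.sum()  # pixels the object covers with nothing in front
    if total == 0:
        return 0.0  # object entirely outside the frame
    visible = (full_index_map == obj_index).sum()  # pixels it actually wins
    return visible / total
```

So a bunny that is half covered by another one would come out at 0.5. It works, but it needs N+1 renders per scene, which is why I suspect there is a smarter way.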