bge.texture.FilterColor, FilterLevel, ImageMix - how do they work?

(VegetableJuiceF) #3

Sure,

but wow, so many things that you hit on the mark.
Yeah a generator is what I used also :D.

I would NOT worry about the shading, as that would be the organic part, something I would desire.

I heard that the texture refresh() can be threaded, any ideas?
Also, how would I couple C into this?

I have done pathfinding in C for Blender, but that was through a pipe and stdio.
I fear that the bulk of the lag comes from just asking for the data, not from iterating over two sets of it.

Attachments

hyperVision_v4.blend (717 KB)

(youle) #4

Hello! I’m not sure I understand what you are trying to do, but you can filter colors via a GLSL shader:

http://www.pasteall.org/blend/34520

Sorry if I didn’t understand the problem.

if(color.r == 0.0) discard;

(if your grass is pure green)

(agoose77) #5

So, I made a small attempt towards this.

It essentially does the following things:

  • For each pixel in both the mask and the source:
      1. Apply a bluescreen filter to the mask pixel.
      2. Check that the mask alpha (from the bluescreen filter result) is not 0; if it is 0 (transparent), skip this iteration.
      3. Apply a colour filter to the source pixel (for the player colour).
      4. Check whether the colour alpha (from the colour filter) is 0:
           • If 0, it’s transparent, so it counts as a masked pixel.
           • Else, it’s not transparent, so it counts as a visible pixel.
  • The proportion of visible to total pixels is the visibility factor.

Unfortunately, my demo isn’t quite perfect. I haven’t exposed the colour filter threshold values to the Python API, so the player colour filter is probably not filtering very well, requiring some scaling of the visibility value. The shading of the player affects the filtering, which in turn affects the quality of the result. As the player gets further from the camera, you will get more discrete intervals of visibility.

The idea for the demo is good, though. If we could render separate passes for the geometry and the player, we could create a geometry mask, which would then be used instead of the chromakey to remove the player from the render (step 1.3), and this would not be affected by shading.

The source code is included, using Boost::Python (which, for size reasons (~350 MB), is not included).
You could write a custom filter that would probably run faster.

You could also do this in GLSL provided you pass in a texture. This would probably be faster as it can run on the GPU, depending upon how fast your GPU / CPU is, and how much work they’re both doing.

If you wrote this directly as a VideoTexture filter, it would also run faster, because one of the big overheads is likely the conversion in python -> boost.

This demo will only work in 64-bit Blender on Windows (because that is what I have compiled for; compiling for 32-bit requires me to rebuild Boost, a real PITA). If you build it yourself, you can run it on any platform.

NB: in this particular build and source code, my variables are the wrong way round - I’m filtering the visible pixels rather than the masked ones. To compensate, I inverted the result (1 - visible) in Python.

test_visibility.zip (224 KB)

(VegetableJuiceF) #6

yeah, you didn’t :slight_smile:
Most likely my fault.

I will use the image later for calculations, it’s not for visuals.
And getting the result back out of a shader means looping back through a camera recording a plane.

(agoose77) #7

The blue-screen filter does the same as my code: it takes the vector difference (length squared) of the RGB terms from a colour (blue, or the player colour) and returns an alpha value.
We do one iteration over both data sets. First we check whether the mask has a player pixel at a location by running a blue-screen filter on that pixel. If there is a player pixel there, we then apply a colour filter (same code as the blue screen) to work out whether that pixel is covered by foliage or not. If it’s covered we increment one counter, otherwise the other. The total number of player pixels we looked at is the sum of the two counters, and the visibility factor is the number of non-obscured pixels over that sum.
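In pure Python, the per-pixel distance test described above might look like this. This is a sketch: the function name is mine, and the default `limit_sq` of 64² matches the constant used in the code later in the thread, but treat the exact cutoff as an assumption:

```python
def filter_alpha(r, g, b, key, limit_sq=64 ** 2):
    """Squared RGB distance from `key`: pixels close to the key colour
    become transparent (alpha 0); everything else stays opaque (255)."""
    dr = key[0] - r
    dg = key[1] - g
    db = key[2] - b
    d_sq = dr * dr + dg * dg + db * db
    return 0 if d_sq < limit_sq else 255

# Blue-screen: key is pure blue, so a background pixel goes transparent
print(filter_alpha(10, 5, 250, key=(0, 0, 255)))   # 0
# A red player pixel is far from blue, so it stays opaque
print(filter_alpha(200, 30, 40, key=(0, 0, 255)))  # 255
```

The same function serves as both the blue-screen filter (key = pure blue) and the player-colour filter (key = player colour), which is the "same code" point made above.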

(VegetableJuiceF) #8

Bluescreen does the difference from a given color; yours measures the difference between the two textures.
Okay…

What does FilterLevel do?

(agoose77) #9

Blue screen will act as a chroma key, although it can act on different colours. It makes those colours transparent.
I assume that the levels work like image-editor levels: http://docs.gimp.org/en/gimp-tool-levels.html
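If that assumption holds, a per-channel levels remap would look something like this. This is a generic image-editor levels sketch, not taken from the BGE source, so the behaviour at the boundaries (clamping, degenerate min/max) is an assumption:

```python
def apply_levels(value, in_min, in_max):
    """Generic image-editor levels remap: stretch [in_min, in_max] to
    the full 0..255 range, clamping values that fall outside it."""
    if in_max <= in_min:
        return 0  # degenerate range; the real filter may behave differently
    out = (value - in_min) * 255 // (in_max - in_min)
    return max(0, min(255, out))

print(apply_levels(64, 64, 192))   # 0   (at the low bound)
print(apply_levels(192, 64, 192))  # 255 (at the high bound)
print(apply_levels(32, 64, 192))   # 0   (below the range, clamped)
```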

Here’s a pure-python version of the filter. This particular example cuts some corners as we don’t really care about how faded the pixel is, so we just use the minimum filter values as the boundary between visible / invisible.

I noticed that the Python version correctly produced a result between 0 and 1, and was more affected by shading. I then realised that there was indeed a mistake in my C++ code - I was forgetting to increment the iterators after each iteration.

This is the new C++ code, with a slightly faster result (not noticeable)
HyperVision2.zip (223 KB)


def get_visibility(mask, source, x, y, z, tmi):
    """Count player pixels in `mask` that remain visible in `source`.

    (x, y, z) is the player colour; tmi is its distance threshold."""
    masked = 0
    visible = 0

    min_sq = tmi ** 2

    # Walk both flat RGBA buffers four bytes (one pixel) at a time
    for i in range(0, len(mask), 4):
        mr, mg, mb, ma = mask[i: i + 4]

        # Blue-screen filter: distance squared from pure blue (0, 0, 255)
        dmr = 0 - mr
        dmg = 0 - mg
        dmb = 255 - mb

        d_sq_m = dmr ** 2 + dmg ** 2 + dmb ** 2
        if d_sq_m >= 4096:  # not background, so this is a player pixel
            sr, sg, sb, sa = source[i: i + 4]

            # Colour filter: distance squared from the player colour
            dsr = x - sr
            dsg = y - sg
            dsb = z - sb

            d_sq_s = dsr ** 2 + dsg ** 2 + dsb ** 2
            if d_sq_s >= min_sq:
                masked += 1
            else:
                visible += 1

    total = masked + visible
    # Guard against division by zero when no player pixels are found
    return visible / total if total else 0.0

I was in a rush so the variables aren’t named well.

All the arguments are identical to my C++ version.

“tmi” is the threshold for the length of the vector difference between the pixel colour and the filter colour. It is only used for the player filtering, because the bluescreen looks for pure blue (0, 0, 255), the colour of the non-shaded world background in the BGE.

Have a go with the pure Python version, and try disabling the shading on the player :wink:

(agoose77) #10

I’m not sure what you’re asking for, then. The examples that I posted determine the visibility of the player as a value from 0.0 to 1.0. The means by which they do that is pixel counting: visible vs obscured pixels.
That’s essentially what you asked for.

(HG1) #11

FilterBGR24():

  • Switches the red and blue color channels. Red becomes blue; blue becomes red.

FilterBlueScreen():

  • Creates a bluescreen effect. Blue pixels are rendered transparent.

FilterColor():

  • Applies a custom color filter to the image.

  • FilterColor().matrix

4×5 integer matrix that contains the parameters for the color calculation.
The matrix provides a general color transformation filter.
Resulting color values are calculated using the 4×5 matrix.
Every matrix row contains the parameters for one color channel’s calculation.
Order of rows: red, green, blue, alpha.
The parameters in a row define the coefficients for the source color channels, in the same order.

FilterGray():

  • Converts the image to grayscale.

FilterLevel():

  • Filter for levels calculations.

  • FilterLevel().levels

levels matrix [4] (min, max)

FilterNormal():

  • Converts the image to a normal map.

FilterRGB24():

  • Resets the color channels. Red stays red, green stays green, blue stays blue.

FilterRGBA32():

  • Uses the Blender 3D color sliders to set the filter’s red, green, blue and alpha channels.
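My reading of the VideoTexture C++ source is that each output channel is a dot product of a matrix row with the source channels plus a constant, scaled down by 256. Both the /256 scaling and the clamping here are assumptions from that reading, not something the docs state. A pure-Python sketch:

```python
def apply_color_matrix(pixel, matrix):
    """Sketch of the FilterColor 4x5 matrix, as I read the C++ source:
    out = clamp((row[0]*r + row[1]*g + row[2]*b + row[3]*a + row[4]) >> 8).
    Under this reading, 256 acts as a coefficient of 1.0 and the fifth
    column is a constant offset in 1/256 units (assumption)."""
    r, g, b, a = pixel
    out = []
    for row in matrix:
        v = row[0] * r + row[1] * g + row[2] * b + row[3] * a + row[4]
        out.append(min(255, max(0, v) >> 8))
    return tuple(out)

# Identity: 256 on the diagonal leaves the pixel unchanged
identity = [[256, 0, 0, 0, 0],
            [0, 256, 0, 0, 0],
            [0, 0, 256, 0, 0],
            [0, 0, 0, 256, 0]]
print(apply_color_matrix((10, 20, 30, 40), identity))  # (10, 20, 30, 40)
```

Swapping the red and blue rows of the identity matrix would reproduce FilterBGR24, which is a handy sanity check for the row ordering.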

(VegetableJuiceF) #12

The documentation doesn’t explain it well enough,
which is why I am asking for help here.

For example, it doesn’t say anything about what happens when a pixel is out of the min/max range, or when min and max are both zero.

I don’t know what I should put in the filter color matrix to get the result I need.

I need examples.

(firefox.jco) #13

You are trying to figure out some way to create a stealth system for games? :smiley:

(agoose77) #14

Could you explain what you’re trying to do, and why my blend file doesn’t do that?

(VegetableJuiceF) #15

I wish to apologize for testing your patience for so long.
I don’t know how or why I can be so hard to work with.

I never meant for this topic to go off-topic, but it did.
DAMN you, GIFs…

bge.texture.FilterColor, FilterLevel, ImageMix - how do they work?

  1. ImageMix is the easiest. I know how to use it, but I am a bit clueless about what it does when the colors exceed the bounds (0 to 255). I suppose 0 - 1 = 255?
  2. FilterLevel - this does something that is really useful for me, but I have no idea how it did it.
  3. FilterColor - no idea what any of that API means or how to use it.

I was hoping I had made it clear enough that this thread is about what those filters do, how they handle border cases and how they work.
I can read the API docs, but they were written with so little thought and so few examples that they were just gibberish to me.
Also, this thread is probably the first one where this has been asked.

Now for the downfall:
I understand that providing a reason for why one would be interested in this is crucial, but having a distracting picture turned out to be counterproductive.
It made you ask for the blend, which I didn’t provide in the first place as it had little to do with the questions, as you now say too.
Maybe it was because so many people have recently asked for “natural AI vision” and the like that you assumed it was that.
Or it was my fault for explaining myself so badly, adding text that would have been better left out.

Offtopic, what I am trying to do:
The biggest problem with the videotexture module and AI vision is not the counting but the logic spike from accessing the data.
That is why I want to do all the math inside the engine using the features present.
In set-theory terms, it is possible to produce an image whose pixel values are the difference of the two views.
Then I would only need to access that single image and do a simple count, not calculations.
I even tried to find the image in RAM, since it gave a memory pointer, but found only logic bricks and gibberish.

What I ask:
On the topic of the filters, give better explanations than those in the documentation.
I need to convert zero-alpha pixels (x, x, x, 0) to (0, 0, 0, 0).
Later I will access those pixels, so no GLSL material shader, please.

(agoose77) #16

I understand: you want to learn more about the filters, and that’s understandable. I was under the impression that you wanted to know this in order to build a visibility calculation tool, and it seems that you do.

I would say that using videotexture for this isn’t going to be as fast as writing a hand-rolled solution (in C++), if you avoid the overhead that I have due to conversion of large lists. The video texture method will iterate over the entire image, whilst you actually only need to perform additional calculations for the visible pixels of the mask - so you can improve performance by calculating only when necessary.

In terms of what the filters each do, it’s easiest to look at the C++ source code to be honest. They were written quite a while ago, and few use the more advanced features.

No need to apologise. I’m simply curious :slight_smile:

(VegetableJuiceF) #17

While I download the 10 GB of data to actually try to edit the videotexture source,
can someone give an example of how to use FilterColor?

FilterColor():

  • Applies a custom color filter to the image.
  • FilterColor().matrix

4×5 integer matrix that contains parameters for color calculation.
Matrix provides general color transformation filter.
Resulting color values are calculated using matrix 4×5.
Every matrix row contains parameters for one color channel calculation.
Order of rows: red, green, blue, alpha.

Parameters in row define coefficients for source color channels in same order.

I know that much linear algebra, but I don’t get how that explanation applies to a pixel.

EDIT THIS PLEASE

  1. What is the fifth column for?
  2. What is the fifth scalar/number in the resulting color matrix/vector?
  3. Is a row or column replaced by the pixel the filter is currently over?
  4. Is there a matrix that converts zero-alpha pixels (x, x, x, 0) to (0, 0, 0, 0)
     and leaves non-zero-alpha pixels (x, x, x, X) unchanged?
(HG1) #18

I made an example. A bit more general documentation is inside the file.
I hope it will help you.

Attachments

Filter.zip (382 KB)

(VegetableJuiceF) #19

This is a huge help!:yes:

Now I get it: the fifth column doesn’t work.
If it is a 4-by-4 matrix, then that is easy to understand.

Also, this is the saddest part of today.
It means that I can’t multiply the pixels by their alpha,
which means I can’t filter out the pixels I need to.

(youle) #20

Thank you HG1! That’s very interesting! It would be very cool to add this to the documentation. (Thank you also for adding the frame number to the name of the screenshot in the makeScreenshot function!)

(HG1) #21

I made a mistake: the last column is working. Mona_Lisa.jpg is not a good test picture.
I have made a new test image.
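Under the same reading of the C++ source (output = clamp((coefficients · rgba + offset) >> 8), which is my assumption, not official documentation), a fifth-column entry of 50 * 256 should add a flat 50 to a channel:

```python
# Red row with a fifth-column offset of 50 * 256 (hypothetical values)
row = [256, 0, 0, 0, 50 * 256]
r, g, b, a = 100, 0, 0, 255
red_out = min(255, max(0, row[0] * r + row[1] * g + row[2] * b
                          + row[3] * a + row[4]) >> 8)
print(red_out)  # 150: the original 100 plus the 50 offset
```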

Attachments


(VegetableJuiceF) #22

This brings me back to square one…

What does the fifth W channel do then?


Offtopic:
This is a 2 by 6 pixel picture ->:
:<–

It is easier to print out the whole image to see what the values changed into.
I tried playing around and looking at the values, but I have no idea what is happening.


# convenient to use
for p in range(0, len(tex.source.image[:]), 4):
    s = '{0}\t{1}\t{2}\t{3}'.format(*tex.source.image[p:p + 4])
    print(s)