Here is a simple method we use in an Xbox 360 (so a realtime) engine to apply Photoshop (or any 2D paint program, for example GIMP) color filters.
Look at this image: http://etyekfilm.hu/3d_original.tga
This is a color table that does not contain every color, but with interpolation it is possible to reconstruct a full color table from it.
So the method:
- load this image in Gimp or Photoshop
- put some color filters on it
- save it
- use it on an image in your program (for example Blender, but we use it in a realtime engine too) to change the original colors to the colors of this color table
Note: I think our programmers load this color table in our realtime engine as a 3D texture, because the hardware performs correct interpolation when you use a 3D texture as a color table like this. So I think this may be important.
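To make the steps above concrete, here is a rough Python sketch of building such an identity color-table image. The 16×16×16 resolution and the tile layout are my assumptions; the linked image may use a different size or arrangement:

```python
# Rough sketch: generate an identity color table as a 2D image strip.
# Assumes a 16x16x16 LUT laid out as 16 tiles of 16x16 pixels side by
# side (one tile per blue level); the linked image may differ.
import numpy as np

N = 16  # samples per channel (assumption)
lut = np.zeros((N, N * N, 3), dtype=np.uint8)
for b in range(N):          # one tile per blue level
    for g in range(N):      # rows within a tile: green
        for r in range(N):  # columns within a tile: red
            lut[g, b * N + r] = [r * 255 // (N - 1),
                                 g * 255 // (N - 1),
                                 b * 255 // (N - 1)]
# Save `lut` as a PNG/TGA, color-grade it in GIMP/Photoshop,
# then load it back and use it as the color table in the engine.
```

Applied unmodified, this table maps every color to itself; any adjustment you paint over it becomes the grade.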
it seems interesting but I can’t imagine how to use it in blender or bge… do you have some samples?
which effects do you apply this way? i’m asking, because lots of those photoshop effects’ algorithms are well known and could be applied easily directly.
or is there another advantage i’m not aware of (like speed)?
and btw: i thought filters are operations that take surrounding pixels into account (hence the filter size)? because in that case i’m pretty sure you can’t sample a filter that way. it would only work for per-pixel effects like brightness/contrast adjustment, for example.
atti, ok maybe I used the wrong word: this method is color adjustment, not filtering
of course everything can be implemented, but with this method we can use any color adjustment that already exists in PS (or Gimp), and more importantly: we can import the settings that we set up in PS or Gimp
I’m a big fan of GIMP and your idea is quite professional. Hope it gets implemented in the near future, endi.
I’m having a hard time understanding what you mean, endi. Could you post a screenshot or something more visual to show us what this technique does?
The idea is that you have a 3-dimensional texture, where each component of the input image (r, g, b) maps to a coordinate in the texture (u, v, w). This way any color-to-color mapping that you can do in your image manipulation program is possible.
Using the original color map from endi’s post directly would result in an identical image (except for some minor quality loss). But with any color adjustment applied to the color-map texture, that same adjustment would then apply to the final image.
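A minimal sketch of that (r, g, b) → (u, v, w) lookup in Python, assuming the table is stored as a NumPy array of shape (N, N, N, 3) with values in [0, 1]; the trilinear interpolation mimics what the hardware does when sampling a 3D texture with linear filtering:

```python
# Sketch: apply a 3D color LUT to an image with trilinear interpolation.
# Assumes image is a float array (..., 3) in [0, 1] and lut has shape
# (N, N, N, 3) -- these layouts are my assumption, not from the thread.
import numpy as np

def apply_lut(image, lut):
    n = lut.shape[0]
    pos = image * (n - 1)             # map [0, 1] to LUT grid coordinates
    lo = np.floor(pos).astype(int)    # lower corner of the enclosing cell
    hi = np.minimum(lo + 1, n - 1)    # upper corner, clamped at the edge
    t = pos - lo                      # fractional position inside the cell
    out = np.zeros_like(image)
    # blend the 8 surrounding LUT entries (trilinear interpolation)
    for dr in (0, 1):
        for dg in (0, 1):
            for db in (0, 1):
                r = np.where(dr, hi[..., 0], lo[..., 0])
                g = np.where(dg, hi[..., 1], lo[..., 1])
                b = np.where(db, hi[..., 2], lo[..., 2])
                w = (np.where(dr, t[..., 0], 1 - t[..., 0])
                     * np.where(dg, t[..., 1], 1 - t[..., 1])
                     * np.where(db, t[..., 2], 1 - t[..., 2]))
                out += w[..., None] * lut[r, g, b]
    return out
```

With an identity LUT this returns the input unchanged, which matches the point above: the grade lives entirely in the edited table.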
It would be pretty simple to do as a GLSL shader, I guess. I tried doing it using composite nodes but failed for some reason; I’m sure it’s possible with a relatively simple node setup though.
This is actually a pretty cool (yet simple) idea… Have no idea if it’s commonly used, but I’ve never heard of it before.
do you have an example blend or something?
Yeah, give us some example file endi! But please don’t post .3ds again!
This is the best I can manage with nodes. The green channel is garbled, probably due to rounding errors (I tried subtracting 0.5 before rounding, but it didn’t help). Of course, the interpolation would be hard to control using nodes as well. Without interpolation you will lose color depth…
With a proper glsl shader it would probably be useful in real time applications though.
It’s a very common technique in video games…
It enables you to take a screenshot of your game, colour correct it however you like, then apply that same colour correction to the colour-table image (in Photoshop this is easy with adjustment layers)
The advantage for realtime is that it’s a fixed cost… you could do a curves adjustment, an invert, a tint, etc. etc.
you can be subtle or radical… create “night vision” effects or a “bleach bypass” or whatever…
it’s just a CLUT; you can interpolate between two of them for radical effects.
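Interpolating between two CLUTs is just a per-entry blend, so a transition (say, fading into "night vision") costs nothing extra per pixel. A sketch, assuming both tables are float NumPy arrays of the same shape:

```python
# Sketch: blend two color tables for a transition effect.
# Assumes lut_a and lut_b are float arrays of identical shape,
# e.g. (N, N, N, 3) -- shapes and names are my assumption.
import numpy as np

def blend_luts(lut_a, lut_b, factor):
    """Linearly interpolate between two CLUTs; factor in [0, 1]."""
    return (1.0 - factor) * lut_a + factor * lut_b
```

You animate `factor` over time and keep applying the blended table; the per-pixel lookup cost stays fixed no matter how extreme the two grades are.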
If that were a VSE strip effect (Using GLSL) then you could set up the effect in Nodes-Non-realtime, apply the result to video for realtime playback. That would be very cool for video grading and steps around rendering in the nodes window.