Area effects with Material nodes (possible?)

As far as I gathered from the node API documentation and examples (see e.g. [1]), the nodes only have access to one data point (e.g. a colour pixel) at a time, without seeing the surrounding points.
Is this right? And if not, how could one access the rest of the image/data?

Cheers,
Werner

PS: With “area effects” I mean something like blurring, edge detection, dilate/erode and similar awesome stuff :wink:

[1] http://wiki.blender.org/index.php/Resources/PyNode_Cookbook/Color
http://wiki.blender.org/index.php/BlenderDev/PyNodes/API
The C code seems to work exactly the same way.

One possible idea to try out would be a pass-based solution. On the first pass (render) you store the colour information of each rendered pixel (shi.pixel) and pickle it. On the second pass you use this information. The downside of this approach is that it is quite limited and you have to render twice. Pickling could be made optional though (set a pickle flag and add some simple logic to check whether the data has already been pickled).

If you decide to try this approach, you may find it useful to pickle other information besides color and pixel information as well.

Hm, I’m confused now … this (passes) works with material nodes? How would one access that data inside a (custom) Material node?

Thanks,
Werner

I used pass as a metaphor. To access adjacency data of some sort, you need to create this data before you can use it. Hence the name pass.

The first pass is needed to generate the adjacency information; the second one uses it. Here’s some pseudocode that tries to explain it better:


class Node:
    def __init__(self):
        # init sockets here; could use one to define whether or not
        # to generate the pickled data
        self.data = load_pickled_data()  # returns None if nothing is stored yet

    def __call__(self):
        if self.data:
            # do stuff with the adjacency data as you like
            pass
        else:
            # pickle/store colour etc. data somehow; probably needs to be done
            # in tiny chunks that are assembled later (a render script link
            # could easily be used for this, for instance)
            pass

        # do some other stuff as needed (set outputs etc.)


It is a bit problematic that there is no “finalize” call for a node. This would be useful, as the whole pickling procedure could be done in it. Now it has to be done piecemeal, which is not particularly nice but should work.

Anyway, the idea is just to store some data for use in the future. If you are not familiar with pickling, check out http://docs.python.org/lib/module-pickle.html . In other words, pickling just means object serialization.
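
As a minimal sketch (untested; the file name and the data layout are just made up for illustration), pickling a dictionary of pixel data and loading it back could look like this:


import pickle

# hypothetical per-pixel data: (x, y) -> (r, g, b, a)
pixels = {(10, 20): (0.5, 0.3, 0.1, 1.0)}

# first pass: serialize the data to disk
f = open('/tmp/node_pixels.pickle', 'wb')
pickle.dump(pixels, f)
f.close()

# second pass: load it back (if the file exists)
f = open('/tmp/node_pixels.pickle', 'rb')
pixels = pickle.load(f)
f.close()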

Hm, that sounds like an interesting approach … although I’m not sure how one can check whether all the data from the first pass has already been stored.
Because if pickling works pixel after pixel, then all pixels would need to be finished for the second pass to work correctly, no?

Also, how can one check where the code currently is during the second pass? (So one can find the corresponding data from the first pass.)
I know that there are texture coordinates (though I have no clue yet what exactly is stored there), but is there something more general that lets one get at the surrounding pixels?

Werner

Essentially all you need to do is dump the data somewhere (for each pixel) and load it into a suitable data structure if it’s found. You can serialize the data of each pixel to /tmp/somedir, for instance. If that directory contains serialized data (figure out some nice naming scheme or set some standard for how to do this), you can load it in __init__.
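
A rough sketch of what that could look like (untested; the directory and the x_y naming scheme are just one possible convention, not anything the API prescribes):


import os
import pickle

DATA_DIR = '/tmp/nodedata'  # hypothetical storage location

def store_pixel(x, y, color):
    # first pass: one small file per pixel, named after its coordinates
    if not os.path.isdir(DATA_DIR):
        os.makedirs(DATA_DIR)
    f = open(os.path.join(DATA_DIR, '%d_%d.pickle' % (x, y)), 'wb')
    pickle.dump(color, f)
    f.close()

def load_pickled_data():
    # __init__: if the directory holds serialized data, load all of it
    data = {}
    if os.path.isdir(DATA_DIR):
        for name in os.listdir(DATA_DIR):
            x, y = [int(v) for v in name.split('.')[0].split('_')]
            f = open(os.path.join(DATA_DIR, name), 'rb')
            data[(x, y)] = pickle.load(f)
            f.close()
    return data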

After loading you can analyze the data as needed. This is where you could handle simple 2D adjacencies based on pixels, for instance. One way would be to write a class that contains a “sparse” implementation of an image. This means that, at its simplest, all it does is retain references to known pixels and their colours. It could look something like this:


class Pixel: # container for a single pixel
    def __init__(self, x, y, color):
        self.x = x
        self.y = y
        self.color = color

class SparseImage: # if a pixel is not found it's black? hence black pixels could be skipped when adding
    def __init__(self):
        self.pixels = []

    def add_pixel(self, pixel):
        # could assert that the type is Pixel; also check if the same pixel is added twice?
        self.pixels.append(pixel)

    def get_pixel_adjacencies(self, x, y): # x, y are pixel locations
        # check the stored pixels and return the adjacent ones
        # (could return them in a 2d array, i.e. a list of lists)
        return [p for p in self.pixels
                if abs(p.x - x) <= 1 and abs(p.y - y) <= 1 and (p.x, p.y) != (x, y)]

shi.pixel is extremely useful in this case. Use it to construct the SparseImage shown above.
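
For example, the first pass inside __call__ could collect the pixels roughly like this (a sketch only; self.shi.pixel is assumed to hold the x/y coordinates of the shaded pixel, as it is used later in this thread, and the colour source is just a placeholder):


# sketch of the first pass inside __call__
x, y = self.shi.pixel[0], self.shi.pixel[1]
col = self.input.color      # placeholder: wherever the node reads its colour from
self.sparse_image.add_pixel(Pixel(x, y, col))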

If you get the idea working, further work would include thinking about it more in 3D (store the surface normal and camera vector, etc.).

Many thanks for the hints, I’ll definitely look into this (once my time schedule allows it) :slight_smile:

Edit: Although I’m still not entirely sure how one can make sure that all the data from the first pass has been written (since this needs to be checked in the second pass) … is there any end-of-data or size information somewhere?

Werner

Here’s what I got so far:
http://pastebin.com/f4f3cba43

The data from the first pass is not yet written anywhere, since I have absolutely no clue how to check whether the node is still calculating the first pass or whether it’s already the second pass. See the “if 0:” statement inside __call__ for details.

How is that handled for pyNodes anyway? I have the slight fear that this isn’t even possible to check.

Werner

Hey. I changed the code a bit (not tested, just sketching). You can find it at http://pastebin.com/m1cfbcfb3 . Hopefully it gives some clues on how to proceed. There are probably some spaces and tabs mixed in, as I use spaces (4 spaces per tab) when coding Python. There are converters that can handle this issue, though, should you wish to use the code.

For clarity it might make sense to keep the basic functionality separate and build the blur and other nodes later. This would mean that your base classes handle the pickling operations, while the subclasses add adjacency queries and whatnot.
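
A rough skeleton of that split could look like this (all names are placeholders, not anything from the API):


class PickleNodeBase:
    """Handles only the storing and loading of per-pixel data."""

    def store(self, x, y, color):
        pass  # dump the pixel somewhere (files, Registry, ...)

    def load(self):
        pass  # return the stored data, or None if nothing is there yet


class BlurNode(PickleNodeBase):
    """Adds the adjacency queries and the actual blur on top."""

    def blur(self, x, y, radius=1):
        pass  # average the colours returned by an adjacency query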

If you want, I could try to whip up a simple example showing how to use pickle in a node.

Most recent code is here:
http://pastebin.com/f139e67e7

Aside from the facts that it’s pretty complicated to use and insanely slow, it doesn’t even do anything yet :slight_smile:

Complicated to use:

  1. Right now one needs to make sure the target directory is empty before updating the node.
  2. Then one needs to init the node (slow).
  3. Then the final data is calculated (not as slow, but nothing to see yet).

Insanely slow:
Well, storing all this data in a bunch of external files is pretty bad given how much data a common material calculation creates - isn’t there a way to store this in memory?

Werner

One idea would be to try Blender’s Registry module (http://www.blender.org/documentation/246PythonDoc/Registry-module.html) for this purpose instead of pickle.
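
For reference, storing and retrieving a dictionary with the Registry module looks roughly like this (the key name ‘NodeBlurData’ is just an example; the True flag also caches the data to disk):


from Blender import Registry

# store the collected pixel data
Registry.SetKey('NodeBlurData', {'pixels': pixels}, True)

# later (e.g. in __init__): fetch it again; this may return None
stored = Registry.GetKey('NodeBlurData', True)
if stored:
    pixels = stored['pixels']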

Good idea, I’ve switched from files to Blender’s Registry for storage - it’s a lot easier to use and faster now (still slow overall for init=0 though):
http://pastebin.com/f36b239fa

I finally get a blurred preview, but I guess the x/y coordinates (self.shi.pixel) are not what one needs for local shading, since the blurring looks pretty random.
In the actual rendering the texture is completely different as well.
The real 3d coordinates of the rendered pixel would be cool to have.

Usage of the script:

  • Add a node material
  • Add a dynamic node and assign the script
  • Switch to init=1.0 and wait until the whole image is built
  • Switch to init=0.0 and hope :slight_smile:

Werner

The real 3d coordinates of the rendered pixel would be cool to have.
These should be derivable using the location of the active camera and self.shi.viewNormal, I think.

Nice to hear that you got the initial version working. :slight_smile:

For that I would need to know what the viewNormal values are supposed to be first (the only documentation I’ve found so far is the name - which is not much).

Note to myself - check if this is still valid code:


import Blender

scene = Blender.Scene.GetCurrent()    # Get the current scene.

camera = scene.getCurrentCamera()     # Get the camera object.
#camObj = camera.getData()            # Get object data from the camera object.

camLoc = camera.loc                   # Get the location of the camera.

“Working” might be a slight exaggeration here … it calculates something from something. :smiley:

Nod to (python-api/Blender) developers:

This (e.g. material blur and similar) would all be much easier to do (not to mention a lot cleaner) if the whole colour input were exposed/accessible directly while processing a single ‘pixel’.
Edit: I’m not entirely sure this is even possible - especially since I suspect the node tree renders one pixel and, once that’s finished, renders the next, i.e. there is no full colour input available to single nodes.

Werner

The snippet that gets the location of the active camera looks fine to me.

viewNormal gives you the vector from the camera to the shaded pixel. I used it in my toon shader test (http://wiki.blender.org/index.php/Resources/PyNode_Cookbook/Toon#Toon_1), where I get the depth of the shaded pixel from viewNormal. It seems I forgot to take the z axis into account, but that would be trivial to add.

Anyway, if you get the location of the active camera as in your snippet and add viewNormal to it, you should get the location of the shaded pixel. You can verify this visually by storing the values in the Registry and writing another script that reads them back and constructs a mesh from them. You can think of this as a simple form of remeshing. If you decide to write such a tester script, you could even use vertex colours and so on…
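
If you want to try the visual check, a minimal tester script could look something like this (an untested sketch; it assumes the positions were stored under a Registry key named ‘NodePositions’ as a list of (x, y, z) tuples, which is purely an example convention):


import Blender
from Blender import Registry, Mesh, Scene

stored = Registry.GetKey('NodePositions', True)
if stored:
    me = Mesh.New('remeshed')
    me.verts.extend(stored['positions'])   # list of (x, y, z) tuples
    scn = Scene.GetCurrent()
    scn.objects.new(me, 'remeshed')        # link the mesh to the scene
    Blender.Redraw()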

<edit>
I just noticed that scene.getCurrentCamera() is deprecated. It’s better to use scene.objects.camera instead to get the active camera.
</edit>

If anybody is interested, here’s the most recent code:
http://pastebin.com/f1fdf4c0d

I’ve given up - this isn’t working at all for some reason. It simply slows down the rendering process.
I guess I have to wait until something like this is implemented in Blender directly.

Werner

One possible explanation for the slowness could be that you go through every stored pixel for each pixel to blur. To overcome this, it might make sense to separate the blur operation from Pixel into its own function. You would then pass the BufferImage and the pixel to blur to this function. The function would handle the adjacency checks by using a function provided by BufferImage (i.e. getAdjacentPixels(pixel, radius=1) or something like that). The implementation of that function could look something like this (in pseudocode):


def getAdjacentPixels(pixel, radius=1):
    # 1. validate radius; it must be positive and non-zero
    # 2. get the coords (x, y) of the pixel and return a slice of the stored pixels
    pass

Point 2 means that it may be beneficial to actually store the pixels in a structure that is easy to slice. In other words it might be nice to use a list of lists or, even better, a numpy array. I would be inclined to use the latter for this particular purpose. You can find more information about this at http://www.scipy.org/Tentative_NumPy_Tutorial . Read particularly the sections about slicing! Also http://jehiah.cz/archive/creating-images-with-numpy is worth checking out.
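
To illustrate the slicing idea, here is a small sketch (the image dimensions are arbitrary): the stored pixels live in a numpy array and the adjacency query becomes a single slice.


import numpy

width, height = 320, 240                 # example image dimensions
image = numpy.zeros((height, width, 4))  # one RGBA value per pixel

def getAdjacentPixels(x, y, radius=1):
    # clamp the window to the image borders and return the slice
    x0, x1 = max(x - radius, 0), min(x + radius + 1, width)
    y0, y1 = max(y - radius, 0), min(y + radius + 1, height)
    return image[y0:y1, x0:x1]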

I am not sure whether this yields enough of a performance gain, but it may be worth trying. It may also be beneficial to profile the node, if possible, to see where it spends its time. It might be fun to try timeit (http://docs.python.org/lib/module-timeit.html) for this purpose.
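
A quick timing check with timeit could be as simple as this (the statement being timed is just a stand-in for the real adjacency query):


import timeit

# hypothetical: time 100 adjacency lookups at a fixed pixel
t = timeit.Timer('getAdjacentPixels(100, 100)',
                 'from __main__ import getAdjacentPixels')
print t.timeit(number=100)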

If it were simply too slow, that would not be a problem (speed issues can surely be optimized).
The main problem is not the speed or the handling (these are just annoying) but the fact that the code doesn’t do anything. It could be a fault in the algorithm, the storing or whatever. I could write a simple dummy node that does “output.color = input.color” and get the same result.

And to be honest I don’t care to dig deeper into something that simply isn’t working for the result I want(ed) - see also my previous post.

This started out as a test of whether blurring and similar stuff is possible at all in a material node setup right now - until somebody proves me wrong I have to say: no, it isn’t.

Cheers,
Werner

Alright. I took a closer look at the code. I noticed these issues in getBlurredCopy:

  • dist = (pxl.loc - loc).length -> Where is loc defined? Should it be self.loc? The subtraction pxl.loc - loc doesn’t produce an instance that has a length attribute. Perhaps it’s easier to handle this case with a simple distance check. I wrote a generic, recursive distance check for my own project; you can find it at http://code.google.com/p/cassopi/source/browse/trunk/cassopi/utils/math/misc.py .
  • surrounding isn’t defined anywhere.
  • except is clearer if you print out the exception instead of counting on a certain one (i.e. the Registry not working). Use except Exception, e: print e instead (see the sketch below). It’s fine to check each known exception explicitly too if you need to handle the cases differently. As a side note, sometimes it’s nice to use with instead of a try/except/finally block. You can find an example of this at http://effbot.org/zone/python-with-statement.htm .

I “fixed” those issues in my version and verified that values are saved to the Registry. I printed out the stored values and they looked alright. One thing came to my mind though: there is a possibility that node preview renders could corrupt the data stored in the Registry. Furthermore, when I execute the node with the data loaded (init=0), the renderer just locks up.
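
For clarity, the suggested exception handling (Python 2 syntax, as used by the Blender API of that time; the Registry key name is just the example from the earlier sketch) looks like this:


from Blender import Registry

try:
    stored = Registry.GetKey('NodeBlurData', True)
except Exception, e:
    print e        # show what actually went wrong instead of guessing
    stored = None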

Thanks for the hints, I fixed up the Python code in my previous post. I forgot to save some of my changes:

“(Vector - Vector).length” has always given me the correct distance so far. I could have missed some place where it isn’t stored as a Vector instance, but it worked the last time I used it.

Other than that: Your help is much appreciated, but I’m not going to put even more time into this script.

Furthermore when I execute the node with data loaded (init=0), the renderer just locks up.
Welcome to my world of pain.

Edit

There is a possibility that node preview renders could corrupt the data stored in Registry. Furthermore when I execute the node with data loaded (init=0),
Yes, that’s very likely - and since I know nothing about the mechanics behind all that, I know of no solution for it. Not to mention that the blurring doesn’t work in the preview either (there is no camera there to use - the current code uses the wrong location data).

Werner