As far as I can gather from the node API documentation and examples, nodes only have access to one data point (e.g. a colour pixel) at a time, without seeing the surrounding points.
Is this right? And if not, how could one access the rest of the image/data?
PS: With “area effects” I mean something like blurring, edge detection, dilate/erode and similar awesome stuff.
One possible idea to try out would be a two-pass solution. On the first pass (render) you store the colour information of each pixel rendered (shi.pixel) and pickle it. On the second pass you use this information. The downside of this approach is that it is quite limited and you have to render twice. Pickling could be made optional, though (set a pickle flag and add some simple logic to check whether the data has already been pickled).
If you decide to try this approach, you may find it useful to pickle other information besides color and pixel information as well.
I used pass as a metaphor. To access adjacency data of some sort you need to create this data before you can use it. Hence the name pass.
First pass is needed to generate adjacency information. Second one uses it. Here’s some pseudocode that tries to explain it better:
def __init__(self):
    # init sockets here; one socket could define whether or not to generate pickled data
    self.data = load_pickled_data()

def __call__(self):
    # do stuff with the adjacency data as you like
    # pickle/store colour etc. data somehow; probably need to use tiny chunks that are
    # assembled later (could easily use a render script link for this, for instance)
    # do some other stuff as needed (set outputs etc.)
    pass
It is a bit problematic that there is no “finalize” call for a node. That would be useful, as the whole pickling procedure could be done in it. Now it has to be done piecemeal, which is not particularly nice but should work.
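To make the flow concrete, here is a minimal, Blender-free sketch of the two-pass idea. TwoPassNode, the store dict and the generate flag are all made-up names standing in for the node's sockets and the pickled file, not the real pynode API:

```python
store = {}  # stands in for the pickled data shared between the two renders

class TwoPassNode:  # hypothetical node skeleton
    def __init__(self, generate=True):
        self.generate = generate  # a socket could toggle this flag
        # second pass: load whatever the first pass recorded
        self.data = {} if generate else dict(store)

    def __call__(self, x, y, color):
        if self.generate:
            store[(x, y)] = color  # first pass: record each pixel
            return color           # and pass the colour through unchanged
        # second pass: adjacency data is now available in self.data
        return self.data.get((x, y), color)

first = TwoPassNode(generate=True)
first(0, 0, (1.0, 0.0, 0.0))
second = TwoPassNode(generate=False)
print(second.data[(0, 0)])  # (1.0, 0.0, 0.0)
```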
Hm, that sounds like an interesting approach … although I’m not sure how one can check if there already is all the data stored from the first pass.
Because if pickling works pixel by pixel, then all pixels would need to be finished for the second pass to work correctly, no?
Also, how can one check where the code currently is in the second pass? (So one can find the corresponding data from the first pass.)
I know that there are texture coordinates (though I have no clue yet what exactly is stored there), but is there something more common that enables one to get the surrounding pixels?
Essentially all you need to do is dump the data somewhere (for each pixel) and load it into a suitable data structure if it’s found. You could serialize the data of each pixel to /tmp/somedir, for instance. If that directory contains serialized data (figure out some nice naming scheme, or set some standard for how to do this), you can load it in init.
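A minimal sketch of that dump/load scheme, assuming one pickle file per pixel named after its coordinates; the directory and helper names here are made up:

```python
import os
import pickle
import tempfile

# made-up scratch directory, playing the role of /tmp/somedir above
cache_dir = os.path.join(tempfile.gettempdir(), 'node_pixel_cache')

def store_pixel(x, y, color):
    if not os.path.isdir(cache_dir):
        os.makedirs(cache_dir)
    # naming scheme: "<x>_<y>.pickle"
    f = open(os.path.join(cache_dir, '%d_%d.pickle' % (x, y)), 'wb')
    pickle.dump(color, f)
    f.close()

def load_pixels():
    data = {}
    if os.path.isdir(cache_dir):  # data exists only after the first pass
        for name in os.listdir(cache_dir):
            x, y = name.split('.')[0].split('_')
            f = open(os.path.join(cache_dir, name), 'rb')
            data[(int(x), int(y))] = pickle.load(f)
            f.close()
    return data

store_pixel(3, 4, (0.2, 0.4, 0.6))
print(load_pixels()[(3, 4)])  # (0.2, 0.4, 0.6)
```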
After loading you can analyze the data as needed. This is where you could handle simple 2D adjacencies based on pixels, for instance. One way would be to write a class that contains a “sparse” implementation of an image: at its simplest, all it does is retain references to known pixels and their colours. It could look something like this:
class Pixel: # container for a single pixel
    def __init__(self, x, y, color):
        self.x = x
        self.y = y
        self.color = color

class SparseImage: # if a pixel is not found, it's black; hence black pixels could be skipped when adding
    def __init__(self):
        self.pixels = []

    def add_pixel(self, pixel):
        self.pixels.append(pixel) # could assert that the type is Pixel; also check for adding the same pixel twice?

    def get_pixel_adjacencies(self, x, y): # x, y are pixel locs
        # scan the list and find adjacent pixels; returns the neighbours as a flat
        # list (could also return them as a 2d array, i.e. a list of lists)
        return [p for p in self.pixels
                if abs(p.x - x) <= 1 and abs(p.y - y) <= 1
                and not (p.x == x and p.y == y)]
shi.pixel is extremely useful in this case. Use it to access and construct SparseImage shown above.
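As a rough, Blender-free usage sketch (repeating the classes so the snippet runs standalone, with dummy loops where shi.pixel would supply the coordinates):

```python
class Pixel:  # same container as above
    def __init__(self, x, y, color):
        self.x = x
        self.y = y
        self.color = color

class SparseImage:  # pixels not stored are treated as black
    def __init__(self):
        self.pixels = []
    def add_pixel(self, pixel):
        self.pixels.append(pixel)
    def get_pixel_adjacencies(self, x, y, radius=1):
        # brute-force scan; good enough for a sketch
        return [p for p in self.pixels
                if abs(p.x - x) <= radius and abs(p.y - y) <= radius
                and not (p.x == x and p.y == y)]

img = SparseImage()
for x in range(3):          # shi.pixel would provide these coordinates
    for y in range(3):
        img.add_pixel(Pixel(x, y, (x * 0.1, y * 0.1, 0.0)))

neighbours = img.get_pixel_adjacencies(1, 1)
print(len(neighbours))  # 8: the pixels surrounding the centre
```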
If you get the idea working, further work would include thinking it more in 3d (store surface normal and camera vec? etc.).
Many thanks for the hints, I’ll definitely look into this (once my time schedule allows it)
Edit: Although I’m still not entirely sure how one can make sure that all the data from the first pass was written (since this needs to be checked in the second pass) … is there end-of-data or size information stored somewhere?
The data from the first pass is not written anywhere yet, since I have absolutely no clue how to check whether the node is still calculating the first pass or is already on the second. See the “if 0:” statement inside __call__ for details.
How is that handled for pyNodes anyway? I have a slight fear that this isn’t even possible to check.
Hey. I changed the code a bit (not tested, just sketching). You can find it at http://pastebin.com/m1cfbcfb3 . Hopefully it gives some clues on how to proceed. There are probably some spaces and tabs mixed, as I use spaces (4 per tab) when coding Python; there are converters that can handle this, though, should you wish to use the code.
For clarity it might make sense to keep the basic functionality separate and build blur and other nodes later. This would mean that your base classes would handle pickling operations. Extending classes would add adjacency queries and what-not.
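A sketch of that split; PickleBase and BlurNode are invented names, and an in-memory dict stands in for the pickling plumbing:

```python
class PickleBase:  # hypothetical base class: owns the store/load plumbing
    def __init__(self):
        self.data = {}  # a real node would pickle/unpickle this

    def store(self, x, y, color):
        self.data[(x, y)] = color

class BlurNode(PickleBase):  # extension: adds adjacency queries on top
    def neighbours(self, x, y, radius=1):
        return [c for (px, py), c in self.data.items()
                if abs(px - x) <= radius and abs(py - y) <= radius]

node = BlurNode()
node.store(0, 0, (1.0, 0.0, 0.0))
node.store(0, 1, (0.0, 1.0, 0.0))
node.store(5, 5, (0.0, 0.0, 1.0))
print(len(node.neighbours(0, 0)))  # 2: the far pixel is out of range
```

Splitting it this way keeps the storage logic testable on its own, and other effect nodes (edge detect, dilate/erode) could extend the same base.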
If you want, I could try to whip up a simple example showing how to use pickle in a node.
Good idea. I’ve switched from files to the Blender Registry for storage; it’s a lot easier to use and faster now (still slow overall for init=0, though): http://pastebin.com/f36b239fa
I finally get a blurred preview, but I guess the x/y coordinates (self.shi.pixel) are not what one needs for local shading since the blurring looks pretty random.
Also in the rendering the texture is completely different as well.
The real 3d coordinates of the rendered pixel would be cool to have.
Usage of the script:
1. Add a node material.
2. Add a dynamic node and assign the script.
3. Switch to init=1.0 and wait until the whole image is built.
For that I would first need to know what the viewNormal values are supposed to be (the only documentation I’ve found so far is the name, which is not much).
Note to myself - check if this is still valid code:
scene = Blender.Scene.getCurrent() # Get current scene.
camera = scene.getCurrentCamera() # Get camera object.
#camObj = camera.getData() # Get object data from camera object.
camLoc = camera.loc # Get location of the camera.
“Working” might be a slight exaggeration here … it calculates something from something.
Nod to (python-api/Blender) developers:
This (e.g. material blur and similar) would all be much easier to do (not to mention a lot cleaner) if the whole colour input were exposed/accessible directly while processing a single ‘pixel’. Edit: I’m not entirely sure this is even possible, especially since I suspect the node tree renders one pixel and, once that’s finished, moves on to the next, i.e. there is no full colour input for a single node.
Anyway, if you get the location of the active camera as in your snippet and add viewNormal to it, you should get the location of the shaded pixel. You can verify this visually by storing the values using the Registry and writing another script that reads them back and constructs a mesh from them. You can think of this as a simple way of remeshing. If you decide to write such a tester script, you could even use vertex colours and so on…
I just noticed that scene.getCurrentCamera() is deprecated. It’s better to use scene.objects.camera instead to get the active camera.
One possible explanation for the slowness could be that you go through every stored pixel for each pixel to blur. To overcome this it might make sense to separate the blur operation from Pixel into its own function. Then you would pass the BufferImage and the pixel to blur to this function. The function would handle adjacency checks using a function provided by BufferImage (i.e. getAdjacentPixels(pixel, radius=1) or something like that). The implementation of that function could be something like this:
def getAdjacentPixels(self, pixel, radius=1):
    # 1. validate radius: it must be positive and non-zero
    assert radius > 0
    # 2. use the coords (x, y) of pixel and return the matching slice of stored pixels
    return [p for p in self.pixels
            if abs(p.x - pixel.x) <= radius and abs(p.y - pixel.y) <= radius]
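Continuing that sketch, the separated blur function could simply average the colours returned by getAdjacentPixels. BufferImage here is a toy stand-in with just enough in it to run; the real class would hold the first-pass data:

```python
class BufferImage:  # toy stand-in for the stored first-pass data
    def __init__(self):
        self.pixels = []  # list of (x, y, color) tuples

    def getAdjacentPixels(self, x, y, radius=1):
        assert radius > 0
        return [p for p in self.pixels
                if abs(p[0] - x) <= radius and abs(p[1] - y) <= radius]

def blur(image, x, y, radius=1):
    # average the colour channels over the pixel's neighbourhood
    adj = image.getAdjacentPixels(x, y, radius)
    n = float(len(adj))
    return tuple(sum(p[2][i] for p in adj) / n for i in range(3))

img = BufferImage()
img.pixels = [(0, 0, (1.0, 0.0, 0.0)), (0, 1, (0.0, 1.0, 0.0))]
print(blur(img, 0, 0))  # (0.5, 0.5, 0.0)
```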
I am not sure whether this yields enough performance, but it may be worth trying. It may also be beneficial to profile the node, if possible, to see where it spends its time. It might be fun to try timeit (http://docs.python.org/lib/module-timeit.html) for this purpose.
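For instance, a quick timeit run over a brute-force neighbourhood scan could look like this (the image size and statement are made up for illustration):

```python
import timeit

# a 64x64 grid of stored pixel coordinates
setup = "pixels = [(x, y) for x in range(64) for y in range(64)]"
# the naive O(n) adjacency scan around pixel (32, 32)
stmt = "[p for p in pixels if abs(p[0] - 32) <= 1 and abs(p[1] - 32) <= 1]"

# time 100 runs of the scan and print the elapsed seconds
elapsed = timeit.timeit(stmt, setup=setup, number=100)
print(elapsed)
```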
If it were simply too slow, that would not be a problem (speed issues can surely be optimized).
The main problem is not the speed or the handling (those are merely annoying) but the fact that the code doesn’t do anything. It could be a fault in the algorithm, in the storing, or wherever. I could write a simple dummy node that does “output.color = input.color” and get the same result.
And to be honest I don’t care to dig deeper into something that simply isn’t working for the result I want(ed) - see also my previous post.
This started out as a test of whether blurring and similar stuff is possible at all in a material node setup right now. Until somebody proves me wrong I have to say: no, it isn’t.
except is clearer if you print out the exception instead of counting on a certain one (i.e. the Registry not working). Use except Exception, e: print e instead. It’s fine to check each known exception explicitly too, if you need to handle the cases differently. As a side note, it is sometimes nicer to use with instead of a try/except/finally block; you can find an example of this at http://effbot.org/zone/python-with-statement.htm .

I “fixed” those issues in my version and validated that values are saved to the Registry. I printed out the stored values and they looked all right. One thing did come to mind, though: node preview renders could possibly corrupt the data stored in the Registry. Furthermore, when I execute the node with data loaded (init=0), the renderer just locks up.
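The two idioms side by side, as a sketch (written with the `as` spelling so it also runs on current Python; the file path is arbitrary):

```python
import os
import tempfile

# print the exception instead of assuming which one occurred
try:
    value = {}['missing']
except Exception as e:
    print(e)

# 'with' closes the file automatically, replacing a try/finally block
path = os.path.join(tempfile.gettempdir(), 'with_example.txt')
with open(path, 'w') as f:
    f.write('data')
print(open(path).read())  # data
```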
“(Vector - Vector).length” has always given me the correct distance so far. I could have missed some place where a value isn’t stored as a Vector instance, but it worked the last time I used it.
Other than that: Your help is much appreciated, but I’m not going to put even more time into this script.
Furthermore when I execute the node with data loaded (init=0), the renderer just locks up.
Welcome to my world of pain.
There is a possibility that node preview renders could corrupt the data stored in Registry. Furthermore when I execute the node with data loaded (init=0),
Yes, that’s very likely, and since I know nothing about the mechanics behind all that, I know of no solution for it. Not to mention that the blurring doesn’t work in the preview either (there is no camera there to use; the current code uses the wrong location data).