here is a link to the original question…
http://blenderartists.org/forum/showthread.php?t=85691
does it mean I will have to use nodes? (still afraid of the name :p)
As has been previously stated in the original thread, use static particles with deflectors as per the hair tutorial. Make your material partially alpha-transparent to get the Asian look you spoke of. If you alpha the outside edges using a stencil that is white in the middle and black at the edges, they will blur together where they overlap. Hope this helps.
To get longer bangs in the front, etc., use weight painting to control the length of the hairs. See the wiki on static particles and weight painting.
Hey, practice makes perfect. If you want to be able to pull off the most dazzling effects that Blender has to offer, then there is no way around getting to know the nodes. Besides, once you get over the initial shock and frustration that you may experience, you will begin to consider them among your best digital friends.
I love this question. I was forced to learn something new in order to answer it. Be prepared: it’s about to get deep in here, but I’ll try to do it in a language that any newbie to the compositing nodes can easily understand.
In order to circumvent confusion when you open the .blend that I’m uploading, please download the latest CVS version of Blender (for your OS) from this site:
http://www.graphicall.org/builds/index.php
Note: the version with the Jahka particle patch won't help as of the date of this post, as it is simply v2.42 with the particle patch applied. This is not what you need.
You can download my example .blend here:
http://uploader.polorix.net//files/89/City.blend
When you open the file, go ahead and press F12 to render it. All of the images, as I describe them, will pop up in their respective UV/Image Editor windows.
From what I can see in your images, it seems that you only want to blur the edge pixels of the hair. I don't think that can be done via any one blur node in Blender as of yet, but I could be wrong. I know that it can be done with a series of masks or overlays, but I'll get to that in a bit.
For this technique you will need to separate the hairdo from the rest of the mesh and put it on its own layer. You will also need to use a separate render layer for this layer. You then apply a blur to the layer via a blur node, then recombine this render layer with the first render layer (the one on which the main body of your character resides) via a Z Combine node (Spacebar>Add>Color>Z Combine).
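To make the recombination step less mysterious, here's a rough per-pixel sketch in plain Python of what a Z Combine node does (this is just the idea, not Blender's actual code): for each pixel it keeps the color from whichever input layer is closer to the camera, i.e. has the smaller Z value.

```python
def z_combine(img_a, z_a, img_b, z_b):
    """Per pixel, keep the color from whichever layer is closer (smaller Z)."""
    return [ca if za <= zb else cb
            for ca, za, cb, zb in zip(img_a, z_a, img_b, z_b)]

# One pixel where the blurred hair layer sits in front of the body layer,
# and one where the body is in front:
print(z_combine(["hair", "hair"], [1.0, 5.0],
                ["body", "body"], [2.0, 2.0]))  # ['hair', 'body']
```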
(( If you do not want the artifacts (aliasing) that are associated with the recombination of depth maps (BTW, this is greatly reduced in CVS versions of Blender), then you will need to render out a double-sized IMAGE of the depth map for each render layer, scale these maps back to original size (which will reduce the aliasing), and then apply a 2-3 pixel blur to the scaled result. The reason that you have to use an image of the depth map is that you cannot blur the edges of a real Z channel. The blur will only dilate or erode the edges and do nothing for the aliasing of the edges. You can find an excellent tutorial on depth maps here:
I did not composite with images of the z-buffers in this file. This is just information that you need to be aware of, so you can bypass this part for now.
When rendering the depth maps you will need to run them through an inverted color ramp (just move the black and white color bands to the opposite positions from their defaults), which will cause Blender to render them as IMAGE files rather than real depth mattes.
From there you need to import the scaled versions of the depth maps back in as image nodes, apply the blur, and connect the results to their respective Z inputs on the Z Combine node. In v2.42 I have been accomplishing the scaling process via a custom action and batch processing in Photoshop, but this whole process has become much easier, and has greater possibilities, in the CVS versions of Blender because the devs have coded so many new node types into the program. CVS versions include a scale node that circumvents the need to go external in order to scale the depth map image sequences, and they also provide greater flexibility in the types of blur that may be applied. ))
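If you're wondering why rendering the depth-map image at double size and scaling it back down reduces aliasing, here's a tiny illustration in plain Python (an assumption-free sketch of the general supersampling idea, not Blender's scale node itself): a 2x box-filter downscale averages each 2x2 block, so a hard 0/1 edge comes back with in-between gray values that soften the stair-stepping.

```python
def downscale_2x(img):
    """Box-filter a 2D image down by half: average each 2x2 block of pixels."""
    h, w = len(img), len(img[0])
    return [[(img[y][x] + img[y][x + 1] + img[y + 1][x] + img[y + 1][x + 1]) / 4.0
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]

# A hard, jagged 0/1 edge rendered at double size...
big = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 1, 1, 1],
       [0, 1, 1, 1]]

# ...comes back with a softened in-between value along the jagged step:
print(downscale_2x(big))  # [[0.0, 1.0], [0.5, 1.0]]
```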
I have highlighted in red what it is you are looking at in the multiple UV windows. This type of window setup can only be accomplished in CVS also. Clockwise from top left: 1. The main body of the scene (this is akin to the main body of your character), 2. The roof only (akin to your character's hairdo), 3. The combined render result node (no post-processing effects applied here), 4. The combined result with blur selectively applied to the roof only.
Now for part 2, just in case you only wanted to blur those edge pixels of the hairdo rather than the entire do.
You probably noticed when you opened the file that there are 2 separate node setups (as in screenshot #2) connected by a single common thread (noodle). Go ahead and click on the viewer node in the lower node setup and you'll notice that the result in the lower-left UV window changes. This is the result of post-processing with a different set of nodes within the pipeline. Kinda cool how you don't have to re-render the entire scene, huh?
As you can see, the image in that particular window has a very distinctly blurred outline (on the roof) which envelopes a totally unblurred inline, but what is that hideous white edge that separates the two? That is the edge of the matte of the scale node. Click on the "Convert Premul" button on the Alpha Over node and notice how this goes a long way toward reducing the offending outline but does not get rid of it. I find this totally unacceptable, and you should too.
Let me break this down for you. I intentionally left this set up this way so that you will hopefully learn from the example. What I did here was duplicate the render layer named "Secondary" (the roof layer) via Shift+D. Then I added a Dilate/Erode node (Add>Filter>Dilate/Erode). Next I connected the render layer's alpha channel noodle to the dilate node and reduced the footprint of the alpha channel via negative values (erosion). Then I added a mix node, connected the output socket of the dilate node to the top image input, and connected the Image output from the render layer to the bottom image input as well. In this way I was able to premultiply the non-eroded image of the render layer by the eroded alpha channel. It still needed a factor to mix with (the top gray socket of the mix node), so I dragged the erode output socket to the factor socket of the mix node. And that is where the offending outline is coming from. Go ahead and drag the alpha channel output from the render layer to the factor input of the mix node. Voilà! Problem solved.
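If the erode-then-premultiply part is hard to picture, here's a toy 1D version in plain Python (just an illustration of the standard operations, not Blender's node code): a 1-pixel erosion is a min filter over each pixel's neighborhood, and multiplying the color by the eroded alpha pulls the visible edge inward.

```python
def erode_1px(alpha):
    """1-pixel erosion on a row of alpha values: each pixel becomes the
    minimum of itself and its immediate neighbors."""
    return [min(alpha[max(i - 1, 0):i + 2]) for i in range(len(alpha))]

def premultiply(color, alpha):
    """Multiply each pixel's color by the (eroded) alpha, shrinking the edge."""
    return [c * a for c, a in zip(color, alpha)]

row_alpha = [0.0, 1.0, 1.0, 1.0, 0.0]   # opaque strip with hard edges
row_color = [0.8, 0.8, 0.8, 0.8, 0.8]

eroded = erode_1px(row_alpha)            # edge pixels eaten away by one
print(eroded)                            # [0.0, 0.0, 1.0, 0.0, 0.0]
print(premultiply(row_color, eroded))    # [0.0, 0.0, 0.8, 0.0, 0.0]
```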
Now, what's up with the Alpha Over node anyway? These nodes simply allow you to combine images in a 2-dimensional manner as opposed to a 3D manner (which is what the Z Combine nodes are for). Therefore you can combine the unblurred roof with the blurred roof without the hassle of the Z Combine node.
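For the curious, that 2D combine is the classic "over" operation. With premultiplied colors it's one line of math per pixel: the foreground color plus whatever fraction of the background the foreground's alpha doesn't cover. A quick plain-Python sketch (the standard compositing formula, shown here as an illustration):

```python
def alpha_over(fg, fg_alpha, bg):
    """Premultiplied 'over' composite: foreground plus the uncovered
    portion of the background, per pixel."""
    return [f + (1.0 - a) * b for f, a, b in zip(fg, fg_alpha, bg)]

# A half-transparent blurred edge pixel laid over the unblurred layer beneath:
print(alpha_over([0.25], [0.5], [0.5]))  # [0.5]
```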
All of this will not, however, solve all of your problems. Why? Because this is the wonderful world of 3D, and when you separate a mesh into layers like this, you have to deal with the alpha channel that results from the back side of the mesh as well.
Enter screenshot #3. The image on top represents your hairdo separated from the rest of the character's mesh. Notice that when rendered, you can see the back side of the mesh. So what? We're using a Z Combine node that will hide all of that, right? The answer is yes and no. Because we are using the Dilate/Erode node to alpha-over the blurred part of the hairdo, it gets a bit more complicated. You see, the erosion works on ALL parts of the alpha channel, not just the parts that may be visible. It erodes from the outside and works its way inward. This includes the back side of the mesh. If left as is, you will not get the erosion that we desire, because it's working from parts of the alpha channel that we want to mask out.
So how can we mask out that back side? The developers, in their infinite wisdom, have provided us with a method to do this also, and it's much simpler than you might think. It works via a MATERIAL setting, independent of the nodes. It's that wonderful little button named "Env" in the material buttons on the Material tab. You can see my mouse cursor highlighting it on the lower portion of screenshot 3. But wait, there's more. CVS versions also have a new option to replace ALL materials on any given render layer with a single material via the field labeled "Mat:" on the Render Layers tab. (Note: you can see this option in the first screenshot but not the 3rd, because I screwed up and used Blender v2.42 for the examples on screen 3. The field labeled "Light:" does the exact same thing for light groups too.) It is located just under the Passes buttons.

How is this useful? You can set up a material that has the "Env" button set, type its name into that field (on the main render layer where your character mesh resides), and use the entire render layer to mask out any portion of the hairdo that would not be seen in the final render, thereby masking out that portion of its alpha channel as well. This is much more efficient than duplicating the mesh and replacing all of its materials with this masking material. Simply deleting the material name in that field will revert your mesh back to using all of the materials that you have applied to it. This is a HUGE time saver!!! Now the Dilate/Erode node will work the way we want it to!
Edit: you must make both layers visible in the "Scene" field AND in the "Layer" field on the Render Layers tab in order for this masking trick to work. You can see that I have done exactly that on screenshot #3, on the lower portion of the image. Sorry for the double screenshot here; the Blender Artists forum will only let me upload 3 images per post.
You can find out more about this in the messages that were passed back and forth between Fligh and myself in this post:
http://blenderartists.org/forum/showthread.php?t=84751
Alright, this is turning into a book, so I'm gonna cut it short here. I hope some genius answers this post and shows us an easier way to accomplish this process, cuz I'm down with anything that's better.
On a final note, if you find any of this useful, please drop Ton Roosendaal a line to thank him (and the rest of the development team!) for his/their efforts. Good luck!
wow…
I am still halfway through reading it
It’s more complex than I thought but nodes sure can do neato things…
Thanks for the answer. I will keep in touch, in case there are things I can’t understand