As far as self-collision is concerned, that would actually be more related to Blender’s simulation features than hair-rendering features so it would probably be out of scope for this project.
You’d probably need to get a developer that has experience with Blender’s particle system as well as with physics development.
Yes, raytracing is the way to go for realism. Shadow maps (at least in Blender) are not precise enough to capture fine details like hair self-shadowing (and 90 percent of the hair's look comes from its shadows).
But in an animation it would change from frame to frame, so it is better to use something predictable instead.
Or, as I asked some time ago: add a rand() option to the Math node (not you Broadstu, but DingTo or Brecht). This rand() would need to have a seed value exposed too. Such a rand() function would be much quicker than a noise function and in some cases much better too.
This wouldn’t work: the rand() function would not be predictable, and when taking more samples it would just converge to 0.5. You could re-seed the random number generator each time, but then you need a seed value and you haven’t really solved anything.
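A quick illustration of that convergence, as a tiny Python sketch (just to show the effect, nothing Cycles-specific):

```python
import random

# An unseeded rand() evaluated once per sample: averaging over the samples
# of a pixel just recovers the mean of the distribution, ~0.5.
samples = [random.random() for _ in range(100_000)]
print(sum(samples) / len(samples))  # prints something very close to 0.5
```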
The easy solution is to just add an extra random attribute; if it’s properly implemented it will stay fixed under animation, and if it’s per curve it’s not a lot of memory. With some extra work we might be able to avoid the memory usage and use a hash function during render, but that wouldn’t work if we ever add level of detail / simplification.
Note that noise is also a type of hash function, but Perlin noise is slower than needed; cellnoise is good enough for this case.
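To make the hash idea concrete, here is a minimal sketch in plain Python (a Wang-style integer hash is assumed purely for illustration; the actual Cycles code would differ): hashing a per-curve index gives a value that is stable across samples and frames and costs no extra memory.

```python
def wang_hash(x: int) -> int:
    # Simple 32-bit integer hash (Wang hash); any decent integer hash would do.
    x = (x ^ 61) ^ (x >> 16)
    x = (x * 9) & 0xFFFFFFFF
    x ^= x >> 4
    x = (x * 0x27D4EB2D) & 0xFFFFFFFF
    x ^= x >> 15
    return x

def curve_random(curve_index: int, seed: int = 0) -> float:
    # Deterministic "random" value in [0, 1) per curve: the same index always
    # maps to the same value, so it stays fixed under animation and does not
    # average out to 0.5 as more samples are taken.
    return wang_hash(curve_index ^ wang_hash(seed)) / 0x100000000
```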
I tested shadow maps up to 8192; even at this resolution (tweaking all the parameters as best I could) you cannot get even close to raytracing (keep in mind that I’m not speaking about long hair, but short fur as seen on animals like dogs or lions).
Even with array lights you can’t reach the same results; you can get good results, but not as good as raytracing (some shadow map types give shadows that are too dark, others like deep maps too bright).
The funny thing is that at high resolution, not only is the shadow map memory use high (for deep maps; other types not so much), but the render time becomes even slower than raytracing.
Raytracing is the clear winner when you need details.
Great hair render examples! Did you control the hair distribution via a texture or vertex groups? If via a texture, would you mind sharing your setup? I’m trying to use a texture to control hair distribution, but haven’t been successful so far. Thanks!
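For reference, density can be driven from a texture through the particle settings' texture slots; a rough bpy sketch follows (object, texture, and UV setup names are placeholders, and properties may differ slightly between Blender versions):

```python
import bpy

obj = bpy.data.objects["Fur"]                    # emitter with the hair particle system (name assumed)
settings = obj.particle_systems[0].settings

slot = settings.texture_slots.add()              # add a texture slot on the particle settings
slot.texture = bpy.data.textures["DensityMap"]   # existing image or procedural texture (name assumed)
slot.texture_coords = 'UV'                       # map through the emitter's UVs
slot.use_map_density = True                      # let the texture drive emission density
slot.density_factor = 1.0
```

Vertex groups can be used instead via the particle system's Vertex Groups panel (Density field).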
Imagine the Math node with a new rand() function: it would have a seed input and would output a float in the 0-1.0 range.
Now in a material shader we could use the object’s UV coordinates to calculate a number to feed into the seed, and we would have a random value to use that stays the same in every sample pass. It would be faster than a noise node and useful, I think. I’ll probably just modify one of the existing Math node functions some day to turn it into a rand() function, test its speed against the noises (Perlin, cell, …), and post the results here or in the Cycles thread.
Implementing a random value per strand of hair: what about children? And memory use? For now I prefer just a rand() function in the Math node, using the UVs to create the seed: no extra memory used, no problem with children.
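For what it’s worth, here is a minimal, self-contained sketch of the UV-to-seed idea in plain Python (a CRC32 hash is assumed purely for illustration; this is not a real node):

```python
import struct
import zlib

def uv_seeded_random(u: float, v: float, seed: int = 0) -> float:
    # Pack the UV coordinates and a user seed into bytes and hash them.
    # The same (u, v, seed) always yields the same value in [0, 1), in every
    # sample pass and every frame, with no per-strand memory needed.
    data = struct.pack("<ffI", u, v, seed & 0xFFFFFFFF)
    return (zlib.crc32(data) & 0xFFFFFFFF) / 0x100000000
```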
That’s right! Curve segments are now available. They are much slower than line segments. The slowdown does depend on the scene though. With the basic square + default particle system I find it’s only 30% slower. Once a little variety is added it goes up to 50%. In complex scenes it can even take twice the time to render. It’s also likely to cause a slight slowdown for long line segments since the BVH structure is always based on curves. For both types, having more steps usually results in faster renders.
There is a new parameter for curve segments called subdivisions. The intersection test performs recursive subdivisions of the curve. The points found doing this are then connected. The number of subdivisions is controlled by this parameter. In almost every case 3 (which gives 8 pieces) is fine. If it doesn’t quite look smooth, 4 should definitely be enough. If this parameter is set too high you may begin to get strange artifacts.
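To illustrate what the recursive subdivision does, here is a minimal Python sketch (not the actual Cycles intersection code): splitting a cubic curve n times yields 2^n straight pieces whose endpoints are then connected and tested.

```python
def subdivide_cubic(p0, p1, p2, p3, levels):
    # Recursively split a cubic Bezier at t = 0.5 (de Casteljau) and return
    # the polyline points that get connected: 'levels' subdivisions produce
    # 2**levels straight pieces.
    if levels == 0:
        return [p0, p3]
    mid = lambda a, b: tuple((x + y) * 0.5 for x, y in zip(a, b))
    p01, p12, p23 = mid(p0, p1), mid(p1, p2), mid(p2, p3)
    p012, p123 = mid(p01, p12), mid(p12, p23)
    pm = mid(p012, p123)                      # point on the curve at t = 0.5
    left = subdivide_cubic(p0, p01, p012, pm, levels - 1)
    right = subdivide_cubic(pm, p123, p23, p3, levels - 1)
    return left[:-1] + right                  # drop the duplicated midpoint

# 3 subdivision levels -> 8 pieces (9 points), the default mentioned above.
points = subdivide_cubic((0, 0, 0), (1, 2, 0), (2, 2, 0), (3, 0, 0), 3)
print(len(points) - 1)  # 8
```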
There is also a curve ribbon primitive which uses the same method but gives a smooth flat appearance. I’m not sure if any use will be found for them. They were from an earlier test and are actually slower than the more realistic curve segments.
Note: There is also a bug I will try to fix that causes a few curve sections to disappear.
Any estimated (approximate) date for when we can mess around with GPU rendering? I find Cycles very slow on the CPU, but on the GPU it’s just fine, so I can’t wait to have hair on the GPU with the right performance.