Polygon hair lab

Hello Blender users. First off, I'm not a coder. I have an idea for a tool that facilitates the creation of polygonal hair styles. This is one of the most used techniques, but the downside is the process of building the hair polygon by polygon, so I have an idea to make working with this technique easier: something like a hair polygon editing lab that lets you edit the hair the way you edit particle hair.

The results with this technique are amazing; Squaresoft and other companies use this technique in their characters.

I will post some sketches of how this technique should work, in case anyone is encouraged to work together to develop it.

Some pictures for reference:

This is another example:

This forum is for finished scripts.

Yep, wrong forum, but a very cool idea. I also can't code, but this would be incredible to have in Blender! If it handled the hair as some sort of particle system while the mesh stayed intact, that would be fantastic: you could still edit the mesh, and Cycles could use its really powerful instancing; otherwise it would be really slow.

I hope you get some support!

Hope this will help

Patricia, that script is based on particles. My idea is to use polygons, which are more flexible, and the technique uses textures with alpha.
The result looks great and the render time is very fast.

The point is to facilitate the creation of hair polygons without doing it manually. I have not seen any software with tools that facilitate the creation of polygon hair.

Download these models from blendswap; they have polygon hair.


What's the point in downloading models made by another person?

I really like your idea. I think it has merit. Perhaps a friendly admin would be so kind as to move this post to the proper subforum.

I think in order for this to work, one would need to have a library of polygon assets (bangs, ponytails, curls, fringes, etc) that could be combined together to create different hairstyles.

I don’t think shapekeys and morph targets would be the answer. One would never be able to get really original hairstyles; everything would just look like scaled variations of the stock meshes (complete with texture stretching and blown-out polys).

I think procedural geometry would be the answer here: the different elements (bangs, curls, etc.) would each be generated via custom mathematical scripts applied to the seed (base) geometry. Bangs would follow a parabolic construct, while curls would be based on a spiral.
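None of this exists in Blender yet, but as a rough sketch of what "bangs follow a parabola, curls follow a spiral" could mean mathematically, here is some plain Python that generates guide-curve points (the function names and parameters are made up for illustration):

```python
import math

def bang_curve(length, droop, steps=8):
    """Points for a 'bang' guide: a parabola drooping away from the root.

    x runs from root to tip; y = -droop * (x / length)**2, so the strand
    starts out flat at the root and bends down toward the tip.
    """
    return [(length * t, -droop * t * t)
            for t in (i / (steps - 1) for i in range(steps))]

def curl_curve(radius, pitch, turns, steps=32):
    """Points for a 'curl' guide: a helix (spiral) descending along z."""
    pts = []
    for i in range(steps):
        a = 2 * math.pi * turns * i / (steps - 1)  # spiral angle, root to tip
        pts.append((radius * math.cos(a),
                    radius * math.sin(a),
                    -pitch * a / (2 * math.pi)))   # drop 'pitch' per full turn
    return pts
```

A script like this would only produce the point lists; turning them into actual curve objects or polygon strips on the seed geometry would be the Blender-side work.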

My coding skills are nowhere near good enough to accomplish this, but I can muse on the process. Maybe we can enlist a developer(s) to help if this gets moved to the Python Support forum.

How does Entoforms generate the geometry? http://www.entoforms.com/scripts/

Here’s an interesting 42 page publication, “Realistic Hair Visualization in Maya: A new work-flows and methods”
Yes that’s the title. Nevermind the bad English, it’s a fantastic method combining polygon and particle hair methods.

Sorry for posting here, I thought it was the best place.

Yes, in my sketches I propose procedural geometry that can be stretched or inflated. You've hit the point exactly.

I'm bad at English too. I have seen this paper before, but it converts polygons to Paint Effects hair and then to particle hair in Maya.

Great minds… I was pondering this too. Hair and fur tend to work as distinct groups; even hair that has not been styled tends to fall into groups naturally.

Obviously there will be random hairs that extend out of the natural clustering. For example, hair that is combed backward on the head and downward over the shoulders will have hairs of unequal lengths that stray out of the cluster group, but in the main the hairs naturally follow the same curve.

My idea, very similar to kakachiex2’s examples, would be to create a mesh object, or cloth object, that when rendered would produce curves or particle guides which could then render realistic hair strands. The advantage is that the existing cloth, soft body, and other dynamics (collision, forces, etc.) could be reused, and these are fairly robust, actively maintained and, with some exceptions, work.

Further thinking brought up additional requirements…

The mesh object must be constrained to only allow extrusion of a “base emitter area” (multiple extrusions can be used). So although you can move and manipulate the mesh, you can only increase topology detail with loop cuts that return to their starting point. The reason is that the curves are emitted from the plane and must be averaged out along the whole extrusion; adding non-uniform extra detail would make it hard to calculate where the curves should go with respect to where the mesh starts and where it ends. By using extrusion only, the tip always stays in a logical line.

I guess you could think of it as working like the Play-Doh heads that grew hair: for every squeeze you extrude longer hair. Here, for every extrude you are pulling the hair longer and adding more points along the curve.
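As a minimal sketch of the averaging step described above (assuming each extrusion appends one edge loop, or "ring", to the hair mesh; the function name is hypothetical):

```python
def guide_from_rings(rings):
    """Average each extruded edge loop of the hair mesh into one point,
    yielding a guide curve from the root ring to the tip ring.

    rings: list of rings, each a list of (x, y, z) vertex positions,
    ordered root-first. Every extrusion adds one ring, and therefore
    one more point along the guide curve.
    """
    guide = []
    for ring in rings:
        n = len(ring)
        guide.append((sum(v[0] for v in ring) / n,
                      sum(v[1] for v in ring) / n,
                      sum(v[2] for v in ring) / n))
    return guide
```

This is also why extrusion-only topology matters: a loop cut that doesn't close back on itself would leave a "ring" that isn't a clean cross-section, and the centroid average would no longer sit inside the hair volume.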

I think the main advantage of this type of system is that calculations are done on the mesh, and not on huge numbers of hairs or hair guides. Self collision and object collision work far better with a mesh than with individual strands (or clusters of strands), and the mechanics are already there for things like dynamic paint to affect the mesh.

If you want finer detail then instead of a mesh extrusion, you would use a cloth extrusion. This would allow for a single sheet of hairs to be generated, which would then create the illusion of individual strands. So it would be possible to create the classic effect in Final Fantasy where she stands on the battleground and her side (fringe?) blows in the wind. The cloth would be set to silk, or some such, and would be one hair (or a preset amount) thick… if that makes sense.

Obviously the hair strands generated and following the computed hair guides would allow for all the usual hair adjustments such as random length, frizz, etc.

It's been a while since I modeled, but the following pic is a quick example. The yellow pencil denotes the root emitting area, the red the tips, and the blue is a rough guide to direction.

I guess this is very similar to the way smoke/volumetric work. So perhaps a good name for this is volumetric hair.

That describes very much how I would do it!

So how are your coding skills? :wink:

We could brainstorm and see how we can combine all these ideas.

Not very good :-/

I guess it would be reasonably easy to do, especially on the modeling side, as this is already being worked on for BMesh… with things like extrusion. If the base emitter area were somehow marked as “hair root”, then additional tests could be performed to make sure the geometry stays in a consistent state.

I also guess that the base emitter must be a separate mesh from the head, because the object would become non-manifold if the emitter were an extrusion of an already manifold object.

Also, the base emitter area would have to reverse its normals prior to calculating the emission, as they would be pointing down towards the scalp, not upward into the mesh volume.

As collisions would be done on the mesh, all the collision stuff could be removed from the hair system (it doesn’t work at the moment).

If the hair creation produced curves (guides), then these curves could be further modified: by applying maths (say, to create ringlets based around vertices on the emitter), by allowing the curve to be edited after initial creation, or, for extra detail, by applying additional cloth extrusions over the top of the main bulk mesh.
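The "apply maths to a guide" idea could look something like this: a plain-Python sketch (hypothetical function, not existing Blender API) that displaces a straight guide curve into a ringlet by adding a helical offset that advances from root to tip:

```python
import math

def add_ringlet(guide, radius, turns):
    """Turn a straight guide curve into a ringlet.

    Each point gets a circular displacement in the x/y plane
    (assuming the guide runs roughly along z), with the spiral
    angle advancing uniformly from the root to the tip.
    """
    n = len(guide)
    out = []
    for i, (x, y, z) in enumerate(guide):
        a = 2 * math.pi * turns * i / (n - 1)  # angle at this point
        out.append((x + radius * math.cos(a),
                    y + radius * math.sin(a),
                    z))
    return out
```

With radius 0 it returns the guide unchanged, so the same operator could blend smoothly between straight hair and tight curls via a single slider.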

As curves are a valid primitive, they allow for easy export to other renderers, such as RiCurve in RenderMan-compatible renderers.

Perhaps this paper can be of some use. Anyway, I'm working on ideas; I'll see if I can upload something tonight.


Yeah, Blender needs this.