I am currently working on creating a refined, semi-automated node group for non-photorealistic crosshatching. While the process is borrowed from existing techniques that create a masked crosshatch texture, I’m trying to design the noodle so that it won’t require so much internal tweaking for every object in a scene.
I’m having trouble with two aspects that are important when emulating a crosshatching style (or any true hand-drawn style). First, an artist using a real pen or pencil has a practical limit on the thickness and length of a stroke: when drawing objects that are more distant in perspective, the artist can only draw so thin, and cannot render as much detail because the strokes must be short to denote fine shading. Second, many artists incorporate Rembrandt-style lighting in their sketches, where more distant objects receive darker shading, causing them to recede into the background.
To emulate both effects, I know I need to manipulate the scale of the hatch textures, as well as the light intensity used by the mask that reveals the darker hatching textures, by setting up nodes that calculate the distance from the camera to each object and apply it appropriately.
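For what it's worth, I gather the distance part is just vector subtraction followed by a length. Here is my understanding sketched in plain Python rather than nodes (I believe this is roughly what the Camera Data node's View Distance output gives per shading point, but that's my assumption, not something I've verified):

```python
import math

def camera_distance(camera_pos, object_pos):
    """Length of the vector from the camera to the object.
    Roughly what I assume the Camera Data node's 'View Distance'
    output provides at each shading point."""
    dx = object_pos[0] - camera_pos[0]
    dy = object_pos[1] - camera_pos[1]
    dz = object_pos[2] - camera_pos[2]
    return math.sqrt(dx * dx + dy * dy + dz * dz)
```

So for a camera at the origin and an object at (3, 4, 0), the distance would be 5.0.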
Admittedly, vector math is something of an enigma to me, and I only understand very fundamental principles in an abstract, qualitative way. I’ve tried to play with the noodle using the Camera Data and Geometry input nodes in conjunction with other conversion nodes, based on what intuition tells me, but I’m not getting the result I’m after.
I hope someone can help me figure out how to alter the texture mapping so that the scale stays proportional to the distance from the camera. From there I may be able to derive the shading aspect myself, but help on that would be welcome as well.
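To make my intent concrete, this is the mapping I'm imagining, again sketched in Python rather than nodes. The specific ranges and the linear falloff are placeholder guesses on my part (I imagine a Map Range node with clamping would do something similar, but I'm not sure that's the right approach):

```python
def hatch_scale(distance, base_scale=1.0, reference_distance=10.0):
    """Grow the texture scale linearly with distance so the hatch
    strokes stay roughly constant in screen size; farther objects
    can't get finer strokes than the 'pen' allows."""
    return base_scale * (distance / reference_distance)

def shade_darkening(distance, near=5.0, far=50.0):
    """0.0 near the camera, 1.0 at or beyond 'far': a factor to
    bias the mask toward the darker hatch layers so distant
    objects recede into the background."""
    t = (distance - near) / (far - near)
    return min(max(t, 0.0), 1.0)  # clamp to [0, 1], like Map Range with Clamp on
```

Even confirmation that this is the right general shape of the mapping would help me translate it into nodes.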
Thanks for reading, and thank you for your replies.