I’m attempting to get a texture map with a constant scale in the camera view (faces in the distance have a scaled-up texture compared to objects that are closer to the camera), much like the result you get from camera projection mapping.
But I also want the texture to follow the contours of the mesh and move with the object as well.
So far I’ve been trying to combine camera projection with UV mapping in various ways to achieve this, but with no luck.
I realise there is probably something fundamental I’m misunderstanding about vectors and the Mapping node, and it’s holding me back.
Yeah, sorry about the vagueness, it’s a small part of a larger project that I can’t really discuss here.
Basically I’m attempting a crosshatch sketch shader, but not like any I’ve seen before.
So the lines have to present to the camera as if they were drawn by the same pen, with uniform size and spacing. But they also have to be attached to the object so they don’t just sit in camera space, and the line angles should follow the contours of the object, as defined by the UV map.
So far, every attempt I’ve made to take the scale from the camera projection and combine it with the position from the UV map has failed.
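For what it’s worth, the depth-compensation half of this can be sketched in plain Python (this is just the math, not Blender node code; the pinhole model and `reference_depth` are my assumptions):

```python
# Hypothetical sketch: keep a pattern's on-screen stroke spacing constant
# by compensating the UV scale with camera depth. Far surfaces need a
# LARGER texture on the surface, so the UV multiplier shrinks as depth
# grows. "reference_depth" is a made-up tuning value.

def screen_constant_uv(u, v, depth, reference_depth=1.0):
    """Scale object-space UVs so stroke spacing stays constant on screen."""
    s = reference_depth / depth
    return u * s, v * s

def screen_period(surface_period, depth, focal=1.0):
    """Pinhole projection: on-screen size of a feature lying on the surface."""
    return focal * surface_period / depth

# Pattern repeats every 0.1 UV units; assume 1 UV unit == 1 surface unit.
P = 0.1
for d in (1.0, 2.0, 4.0):
    s = 1.0 / d                # the multiplier screen_constant_uv applies
    surface_period = P / s     # pattern period measured on the surface
    print(screen_period(surface_period, d))  # same on-screen period at every depth
```

The catch, as the thread shows, is that doing this per pixel makes the multiplier vary continuously over the surface, which is exactly what makes the texture swim.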
Hmm… thinking… basically you want to UV-project a pattern onto individual surface parts with almost no stretching (Problem 1)… for example, the top of the head from a top projection… and then you also want to somehow keep the projector’s distance to the camera consistent, so that the stroke width is identical over the whole animation sequence (Problem 2), independently of the object’s distance to the camera.
So you have to… :
Problem 1: find the suitable parts, separate the object (also the materials?) and add a projector to every part
Problem 2: for every frame, add temporary projectors with the distance to the camera adjusted to get the wanted projection
do some *shader magic* to rearrange the compute path of the initial camera projection… ( ← which i have no idea of )…
(thinking aloud:) Maybe it’s just a matter of using a normals layer of the object to get a “simple” 2D distortion of the original pattern??
That’s basically it.
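The normals idea above could be sketched like this (plain Python, hypothetical; `strength` is a made-up tuning parameter): offset a camera-projected screen-space UV by the camera-space normal’s X/Y, so flat-on areas keep the raw projection and tilted areas bend the strokes.

```python
# Hypothetical sketch of "normals as 2D distortion": shift the
# screen-space UV by the camera-space normal. This bends the strokes
# over the form, but note the UV is still anchored in CAMERA space,
# which is exactly the swimming problem raised below.

def distort_by_normal(screen_u, screen_v, normal_cam, strength=0.2):
    nx, ny, nz = normal_cam
    return screen_u + strength * nx, screen_v + strength * ny

# A face pointing straight at the camera keeps the raw projection:
print(distort_by_normal(0.5, 0.5, (0.0, 0.0, 1.0)))   # (0.5, 0.5)
# A tilted face shifts the pattern sideways:
print(distort_by_normal(0.5, 0.5, (0.5, 0.0, 0.866)))  # (0.6, 0.5)
```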
Unfortunately, distorting the camera projection with normals won’t make the texture move with the object, so the lines will swim over the object in camera space.
I’ve already tried this and it’s quite distracting.
But that introduces all sorts of sliding. We can’t correct and not slide at the same time.
Per face can be done with a bit of geometry nodes. BUT, if the correction is done per face, the texture coordinates will not be continuous between faces. And if we try to interpolate between faces to fix that, we get sliding again. It would be a cheap version of the per-pixel example.
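The per-face trade-off can be shown with a toy calculation (plain Python, hypothetical numbers): one depth sample per face gives that face a single, stable UV multiplier, but adjacent faces at different depths get different multipliers, so the coordinates jump at the shared edge; and blending the multipliers to hide the seam makes the scale vary continuously again.

```python
# Hypothetical sketch of the per-face correction and its seam.

def per_face_uv_scale(face_center_depth, reference_depth=1.0):
    """One multiplier per face: stable inside the face, discontinuous at edges."""
    return reference_depth / face_center_depth

near_face = per_face_uv_scale(2.0)   # 0.5
far_face = per_face_uv_scale(2.5)    # 0.4
# The mismatch below is the visible seam along the shared edge:
print(near_face - far_face)

def blended_scale(t, s_a, s_b):
    """Interpolating across the edge removes the seam, but the scale now
    varies per pixel again -- the 'cheap per-pixel' case that slides."""
    return (1 - t) * s_a + t * s_b
```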
It looks like a post-process on 3D footage, as it does with a purely camera-projected texture.
It looks like 3D footage of a model that’s been drawn on.
I’m trying to get it to look as much like a pen shaded drawing as possible.
I realise that there will necessarily be some floating textures as characters deform, when they move in Z depth, and when the camera moves, for example.
However I don’t mind this floating as it will sort of “make sense” to the eye, and it’s something that can be minimised with camera, lighting and staging choices.
Edit: there’s a typo in the picture… I meant to say “Faces in the background have the same line spacing and thickness.”