Better ways to achieve this weave pattern?


I’m trying to set up something like this weaving pattern. The mask isn’t that hard, but I’m having issues getting useful coordinates out of it so that I can easily apply texturing effects based on those coordinates. Although I have managed, I think I’m overcomplicating things. Surely there must be smarter ways to achieve what I have here (big PNGs no longer allowed?):

The spaghetti node code on the left is what creates the coordinates; basically a 4x4 grid with a “semi-unique” coordinate system (UV range 0-1 for each slot, although I’m also doing a mirror if applicable, so instead of 0-1 I’m actually coming out with 0-1-0), then adding them all together before merging the x and y.
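For illustration, here’s roughly what that spaghetti computes, sketched in Python (the grid size and the 0-1-0 mirror behavior are taken from the description above; the function name and signature are made up, not part of any node setup):

```python
def cell_uv(x, y, grid=4, mirror=True):
    """Split UV space into a grid x grid lattice; each cell gets its own
    local 0-1 UVs. With mirror=True, each cell instead runs 0-1-0
    (a triangle wave), as described in the post above."""
    # local coordinate inside the cell, in 0..1
    u = (x * grid) % 1.0
    v = (y * grid) % 1.0
    if mirror:
        # triangle wave: 0 -> 1 -> 0 across the cell
        u = 1.0 - abs(2.0 * u - 1.0)
        v = 1.0 - abs(2.0 * v - 1.0)
    return u, v
```

The merge of x and y mentioned above would then just be combining `u` and `v` back into a vector.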

But yeah, it looks ridonculous. I did find a RenderMan approach, but I’m not good at reading code, and the for loops put me off looking deeper into it anyway.

So, is there a better way to achieve this pattern (coordinates), maybe a more unified approach to achieving any woven pattern?

As far as I can tell, in your case the RenderMan approach boils down to the code snippet right above the red/black weave pattern image; the rest of the code, loops included, deals with color-coding the tartan. But this would be straightforward in OSL, unless there’s a way to build a node version.

That spaghetti looks impressive indeed ;).
I wonder if ‘Insert image from URL’ (rather than Copy), e.g., would work better.

Using nodes, how would you achieve the coordinate system shown in their image #3 (black and white arrows)? One arrow would be full UV space, and the separating colors could be mapped to the Z coordinate. I could shape it into a cylinder and taper the ends afterwards using smoothstep or something, not a problem (I just did it inside there to be able to keep track of what the heck I was doing :)). Would it be possible to set a grid size/matrix and then plug in reference numbers to describe pretty much any kind of pattern generation? Maybe that’s more for the devs though.

Come to think of it, I don’t even know how to do floor/ceil operations :stuck_out_tongue: Strange that those are missing from the math node.

a small experiment…

Edited: Ohh, forgot to expose the values used in both ‘modulo’ nodes… they should be 1.0; and Color2 of the Multiply and Color1 of the Checker should be (0.5, 0.5, 0.5), and the Color2 is black! :wink:

this will produce the following UV pattern:

which can then be used with image textures or other procedures to produce weaves, tiles, etc.
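As a rough Python sketch of the idea (my reading of the setup, not the exact node graph): take the fractional part of the coordinates per tile, then use a checker test on the integer part to swap u and v on alternating tiles, so warp and weft threads get consistent local UVs.

```python
import math

def weave_uv(x, y):
    """Hedged reconstruction: per-tile local UVs, with u and v swapped
    on alternating (checker) tiles so the threads run the other way."""
    u = x % 1.0
    v = y % 1.0
    # checker test on the tile index
    if (math.floor(x) + math.floor(y)) % 2.0 == 1.0:
        u, v = v, u   # alternate tiles run the other way
    return u, v
```

These per-tile UVs are what you would then feed to image textures or further procedures, as described above.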

and with some changes, it’s possible to create different UVs for each tile.

@CarlG See? Easy. A ‘small experiment’ from Secrop solves it all, while I was still shifting similar components around after reading this and you had already come up with it (how many nodes were involved in your version?), lol.

@Secrop I can only admire the ease with which you do this stuff…

I think it’s fairly easy to include them in the math node… But while we don’t have them, we can use the ‘round’ function: round(x+0.5) does the ceiling, and round(x-0.5) does the floor.
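Sketched in Python below. Note that Python’s built-in `round()` uses banker’s rounding, so the sketch spells out round-half-away-from-zero, which (as far as I know) is what the math node’s Round does. Also note the ceiling trick overshoots when x is already an exact integer:

```python
import math

def node_round(x):
    """Round half away from zero (assumed to match the math node's
    Round; Python's built-in round() would use banker's rounding)."""
    return math.floor(x + 0.5) if x >= 0 else math.ceil(x - 0.5)

def node_floor(x):
    # floor(x) == round(x - 0.5), as described above
    return node_round(x - 0.5)

def node_ceil(x):
    # ceil(x) == round(x + 0.5); caveat: off by one at exact integers,
    # e.g. node_ceil(1.0) gives 2 while ceil(1.0) is 1
    return node_round(x + 0.5)
```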

@eppo thanks, it’s basically a divide-and-conquer method… :slight_smile:

That’s pretty impressive indeed :slight_smile: I’m trying to preview it step by step but I’m not fully able to follow the math/logic. Usually in my own setups (and in the spaghetti shown above) I can extend 0-1 texture space (UV/Generated) to also work with negative texture space (Object) simply by replacing all modulo math nodes with a custom modulo that is continuous across 0 (same as the checker texture; you don’t get two neighboring whites just because you cross into the negatives). When I try that here (on the two %1 nodes; it breaks if I do it to the %2 node), it repeats fine over x but fails for y. The quick hack was to add an xy vector (if it’s too big it causes very visible precision artifacts), effectively offsetting everything into the positives. Not ideal, but…
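For reference, the “continuous across 0” modulo mentioned here can be sketched as (`cmod` is a made-up name):

```python
import math

def cmod(x, n):
    """Modulo that is continuous across zero, like Python's own %:
    cmod(-0.25, 1.0) == 0.75, so the pattern tiles seamlessly into
    negative coordinate space. Cycles' Modulo node behaves like C's
    fmod (truncated toward zero), which is why a custom version is
    needed; in fmod terms it would be ((x % n) + n) % n."""
    return x - math.floor(x / n) * n
```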

But how do you shrink the “thread width” leaving gaps, something like this? I tried simply masking out the center, but obviously that won’t look right.

Okay, so I’ve updated your example with a black and white mask output, which could be used for anisotropic rotation (* 0.25 not performed):

“and with some changes, it’s possible to create different UVs for each tile.”

How would you go about achieving this? More specifically, how do I end up with each 2x1 and 1x2 tile having a random color? I’d prefer to use this to randomize the UV lookup (rather than a fixed offset or something).

Took a bit more time to think about this pattern, and rebuilt it, this time without the checkers, and added the Z coordinates to provide different UVs per tile.
I’m still not very happy with my solution, because at big scales one can see the patterns from the Z values… (I’ll try to figure that out.)
It also doesn’t work with negative coordinates, so if we are using World or Object coordinates we must, as you already mentioned, add some big vector to them (using the absolute value unfortunately mirrors everything at the axis :frowning:).

Haven’t got around to trying the new version yet. But, “see the patterns from the Z values”? I’m curious, do these get more visible as you increase the initial vector offset (further away from 0,0)? Sometimes when checking if I’m within a certain range of coordinates (isBetween, a mix of > and <), I get some mystery lines which I cannot explain, and these become a lot more visible if I offset a lot. I get the same (I guess) looking artifact if I try to calculate atan2 with math nodes, to the point that I just abandoned it (using a radial gradient instead).

I’ve noticed artifacts when using big coordinate values, especially if I keep plugging in more nodes to create the textures (I’m still using 2.77a, must check the latest build)… but the patterns I’m referring to are due to using the same Z values for different tiles; it produces two visible bands, which are even more apparent when the scale is increased. I can try to introduce a better formula for the Zs, maybe with some product of primes, or with some auxiliary dimension… (these are the moments when I feel the urge to jump to OSL! :slight_smile: )

Hehe, yeah I know. All this is just an exercise for me though. I don’t touch OSL since it’s CPU only. I would have used a texture generator if it were time critical :slight_smile:

Made some changes to the nodes; the Z values are a bit better now, and using your idea of a modified ‘modulo’ function, it now works for any coordinates! :smiley:

better if I post the blend with the node group:
Tartan_proc.blend (558 KB)

So (n * 673) % 457 is your continuous mod, more or less? I’m not following the math, but how the heck did you come up with those numbers? :smiley: It’s the first time I have googled numbers, and I found them to be primes. I think I have to fire up a spreadsheet to comprehend this one. Let’s just say my own version of continuous fmod/vmod is built completely differently (if negative then do stuff).

Anyhow, outputting the z component to a noise texture’s vector input (I’m using scale 42.5, detail 0 because it’s not needed, and distortion 50), I get random colors out of it. That’s 3 random floats we can use for whatever random stuff we want. Brilliant stuff. This one goes straight into my library of useful functions, and I need to spend some time playing with it. Thanks a lot.

Yup, primes. I just picked them randomly; they’re not that big, but their product is still big enough to avoid repetitions in nearby tiles. There may be a better method, but I must let my head rest for some days. :slight_smile:
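A hedged Python sketch of the prime trick discussed here (the 2D-to-1D fold using 7 and 131 is my own arbitrary choice for illustration, not from the .blend):

```python
def tile_rand(ix, iy):
    """Pseudo-random value per tile via the prime trick: multiply the
    integer tile index by one prime and take the remainder by another.
    Because 673 and 457 share no factors, n * 673 % 457 cycles through
    all 457 residues before repeating, so nearby tiles get values that
    look unrelated. Normalized to 0..1 for use as a per-tile 'Z'."""
    n = ix * 7 + iy * 131          # fold the 2D tile index into one integer
    return ((n * 673) % 457) / 457.0
```

Feeding this value into a noise texture, as described below, then yields per-tile random colors.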

At least it’s working… A few more tweaks and it can be used in production (i.e., it only works if there’s a vector plugged in… but it’s possible to turn it into a pynode with the ability to check whether a socket is connected and take an appropriate action).

About the modulo function, I stripped any unnecessary math nodes. The original version is like this:

Just wanted to share a couple of tests. Included the node and main group setup to show an “overview of the thinking process” more than extreme detail. The lin/smooth/smoother-step function is extremely useful in all this, in order to numerically control (and the numbers could be exposed out to the group controls) various important bumps.
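The lin/smooth/smoother-step family mentioned here, sketched in Python (the names and the a..b edge signature are assumed from the discussion, matching the usual smoothstep convention):

```python
def linstep(a, b, x):
    """Linear ramp from 0 at x=a to 1 at x=b, clamped outside."""
    if a == b:
        return 0.0 if x < a else 1.0
    t = (x - a) / (b - a)
    return min(max(t, 0.0), 1.0)

def smoothstep(a, b, x):
    """Hermite ease-in/out: zero slope at both ends."""
    t = linstep(a, b, x)
    return t * t * (3.0 - 2.0 * t)

def smootherstep(a, b, x):
    """Perlin's variant: zero first and second derivative at the ends."""
    t = linstep(a, b, x)
    return t * t * t * (t * (t * 6.0 - 15.0) + 10.0)
```

A mapRange-style group (a..b input remapped to c..d, as mentioned later in the thread) is just `c + (d - c) * linstep(a, b, x)`, with the smooth variants swapped in as needed.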

Twill 2/2 laid carbon fiber pattern.

Twill 2/2 fabric. Random-based coloring (pseudo-working) is collapsed, and instead I’m showing coloring based on a color ramp. For some reason this seems extremely inaccurate with respect to the coordinates given, and not flexible in terms of exposing it out to a group. I may rethink the way this is approached. I’m using the bumpfix node rather than the built-in one. Only an experiment, so if she looks like she’s wearing a floor mat, that’s why :slight_smile:

Twill 2/2 fabric closeup with the inner flow of the bumping, where a variation of your generator is used. The slightly fuzzy look is caused by a blur node on the coordinates (not dof), and the thread (twisting) angle is what drives the aniso angle. Same is true for the carbon fiber, but the twist angle is zero so it will coincide with the macro lay pattern. I’m mapping the cylinder bump (carbon fiber is rather flat) to also darken the edges, which will hide the error mentioned above with the color bleeding outside the assigned coordinates. Probably a small bitmap (without blending) would have been a better way to assign colors, I will have to try that out when I get time.

That’s looking good… :slight_smile:
Can’t really see clearly what’s going on in the nodes… Is the ‘Util.mapping…’ based on my setup? (I like the result.)

About the ‘cylinder’ bump, you can also add a bit of bump in the X axis of the UV, so it looks that the fibers are coming from under the others.

Util.Mapping, iirc, contains the blur function and a simple Z-axis rotation (because on the BMW I needed the shape on the hood, which is not shown here, to form a V). The green node contains your coordinate creation stuff, which in turn drives the bumping effects, which in turn, or combined as exposed, drive the material (shading, color assignment, etc.).

Yeah, that’s what the curvature bump was supposed to control: the overall/global up-and-down shape along the thread, whereas the over/under height should do the same but only at the very end. I may have broken it in the process of adding more :smiley: I should also get better at naming group nodes in terms of what they contribute to in the end, since following the effect without the ability to preview (easily) is kinda hard. Even when I set it up myself :slight_smile:

This is some amazing stuff. I tried to do carbon fiber a few times but ended up giving up and using a B/W mask. Time to spend a few hours dissecting Secrop’s .blend to see if I can wrap my head around the math… From what I’m seeing so far it looks like different weaves would require a different ground-up build - or could that be controlled with inputs?

CarlG, any chance of a .blend when you’re satisfied? It looks like you’re taking some of the outputs of Secrop’s setup and adding a lot more detail (thread twist, curvature etc) but I can’t quite follow what all the steps do (might be jetlag…). I’d also be interested in how you do your lin/smooth/smoother group. I have a mapRange utility group which I think is similar to linstep (takes an a…b input and maps it to a c…d range). Adding smooth/smoother to that seems like a good idea.

Sure, I’ll try to remember it when I get home. Note that this is just an experiment. For anything of production value I’d probably go with texturing the pattern instead. I’m happy with it for carbon fiber use since it has no gap. I wasn’t able to convert it fully for cloth/fabric fibers with gaps (note how the fabric “cylinder shape” is very pronounced and very dark at the edges :)). Ideally I would have it less dark, with gaps between the fibers.

I think a better approach is needed: independent u and v weaves with offset coordinate systems, and then a mask switching between the two. I’m also not able to wrap my head around the maths; I’m just able to use the outputs :smiley:
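A Python sketch of that proposal, assuming two independent thread systems plus an over/under mask (the function name, return convention, and the `width` parameter are all made up for illustration):

```python
import math

def weave(x, y, width=0.8):
    """Independent warp (vertical) and weft (horizontal) threads, each
    shrunk to 'width' of its cell to leave gaps, with a checker mask
    deciding which thread is on top where both are present. Returns
    (layer, along, across) or None for the gap between threads."""
    # distance from the thread center: 0 at center, 1 at the cell edge
    du = abs((x % 1.0) - 0.5) * 2.0
    dv = abs((y % 1.0) - 0.5) * 2.0
    warp_hit = du < width              # inside a vertical thread
    weft_hit = dv < width              # inside a horizontal thread
    over = (math.floor(x) + math.floor(y)) % 2.0 == 0.0  # warp on top here?
    if warp_hit and (over or not weft_hit):
        return ('warp', y % 1.0, du)   # coordinate along the thread, and across it
    if weft_hit:
        return ('weft', x % 1.0, dv)
    return None                        # gap: nothing to shade
```

The `across` value is what you’d feed a cylinder-profile smoothstep, and shrinking `width` below 1 is what leaves the gaps asked about earlier, without masking out the thread centers.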