Halftone pattern in Blender - Update

Hi. I just came across this:
https://forums.tigsource.com/index.php?PHPSESSID=gefqvt9ff14ck2dq7461ld2d33&topic=40832.msg1363742#msg1363742

We've seen some pretty amazing tutorials on CG Cookie on how to do stylized pattern shaders.
But this approach of fixing the pattern to the camera grabbed my attention.
How could we translate this into a Blender shading network? (Mapping with a “view at” camera?)
Thanks.

It could be interesting to experiment with the idea. I think you could get some interesting effects with either the reflection texture coordinates, or maybe by mapping the normals to the camera through a Vector Transform node.

However, that’s not going to get you the pixelization at the edges of, and between, objects. So a better approach might be to render out a grayscale scene, then use compositing to map a Bayer matrix to the different shade values.
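That compositor idea is essentially ordered dithering: tile a Bayer threshold matrix across the grayscale render and compare each pixel against it. Here's a minimal NumPy sketch of the math involved (just an illustration, not the actual node setup):

```python
import numpy as np

# 4x4 Bayer threshold matrix, normalized to [0, 1).
BAYER_4 = np.array([
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
]) / 16.0

def ordered_dither(gray):
    """Threshold a grayscale image (values in [0, 1]) against the
    tiled Bayer matrix, producing a pure black/white result."""
    h, w = gray.shape
    # Tile the 4x4 matrix to cover the whole image, then crop to size.
    thresh = np.tile(BAYER_4, (h // 4 + 1, w // 4 + 1))[:h, :w]
    return (gray > thresh).astype(np.uint8)  # 1 = white, 0 = black

# A horizontal gradient dithers into the familiar crosshatch ramp.
gradient = np.linspace(0.0, 1.0, 16)[None, :].repeat(8, axis=0)
out = ordered_dither(gradient)
```

In the compositor, the same per-pixel comparison boils down to a tiled threshold texture and a Greater Than math operation.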

I’m not sure if you’re still interested in this, but I found it intriguing and had a little free time tonight. I only tested out the display portion so far, though the motion shouldn’t be too hard to work out following the same methodology in your link.

I started with compositing, which was dead simple. Then I tried a shader-only version just for kicks. The problem with going shader-only (aside from what I said above) is that there is no way to take the shadow gradation into account. At least none that I can think of. So I used the ray length and AO to generate some usable values.

post processing:

shader only:

dither pattern used (scaled up)
dither_tile


W00 my mind!! This is superb! How is this not good?! It's perfect!
The first image is this effect done in the compositor, and the second is created at shader level, correct? Oh my, this is sweetness!
Okay, just like the original link mentioned, the problem is that moving the camera will shift the pattern axis, and it somehow needs to be locked to the camera angle/FOV. That's going to be interesting to set up.
I have no clue right now. I'm on deadlines, but I'd love to try this setup urgently!
Great work.
I guess a Light Path (Is Camera Ray) node using some FOV math could solve the camera movement?

hehe Thanks. It’s a fun little exercise.

Correct.

I’ll have to ponder on the movement a bit. For the shader only version, I’m thinking the camera angle/fov math explained in the link might work to just directly plug into window texture coordinates. As for the post version, I don’t think there’s a way to translate the texture in the compositor, so I’m thinking a second render pass with the pattern mapped to a sphere.

In the meantime, I experimented with some other dither patterns. I think the Bayer matrix still looks the coolest, though.

Bayer:

Blue noise:

Halftone:
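For anyone who wants to regenerate the Bayer tile at other sizes, the matrix follows a simple recursive construction where each level quadruples the previous one. A quick NumPy sketch (my own derivation; not necessarily how the tiles above were made):

```python
import numpy as np

def bayer(n):
    """Build a 2^n x 2^n Bayer matrix recursively: each step embeds
    four scaled copies of the previous matrix with offsets 0, 2, 3, 1."""
    m = np.array([[0, 2], [3, 1]])
    for _ in range(n - 1):
        m = np.block([
            [4 * m + 0, 4 * m + 2],
            [4 * m + 3, 4 * m + 1],
        ])
    return m

m4 = bayer(2)  # the familiar 4x4 matrix
# Divide by m4.size to get thresholds in [0, 1) for an image tile.
```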


Yes, shader view-dependent. Oh man! Awesome progress!!

I was going to sit down and play with this some more, but I really gotta learn to stop procrastinating on my own projects! haha

Here are the tiles of the 3 patterns if you want to play around with them, David. Just use the mapping node to tile them to the render size. And I’d love to see an update in the thread if you find the time to come up with something interesting. :slight_smile:

Bayer: bayer_tile
Blue: blue_tile
Halftone: halftone_tile

Hey guys,
I came across this thread, very inspiring, thank you.
I wonder if any of you made any progress in understanding a way to take lighting and shadows into account at shader level.

Do you mean as in translating the shadow on the object into the halftone pattern?

That can be done using the Shader to RGB node in Eevee. You can see some examples in this thread:


Thanks a lot for pointing me there, very interesting study and answers pretty much my question. :smiley:

I finally had some time to experiment further with this.
The topic of dithering and matrix color remapping is really interesting.
I have a question about the technique shown by @cgCody.
From my understanding, each grey value should be compared against the corresponding threshold in the Bayer matrix and output as either black or white. When rendering in Cycles, though, I notice that some of the “pixels” are still shaded grey (see reference).



Does anybody have any idea about this? For me it would be very important that the output render excludes anything outside the two tones I am targeting with the color ramp.

At what resolution are you rendering? Especially compared to the dithering pattern you’re using.

When zooming in on your first image you can see that the pixels of the dithering pattern are not just one shade but are basically dithered themselves.

This probably happens because the Shader is evaluated for each pixel.

My first idea would be to render at a resolution where every pixel in your dither pattern matches a pixel in your render, and then upscale the image afterwards.
You would lose the smooth silhouettes of the metaballs though, which would be kind of a shame.
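That pixel-matched approach can be sketched numerically: dither at the small resolution, then do a nearest-neighbour upscale so every dithered pixel becomes a solid block and no intermediate greys creep back in (illustrative NumPy, not a Blender setup):

```python
import numpy as np

def upscale_nearest(img, factor):
    """Nearest-neighbour upscale: each pixel becomes a factor x factor
    block, so a two-tone image stays strictly two-tone."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

small = np.array([[0, 1], [1, 0]], dtype=np.uint8)  # a tiny dithered tile
big = upscale_nearest(small, 4)                     # 8x8, still only 0s and 1s
```

Any image tool's nearest-neighbour resize does the same thing; the key is avoiding filtered interpolation, which would reintroduce greys.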

One (pretty bad and roundabout) way to maybe keep the silhouettes could be to render the Shader without the dithering, pixelate it, map it onto the object using the window coordinates and then dither it. That could result in artifacts on the edges though.

EDIT: I’m positive the dithering while keeping the silhouette is possible with Eevee. Though Eevee doesn’t have the Light Path or Ambient Occlusion Node…
I might experiment with this later, as it seems like an interesting problem.

EDIT 2: Turns out the shader that I thought was doing this in Eevee was actually just mixing different dither patterns based on the value of the shader. That leads to cut-off pixels, though. (If you use enough samples, I think your approach should converge to similar cut-off pixels.)


Found it on Reddit.

Yes, exactly my conclusion. It kind of works with Eevee, but some parts of the pattern cut off abruptly, and the AO node does not lead to any results. I am going to experiment a bit with your first idea, because aside from losing the smoothness of the metaballs, it makes a lot of sense!

Thanks for taking the time, I'll keep you posted with updates!