Thickness maps

I was thinking about sss and I wondered if I could create a UV map for a model which represented the thickness of the model at each coordinate.

Does anybody know if this is possible?

You are talking about a translucency map, I think. In this case, yes, it’s possible.

Translucency maps are not what I was thinking about.

I would like a map that is actually generated from the geometry. I want the map to represent how thick the object is at any particular point. If you take Suzanne, her ears are thinner than her head so this would be shown in the map.

Hope this makes sense!

I see. It could make sense, actually, but as far as I know no such thing exists in 3D software (with the possible exception of ZBrush?).

Isn’t that what the SSS script does already?

Anyway… it wouldn’t be too difficult to bake thickness into the vertex colours using Python. Basically, take the inverse of the vertex normal and see how far away the closest point on the mesh is along that vector…
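
Something like this rough sketch is what I mean, assuming a recent Blender Python API where Object.ray_cast(origin, direction) returns (hit, location, normal, face_index); older versions take different arguments, and the names here are just illustrative:

```python
import bpy

def vertex_thickness(obj, offset=1e-4):
    """Return one thickness value per vertex, in the object's local units."""
    mesh = obj.data
    thickness = []
    for v in mesh.vertices:
        direction = -v.normal                  # cast into the mesh
        origin = v.co + direction * offset     # nudge inwards to avoid hitting ourselves
        hit, location, _normal, _index = obj.ray_cast(origin, direction)
        thickness.append((location - v.co).length if hit else 0.0)
    return thickness

# quick test on the active object (e.g. Suzanne)
values = vertex_thickness(bpy.context.active_object)
print(min(values), max(values))
```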

But I personally pretty much figured that that is what SSS did?

I’m not sure ('cause I’ve never used it), but I don’t think the SSS script creates a UV map representing the thickness of the model at each coordinate. I could well be wrong.

hmm, not sure what you are saying exactly. UV maps are 2D maps. Actually UVW originally, which corresponds to XYZ. UVW maps are really 2.5D and not 3D; that is, they are 2D surfaces shaped into 3D space. So a depth map represented by UV is, technically speaking, not possible.

SSS is a measure of density, I believe. To represent density there are various schemes, usually assigning a predetermined density by way of materials. Materials can be mapped by UV, so if you have a multi-material surface you can map density by UV.

And thickness is, like, a geometric thing? Like the thickness of. . . forget it. been reading wu. How do you do thickness anyway?

Unfortunately, it does not make sense. Depth (in relation to raycasting) depends on which angle the ray is cast from, so a given point cannot have a predetermined depth; it needs to be sampled at render time. Take a window, for example: if you look at it from a perpendicular angle it has its shallowest depth, but the narrower your viewing angle, the longer the rays inside the window get (since they get refracted diagonally inside the glass).

Thinking about this, it would seem that thickness would have to be a function of the shader. For example, a skin shader would say something like: the 2D surface this shader resides on is 5 mm thick and all calculations have to be limited to that distance from the surface. If light enters at a given angle, then after it travels 5 mm away from the surface, calculated perpendicular to the UV, stop calculating density and everything is zero, or 1, or whatever. Program your shader and you are good to go?

Not ray depth, rather the thickness in centimetres or inches of the geometry at that point, measured along the surface normal.

The relative thickness could be generated onto an unwrapped UV map of the object itself. Black = maximum thickness. White = minimum thickness. A uniformly thick object such as a sphere would have equal thickness everywhere, so the resulting map would be a flat 50% grey.

Does this make sense?

ahh, surface normal, not UV. Good. But yes, I said mm or millimetres (if you’re referring to my post). :slight_smile: And rays, though, are the vehicle that travels the path, and they’re actually how all the calculations are determined, I believe.

It does. The gray level would determine the calculation mentioned above.

Again, it would be done in the SSS shader. If you can substitute the depth calculation already on board with a greyscale lookup, you would have variable thickness.

A way to set sampling would be needed too. Sampling limited to a wide area would be fast; sampling to infinity would take a while.

Blender has nodes now but not really a shader language? You’d have to know some math.

That makes perfect sense to me. I don’t know what you’d use it for, but I see no reason why you couldn’t do that. The only catch is that you need to set a constant min/max value for your percentage calculation.

Also, if you want more resolution, there’s no need to use greyscale. You’re effectively limiting yourself to 8 (I think) bits that way (2^8 = 256 values). If you do some conniving with the RGB values of the pixel, you can squeeze out 24 bits worth of depth (2^24 = 16,777,216 values). That’s a pretty big difference.
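
For example, something like this (plain Python, names made up for illustration; min_t/max_t are whatever constant range you pick for the normalisation):

```python
def encode_thickness(t, min_t, max_t):
    """Pack a thickness value into an (r, g, b) triple: 2^24 possible values."""
    n = (t - min_t) / (max_t - min_t)      # normalise to 0..1
    n = min(max(n, 0.0), 1.0)              # clamp
    value = int(n * (2**24 - 1))           # 0 .. 16,777,215
    return (value >> 16) & 0xFF, (value >> 8) & 0xFF, value & 0xFF

def decode_thickness(rgb, min_t, max_t):
    r, g, b = rgb
    value = (r << 16) | (g << 8) | b
    return min_t + (value / (2**24 - 1)) * (max_t - min_t)

print(encode_thickness(0.5, 0.0, 1.0))     # roughly (127, 255, 255)
```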

I don’t know what you’d use it for, but I see no reason why you couldn’t do that.

Neither do I really! I just had this idea whilst thinking about SSS. An ear is thin and the head isn’t, which is why I posted the first message.

Do you think that it could be useful?

A quick search on Google turns up your answer. It’s already there in Max and probably others.
We are behind the game because shaders are still in development.

[edit]That is to say, thickness is there; variable image-based thickness is what you describe.[/edit]

First of all, I think it’s short-sighted to say something is not useful. It may not be useful 99% of the time, but when it comes time for that 1%, it can be a life-saver. So just because I can’t think of why you would want it doesn’t mean it’s not worth wanting.

Second, now that I’ve given it some thought, I think it could be useful for a non-physically correct, but very fast SSS implementation:

Think like a shader with me. We’re at a pixel that needs shading. We do the regular stuff: use the face normal and light direction to calculate the amount of diffuse light, and the viewing angle and light direction to calculate the amount of specular light. THEN, we take the reverse or back-facing normal and compare that with the direction of the light. If you assume that more direct light from the back will increase the scattered light, you can do something like

additiveSssResult = light_color * materialSssColor * max(dot(reverse_normal, light_ray), 0) / depth

Add that to what you already calculated and you could probably get a decent SSS appearance. It would be pretty fast, but it would only work for non-deformed geometry. Or perhaps I should say, it would be more incorrect for deformed geometry.
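
In rough Python rather than real shader code (every name here is made up for illustration, not an existing API), that idea would look something like this:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def fake_sss(light_color, sss_color, normal, light_dir, thickness):
    """Extra scattered light at a shading point.

    Vectors are 3-tuples of unit length; light_dir points from the surface
    towards the light; thickness is the baked value looked up for this point.
    The result would still need scaling/clamping in a real shader.
    """
    back_normal = tuple(-n for n in normal)          # the reverse / back-facing normal
    wrap = max(dot(back_normal, light_dir), 0.0)     # how directly the back side faces the light
    scale = wrap / max(thickness, 1e-4)              # thinner areas scatter more
    return tuple(lc * sc * scale for lc, sc in zip(light_color, sss_color))

# a thin point lit from behind: white light, reddish scatter colour
print(fake_sss((1, 1, 1), (0.9, 0.3, 0.2),
               normal=(0, 0, 1), light_dir=(0, 0, -1), thickness=0.05))
```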

Two more thoughts.

  1. If you compare the point+depth to a shadow buffer from the light, you could get a decent idea of whether or not the back side of the model is in shadow from other objects. It need not be exact since you’re already talking about indirect scattering. As long as there was a smooth transition from shadowed to non-shadowed, it should look fine.

  2. If you’re using this for SSS, I think it would be a good idea to cast several rays in a spread pattern when precomputing the depth and average the result (weighted, perhaps). That would give a smoother appearance and more accurately represent weird geometry like an ear. A rough sketch of that follows below.
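
Here is a quick sketch of that spread sampling, again assuming the Object.ray_cast signature from the earlier sketch and Blender’s mathutils; names are purely illustrative:

```python
import random
from mathutils import Vector  # Blender's math module

def spread_thickness(obj, vertex, samples=8, spread=0.3, offset=1e-4):
    """Average thickness over a loose cone of rays around the inverted normal."""
    base = -vertex.normal
    total, hits = 0.0, 0
    for _ in range(samples):
        jitter = Vector(tuple(random.uniform(-spread, spread) for _ in range(3)))
        direction = (base + jitter).normalized()
        origin = vertex.co + direction * offset
        hit, location, _n, _i = obj.ray_cast(origin, direction)
        if hit:
            total += (location - vertex.co).length
            hits += 1
    return total / hits if hits else 0.0
```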

Actually guys, since there is no sss shader, the way to do it now would be as I mentioned above.

Write a small Python script that creates vertex colours representing thickness. Then, if you want, you can bake those into a UV map; there’s a script that does that bit already included in Blender.

Calculating the vertex colours could be done quite easily by adapting the “self shade” and/or the “MaBaker / MaSelf” scripts that were written by cambo and myself.
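
Something along these lines, just a sketch of the idea rather than the actual MaBaker / MaSelf code, and assuming a Blender version where Object.ray_cast(origin, direction) works as above and vertex colour loops take RGBA:

```python
import bpy

def bake_thickness_to_vcol(obj, offset=1e-4):
    """Store normalised thickness in a vertex colour layer (black = thickest)."""
    mesh = obj.data
    values = []
    for v in mesh.vertices:
        direction = -v.normal
        hit, loc, _n, _i = obj.ray_cast(v.co + direction * offset, direction)
        values.append((loc - v.co).length if hit else 0.0)
    max_t = max(values) or 1.0
    layer = mesh.vertex_colors.new(name="thickness")
    for loop in mesh.loops:
        grey = 1.0 - values[loop.vertex_index] / max_t   # black = thickest, white = thinnest
        layer.data[loop.index].color = (grey, grey, grey, 1.0)

bake_thickness_to_vcol(bpy.context.active_object)
```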

Actually, I honestly don’t know how the SSS script would calculate the density of a material. The logical solution for not having actual density data available would be using “thickness”, which is why I think the SSS script that is already there probably does exactly what you are looking for, or at least something close enough. I’d just use that one, unless you want a challenge and to learn to script yourself.

I can definitely see where this may come in useful. It could come close to real-time dynamic SSS in games and for quick renders. Currently the only SSS seen in games is prebaked, which is of course not the best solution.

I’d be very interested in seeing this progress. Hopefully a coder with sufficient talent can produce this.

yah, me too. :rolleyes:

I have to be honest and say that as I am a relative beginner a lot of what is being said here has gone completely over my head.

Hey Charlie, your concept doesn’t work because…, well, maybe this will help:

http://mpan3.homeip.net/f/illu.jpg

The green dot is where the thickness of the object is measured. From eye/camera 1, the thickness of the material is 1 unit because the ray exits the object (black) relatively fast due to the angle. However, from eye 2, the camera sees a much thicker object because the ray has to travel 3 units before it reaches the other side.

To conclude, the thickness depends on the camera angle, and one point on the surface of the object can have an infinite number of different thicknesses depending on the camera angle. And thus, you can’t map a thickness value to the surface as a texture, UNLESS you don’t plan to move the camera or the object*.

*In which case, Nvidia invented a pretty neat way to calculate the thickness of the object:

  1. Render the front face of the object and store its Z-values as texture1.
  2. Render the back face of the object and store its Z-values as texture2.
  3. Take the difference between tex1 and tex2; the result is the thickness of the object from the given camera angle (a tiny sketch follows below).
  4. Of course, the downside is that this method doesn’t work with concave meshes.
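
Step 3 is just a subtraction once you have the two depth buffers; with NumPy arrays standing in for them (the array contents here are made-up placeholders):

```python
import numpy as np

# stand-ins for the two depth textures rendered from the same camera
front_z = np.random.rand(256, 256)            # texture1: front-face depth
back_z = front_z + np.random.rand(256, 256)   # texture2: back-face depth

thickness = back_z - front_z                  # per-pixel thickness along the view ray
print(thickness.min(), thickness.max())
```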