I’m trying to make a tube with glowing gas. I have a curve object with depth and I applied a Principled Volume with emission to it. I want to make a gradient from the center axis of the curve to its surface so I can make it denser inside. I’ve made a similar thing with a regular cylinder, but I want to apply it to a curve.
Images:
I spent a lot of time researching how to do that with a curve, but didn’t find anything helpful. I would appreciate any help.
To my knowledge, there is no way to densify a volume material at its center. How about faking it? Here is a version done by stacking concentric layers of the object and giving it an additive transparent material. There is no volumetric shader involved.
This is pretty heavy, so it might not be a solution if the scene needs lots of this effect.
Unfortunately, this method will not work in my situation. I would like to have more control, for example to blend in some noise or adjust the falloff, to make this as photorealistic as possible.
I’m pretty sure there should be a way to do that with a curve, as I did with a cylinder, because there I just projected a radial gradient onto it using the position coordinates. Maybe it’s possible to achieve with geometry nodes, but I don’t know them well enough.
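In case it helps to show what I mean, here is roughly that cylinder setup as a script (the node choices and ramp values are just an illustration, and it assumes the cylinder runs along its local Z axis):

```python
import bpy

# Placeholder material; assign it to the cylinder yourself.
mat = bpy.data.materials.new("GlowGas_Cylinder")
mat.use_nodes = True
nt = mat.node_tree
nt.nodes.clear()

out = nt.nodes.new('ShaderNodeOutputMaterial')
vol = nt.nodes.new('ShaderNodeVolumePrincipled')
coord = nt.nodes.new('ShaderNodeTexCoord')     # object-space position
sep = nt.nodes.new('ShaderNodeSeparateXYZ')
comb = nt.nodes.new('ShaderNodeCombineXYZ')    # drop Z, keep only X/Y
length = nt.nodes.new('ShaderNodeVectorMath')  # radial distance from the center axis
length.operation = 'LENGTH'
ramp = nt.nodes.new('ShaderNodeValToRGB')      # falloff: dense at the axis, thin at the surface
ramp.color_ramp.elements[0].color = (1, 1, 1, 1)
ramp.color_ramp.elements[1].color = (0, 0, 0, 1)

links = nt.links
links.new(coord.outputs['Object'], sep.inputs['Vector'])
links.new(sep.outputs['X'], comb.inputs['X'])
links.new(sep.outputs['Y'], comb.inputs['Y'])  # Z is left at 0
links.new(comb.outputs['Vector'], length.inputs[0])
links.new(length.outputs['Value'], ramp.inputs['Fac'])
links.new(ramp.outputs['Color'], vol.inputs['Density'])
links.new(vol.outputs['Volume'], out.inputs['Volume'])
```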
Anyways, I’m grateful that you spent your time trying to help me, thanks!
Are you making a neon sign? Because if you are, the volumetrics Blender gives you by default are actually the realistic option.
–
I might have another idea for doing it though. Maybe it could be possible with compositing? The volumetric object would need to be rendered separately in its own pass, with all other objects set to holdout. Then it would be possible to use the “power” option of the math node to change the gradient of the volume.
But it’s for C4D, so I’m trying to recreate it in Blender.
And in this video, at 20:38, you can see that he changes the “voxel falloff”, so I was basically trying to do something similar in Blender, because it should give me the most convenient control over how it looks.
I see, Blender’s volumes aren’t made from voxels by default, but there is a way to do it. I could somewhat replicate this video using the “point density” texture. It allows the vertices or particles of an object to be turned into voxels. It’s not going to be a fully clean and controllable gradient like I was trying to make, but it will have a certain amount of fade at the edges, which can be controlled with a color ramp.
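Something along these lines is what I mean (a minimal sketch; the object names, resolution and radius are placeholders to tune for your scene):

```python
import bpy

# Assumptions: "GasCurve" supplies the vertices and "GasDomain" is a cube
# enclosing the curve that will actually carry the volume material.
source = bpy.data.objects["GasCurve"]
domain = bpy.data.objects["GasDomain"]

mat = bpy.data.materials.new("GasVolume")
mat.use_nodes = True
nt = mat.node_tree
nt.nodes.clear()

out = nt.nodes.new('ShaderNodeOutputMaterial')
vol = nt.nodes.new('ShaderNodeVolumePrincipled')
pd = nt.nodes.new('ShaderNodeTexPointDensity')
ramp = nt.nodes.new('ShaderNodeValToRGB')   # controls how the voxels fade at the edges

pd.point_source = 'OBJECT'      # take the vertices of the object, not particles
pd.object = source
pd.space = 'WORLD'
pd.resolution = 300             # voxel grid resolution
pd.radius = 0.1                 # size of the voxel sphere around each vertex
pd.interpolation = 'Cubic'      # smoother look than the default

nt.links.new(pd.outputs['Density'], ramp.inputs['Fac'])
nt.links.new(ramp.outputs['Color'], vol.inputs['Density'])
nt.links.new(vol.outputs['Volume'], out.inputs['Volume'])

domain.data.materials.append(mat)
```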
This method looks really good, but it’s very heavy…
When I made a few of those curves, at first it started to crash if I moved during viewport render using the GPU (but it rendered normally if I used “Render Image” with the GPU, or did the viewport render on the CPU). Then, when I added more of those curves, it started to crash even in “Render Image” using the GPU. It still renders normally on the CPU, but the render time is 5 times longer and I will need to render an animation of that thing…
I don’t know how to read crash logs, so I gave it to ChatGPT and it said that it’s most likely a lack of video memory. But my GPU is an RTX 4090.
I would prefer to use this method because it’s the best so far, but I don’t know how to fix those crashes.
UPD: I think I could fix the crashes if I lower the point density resolution or decimate the curve geometry, but then it looks very chunky and bad.
Are you using a single volumetric object for all the curves, or one volume for each curve? Are the different curves close to each other, or spread through a scene?
Just asking so I know if you are using the resources efficiently or not.
If the various curves are all clustered closely, you could combine them in one object and use a single volume.
But if they are spread far apart, you should probably use one volume cube per curve, each with its own point density material. Using a huge volume covering an entire scene for some spread out objects would require the point density to have a huge resolution to catch them all in detail.
The curve in my example has barely over 100 vertices. If you run out of memory because of the curve and not the volume, I can’t imagine how much you must have subdivided it.
I use multiple volume objects (almost 1 object per curve), because I have multiple colors of glowing gas and I need some control to animate them separately.
I will need about 5 volume objects, but Blender started to crash when I added a second one.
If I make the geometry less dense, it won’t look like gas but like an LED strip with a lot of glowing dots. And if I make the voxel radius bigger, it will be too thick.
Also, I set the point density resolution to 700, because if I set it to 300, for example, it looks too blocky.
The curves are pretty close to each other, but I will also need to use more complex geometry for some volume objects, because if I use boxes they intersect in some places, and that creates visual artifacts.
I see, a resolution of 700 will wreck the performance for sure. Volume resolution affects all 3 axes, so you have to remember the actual number of voxels is the cube of the value you enter, which means an increase in voxel resolution has a cubic effect on memory and performance.
A single volumetric cube with a 700 resolution contains more voxels than 12 cubes with a 300 resolution.
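(700³ = 343,000,000 voxels, whereas 12 × 300³ = 324,000,000.)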
Maybe you can use fewer voxels by flattening the voxel box (and applying the scale) so it better fits the bounds of the curve? If the box is flat in shape, that should in theory save lots of voxels that would otherwise be wasted on empty space. I haven’t tried it with point density, but a flat domain does bake faster with smoke simulations, as the resolution you choose is applied to the longest axis and truncated on the others.
The geometry is most likely fine, unless you tell me you have millions of vertices in those curves.
Have you tried setting the volume interpolation to cubic? This is the higher quality option for voxels and it makes them look smoother.
There is a setting for it on the point density node and another in the material settings. I haven’t tried both together, as just the one on the texture was enough in my example.
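If you want to flip them from Python, something like this should cover both, assuming Cycles and the placeholder names from before:

```python
import bpy

mat = bpy.data.materials["GasVolume"]           # placeholder name
# Node-level setting, on the Point Density texture node itself
pd = next(n for n in mat.node_tree.nodes if n.type == 'TEX_POINT_DENSITY')
pd.interpolation = 'Cubic'
# Material-level setting (Cycles: Material Properties > Settings > Volume > Interpolation)
mat.cycles.volume_interpolation = 'CUBIC'
```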
Using more complex shapes for the volume should be fine, but if I understand correctly, Blender still uses the rectangular bounds of the object when creating voxels and that’s what matters to performance. The voxels exist outside the custom shape, they just aren’t shown.
–
Are all the curves going to be seen from close enough to require this fancy treatment? Just asking, because if one of them is far away in the background, you could surely get away with something simpler, like using a flat texture on a plane.
I applied your optimization advice and now it works like a charm!
It was like that in the point density node from the beginning, because I just copied your settings :) After your advice I also tried setting it in the material settings, but it doesn’t seem to make any difference.
For some reason the visual artifacts from intersecting volume objects disappeared, so I just continued to use boxes.
Yep :) I used a less complex shader (which utilized a Layer Weight node) in my previous neon projects, because it was shown from far away and looked fine. But in this one the tubes are very close to the camera, so I needed something a lot more realistic.
I know that I said it a few times already, but thank you so much!
You helped me so many times that I feel a bit ashamed that I can’t do anything in return.
I was trying to apply a Displace modifier to a curve object and animate it, so the gas inside the tube moves slightly over the course of the video. But when I rendered the animation, I noticed that it doesn’t look as intended. It turned out that the curve deformation of the rendered volume refreshes only when I re-enter rendering mode, and when I play the animation, for some reason it only deforms the whole volume object but not the curve. I couldn’t find a setting responsible for that.
Here’s an example of what I’m talking about (I exaggerated the displacement so it would be visible in the video; it works the same way in the final render as in the viewport):
I did notice point density has trouble refreshing when the mesh changes. It should be possible to do it by displacing the point density texture itself (altering the texture coordinates) instead of displacing the geometry.
Actually, I was just experimenting with this exact concept to make large scale fire using point density, no smoke simulation needed.
Most of the detail in this image is done by displacing the point density’s coordinates with noise textures. The voxel resolution is actually only 100 for the whole thing.
You combine the texture coordinates with a noise texture using the “linear light” blend mode.
It’s important to use the Color of the noise and not the Fac, because each color channel of the noise controls the displacement on one axis (red = X, green = Y, blue = Z), so if you plug in a grayscale value, all 3 axes get displaced by the same amount and the displacement all ends up along a single diagonal.
If you want to displace one axis but not the others, you have to replace the unwanted channels with a 0.5 value. This can be done with an RGB Curves node (here flattening the Z/blue axis so it isn’t displaced).
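If it helps, here is roughly that part of the node tree as a script. It is written against the classic MixRGB node (on recent Blender versions you would use the Mix node set to Color with the Linear Light blend mode instead), and the noise scale, strength and flattened axis are just example values:

```python
import bpy

# Assumes the point density material from before; names are placeholders.
nt = bpy.data.materials["GasVolume"].node_tree
pd = next(n for n in nt.nodes if n.type == 'TEX_POINT_DENSITY')

coord = nt.nodes.new('ShaderNodeTexCoord')
noise = nt.nodes.new('ShaderNodeTexNoise')
noise.inputs['Scale'].default_value = 2.0        # example noise scale
mix = nt.nodes.new('ShaderNodeMixRGB')
mix.blend_type = 'LINEAR_LIGHT'                  # displaces around the 0.5 midpoint
mix.inputs['Fac'].default_value = 0.1            # displacement strength (example)
curves = nt.nodes.new('ShaderNodeRGBCurve')      # flattens blue (Z) so that axis isn't displaced

# Pin the blue channel of the noise to a constant 0.5
blue = curves.mapping.curves[2]
blue.points[0].location = (0.0, 0.5)
blue.points[1].location = (1.0, 0.5)
curves.mapping.update()

links = nt.links
links.new(coord.outputs['Object'], noise.inputs['Vector'])  # keep the noise stable in object space
links.new(noise.outputs['Color'], curves.inputs['Color'])   # Color, not Fac: one channel per axis
links.new(coord.outputs['Object'], mix.inputs['Color1'])    # original coordinates
links.new(curves.outputs['Color'], mix.inputs['Color2'])    # per-axis offsets
links.new(mix.outputs['Color'], pd.inputs['Vector'])
```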
I accidentally noticed that my method with curve displacement works, but only if “Persistent Data” is unchecked. Is there a way to select which data will be persistent during render and which won’t? I actually prefer the method with curve displacement, because it gives you the ability to see how the curve will behave during the animation without rendering it. But I don’t want to increase the render time by disabling “Persistent Data”.
I also tried your method, and for some reason, when I plug anything into the Vector input of the Point Density node, the volume just disappears. Initially I fully copied your settings, then I tried tweaking some things, or plugging just a Texture Coordinates node in there - nothing changes, it’s just not there unless I unplug everything from the Vector input of the Point Density node.
Persistent data has always been a bit flawed. Not that I blame the developers, there are just so many possible data changes in Blender. I guess this could be filed as a bug if you wanted.
Is everything truly the same? My point density material is set to world space (on the point density node). The object has all transforms applied, including location (this puts the pivot at 0,0,0). The source of the data is vertices and I am plugging the “object” coordinates into the vector.
I would first try plugging only the coordinates without the other nodes to see if they work. If you did everything correctly, plugging just the coordinates should change nothing in the volume.