Why does Cycles (or any other path tracer) lack the ability to render vertices?

I thought I would ask this here because I may be able to get answers from people involved in Cycles development.

You all know Krakatoa. It has many features, but its basic one is that it renders points, just points, and it gives those points spherical normals so they receive light and cast shadows. That is the starting point; after that it has many, many features.
Now the question is: why is Cycles (or any other path tracer) incapable of rendering points in space? I'm not saying they should have the same feature set as Krakatoa, but being able to render a 2-million-particle system as points is major leverage. Yes, you can use an object and render the points as boxes, but that means a lot more to compute than if each were just a point with some properties.

So that's it: why is Cycles unable to render points?

Please refrain from the impulse to answer "do it yourself", because, while it would be great to have such a feature, I'm not asking for the feature itself; I'm trying to understand the complexity of implementing something like that, and why no one has offered an alternative to Cycles (the only one may be Mantra from Houdini, I think).

Cheers!

Why not use point density?

Yes, as Nirved pointed out, it's possible to render vertices inside a volume (an object with a volume shader). I guess it would be faster to render them as particles, like in BI, but there isn't such a feature yet.

>> why is Cycles unable to render points?

I think by nature Cycles needs "real" objects to render, and that's why you have to turn particles into cubes or spheres, or into a volume, like a kind of smoke.
To render only points would mean cheating, and that would be a "post-process" rendered separately, which means it would be hard to have particles reflected in other objects or casting shadows.
How does it work in Krakatoa? What happens when you render, let's say, smoke in Krakatoa plus geometry in Vray or Arnold? Does the renderer handle the smoke, or does Krakatoa render the smoke separately, on top of the Arnold render?
What happens then to reflections and shadows?

Moved from “Latest News” to “Blender and CG Discussions”

Doesn't that give you some kind of nebula? Or does it give you a sharp point representation? I may be confused about point density…

Regarding Krakatoa, usually you render the particles completely separately from the Vray or Arnold passes. On the other hand, you can render particles as points in Mantra, so it should be possible; the thing is that you have to give normal information to the particles (you being the developer, I mean, heheh). In that case it could be similar to having the particles as spheres, but without the geometry overhead of the boxes or the spheres, because there is no geometry at all. I'm not sure how the other ray interactions are solved, but they can be solved for sure, since Mantra does it. In fact, now that I think about it, Vray may also be capable of rendering points.

Bear in mind that Vray, Arnold, Mantra and Cycles are all ray tracers, while Krakatoa is a raster render engine. I'm not saying that Cycles or any other ray tracer can compete with Krakatoa face to face, but being able to render points as points may allow rendering particles more efficiently in some cases, I think.

Cheers!

Vertices by definition have no physical dimensions, although they have a position in space as defined by their X, Y and Z coordinates. A physically based renderer obviously cannot render something that lacks dimensions, and therefore has no surface, as is the case with vertices. You can get around it by using parenting and Blender's duplication feature in the object settings, e.g. using an icosphere as a child of the vertex cloud.
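A minimal sketch of that dupliverts setup in Python, assuming the 2.7x API and placeholder object names:

```python
import bpy

# "PointCloud" and "Ico" are placeholder names for the vertex cloud
# and the icosphere; adjust them to your scene.
cloud = bpy.data.objects["PointCloud"]
ico = bpy.data.objects["Ico"]

# Parent the icosphere to the cloud, then duplicate it on every vertex.
ico.parent = cloud
cloud.dupli_type = 'VERTS'
```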

But in this case you get massive overhead from the geometry, am I wrong?

Nope, if you use instancing.
To answer your original question: afaik path tracers basically render triangle faces, which are the minimal requirement for information such as surface and normal. Volumetrics are a different beast, but they again rely on geometry defined by triangles.
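To get a feel for why instancing kills the overhead, here is a back-of-envelope sketch; the numbers are my own illustrative assumptions, not Cycles measurements:

```python
# One shared mesh plus a transform per particle vs. a full mesh copy
# per particle. Figures are illustrative assumptions only.
particles = 2_000_000
ico_verts = 42                # icosphere with 1 subdivision
bytes_per_vert = 12           # one float3 position per vertex

# Without instancing: every particle carries its own copy of the mesh.
unique = particles * ico_verts * bytes_per_vert

# With instancing: one mesh plus a 4x4 float matrix per particle.
instanced = ico_verts * bytes_per_vert + particles * 16 * 4

print(f"{unique / 1e9:.2f} GB unique vs {instanced / 1e6:.0f} MB instanced")
```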

And how big should a vertex be drawn?
My thought: a vertex (a point) is infinitely small in all three coordinates, so there is nothing to show. A segment formed by two connected vertices remains infinitely small in at least two coordinates, so still nothing to show. A triangle (a face) is the smallest figure that can be shown, since its dimensions are not infinitely small in at least two of the coordinates.

So my guess is that any program that shows the user a vertex is not really showing a vertex, but something associated with that vertex.
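To formalize that a bit (my own framing, not from the posts above): a ray r(t) = o + t·d only hits an isolated point p if p lies exactly on the ray,

```latex
\exists\, t > 0 : \mathbf{o} + t\,\mathbf{d} = \mathbf{p}
```

and for ray directions sampled from a continuous distribution that event has probability zero, so a dimensionless point is effectively invisible to a ray.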

But then, how do Mantra or Vray render points? It also seems that I was wrong about Arnold; it can render particles as points. Check this:

The thing is that, like everything, it can be tricked. So do you think we would be able to render a 25-million-particle simulation in Cycles?

Here is an example of what I mean, and yes, it's required as points, not as a volume; think of sand as an example.

On the other hand, how can you use instancing? Isn't this done automatically when you choose to render particles as an object?

Cheers.

Isn't he after something like RiPoints (https://renderman.pixar.com/resources/RenderMan_20/appnote.18.html)?
If so, I guess it's more than a fair question.

L.

That is what Krakatoa does: define the point size. The size is defined by a parameter and is directly related to the resolution at which it is being rendered, so rendering 1 million points at 720p is not the same as rendering the same million points at 1080p; you need more points at 1080p to achieve the same density and visibility. So a point can usually be understood as a pixel, but with a lot of "buts" behind that: you can define how it looks in size and also in shading, because you can define its translucency, its opacity and several other things. That is why Krakatoa is so powerful.

In the end, a point is a pixel on the screen with a normal and a position in space.
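Something like this sketch; the names and the pinhole-camera assumption are mine, not Krakatoa's:

```python
import math

# Project a world-space point to pixel coordinates, keeping its depth.
# Pinhole camera looking down -Z, vertical fov in radians.
def project(point, fov_y, width, height):
    x, y, z = point
    if z >= 0.0:
        return None                                # behind the camera
    f = (height / 2.0) / math.tan(fov_y / 2.0)     # focal length in pixels
    px = width / 2.0 + f * x / -z
    py = height / 2.0 - f * y / -z
    return px, py, -z                              # pixel coords + depth

# Splat each particle into its pixel, keeping the closest depth:
# a minimal z-buffer of points.
print(project((0.1, 0.2, -5.0), math.radians(50), 1920, 1080))
```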

Cheers.

More or less, with the exception that RiPoints can't be shaded while Krakatoa points can, but it's more or less that. :slight_smile:

Cheers.

What's most likely done in those renderers is that they pretend that the point is a perfectly spherical mesh. A sphere only needs a center point and a radius in order to perform the necessary operations with the light rays; something like the sketch below.
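A minimal sketch of that ray-sphere idea, not actual renderer code:

```python
import math

# Intersect a ray o + t*d with a tiny sphere (center c, radius r).
# It is just a quadratic, and the shading normal comes for free.
def hit_sphere(o, d, c, r):
    oc = [o[i] - c[i] for i in range(3)]
    a = sum(d[i] * d[i] for i in range(3))
    b = 2.0 * sum(oc[i] * d[i] for i in range(3))
    k = sum(oc[i] * oc[i] for i in range(3)) - r * r
    disc = b * b - 4.0 * a * k
    if disc < 0.0:
        return None                                 # ray misses
    t = (-b - math.sqrt(disc)) / (2.0 * a)          # nearest root
    if t <= 0.0:
        return None
    p = [o[i] + t * d[i] for i in range(3)]         # hit position
    n = [(p[i] - c[i]) / r for i in range(3)]       # unit normal
    return t, p, n

print(hit_sphere((0, 0, 0), (0, 0, -1), (0, 0, -5), 0.01))
```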
As far as I know, no one has implemented it in Cycles, which is why it doesn't exist. It is a rather exotic use case, and that is most likely the reason.

I know that the main reason is that no one has implemented it, hehehe, that's obvious :wink:

But I don't agree with the assumption that it's an exotic use case. Krakatoa tech is used for rendering hair very fast and very efficiently; points can be used to render foam and any other kind of particle-related effect, and can be used to render point clouds obtained with photogrammetry or lidar scanners. It can be used to render some types of more solid smoke, particles in the environment, sand, snow, etc. IMHO it's not exotic at all; it's used a lot in animation, FX and advertising, and now it's also being used in viz to add FX that are more and more requested by clients, like the already-mentioned water with foam.

Cheers.

EDIT: check this:

>> More or less, with the exception that RiPoints can't be shaded while Krakatoa points can, but it's more or less that. :slight_smile:

>> Cheers.

True indeed; then RiSphere could be a better alternative :wink:
https://renderman.pixar.com/resources/RenderMan_20/proceduralPrimitives.html

BTW Fweeb, sorry, I thought I was posting in Discussions; my mistake.

Cheers.

Yes, maybe. I don't know the internals of RenderMan. The thing here is that points are represented by pixels with normal and depth information; this is the starting point, and after that you can add bells and whistles on top of those points :slight_smile:

Cheers.

Lots of incorrect information in this thread.
First of all, you can't raytrace a point. No ray will ever, EVER, hit a point. Mantra, Vray, and maybe Krakatoa are not pure path tracers and might have some rasterization element to them, and as such can render points. Normally they'd make the points x number of pixels in radius. RenderMan, using RiPoints in Reyes, used screen-aligned disks; when they switched to a path tracer with RIS, I believe they use analytic spheres (not polyspheres) for points. For the record, RiPoints can be shaded.

Ray tracers/path tracers sometimes trace against analytic geometry, like a sphere, which has a mathematical formula used for intersecting a ray, rather than tracing against polygons. Anyway, the point is that Cycles and other path tracers tend to use tiny, tiny spheres for points, either as instanced polygon spheres, which take practically no memory, or as analytic spheres. This is the correct way to do it.

(Long answer: basically what Dantus said.)
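To make the "x number of pixels in radius" part concrete, here is a sketch under my own assumptions (pinhole camera, vertical fov in radians):

```python
import math

# Convert "this point should be N pixels wide on screen" into a
# world-space sphere radius at the particle's depth.
def world_radius(pixel_radius, depth, fov_y, image_height):
    frustum_h = 2.0 * depth * math.tan(fov_y / 2.0)  # frustum height at depth
    world_per_pixel = frustum_h / image_height
    return pixel_radius * world_per_pixel

# A 1-pixel point 10 units away, at 1080p with a 50 degree fov:
print(world_radius(1.0, 10.0, math.radians(50), 1080))
```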

I don't understand why you say there is lots of incorrect information; in any case, what you described is what I've been saying, just that I did not know how it should be done. Using an analytic sphere is like giving the point normal information, except that the sphere should have a maximum size of 1 pixel in screen space, so that is what I was referring to. It's great to know that it's possible to solve this with a path tracer; no matter how a scanline render engine does it, what matters is how this could be solved in Cycles :slight_smile:

Now the question is whether using an analytic sphere could be faster than using geometry. I assume it's easier for the engine to predict the sphere's behaviour, but I don't know if that makes sense.

Cheers!