Hey guys,
I’ve just had a crazy idea, and I would like some comments from people with more experience than me.
If I’m correct, meshes are not much more than arrays of vector3s. NURBS, on the other hand, are 3D functions based on splines that define the surface. This should make NURBS much more efficient at high resolution.
So why don’t we use this in a renderer? Characters, for example, have a lot of smooth and curvy parts, so it should be relatively easy to calculate a 3D function which can describe an entire character. Now, I see that this is probably hard to edit by hand, but if I had a scene which normally takes 3h to render, I could run a script to turn the mesh into a function and then render it just like a NURBS surface. The time saved in the render process (especially if rendered repeatedly) would probably outweigh the time it took for the script to turn the mesh into a function.
Of course I’ve done some research (Google, to be honest) on this, but the only thing that even relates to this kind of idea is a paper about a technique for fixing holes in a scanned model, and since such a scanned model consists of around 500k verts, the technique would take ~3 TB, which makes it unfeasible there. However, if we had a character of 10k verts that we’re likely to want to render with a lot of subdivision, and probably many times with trial and error, this may just be worth it.
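From what I understood of it (if it’s the paper I think it is), the technique fits a radial-basis-function implicit surface to the scan points, which is where the huge memory cost comes from: one weight per point, solved from a dense point-to-point distance matrix. Here’s my toy understanding of it in numpy; the sphere data and all the names are mine, not from the paper:

```python
import numpy as np

# Toy RBF implicit-surface fit: s(x) = sum_i w_i * |x - c_i|,
# with s = 0 on the surface and s = eps at points pushed out
# along the normals. A real scan would have ~500k points, and
# the dense matrix below is what blows up the memory.

rng = np.random.default_rng(0)

# fake "scan": 200 points on a unit sphere, plus their normals
pts = rng.normal(size=(200, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
normals = pts.copy()                        # on a sphere, normal == position

eps = 0.1
centers = np.vstack([pts, pts + eps * normals])   # on- and off-surface points
values = np.hstack([np.zeros(len(pts)),           # s = 0 on the surface
                    np.full(len(pts), eps)])      # s = eps just outside

# biharmonic kernel in 3D: phi(r) = r; dense N x N system
dists = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
weights = np.linalg.solve(dists + 1e-9 * np.eye(len(centers)), values)

def s(x):
    """The fitted implicit function; the surface is where s(x) = 0."""
    return weights @ np.linalg.norm(centers - x, axis=1)

print(s(np.array([1.0, 0.0, 0.0])))   # expect ~0   (on the surface)
print(s(np.array([0.0, 0.0, 0.0])))   # expect < 0  (inside)
```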
What do you think? I’m entirely optimistic that this is at least worth talking about, even if it may turn out to be a future prospect only.
Are you talking about converting a model to a single formula? Or do you mean converting a model to NURBS?
Because some time ago, I was actually wondering if it’s possible to create an equation that would define an entire universe at a specific point in time. Or one equation per dimension. How ’bout them apples!
@McBuff: entirely serious, I was talking about a single formula. Google this: 1.2+sqrt(1-(sqrt(x^2+y^2))^2) + 1 - x^2
and you’ll see what I mean (you need to use Google specifically, though; it isn’t the best example, but I can’t remember the more complex ones :S).
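If you’d rather not use Google, here’s the same “type a formula, get a shape” idea in matplotlib. I’ve simplified (sqrt(x^2+y^2))^2 to x^2+y^2 and clipped the square root so it stays defined everywhere, so treat it as my approximation of the formula above, not the exact thing:

```python
import numpy as np
import matplotlib.pyplot as plt

# plot z = 1.2 + sqrt(1 - x^2 - y^2) + 1 - x^2 over a small grid;
# clip() zeroes the root outside the unit disc instead of producing NaNs
x, y = np.meshgrid(np.linspace(-1.5, 1.5, 200), np.linspace(-1.5, 1.5, 200))
z = 1.2 + np.sqrt(np.clip(1.0 - x**2 - y**2, 0.0, None)) + 1.0 - x**2

ax = plt.figure().add_subplot(projection="3d")
ax.plot_surface(x, y, z)
plt.show()
```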
I think a universe may actually be possible, though making an entire universe is a bit of a stretch, since the universe is rather large. Furthermore, it’s probably simpler to create equations for planets with their centers where they’re supposed to be, rather than to find one formula which describes it all at once.
I think it’s more logical to work with the smallest building blocks there are: the energy that makes up atoms. It’s more a matter of simulation than of storage. You can get every point in the universe a planet passes through just by knowing the equation for its trajectory.
But I’m not trying to derail this for my own purposes. The idea of compressing a mesh into a formula is interesting, to say the least. But that formula would either take very few arguments and a lot of simulation, or use more arguments, in which case you’d be trading memory for performance.
If you have a function f(x, y, z), all you have is a function to fill a volume with, not one to render that object to a 2D image.
What you really need is a function to intersect a (view) ray with. Such a function would be complex for a complex shape and therefore would be expensive to evaluate.
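To make that concrete, here is what “intersecting a view ray with a function” looks like in the friendliest case I know of, sphere tracing a signed distance function (my own toy sketch, with a unit sphere standing in for the shape):

```python
import numpy as np

def sdf(p):
    """Signed distance to a unit sphere; a whole character would need a
    vastly more expensive function in this exact slot."""
    return np.linalg.norm(p) - 1.0

def ray_march(origin, direction, max_steps=128, eps=1e-5, far=100.0):
    t = 0.0
    for _ in range(max_steps):
        dist = sdf(origin + t * direction)   # one function evaluation per step
        if dist < eps:
            return t                         # hit: surface reached
        t += dist                            # safe to step this far
        if t > far:
            break
    return None                              # miss

o = np.array([0.0, 0.0, -5.0])
d = np.array([0.0, 0.0, 1.0])
print(ray_march(o, d))                       # ~4.0: hits the sphere at z = -1
```

Note how the cost is “function evaluations per step, times steps per ray, times rays per image”: everything hinges on how cheap that one function is.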
If all you’re doing is rendering very (mathematically) simple shapes, such functions will be more efficient, but in the general case, triangles win. There’s a reason we use them everywhere.
NURBS are actually more expensive to render than polygons and are tessellated (converted to triangles) at render time. NURBS were popular for organic modeling before we had subdivision surfaces, since they allowed the artist to work with high-resolution surfaces using a relatively small number of control points. NURBS patch modeling is a pain, however, and requires things like continuity managers. No one ever modeled a fully detailed character from a single NURBS patch. Subdivision surfaces fixed the control-point issue without all the downsides of NURBS, and are faster to render. It’s no surprise that since we’ve had SubD, hardly anyone misses working with NURBS.
“Characters for example have a lot of smooth and curvy parts, so it should be relatively easy to calculate a 3D function which can describe an entire character”
I think you are grossly underestimating the complexity of such a feat. The equation would be enormous and would have to be evaluated at every sampled point just to get a position; then you’d have to compare surrounding samples to get a normal. OpenGL viewport display would be incredibly slow, and I don’t know how you would go about UV-mapping or deforming such an object…
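To spell out the normal part, this is all “compare surrounding samples” means, and why it hurts: a central-difference gradient costs six more evaluations of that enormous function per shading sample (my own minimal sketch):

```python
import numpy as np

def normal(f, p, h=1e-4):
    """Approximate the normal of the implicit surface f = 0 at point p
    via central differences: six extra evaluations of f per sample."""
    grad = np.array([
        f(p + np.array([h, 0, 0])) - f(p - np.array([h, 0, 0])),
        f(p + np.array([0, h, 0])) - f(p - np.array([0, h, 0])),
        f(p + np.array([0, 0, h])) - f(p - np.array([0, 0, h])),
    ])
    return grad / np.linalg.norm(grad)

f = lambda p: np.linalg.norm(p) - 1.0          # unit sphere as a stand-in
print(normal(f, np.array([0.0, 0.0, -1.0])))   # -> [ 0.  0. -1.]
```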
Computers tend to enjoy static data, and polygons offer just that. Even NURBS models are essentially a collection of static points, and the NURBS “part” simply handles surface interpolation.
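In case it helps to see how little that NURBS “part” actually is, here’s a Cox–de Boor basis evaluation for a cubic B-spline curve (a curve rather than a surface, to keep my sketch short; the surface case just does this twice and takes the tensor product):

```python
def bspline_basis(i, k, t, knots):
    """Cox-de Boor recursion for the B-spline basis function N_{i,k}(t)."""
    if k == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + k] != knots[i]:
        left = ((t - knots[i]) / (knots[i + k] - knots[i])
                * bspline_basis(i, k - 1, t, knots))
    if knots[i + k + 1] != knots[i + 1]:
        right = ((knots[i + k + 1] - t) / (knots[i + k + 1] - knots[i + 1])
                 * bspline_basis(i + 1, k - 1, t, knots))
    return left + right

# four static control points, cubic, clamped knot vector
ctrl = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
knots = [0, 0, 0, 0, 1, 1, 1, 1]
t = 0.5
point = [sum(bspline_basis(i, 3, t, knots) * c[axis]
             for i, c in enumerate(ctrl)) for axis in (0, 1)]
print(point)   # -> [2.0, 1.5]; the control points themselves never change
```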
Why did I answer this way? 10+ years of experience developing render engines.
If you want to get an idea of the possible complexity, pick up one of the books written about NURBS, read it, and fully understand it. Then realize that all the math you just read covers only rectangular surfaces, and that you already have to stitch several of those together just to build a pyramid. Now imagine the complexity of a formula describing, let’s say, Suzanne.
Relatively easy as in: relative to something like an abstract mechanical mesh, in the sense of performance. Of course I don’t know how to do this with math (though I’m sure there are people who actually find this easy), but with some research it should be possible to write an algorithm which approximates the shape.
I do agree that the fact that it doesn’t exist yet often speaks for itself, but in this case there seems to have been pretty much only one research project about it (the one in the top post), and that project achieved success, even if a somewhat limited one.
@Zalamander
The surface thing is a good point. They did discuss it in the article, but to be honest, I couldn’t follow it at that point anymore. Maybe it makes more sense to you.
Another thought I had: what if we split the mesh into a lot of parts first, and then created a function for each? That way many of them could be very simple, which would save a lot of pre- and render-time calculation, i.e. both memory and performance. Since they no longer need to wrap all the way around, they could describe just a surface patch rather than a volume.
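Something like this is what I have in mind, per part: fit a simple height-field quadric to a handful of local vertices by least squares (a toy sketch with my own made-up data; a real pipeline would first have to segment the mesh and pick a local frame per patch):

```python
import numpy as np

# fit z = a*x^2 + b*x*y + c*y^2 + d*x + e*y + f to one patch of vertices
rng = np.random.default_rng(1)
x, y = rng.uniform(-1.0, 1.0, size=(2, 50))            # patch vertex x, y
z = 0.3 * x**2 - 0.2 * x * y + 0.1 * y**2 + 0.05 * x   # toy "curvy part"

A = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
print(coeffs)   # six numbers now stand in for fifty vertices
```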
As for the raycast intersection, that should be rather feasible. I mean, even I can take my calculator and work out the point of intersection between a vector and a plane. Furthermore, since the mesh would be broken into many parts, it would be possible to check only the relevant parts for intersection (e.g. by transforming from global to screen space or the like).
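For the plane case I mean (which really is calculator-level math):

```python
import numpy as np

def ray_plane(o, d, p0, n):
    """Intersect the ray o + t*d with the plane through p0 with normal n."""
    denom = np.dot(d, n)
    if abs(denom) < 1e-9:
        return None                     # ray is parallel to the plane
    t = np.dot(p0 - o, n) / denom
    return o + t * d if t >= 0 else None

o = np.array([0.0, 0.0, 5.0])           # ray origin
d = np.array([0.0, 0.0, -1.0])          # ray direction
p0 = np.array([0.0, 0.0, 0.0])          # a point on the plane
n = np.array([0.0, 0.0, 1.0])           # plane normal
print(ray_plane(o, d, p0, n))           # -> [0. 0. 0.]
```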
Note:
This is of course purely theoretical at this point. Just thinking about carrying UV space etc. over to this new technique is enough to give me shivers.
Describing a surface with a mathematical expression will remain useful only when that surface does not have a lot of high-frequency detail. As the detail increases all its advantages disappear. You quickly get to the point where adding a single new detail point adds more complexity to the formula than simply giving that detail’s explicit location.
We are basically doing that already, by putting the triangles into a BVH for rendering. That way you achieve logarithmic complexity, which means that as the number of triangles grows, the intersection cost increases only minimally (beyond a certain threshold).
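For reference, the test a BVH spends its time on is about as cheap as geometry gets; one slab test can reject an entire box full of triangles at once (my own minimal version):

```python
import numpy as np

def ray_hits_aabb(o, inv_d, box_min, box_max):
    """Slab test: does the ray o + t*d (with inv_d = 1/d) hit the box?"""
    t1 = (box_min - o) * inv_d
    t2 = (box_max - o) * inv_d
    t_near = np.max(np.minimum(t1, t2))
    t_far = np.min(np.maximum(t1, t2))
    return t_near <= t_far and t_far >= 0.0

o = np.array([0.0, 0.0, -5.0])
d = np.array([0.2, 0.1, 1.0])           # no zero components, so 1/d is safe
print(ray_hits_aabb(o, 1.0 / d,
                    np.array([-1.0, -1.0, -1.0]),
                    np.array([1.0, 1.0, 1.0])))   # True
```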
Whatever you have in mind is unlikely to be significantly faster than this (already simple and general) method.
The best way of thinking about this is to simplify the problem to just 2D.
Start with a kid’s line drawing, and it is certainly possible to convert it to vectors. But then start adding detail, like freehand painting. Now think about a real-world image taken with a camera. Just because it is theoretically possible to describe what you see using functions does not mean that it is practically feasible: any description of that real-world image would be more complex than the image itself. This is the same problem you’d see with arbitrarily complex 3D surfaces. Adding the next bit of detail adds more complexity to the function than the amount of detail itself.
I find this sentiment strange. A subdivision surface is expressible as precisely such a function, with the cage mesh as an input. Granted, it’s not a function that lends itself to direct ray intersections, but still, the heavy lifting of that feat has already been done for us by Catmull et al.
The same can be said about piecewise NURBS surface descriptions. Even triangle meshes could be expressed as a function.
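To spell out the triangle case (trivially, with my own toy example): each triangle is itself a parametric function over a 2D domain.

```python
import numpy as np

# S(u, v) = (1 - u - v) * A + u * B + v * C,  with u, v >= 0 and u + v <= 1
def triangle_surface(A, B, C, u, v):
    return (1.0 - u - v) * A + u * B + v * C

A = np.array([0.0, 0.0, 0.0])
B = np.array([1.0, 0.0, 0.0])
C = np.array([0.0, 1.0, 0.0])
print(triangle_surface(A, B, C, 1/3, 1/3))   # -> the centroid
```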
I mean, honestly, what do people think these kinds of surfaces are, if not mathematical descriptions (expressible as functions if you like)?
Unless I’m missing something here, and people are thinking of a very particular type of math function. Admittedly, you may have great difficulty modeling arbitrary shapes with nothing but a single (very large) polynomial function, and I doubt the interface to manipulating such a function would be intuitive for artists.
In any case, having said all of this, triangle meshes do tend to be faster to intersect than e.g. subdivision surfaces or NURBS. But the trade-off is accuracy and memory usage. Raytracing NURBS etc. directly saves a lot of RAM compared to rendering a tessellated triangle mesh, and moreover gives a perfect representation of a curved surface, so you will never see artifacts from a faceted approximation.
This is a really interesting paper on raytracing curved surfaces:
It’s interesting to note that some scenes in the paper actually render faster with the Bézier patches than with the tessellated triangle meshes.
He asked for a method to describe the entire surface of an already-existing model using a single complex function. He hoped that would make it render faster.
You can already use the Add Mesh: Extra Objects add-on, then choose Math Function.
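For instance, assuming the bundled Add Mesh: Extra Objects add-on is enabled (operator and parameter names from memory, and they may differ between Blender versions):

```python
import bpy

# add a z = f(x, y) surface mesh from a formula string
bpy.ops.mesh.primitive_z_function_surface(
    equation="sin(x) * cos(y)",   # the function to tessellate
    div_x=32, div_y=32,           # grid resolution of the generated mesh
)
```

Note that this still produces an ordinary mesh; the formula only drives the initial tessellation.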
I don’t think you can model an entire object with that, unless the object is very simple.