Unlimited detail, too good to be true?


What do some of the more technical members of the community think: is this coming soon, or are there hidden problems?

“Unlimited” is nonsense of course, unless it is fractal-based or something, but that would be kind of limiting artistically. To me this just looks like an ordinary voxel engine and some hype.

The video is disappointing - it’s blurry!

I think “unlimited” is just referring to the company name; the actual limits could be screen resolution or art-input complexity. As far as I know, point cloud rendering is different from voxel rendering, and although point cloud rendering has been done before, it is being done differently here from previously tried techniques. It looks like they’ve traded processing for memory in terms of requirements.
It certainly seems hyped up, but as far as I can see on the website they’re not trying to sell to the general consumer.

Isn’t this about the 2nd or 3rd time this has popped up recently? Search (a useful tool) for the earlier posts to find what people thought of it, rather than having to repeat it all again.


Unlimited does not exist in reality (a.k.a. physics).

My bad, Richard, I hadn’t seen those, and I confess I normally only search first if I’m looking for an answer.
True, but the physics normally uses a simplified, unrendered mesh anyway.

It’s not actually unlimited. It’s just, like he said, enough to fill your monitor. At a certain point (no pun intended), you have enough detail on screen to be identical in appearance to reality. Because reality has infinite detail, you effectively have unlimited detail. Our eyes/monitors just can’t pick up all that subtlety.


Unlimited refers to the amount of detail.

The limit for displaying pointcloud data is obviously a pixel on your screen.
Let’s take a house.
From far up in the sky, the house is a pixel.
Get closer and one brick is a pixel.
Get closer and the mortar joints become a line of pixels and the bricks get bigger.
Get closer and the stones in the grout between the bricks each get a pixel.
Get closer and the grains of sand between the stones in the grout each get a pixel.
Get closer and there is a bacterium the size of a pixel sitting on a grain of sand.
Get closer and the eye of the bacterium is the size of a pixel.

“Infinite” detail.
Try that with a mesh, or with textures. Only pointclouds can do that, and with an optimized octree parsing the cloud for the minimum of information needed, it works just fine.

The only limit is the time to create geometry and the storage space available.
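To make the octree idea concrete, here’s a minimal sketch (Python; all names are hypothetical illustrations, not Unlimited Detail’s actual code): descend the tree only until a cell projects to about one pixel on screen, then emit that cell’s averaged color as a single splat. Deeper detail is simply never touched, which is why the cloud’s size barely matters at render time.

```python
import math
from dataclasses import dataclass, field

@dataclass
class OctreeNode:
    center: tuple        # (x, y, z) of this cell
    half_size: float     # half the cell's edge length
    color: tuple         # averaged color of all points inside the cell
    children: list = field(default_factory=list)  # up to 8 child cells

def collect_visible(node, cam_pos, fov_y, screen_h, out):
    """Descend only until a cell projects to roughly one pixel."""
    dist = math.dist(node.center, cam_pos)
    # Approximate on-screen size of the cell in pixels.
    pixels = (2 * node.half_size / max(dist, 1e-6)) * (
        screen_h / (2 * math.tan(fov_y / 2)))
    if pixels <= 1.0 or not node.children:
        out.append((node.center, node.color))  # one splat is enough
        return
    for child in node.children:
        collect_visible(child, cam_pos, fov_y, screen_h, out)
```

From far away the whole house really is one point; move the camera closer and the same call simply recurses one level deeper.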

One of the comments on the PopSci page has a valid point: most of the scene is the same object repeated over and over again.

Why not combine this with algorithms for variation to make things really pop, as in every detail looks different throughout the scene, or would that be way too much to ask of today’s computers?
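One cheap way to get that variation without asking much of today’s computers is to derive it deterministically from each instance’s id, so repeats of one object look different while storing no extra geometry at all. A rough sketch (Python; the function and parameter names are made up for illustration):

```python
import random

def instance_variation(instance_id, base_scale=1.0):
    """Derive stable per-instance tweaks from the instance id alone,
    so repeats of one object render with different scale, hue and
    rotation without storing any extra data per copy."""
    rng = random.Random(instance_id)  # same id -> same variation, every frame
    return {
        "scale": base_scale * rng.uniform(0.9, 1.1),
        "hue_shift": rng.uniform(-0.05, 0.05),
        "rotation_deg": rng.uniform(0.0, 360.0),
    }
```

Because the tweaks are recomputed from the id, the variation is free at the memory level and stable from frame to frame.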

As with such details in polygons, once procedural data (used for geometry, textures, and displacement) really starts being combined with hand-made geometry and textures, game studios can save a lot on file size (so games don’t go over 10 gigs, for example). Levels in the BGE, for instance, can already take advantage of similar file-size-saving techniques (geometry-wise) by using modifiers (like combining Subsurf with procedural displacement), and 2.5 BGE levels can save on file size in other ways thanks to new modifiers like Solidify and Spin.

Comparing pointclouds to polygons is like comparing circles to triangles.

Yes, you will get more consistent details over changing distances with pointclouds, but the CPU will always have other tasks to contend with, like physics, networking and logic.

Polygons will persist in any computer of the future, because they are a good tradeoff. People can look at polygons and not find them too unbelievable, and computers can fill them very, very quickly.

There is always more work to be done elsewhere.

Pointclouds are merely another way of representing geometry; they have differing behaviours in rendering, but they are not a magical solution to any particular problem. Polygons trade the quality of the shape for rendering speed, and there is no way to disregard this. Our processing capabilities will always ultimately be limited, and polygons are a cheaper way to fill pixels, so long as they are larger than a single pixel.

I’d prefer some form of global illumination before something that provides perfect level-of-detail. And let’s be honest, who really gives a stuff about LoD? You’re not going to pause firing bullets at someone because a far-off palm tree suddenly struck you as being rather fetchingly rendered. (Future gamers can one-up me if this is so, however)


I have this horrible feeling of having heavily contributed to the entropy in the universe.

The trend toward manycore systems laughs at that. Hexacores are already on the march for 200 USD, octacores are starting to appear, and 12-cores are soon to be released.
You´ll have plenty of cores left for stupid physics, network code and game logic.
Rasterization and polygons were only invented because computers lacked the computing power.
I´ve been there when OpenGL was invented, hardware shaders were introduced, and all that stuff to catch up with the lack of computing power.
And it was done before; see games like Comanche, or what was it called? Outcast or something… but the computers were not fast enough to handle it.

And it might baffle you, but you can actually use 3D graphics for things other than games =)

Take medical applications, like visualizing a CAT scan of the brain in real 3D for the surgeon before a brain tumor surgery. Those have used pointclouds for quite some time, and the limit of the geometry should be the resolution of the scanner, not the polygon resolution of a mesh.
Or visualizing data from an electron microscope, which can do 3D measurements too. Zoom from the bacteria swarm seamlessly into the molecular structure of a bacterium?

Personally I think those people appreciate a good and fast “voxel engine”, and I also think it has a bright future.
Look what happened: bump maps to increase the detail, normal maps to further increase the detail, and what is happening at the moment? DX11 with hardware tessellation to increase the detail…

I am looking forward to seeing voxels rise from the ashes again…

This whole infinite level of detail malarky is probably best suited to scientific and medical applications, to be perfectly frank. Whilst games and movies will always want gorgeous visuals, I don’t think anyone will ever want to see that much.

Anyway, if you wanted perfect surfaces you could always use mathematically defined curves and vectors and evaluate them to get the level of detail you require, surely?

I’d loathe having to manipulate a cloud of points.

Don’t like the high detail sculpting workflow? :confused:

IMO (overproudly considering myself an artist), I would be really enthusiastic about tools enabling me to model any sort of thing with total freedom over detail complexity… :yes:
…Only wonder HOW it all gets modeled and textured…

...Only wonder HOW it all gets modeled and textured...

And how will it be animated/rigged/skinned? Setting corrective shape keys or facial expressions on point clouds doesn’t seem very controllable.
I believe polygons will remain. I think the thing with polygons is not just about detail, memory and CPU… they are useful to work with. They give a controllable structure to the shape of the objects.
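For what it’s worth, the usual skinning math (linear blend skinning) is per-vertex and doesn’t actually require polygon connectivity, so it applies to raw points just as well; the real problem is the authoring and control side. A minimal sketch (NumPy; function and parameter names are hypothetical):

```python
import numpy as np

def skin_points(points, weights, bone_mats):
    """Linear blend skinning: each point is transformed by the weighted
    sum of its bones' 4x4 matrices, the same math games use for polygon
    vertices, applied here to a bare point cloud.
    points:    (N, 3) rest positions
    weights:   (N, B) per-point bone weights, each row summing to 1
    bone_mats: (B, 4, 4) current bone transforms
    """
    homo = np.hstack([points, np.ones((len(points), 1))])   # (N, 4) homogeneous
    per_bone = np.einsum('bij,nj->nbi', bone_mats, homo)    # (N, B, 4)
    blended = np.einsum('nb,nbi->ni', weights, per_bone)    # (N, 4)
    return blended[:, :3]
```

So the deformation itself is fine; what’s missing without a mesh is the structure artists use to paint weights and judge the surface while posing.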

What most people seem to be missing is that they claim to be able to convert from high-poly polygon models and retain animation data.

Well, sorry to say, but 3D artists in some areas of expertise would become obsolete.
Sub-D modellers? Mostly obsolete.
Texture artists? Mostly obsolete.
Normal maps? Bump maps? Tessellation? Obsolete.

Since with pointclouds you have no need to optimize geometry or save on data, many peeps would be jobless or have to re-educate themselves.

If you need a car for a game, you simply 3D scan it. You don’t even need to texture it.
Goes for many real-life props.

The only thing it is interesting for is sculpting. The artist can decide the minimum and maximum detail he wants on a sculpt, limited by the amount of data storage, not by the graphics card’s limited power to display polygons.

My guess is, it will be the era of scanning objects and altering them via sculpting, or merging pointclouds. Need a minotaur? Get a scan of a bodybuilder and a bull, and happy merging and sculpting =)

Let´s speculate a bit within the boundaries of what is technically doable at the moment:

I could imagine, though, that there will be polygonal modellers that deliver base meshes, for a house for instance, which get converted into a pointcloud at a selected LOD, which is no problem at all. Then the sculptors need extensive tools with tons of brushes, and they just fix the house up like you would in real life. Walls need a plaster brush. Plaster needs color? Just paint it in the 3D viewport. You need grass outside the house? Grass brush, generating grass from a seamless patch of a grass 3D scan, randomized of course. Material properties? Directly linked with the texture. No more material per polygon, but material per point. Plaster consists of sand and stuff? Plaster brush to the rescue: select a color, and the material properties stored in the brush get written into the points of the wall as you paint along. Generic photorealism by numbers.

It´s a great tech, which needs a different type of artist. So some will adapt, some will die out. It´s like the guy lighting the gas street lamps having to go work in a light bulb factory :wink: A shift of professions.

Personally I think economy and conspiracy :slight_smile:
There were many, many good approaches to pointcloud engines over the years. They all disappeared completely, purged from the earth after they hopped once through the eZines. I guess everyone has his price, and the big graphics card producers make sure to buy the technology and put it someplace safe. Algorithms to parse pointcloud data are surely buildable in hardware too. Who knows, maybe the graphics card of the future is a high-end data analyzer. But why slaughter the cow as long as she gives milk?

I was thinking more of a real-time animated environment. What I see written on their page is that they can convert polygons to this new technology. I understand that this way the animation can be kept (baked?), but what I can’t see yet is how a character can be animated in real time, as in a game.
After the character is converted from polygons, I don’t see how bones can accurately drive point clouds if those aren’t based on some kind of structure the way polygons are. At least I didn’t see that in the videos or the site’s explanations. I don’t believe they will convert high-poly objects to point clouds every frame, so I guess we will have to wait and see how they solve that.

I think it’s not an either-or situation, I guess it could be a mix of polys and this new technology.

<quite a bit off-topic>
Arexma, I’m also sometimes concerned about how technology can tell us from one day to the next that we don’t have a job anymore, but I always remind myself that this is an artistic kind of job: if you want “a Sintel”, there is no “Poser” or “Terragen” that will make her the way you want her to be.
The feel of a specific character or a specific world will, I believe, need a human artist behind it for a long time to come. There will surely be a lot of automation in the future, but it seems to limit itself to known/generic things randomized in some way. I think we are way too far from replacing the creativity that gets to the heart of people.
</quite a bit off-topic>

without this, blender is a totally unusable program :frowning: