Discussion: Euclideon Technology

There’s a relatively new company called Euclideon that is going around boasting “infinite geometry” technology. It says it has created the tech to obsolete polygons, instead using “atoms” (voxels, point clouds) to render meshes. Furthermore, it says it can support an infinite amount of detail and all in real time.

Supposedly, the Australian government has granted the company 2 million dollars to continue its research and development.

I’m a bit skeptical. Very skeptical. I’ve wanted this technology for so long, but now that it has “arrived,” it just sounds too good to be true. The processing power required for this would be tremendous; it just does not sound remotely possible on today’s hardware. The company itself is very hush-hush: it says the tech is already in working development, but refuses to show the public enough proof for credibility, saying it would prefer to wait until the product is finished before proving anybody wrong.

I know many of you guys here at BA are game industry professionals. Thoughts on this technology? Do you think it is fraudulent?

Meh. That is about a game engine, and this is the game engine forum, so I’ll post both places and cover all of my bases. (Rhymes, people, rhymes make the world go round. LOL).

As to the original post, it’s interesting, but I don’t like that they seem to just ‘show up’ with this technology and say, “Hey, look what we can do! Something revolutionary that will change the way games over the world are played and rendered! … Okay! See you in a year!” - They disappear more than they should to have something so ‘amazing’. There must be a catch. In addition, I don’t see why this is necessary - can you imagine the amount of time it would take if you could model something that could be as many polygons as you wanted? Games already take years to make, even with polygon budgets - if people could spend a whole day on a leaf, it would take a ridiculous amount of time.

EDIT: Oh, and saying that you have a technology that can give ‘Unlimited Detail’ without explaining how or why it’s taken so long until now for someone to think of it doesn’t sound so legitimate.

EDIT 2: So, if game devs are going to release games on time with this technology, they’ll make games faster and end up completely not using the ‘unlimited detail’.

The polygon budget is one of the main reasons production takes so long. It takes a lot of time and skill creating an optimized, low-poly mesh, and most of the time there exists a high-poly version first. What this technology proposes is that you just use that high-poly model, without hand-optimizing. So yes, it would save time.

I’m not convinced animation and physics are solved with this octree rendering, so a playable game is a long way off; but the fact that John Carmack wants to explore this in a few years gives me hope.

The video does look fishy, like one of those free energy scams (we just need a little more money for research…), but whatever the deal is with this company, the technology seems to be possible.

I don’t know. I read on the other thread above that it works by screen resolution, I think, which sounds pretty interesting - rather than rendering thousands of faces, just render 1 pixel for each pixel in the screen - that would be faster. Perhaps time would be saved in this case.
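To make that idea concrete, here’s a toy sketch of how I imagine it could work (entirely my own guess, not Euclideon’s actual algorithm): for every screen pixel, search front-to-back through the voxel data and stop at the first occupied cell, so the work scales with the number of pixels, not with scene complexity.

```python
# Hypothetical "one point per screen pixel" sketch: march each pixel's
# (orthographic) ray front-to-back through a voxel grid and stop at the
# first occupied cell. Everything behind it is occluded for free.

def render(grid, width, height, depth):
    """grid[z][y][x] is True where an 'atom' exists."""
    image = []
    for y in range(height):
        row = []
        for x in range(width):
            colour = 0  # background
            for z in range(depth):          # front-to-back search
                if grid[z][y][x]:
                    colour = depth - z      # crude depth shading
                    break                   # first hit wins
            row.append(colour)
        image.append(row)
    return image

# Tiny 2x2 screen, 3 voxels deep, with one atom at z=1 under pixel (0, 0)
grid = [[[False, False], [False, False]],
        [[True,  False], [False, False]],
        [[False, False], [False, False]]]
print(render(grid, 2, 2, 3))  # [[2, 0], [0, 0]]
```

Obviously a real renderer would use a perspective camera and a hierarchical structure instead of a brute-force grid, but the per-pixel-search principle is the same.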

I remember seeing the original demo video, and while the theory behind how it works sounds good on paper, from what little I know about it there would seem to be a massive strain on the processor(s) calculating that many particles (or, from my understanding of the video, calculating what’s visible and what is occluded on such a large scale). So much so that, unless they have the best mathematicians and programmers in the world (possibly the universe), their algorithms don’t seem capable of scenes of such epic proportions on current affordable hardware… But I am neither a programmer nor a mathematician, so if anyone cares to prove me wrong, I’m all ears =P

So after I hastily posted this, I went back to the other thread and did some reading. I came across this one-time interview with Euclideon’s founder: http://www.youtube.com/watch?v=JVB1ayT6Fdc&feature=player_embedded#!

It’s a 41 minute video, but it did have a way about it that lessened my doubts… Not completely mind you, but there was an answer for every major doubt that I had in my mind.

Apparently the tech is all code; they are not even tapping the potential of the processor or GPU as of yet. The fps is about 20–25 for that demo clip on a decent computer (they give the specs). They say they are not even raytracing – that they’ve created a new algorithm for lighting. I dunno, the whole thing is fantastic; it’d be one awesome thing to have, for sure… I’ve been dreaming of something like this for a great length of time now. Heck, I was/am excited for Bmesh – something like this would obsolete polygons.

I was actually curious to know about the game developer’s perspective on it, thanks for the input everybody.

I saw this a while back. I think this project has been under development for years in Australia; the government recently granted 2 million dollars to the developers. Some time ago people were saying it was fake, but to me it seems quite plausible. They call the structures that replace polygons “atoms,” if I remember right.
Well, I think we’ll have to wait at least a year to see such a thing running on our computers, but this technology is very interesting.

Around minute 32 of that interview video they focus a little bit on real objects being scanned in to be used as 3D models. That reminds me of the old, traditional Oddworld games: the team did not model the characters in any 3D modelling tool; instead they formed the characters with real clay (or something similar), because, they said, when you model an organic shape by hand with real materials you get a much better feeling for creating a believable shape. Of course, reducing these hand-modelled shapes to low-poly models takes away quite a lot! So the Euclideon technology might indeed make it possible not only to model characters with real materials and scan them in, but also to retain their shapes practically perfectly, and then I could imagine that modelling things by hand might even become pretty common. I would find that beautiful, because every time I see the original models of Oddworld I am stunned – they look even better than in the games, and with the Euclideon technology they could look as perfect as in real life! : D

And yes, I am a Believer!

It looks good, but this is just shifting the problems of game creation: scanning in a rock is easy, but it will prove to be very hard to make large structures without spending years on the details. In order to get the best out of this engine you would need a procedural detail generator to automate the ‘greeble’ process to make all this worthwhile.

And how would these models be made, out of interest? Most commercial 3D suites deal in polygons and not voxels. The company would have to make pretty robust conversion tools and importers too.

I don’t mean to sound harsh, but please watch the videos before saying something imprudent. They have already made a polygon converter, precisely because of what you said – polygons are the most common way to make models. Besides, a large structure does not need to be modelled large: think of the good old movies, when miniature models were widespread. And procedural greeble generators are nothing special: just use Blender’s Displace modifier with a texture option (for example a noise texture) and there you go with greeblesomeness.
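In fact, the core of that displace-with-noise trick fits in a few lines of plain Python (the noise function here is my own throwaway hash, nothing to do with Blender’s internals): displace each point of a flat grid by a deterministic pseudo-random “texture” value.

```python
# Toy greeble generator: a flat height field displaced by cheap
# deterministic value noise. Same idea as Displace modifier + noise
# texture, stripped to the bone.
import math

def noise(x, y):
    """Cheap deterministic pseudo-noise in [0, 1) (illustrative only)."""
    return math.sin(x * 12.9898 + y * 78.233) * 43758.5453 % 1.0

def greeble(size, strength=0.5):
    """Return a size x size height field of displaced z values."""
    return [[noise(x, y) * strength for x in range(size)]
            for y in range(size)]

heights = greeble(4)
# Every height lands in [0, strength), and the same seed coords always
# produce the same bumps -- handy for reproducible detail.
assert all(0.0 <= h < 0.5 for row in heights for h in row)
```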

This just means that the workflow for a large model has grown another step, and made asset building that much more expensive, time consuming, and for some studios, uneconomical. Ironically, if this tech is successful, it may force some to regress to simpler forms just for cost, since studios would have to either protract development time, or hire more artists.

I will be interested to see if people like John Carmack use this method for future iD Tech engines or if it makes its way into others like the Unreal engine (or even middleware like Unity).

Also, automatic greebles are fine, but you really have to have a good handle on art direction, otherwise it all begins to look false. I have seen many large models fall down on minor macroscopic details. The most difficult skill in computer graphics is creating a sense of scale, especially in video games where resources are finite – as more detail is added, lighting becomes more important, and it just adds to the complexity of said assets. Simply using displace and noise will not suffice.

Plus, some people don’t always have time to watch long videos (such as myself having to look after two children). You could have quite easily said this question was answered there by Euclideon, rather than being ‘harsh’.

Yeah, I’m not sure of how much faster or better this would be. I mean, another console iteration and we’ll have almost perfect graphics. This isn’t PS1 / Sega Saturn days when just 3D is amazing - realistic graphics are the norm now. Would this technology look significantly better than graphics now?

Edit: I, too, would like to see what a game engine would have to say.

It seems to me that it’s always the next generation of consoles that’s good enough. There’s lots of room for improvement.

Edit: I, too, would like to see what a game engine would have to say.

Here you go, Carmack three years ago (source):

We’re working on our RAGE project and the id Tech 5 code base but I’ve been talking to all the relevant people about what we think might be going on and what our goals are for an id Tech 6 generation. Which may very well involve, I’m certainly hoping it involves, ray tracing in the “sparse voxel octree” because at least I think I can show a real win. I think I can show something that you don’t see in current games today, or even in the current in-development worlds of unique surface detail.
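For anyone curious, the “sparse voxel octree” Carmack mentions can be sketched roughly like this (my own toy layout, not id’s actual code): each node covers a cube of space, and only occupied children are ever allocated, which is what makes it “sparse” – empty space costs nothing.

```python
# Minimal sparse voxel octree: children are stored in a dict keyed by
# octant index 0..7, so untouched octants are never allocated.

class Node:
    __slots__ = ("children",)
    def __init__(self):
        self.children = {}   # octant index 0..7 -> Node

def insert(root, x, y, z, depth):
    """Insert a voxel at integer coords into a tree `depth` levels deep."""
    node = root
    for level in range(depth - 1, -1, -1):
        # Build the octant index from one bit of each coordinate.
        octant = (((x >> level) & 1) |
                  ((y >> level) & 1) << 1 |
                  ((z >> level) & 1) << 2)
        node = node.children.setdefault(octant, Node())
    return node

root = Node()
insert(root, 5, 0, 7, depth=3)   # one voxel in an 8x8x8 volume
print(len(root.children))        # 1 -- the seven empty octants cost nothing
```

Ray tracing into such a tree skips whole empty octants at once, which is why people think it could scale to “unlimited” detail.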

This will make asset production cheaper, not more expensive (see my post above).

Ironically, if this tech is successful, it may force some to regress to simpler forms just for cost, since studios would have to either protract development time, or hire more artists.

Again, I think the opposite is true. But studios protracting and / or being creative with lower fidelity will not necessarily be a bad thing. Studios do this today, a lot of the industry makes its living on less-than-AAA games.

ETA: Machinarium, Katamari, Owlboy, Limbo are a few examples of how to kick triple-A-hidef games’ ass with cheaper art but more style.

Also, automatic greebles are fine, but you really have to have a good handle on art direction, otherwise it all begins to look false. I have seen many large models fall down on minor macroscopic details. The most difficult skill in computer graphics is creating a sense of scale, especially in video games where resources are finite – as more detail is added, lighting becomes more important, and it just adds to the complexity of said assets. Simply using displace and noise will not suffice.

Yeah, this tech would have its own set of challenges, but thankfully it would still be down to good direction and a compelling vision to make a visually interesting game.

And frankly, I’d rather see any added horsepower go to larger worlds, more interesting simulations, better AI. Maybe voice synthesis so an RPG like Fallout wouldn’t be constrained by the limits of the lines they’ve recorded? Hey, I can dream…

ThatSoundAgain: It’s interesting that Carmack has this technology in mind; perhaps in id Tech 7 we may see something like this.

I think we are coming at the same question from opposite sides re optimization. These days optimization is not that important, as models are just built in a certain way (new tech like DX11 tessellation helps, as it negates the need for normal maps and thus detail – although, again, Carmack is not convinced DX11 is going in the right direction; he always has a soft spot for OpenGL!).

I suppose the level of work would depend on the game, but an FPS or RPG would make heavy use of art assets that are close up to the player and under constant scrutiny. I think it’s still a case of art overload if every object has to be modelled high poly – detail still needs to be modelled. Take for example this helicopter (image from Wikipedia):

Now that we could model everything exactly, greater care must be taken in making sure the details are correct (look at the high-poly forums and follow people making cars and planes from blueprints: it takes ages to get them to that level of detail). Although you don’t have to optimize it, you still have to build it! Take the latest Gran Turismo as an example – the development time mostly went on modelling, I believe.

But hey, time will tell. If this technology takes off, it will be interesting to see how the skills mix in game studios changes – whether low-poly artists become a thing of the past (assuming this tech can be transferred to mobile devices and browser-based players, which I think are the future of games platforms). Like you suggest, will studios even use the full potential of this? These days single-player, intense FPSes or RPGs are getting rarer (as I sit waiting for my copy of Deus Ex to arrive :evilgrin:), as there is no money in them in comparison to games like WoW.


Totally geeking out here. This is great.

I’m agreeing with the fact that there comes a point where you want your game to take on a certain art style, and there is such a thing as too much detail. Imagine if Nintendo, a company whose main demographic is juveniles, started producing more realistic versions of their cartoonish mascots. There is a reason nobody has done it yet – it would be awful. I for one tend to appreciate cel-shaded games; they have a certain look to them that can be quite pleasing to the eye if done correctly. I wonder what this sort of technology would imply for games like these, where detail is not needed nor even wanted.

Re-reading what I wrote, I think you are right that this technology will be restricted to only a subset of games, mainly as it allows greater fidelity in gaming environments. So it would be overkill in puzzle games (Angry Birds modelled to the last atom, anyone?) but essential in Crysis 3, since FPSes / FPRPGs live or die on visuals.

So in essence, nothing has changed really. This new rendering technique will drive forward engines like Unreal, id Tech etc., but this is how it’s always been – it’s only really been games like Doom, System Shock, Crysis (a few of many) that have pushed the boundaries of graphics.

What would excite me would be the possibility of using this for architectural walkthroughs / VR headset simulations – imagine the Sistine Chapel in an interactive walkthrough with unlimited detail (and being able to ‘fly’ up to the ceiling!) – it could really add a visceral tone.

Yes, and that description makes it sound plausible, but surely a high enough screen resolution would make it obsolete?

e.g. a traditional pipeline chugs along at roughly the same speed at all resolutions, whereas this approach gets more intensive the larger the screen gets.
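Some back-of-envelope arithmetic on that point (all the cost constants here are made up, purely illustrative): if the search-per-pixel renderer does roughly log(points) work per screen pixel, its total cost grows with resolution, while the rasteriser’s cost mostly tracks triangle count.

```python
# Crude cost models for the scaling argument above -- illustration only.
import math

def raster_cost(triangles):
    # A polygon pipeline's work is dominated by triangle count,
    # roughly independent of output resolution.
    return triangles

def search_cost(width, height, points):
    # A per-pixel search pays ~log2(points) per pixel, so cost
    # scales linearly with the number of pixels on screen.
    return width * height * math.log2(points)

# Doubling both screen dimensions quadruples the search renderer's work:
print(search_cost(1280, 720, 2**40) / search_cost(640, 360, 2**40))  # 4.0
```

The flip side is that the log term barely moves when the scene grows: going from 2^40 to 2^50 points only costs 25% more per pixel, which is the “unlimited detail” selling point.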

Also (only from reading this thread): if they haven’t implemented ray tracing, then… i.e. is true ray tracing possible like this, or would it be approximate? If it’s true ray tracing, then it probably isn’t going to speed up the pipeline that much in reality…?

He’s experimenting with it for Tech 6, which he has also hinted will be his last engine. Bear in mind that Tech 5 is imminent, but that it has been 7 years since the last one - which puts any Carmack engine with this approach in 2018. If it’s feasible at all, there’ll be a lot more cycles to work with at that point.

I think we are coming at the same question from opposite sides re optimization. These days optimization is not that important, as models are just built in a certain way (new tech like DX11 tessellation helps, as it negates the need for normal maps and thus detail – although, again, Carmack is not convinced DX11 is going in the right direction; he always has a soft spot for OpenGL!).

Yeah, I do have some experience but am not claiming to be the expert. I only wanted to challenge the notion that more polys == more work. It’s not linear like that.

I suppose the level of work would depend on the game, but an FPS or RPG would make heavy use of art assets that are close up to the player and under constant scrutiny. I think it’s still a case of art overload if every object has to be modelled high poly – detail still needs to be modelled. Take for example this helicopter (image from Wikipedia):

Now that we could model everything exactly, greater care must be taken in making sure the details are correct (look at the high-poly forums and follow people making cars and planes from blueprints: it takes ages to get them to that level of detail). Although you don’t have to optimize it, you still have to build it! Take the latest Gran Turismo as an example – the development time mostly went on modelling, I believe.

Yeah, we might be going back to the time in movies before CGI - if you wanted a helicopter in your shot, you needed to rent / borrow one.

This, and just the pressure for higher detail in traditional engines, might lead to more standard assets, object libraries and the like. Need a helicopter in your game? Buy a stock one built from actual blueprints, repaint it and fit it with a few extras. Need London in your game? Buy the hi-def topographic data from a traditional map company, slap in some extra set details. Or rent space for your players in a continually updated / streamed super-Google-Streetview type system.

That would be awesome, and might even lead to the sort of scrap-heap creativity that the original Star Wars and Blade Runner prop designers used. Put parts of a vacuum model and a car transmission together, add stock, spray paint it black = blaster rifle.

But hey, time will tell. If this technology takes off, it will be interesting to see how the skills mix in game studios changes – whether low-poly artists become a thing of the past (assuming this tech can be transferred to mobile devices and browser-based players, which I think are the future of games platforms). Like you suggest, will studios even use the full potential of this? These days single-player, intense FPSes or RPGs are getting rarer (as I sit waiting for my copy of Deus Ex to arrive :evilgrin:), as there is no money in them in comparison to games like WoW.

Yeah, I think different tools are needed. Procedural generation, simulated concrete pouring and setting, simulated paint / sand / weathering. Think of how awesome it would be as an environment artist to WASD around in-engine. Ask for a stone wall yea high, then use your interactive weathering brush to paint the passage of time. The simulation of decay would take place before your eyes until you were satisfied, then it would be time for the moss brush. Sprinkle some simulated moss colonies on there, grow them a few generations, presto. Then fast-forward through a simulated day–night cycle and different weather to see how it looks in different conditions.
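That moss-colony daydream could start out as something as simple as a cellular automaton on the wall’s surface (toy rules of my own invention, obviously):

```python
# Toy moss simulation: a colony is a set of occupied (x, y) cells on a
# surface, and each generation it spreads to the four neighbouring cells.

def grow(moss, generations):
    """Spread each cell in `moss` to its 4-neighbours, `generations` times."""
    for _ in range(generations):
        moss = moss | {(x + dx, y + dy)
                       for x, y in moss
                       for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))}
    return moss

colony = grow({(0, 0)}, 2)   # sprinkle one spore, grow two generations
print(len(colony))           # 13 cells: a diamond of radius 2
```

A real brush would weight the spread by simulated moisture and light, but even rules this dumb give organic-looking blobs you could never get by hand-painting.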

I’d much rather do that than fiddle around with unwrapping and manual light baking.

Well, I wouldn’t say that toon-shaded games and the like will not benefit from this technology. Euclideon says they have not even tapped into the processor or GPU, and their high-quality demo runs at about 25–30 fps on a decent (not the newest or fastest) gaming laptop. If speed is really based on the code itself, there are huge implications for publishers and game developers. The number of people that can play your game increases enormously, and you can add as many objects as you want on screen (we’re not being conservative here, but you get the point) without slowdowns or lag.

I too would love to see high definition VR work being done with this. That would be amazing…

Astute observation, totally agree. Display technology seems to be advancing rapidly to higher and higher resolutions, which ultimately means higher rendering demands. However, considering that they aren’t even utilizing the processor or GPU, modern or not, I think future builds will address this problem. In fact, I don’t see them releasing the project to public view until this problem is addressed – and if they are legit, they’ve already figured out a way past it.

Oh, and the interview says they are not using raytracing because it’s very computationally expensive and slow (true). They were very hush on what they were using, however.