realtime raytracing?

How long do you guys think it’ll be before something like the Cycles render preview can be used in a game engine?

I’m not requesting a new feature or anything crazy like that; I was just wondering, as I used Cycles in the CPU-only build, and at a very small resolution it was rendering in a matter of milliseconds…

This may have seemed crazy a while ago, but with the ever-growing computational power of CPUs and GPUs, do you think one day we could blow up refractive glass and fire bullets at reflective cars?

10 years maybe?

Well, going by the estimates that both AMD and Intel agree on, CPU power doubles every 3 years. Within that 10-year window we’d have CPUs roughly ten times as powerful. And with GPUs included, it should almost do it. Almost.
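As a quick sanity check of that doubling arithmetic (a throwaway Python calculation, not anything from the thread):

```python
# If performance doubles every `doubling_period` years, the speedup
# after `horizon` years is 2 ** (horizon / doubling_period).

doubling_period = 3.0   # years per doubling, the figure quoted above
horizon = 10.0          # years out
speedup = 2 ** (horizon / doubling_period)
print(f"{speedup:.1f}x")  # roughly a factor of ten after 10 years
```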

IMHO it will be more like 15 years before a real, proper ray tracer is seen in a game. Maybe 10 years if Crytek gets really clever.

We live in an exciting time my friend, a very exciting time indeed.

We will live to see the death of Justin Bieber (or at least, a portion of us on this forum will, most likely, if he goes the same way every child star does…), real in-game raytracing, 1 terabyte of RAM, semi-transparent screens (which already exist, just a tad on the expensive side… but they will get cheap, like everything does), everyman space travel… the times we live in are both the most incredible and some of the worst.


So yeah, expect real time raytracing on your iPod in the next…20 years.

unless we’re fried by global warming first, that is :wink:

10 years is the time I’ve been hearing about raytracing in games.

Crytek will manage something, they always do. (first ever game to have 1,000,000 polys rendering in view was…CRYSIS!)

Although, it will most likely not be “true” raytracing. Most likely some workaround that looks good and performs well at the same time.
As I said before, 15 years for everything to really get “proper”.

It’s easy to come up with a game having 1 billion polys in view. You can do just like Crytek and hope some lucky souls out there can afford some Pixar cluster. :stuck_out_tongue:

I truly think we’ll be seeing some raytracing in the next generation of game machines. Per material, for true reflections and refraction, so it’s used just for the few elements that need it while everything else stays scanline. Soft shadows, caustics and GI? I don’t think so…
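A toy sketch of that per-material split (all names here are hypothetical, and the "traced" reflection is just a stand-in constant): the renderer shades everything with a cheap local model, and only materials flagged for raytracing pay for secondary rays.

```python
def mix(a, b, t):
    """Linear blend of two RGB colours."""
    return tuple((1 - t) * x + t * y for x, y in zip(a, b))

def trace_secondary(depth):
    """Stand-in for a real reflection ray; here it just returns sky blue."""
    return (0.4, 0.6, 1.0)

def shade(material, depth=0, max_depth=2):
    colour = material["base_color"]
    # Only the few materials that need it pay the ray-tracing cost.
    if material.get("raytrace") and depth < max_depth:
        reflected = trace_secondary(depth + 1)
        colour = mix(colour, reflected, material["reflectivity"])
    return colour

matte = {"base_color": (0.8, 0.2, 0.2)}
chrome = {"base_color": (0.9, 0.9, 0.9), "raytrace": True, "reflectivity": 0.8}

print(shade(matte))   # cheap scanline-style path: just the base colour
print(shade(chrome))  # blended with the traced "reflection"
```

The point of the design is exactly what the post describes: the expensive branch is opt-in per material, so a scene full of matte surfaces costs nothing extra.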

I think Caustics will be a long time from now. A VERY long time. And in a way, I want that. Why? Because buying a new graphics card every 2 years is expensive enough. Needing to buy one every five minutes is gonna be a pain…

Intel developed a video card capable of realtime raytracing on the GPU. But that impressed nobody, since, well… who wants that?? (Where have I heard that before?? :wink: )

That’s why Larrabee was cancelled for consumers… They didn’t want realtime raytracing capabilities, but faster frame rates and better images for their FPSes and MMORPGs… and Larrabee sucked at both.

Pathtraced games… probably when CPUs reach the next paradigm shift after silicon (graphene, perhaps?).

Hmm… maybe they’ll create a combination of multi-pass rendering and raytracing, like namekuseijin said.

But how awesome would it be to see things like proper glass and reflections in games?

This thread brings up an idea I had: we’ve all seen the graphics that video games have these days, all rendered in real time (at 60 fps, depending on your hardware/settings). Would it be possible to make a render engine based on those game engines that renders a reasonably high-quality image in a fraction of a second? It wouldn’t be real-time raytracing, and it wouldn’t be good for all images/animations, but for pre-viz/testing purposes it could be useful… just a thought.

You mean like Blender’s GLSL in the viewport, or Blender’s Game Engine? :smiley:

I thought GLSL was for the game engine only…?

You need to poke around Blender more: set the 3D View to Textured mode, then look in the 3D View’s N panel under the Display tab and change the Shading pull-down to GLSL.

Ahhh, it’s already here? :stuck_out_tongue:

It doubles every 18 months. Let’s say that today’s CPU and graphics hardware take 40 seconds for an OK Cycles render. That’s about 6.67 doublings over ten years, so one frame would then take roughly 0.4 seconds.
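Checking those numbers with a quick sketch (the 40-second render time is just the guess from the post, not a measurement):

```python
# Performance doubling every 18 months, projected ten years out:
doubling_period_years = 1.5
years = 10
doublings = years / doubling_period_years   # about 6.67 doublings
render_time = 40 / 2 ** doublings           # about 0.39 seconds
print(f"{doublings:.2f} doublings -> {render_time:.2f} s per frame")
```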

maybe the solution isn’t brute force processing power, but new and more effective uses for it?

there’s always a better way to do something…

It’s both, obviously. What’s better than having brute power AND using it efficiently?
And it’s currently called GPGPU.
A CPU is bad at floating-point matrix operations, and especially multiplications.
A GPU is more or less designed to do exactly that. We’ve only had unified shaders for 5 years, yet we already found out they are the better chips for the task.
Given the 5 years they’ve been available, the field is just in its infancy.
And even in its infancy it has made enormous strides.
Look at Octane, Cycles, Bunkspeed Shot, raytraced Quake, Larrabee… it’s astonishing how much knowledge was generated in the last 5 years…
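The "floating-point matrix operations" mentioned above boil down to many independent multiply-adds; a toy sketch in plain Python of why that workload parallelizes so well:

```python
# Naive matrix multiply: every output element is an independent dot
# product, which is why a GPU (thousands of parallel multiply-add
# units) handles this so much better than a handful of CPU cores.

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

identity = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
m = [[float(i * 4 + j) for j in range(4)] for i in range(4)]
assert matmul(m, identity) == m  # multiplying by the identity is a no-op
```

Each of the n² output cells here could be computed by its own thread with no coordination, which is essentially what a unified-shader GPU does.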

I think it’s just limited because no one cares, or wants to, or can enhance it.
Actually I was looking into it today, as I was looking for OpenCL or CUDA code changes to accelerate glReadPixels() and glReadBuffer().
I think, though, a synergy of OpenGL and a raymarcher or flake renderer of some kind would be better.
And stuff like realtime GI would help you nothing when it’s not supported in the final render :slight_smile:
Oh, and AA in the viewport -> Quadro or FireGL troll

Easy enough, even in Java:
Path Tracer:

Or something like Lightwave’s VPR:

There are plenty of cool options; it’s just a question of implementing them :wink: