The beauty of micropoly displacement

Well, as long as you're not stuck having to own a GeForce 7950 to draw the 3D view, because it would have tons of shading and you couldn't turn it off.

Or just make deep structural changes so he can rewrite large portions of the renderer, to make it like Mental Ray and RenderMan put together :slight_smile:

The lighting would be seamless. Just like you have a texture view draw mode, Blender would have a new Advanced OpenGL view as well.

Then learn the code and start coding. Okay, here's what we need:

-Realtime texture and bump-map painting.
-Micropolygons
-A fast algorithm for delivering unbiased GI and SSS renders.

(when the technology allows for it)

-A port that will allow complete control using brainwaves

That's another thing to patiently wait for. But before micropoly
displacement in OpenGL, it would be nice to have an accelerated GL view
capable of showing several textures at once (color+bump or color+specular, for example).
And it wouldn't need a 7950, just a 5700–6200… after all, to do bump-map
painting you really need to show the color map as well (or you'll always have to paint
the bumps first, to keep them in sync with the color map).

It would also need multisample antialiasing so it can replace rendering!

I have the feeling some people, including moderators, are not quite serious.

Displacement is not the universal wonder weapon; everybody should know that.
Neither is GI. But are complex light rigs?

If you go to http://www.evermotion.org/vbulletin/showthread.php?t=55922
and take a look at the rusty metal bars, you can see where true displacement will
replace any normal or bump mapping.

It is not the visual look of the shader but how much geometry is being pushed out: more accurate and realistic, while the underlying geometry stays low-res.

Hmm, I'm still not convinced that there's displacement in the image! If there is, it is no big deal; the image is nice anyway.
Anyway, my problem with micropolies is that they are too darn small: they get all over your bedclothes, and it's a hell of a time cleaning them up.

There's already real-time sub-poly displacement. An algorithm has been licensed by Nintendo, evidently for their new Wii console.

I suppose I’m trying to act wise, but here’s a little thing to remember:

You get out what you put in, and this is cumulative. God put in stuff like photons, atoms, thought, time, effort, skill, and heart. The programmers put in resources (time, money, thought, heart, whatever), then the end user puts in resources (time, money, thought, heart, whatever), and you get out what everyone chipped in.

Can we see a link or a screenshot? It must be a fast algorithm, since the Wii is the weakest console of this generation when it comes to processing power.

henry

If you pay attention to the fine details, you will see it is there.

It just captures nature.

http://nintendo-revolution.blogspot.com/2005_12_01_nintendo-revolution_archive.html

If you go down the page, there's a nice little speculation article.

Here’s another:

Here’s a patent:

Alright, it's pretty… It's an option, and it has been for a while, even before Nintendo made their version. I once showed a programmer a real-time iTunes music screensaver that played along to the music with glowing water and particles. I was told it was cute and that Blender could do it, but doesn't need it.

I put in a lot of time trying to learn C and OpenGL and the like, but they and Blender are so different in the source… But it goes deeper than that…

Bah, the point was displacement mapping… That picture at the very start of the post, it's not that good: it has great texturing, but a notch or two more on the displacement slider would have made a great deal of difference.

A real-time version of it? Yes, possible. Is anyone going to do it? No, not a chance. Go look at opengl.org and see the many things our current 3D view can't show but could.

I also think the image is done with photo texture mapping, not displacement. The shadows in the big cracks in the wall don't match the other shadows in the scene. They're in the photo texture, not rendered (IMHO).

It's really not possible to say how wonderful/rubbish it is without seeing a clay render without any colour textures.

jeremy

Read his comment.

Maybe this image can better show the invaluable advantages of micro-poly displacement:
http://cube.phlatt.net/home/spiraloid/tmp/carn.jpg

What I like:

  • you don’t have to increase polys to gain detail
  • you don’t have to model the hi-res version to make the texture (as with normal mapping)

Think of a brick wall… you can have each brick push out its own shape thanks to a single grayscale map, while the base mesh is a simple plane :slight_smile:
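
That brick-wall idea can be sketched in plain Python (no Blender API; the brick function and all the numbers here are my own toy stand-ins for a painted map): a subdivided unit plane gets its vertices pushed along Z by the grayscale value, which is all a displacement modifier really does.

```python
def brick_height(u, v, bricks_x=4, bricks_y=2, mortar=0.1):
    """Toy grayscale 'brick' map in [0, 1]: 1.0 on brick faces, falling
    to 0.0 in the mortar gaps. Stands in for a painted texture."""
    row = int(v * bricks_y)
    uu = (u * bricks_x + 0.5 * (row % 2)) % 1.0  # offset every other row
    vv = (v * bricks_y) % 1.0
    du = min(uu, 1.0 - uu)   # distance to nearest vertical mortar line
    dv = min(vv, 1.0 - vv)   # distance to nearest horizontal mortar line
    return min(1.0, min(du, dv) / mortar)

def displace_plane(res=32, scale=0.05):
    """Subdivide a unit plane into res x res quads and push each vertex
    along +Z by the grayscale value, turning a flat plane into bricks."""
    verts = []
    for j in range(res + 1):
        for i in range(res + 1):
            u, v = i / res, j / res
            verts.append((u, v, scale * brick_height(u, v)))
    return verts
```

The brick centers come out at full height and the mortar lines at zero, so the silhouette really changes, which is exactly what bump or normal mapping cannot do.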

The problem is that it isn't easy to implement good micro-poly displacement.
What we call micro-poly displacement is simply dicing the surface, when we are about to render, until the polygons are smaller than a pixel (as in a RenderMan-compliant renderer, and as in the Disney image you posted from a SIGGRAPH course). But this works well only in a RenderMan renderer (they have a specialized pipeline), which is different from the Blender renderer. In our case we should rather call it sub-polygon displacement: the renderer dices the geometry at render time based on some criterion (curvature, or texture differentials). This is slower, not as fast and convenient as in a RenderMan renderer, but obviously better than what we have now.
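
For illustration, here is a rough sketch (plain Python, a simple pinhole projection, and my own toy names and numbers) of the REYES-style dicing criterion described above: keep splitting until each micropolygon projects to at most one pixel (a "shading rate" of 1.0):

```python
import math

def dice_rate(edge_len_world, distance, focal_px, shading_rate=1.0):
    """How many segments an edge needs so each one projects to at most
    `shading_rate` pixels, under a simple pinhole camera model."""
    edge_len_px = edge_len_world * focal_px / distance
    return max(1, math.ceil(edge_len_px / shading_rate))

def dice_patch(width, height, distance, focal_px):
    """Dice a rectangular patch into a grid of (sub-)pixel quads; return
    the grid resolution and the resulting micropolygon count."""
    nu = dice_rate(width, distance, focal_px)
    nv = dice_rate(height, distance, focal_px)
    return nu, nv, nu * nv
```

A 2x1 patch ten units from a camera with a 500 px focal length dices into a 100x50 grid; the same patch a hundred units away needs only 10x5. That distance-dependent dice rate is part of why the specialized REYES pipeline stays fast.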
IMHO, if micro-poly isn't possible we should look at new bump-mapping techniques, like parallax mapping (and some newer, more advanced techniques that can even compute shadows).
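
As a sketch of the parallax-mapping idea (just the tangent-space math; the function name and defaults are mine): the texture lookup is shifted along the view direction in proportion to the sampled height, which fakes depth without adding a single polygon.

```python
def parallax_uv(u, v, height, view_dir, scale=0.05, bias=0.0):
    """Classic parallax mapping: offset the (u, v) texture lookup along
    the tangent-space view direction, proportional to the height sample.
    view_dir = (x, y, z), with z pointing away from the surface."""
    vx, vy, vz = view_dir
    offset = height * scale + bias
    return u + offset * vx / vz, v + offset * vy / vz
```

At grazing angles (small vz) the offset grows, which is also where plain parallax mapping starts to swim; the more advanced variants mentioned above (e.g. parallax occlusion mapping) fix that by ray-marching the height field, which is what lets them compute self-shadowing.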

You don't even need a map. Procedural displacement shaders are capable of generating the "push" randomly, so you'll wind up with unique-looking bricks rather than a bunch of bricks that are all pushed out the same as the map.

ShortWave

The problem with that is that you have limited control over the bricks. And an image map could be created that's semi-random.

Aren't they just saying they patented a hardware/software pipeline
capable of real-time tessellation for "standard" displacement maps?
What difference is there between that and Quake IV models (which
use a normal map and a displacement map) to justify a patent?

EDIT: anyone in this thread may also find these alternative approaches interesting:
a) something like parallax for Cinema4D (two bump maps, one for high frequency
and one for low frequency; it should be the same method used by Mudbox,
but I can't be sure, that's my supposition):
follow the link here:
http://206.145.80.239/zbc/showthread.php?t=20310
b) APS (adaptive polygon subdivision): http://www.newtek.com/forums/archive/index.php/t-51617.html
This should be much more memory-efficient than bare polygon subdivision/tessellation.