Billions of polys? In the viewport?

I only used ZBrush 3 during its trial period, as I never purchased it afterwards, and during that time I was really amazed to see my weak system sculpting a few million faces incredibly smoothly, while the same system slows down badly at “only” 500k in Blender’s sculpt mode and crawls if I use such a face count in object mode.

I don’t know what kind of coding magic they’re using, especially considering they don’t even use the GPU, but it’s really impressive. I can’t imagine what kind of face count a strong computer could achieve with that application.

Oh indeed - and setups can become extremely complex, unfortunately! :stuck_out_tongue:

I do think you’re right though that this is, perhaps, aimed at customers who will be constructing small(er) productions, say for advertising, or low-budget film effects / animation. It does look that way from the video.

Slightly off-topic: I’d love for Blender to go in this direction - not to follow suit entirely, but perhaps add a few more features to its outliner to easily organise large data-sets, plus seamless multi-object editing / importing / revisioning, and meta-data tools that can easily tie back into a pipeline. I think the new linking / appending system may also bring a few much-needed tools for this capability too!

well, tessellated procedural generation of detail and greebling is something next-gen games should have from the get-go, unless companies are into hiring an insane number of artists to model all that by hand…

A feature Blender should get to compete is automatic LOD, via tessellation or otherwise. There were a few scripts for that flying about in Blender 2.4, but 2.6’s unstable Python API put all that to rest, it seems…
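Not that it would be hard to prototype again - here’s a minimal sketch against the current bpy API, assuming a Decimate-modifier approach (the “AutoLOD” name and the falloff numbers are mine, just for illustration):

```python
# Minimal sketch of distance-based auto-LOD via a Decimate modifier.
# The "AutoLOD" modifier name and falloff numbers are illustrative choices.
import bpy

def update_lod(scene):
    cam = scene.camera
    if cam is None:
        return
    for obj in scene.objects:
        if obj.type != 'MESH':
            continue
        dist = (obj.location - cam.location).length
        mod = obj.modifiers.get("AutoLOD")
        if mod is None:
            mod = obj.modifiers.new("AutoLOD", 'DECIMATE')
        # full detail up close, tapering down to 10% of the faces far away
        mod.ratio = max(0.1, min(1.0, 10.0 / max(dist, 1.0)))

# re-evaluate on every frame change (covers camera moves during playback)
bpy.app.handlers.frame_change_post.append(update_lod)
```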

Though I was more thinking of the kind of setups we mortals can’t even imagine - like Stargate Studios’ fully lit and textured models of Manhattan, Las Vegas and San Francisco - that kind of complexity, hehe… ;D

But I for one will keep an eye on ‘Clarisse iFX’; it seems like it’s gonna be released soon. I’ll definitely check out some tutorials just to get a grip on what it’s supposed to be good for… :slight_smile:

Considering you’re always such an asshole about features that might improve Blender, Endi, I hope you’re using Blender 1.60, because that should be sufficient for anyone, right?

that made my day.

@Justin

It’s a bit interesting that someone as creative as Endi is being as repetitive in his comments as he is. You’d think he would have come up with something new after a while, right…? ;D

IMHO, after BMesh, the viewport is the next goal for Blender. Working on a complex mesh or a big scene can be very difficult in Blender.

A strong viewport can make a difference, and every Blender module will benefit (sculpt, paint, animation, etc.).

Strong viewport performance is very welcome, both in industry and by freelancers. I know Softimage well - one of the best (or the best) packages out there: you can import dense meshes directly from ZBrush and edit them (in one tutorial Vitaly Bulgarov shows how useful an option that can be). Softimage’s workflow and power are really great. The only downsides are mental ray, an old, slow dinosaur (with beautiful lighting), and ICE, which is a bit complex (but a great tool, flexible and powerful).

Considering you’re always such an asshole about features that might improve Blender, Endi, I hope you’re using Blender 1.60, because that should be sufficient for anyone, right?

well, if everyone thought like you, today we would not have 2.63 (no Cycles, no BMesh, only an old piece of software). Fortunately the Blender Foundation has a different vision, very different (the opposite, it seems) from some Blender fanatics, so I hope a robust viewport will arrive as soon as possible.

i don’t think you quite understand the comment he was making…

I’ll be at Siggraph again this year and would be willing to check this out, perhaps give some first-hand impressions (if they have a live demo set up).

Also, regarding the viewport in Blender, I would love to see some form of Viewport 2.0 (Maya) type of implementation (real-time rendering). (Viewport 2.0 isn’t an Autodesk creation, but tech they bought from another company and implemented in one of their packages.)

Isn’t the “Viewport 2.0” essentially just some game engine features running in the viewport?

Should be easy enough to do in Blender once we get a bit better viewport performance I think.

Pretty much. Autodesk, in their usual fashion, bought out Illuminate Labs, which had middleware known as Beast and Turtle (used for game engine rendering and lighting), then took it and shoved it into Maya under the guise of Viewport 2.0 - or at least that’s what I was led to believe.

But anyway, Blender could most certainly push a Viewport 2.0 aspect if they stopped focusing only on film features and looked more into their game engine tech and the type of workflow associated with it, as well as previz.

I don’t know what the current status is, but we should get a speedup too very soon, for Cycles in 2.64, when Brecht is back at work.

One thing that makes Clarisse so fascinating is its layer-based approach that only renders what changes, so you don’t have to re-render the entire scene when you change the color of something, like in Cycles and many other path tracers. The render you see on screen is the final render, so you don’t have to re-render the project after you finish. Also, I think it is easy to make changes to an instance and still have it treated as an instance. There is an article about it on fxguide: http://www.fxguide.com/featured/clarisse-ifx-a-new-approach-to-3d/ It’s a completely different way of thinking about working in 3D.

If that is true, then it is technically incorrect. Even if it’s only by a minute amount, changing the reflectance of something affects everything else in the scene. One reason to use path tracing in the first place is to get an accurate simulation of how light interacts with the scene; apart from that, there’s no technical reason we couldn’t just “lock” pixels and only retrace those pixels that directly hit a given object.
If, however, you’re just talking about compositing layers of different scenes, you can already do that in Blender, but it’s probably not as convenient to use.
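To be clear, the “locking” I mean is trivial in principle - here’s a toy sketch, where trace_pixel() and the buffers are hypothetical stand-ins, not any real renderer’s API:

```python
# Toy sketch of the "lock pixels" idea above - not how Clarisse (or Cycles)
# actually works; trace_pixel() and the buffers are hypothetical stand-ins.

def rerender_after_material_change(changed_obj, id_buffer, color_buffer,
                                   width, height, trace_pixel):
    """Retrace only pixels whose primary hit is the changed object.

    id_buffer[y][x] holds the ID of the object first hit through that
    pixel (a "primary visibility" pass); every other pixel stays locked,
    which ignores the indirect light the object bounces onto the rest of
    the scene - hence "technically incorrect".
    """
    for y in range(height):
        for x in range(width):
            if id_buffer[y][x] == changed_obj:
                color_buffer[y][x] = trace_pixel(x, y)
```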

@Zalamander

But couldn’t you keep track of the object-to-object influences in a render, then just set a value for how much influence an object needs to have on another for it to be recalculated? As you say, it wouldn’t be mathematically ‘correct’, but it could kinda work…? Something like the sketch below, perhaps… ;D

And suddenly I wonder if this makes any sense outside my own mind. xD
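This completely hypothetical pseudo-Python is roughly what’s in my head - influence, rerender_object and the objects list are made-up names, not a real API:

```python
# Completely hypothetical sketch of the threshold idea above; "influence",
# "rerender_object" and "objects" are made-up names, not a real API.

EPSILON = 0.01  # below this, pretend an object has no effect on another

def on_object_changed(changed, objects, influence, rerender_object):
    """influence[a][b]: estimated fraction of b's final shading that
    arrives via a (bounced light or occlusion), tracked during render."""
    for other in objects:
        if other is changed or influence[changed][other] > EPSILON:
            rerender_object(other)  # redo only what 'changed' visibly affects
```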

I think so, or they use some other technique. The article explains it all, much better than I could. It’s a new way of working, and I am guessing that each object knows the influence it’s having on other objects, as if a shadow pass were added to the rest of the scene. This isn’t trying to be the next Maxwell or LuxRender. You can see from the video that only certain parts of the render are grainy. I imagine the concept is similar to light groups in LuxRender, except you are dealing with objects.

In the video at the end of the article, Mike Seymour gives an explanation of how the program works.

You could store the contribution of a given object to the final image in a separate buffer - that’s how passes or light groups work - but that takes up a lot of memory and is generally slower. I can see it working for layers of objects, though.
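A minimal sketch of that light-group idea, assuming the renderer can output one additive buffer per group (the buffer names and shapes here are assumptions):

```python
# Minimal sketch of light-group compositing: the renderer writes one additive
# contribution buffer per group, and a linear light change is just a rescale,
# so no retracing is needed. Buffer names and shapes are assumptions.
import numpy as np

h, w = 540, 960
buffers = {                       # per-group radiance contributions
    "key_light":  np.zeros((h, w, 3)),
    "fill_light": np.zeros((h, w, 3)),
    "env":        np.zeros((h, w, 3)),
}
gains = {name: 1.0 for name in buffers}  # user-tweakable intensities

def composite():
    """Final image is the gain-weighted sum of the group buffers;
    doubling a light's gain re-lights the image instantly, but each
    group costs a full float buffer of memory (the downside above)."""
    return sum(g * buffers[name] for name, g in gains.items())
```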

From that video (or article) I don’t see anything that suggests changing one material doesn’t cause the entire image to be re-rendered. At 1:59 you can see him remove the “reflection” texture, which is a “fake” post-process that you often need in compositing, but after that every change in material properties triggers a full re-render.
The renderer is fast enough as it is, so I don’t see the need to complicate things by adding such a selective re-rendering feature anyway.

I guess you might be correct, Zalamander. I think the layers approach applies to processing. But I am thinking you can separate different elements into different layers and render them separately that way. It seems useful for matte painting, etc.

@Sainthaven, are you stalking me from thread to thread pushing your “Maya” ideas???..blech :slight_smile:
you could do viewport handling and draw modes based on layers and/or shaders, could you not? Each layer a different draw mode, etc., or set visibility options based on layer? I don’t just mean hidden or not - I mean hidden, wire, bounding box, shaded, etc…
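In fact you can already script a rough version of this today - a minimal sketch against the current bpy API, where the layer-to-mode mapping is just an example of mine:

```python
# Minimal sketch of per-layer draw modes via the bpy API (2.6x);
# the layer-number-to-mode mapping is just an example.
import bpy

DRAW_MODE_BY_LAYER = {0: 'TEXTURED', 1: 'SOLID', 2: 'WIRE', 3: 'BOUNDS'}

for obj in bpy.context.scene.objects:
    for layer_index, mode in DRAW_MODE_BY_LAYER.items():
        if obj.layers[layer_index]:
            obj.draw_type = mode   # per-object maximum draw type
            break                  # first matching layer wins
```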