Blender 2.65a :)

Blender test build for the next release is available, 12 December 2012 - the 2.65 release is probably 1-2 weeks away. Help us get it stable by testing the official binaries!

http://download.blender.org/release/Blender2.65/

Cheers, mib.

Is there any difference between the test build and buildbot builds, apart from the fact that buildbot is more current?

Hello,
I’m very much willing to give it a try. But, of course, I have to know whether rendering will be worth the effort without CUDA - I’m stuck with CPU rendering on a Radeon graphics card.
Before downloading this build, and because of what I’ve read about Cycles, I’d appreciate your thoughts on that point…

P.S. Ace Dragon just mentioned that CPU handling is an ongoing development, so maybe that could be a positive hint to blow away my fears… Could it?

You can check the Cycles progress for the 2.65 development here:
http://wiki.blender.org/index.php/Dev:Ref/Release_Notes/2.65/Cycles

I love the viewport AA - much easier on the eyes now

thanks devs!

I’m having serious trouble with recent builds and with this RC1. Blender doesn’t even start - it crashes during startup.

BUT, this only happens with Nvidia drivers newer than the 300.xx series. If I switch back to 296.xx, everything works fine.

Win7 64bit - Nvidia 570

work in progress release notes
http://tinyurl.com/cmxl679

I’m a little bit concerned about Motion Blur on GPUs. Brecht said he would have it ready for 2.65, but we are close to the final release and there’s still no MB for GPU rendering :frowning:

The reason Motion Blur is CPU-only right now is that Brecht has had some difficulty getting it to work through CUDA.

He has stated that one of his todo items is to find a way to get motion blur to a fully working state for the GPU, so he’s fully aware of your concern.

I would love to see displacement fully implemented in this version.

What do you mean? Displacements work just fine.

I meant render-time displacement in Cycles, since it doesn’t support UVs.

Dicing, by definition, ruins UVs. There is really no reason not to use the Subsurf modifier. Every production path tracer out there uses the same technique, except maybe Thea, which uses some special rasterization hacks in its methods.

From a conversation I had with one of the core engineers of Arnold:

We don’t have any method of adaptive subdivision or decimation. It really doesn’t make too much sense in a pure path tracer. The workflow at Sony and other studios using the SItoA plugin uses the built-in SubSurface modifier of SoftImage. It allows you to set different SS levels for viewport and rendertime. We do have a method that will reduce levels based on distance from the camera, but no one is really using it. The preferred workflow seems to be to set the level of detail based on how it needs to look when the object is closest to the camera and then just leave it alone. The memory savings in our system are almost nil using the camera distance parameter compared to just leaving objects at maximum resolution because of our optimized BVH code. Because a path tracer stores geometry as individual triangles instead of something like bezier patches, it doesn’t make any sense to dice them at render time.
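For what it’s worth, that viewport/rendertime split is exactly what Blender’s Subsurf modifier already gives you, and it can be scripted too. A minimal bpy sketch (the modifier name and levels are just placeholders, run it on whatever mesh you like):

```python
import bpy

# The active object, e.g. the mesh you want to subdivide.
obj = bpy.context.active_object

# Add a Subdivision Surface modifier with separate levels for the
# viewport and for render time, mirroring the workflow described above.
subsurf = obj.modifiers.new(name="Subsurf", type='SUBSURF')
subsurf.levels = 1          # keep the viewport light
subsurf.render_levels = 4   # full detail only at render time
```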

If you’re talking microdisplacements, I’d point you at this post I made in another thread.

Microdisplacements aren’t very helpful in a path tracer. The main advantages of microdisplacements in a REYES renderer are high detail and low memory usage, but this is only possible because REYES only handles primary ray intersections which allows for geometry to be purged from memory as soon as the tile using it is done rendering. Because Cycles (and others) calculate secondary GI bounces, geometry must stay loaded in memory for the entire rendering process. Even if a method of splitting triangles procedurally at render time were implemented, the memory usage would be almost identical to simply setting a subdivision level at render time by hand. It is for this reason that even production renderers like Arnold use a system almost identical to the SubSurf modifier in Blender (with different settings for viewport and render) in their workflow.
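If the missing UV support in Cycles’ displacement is the sticking point, the modifier-stack route suggested above covers that case as well: subdivide with Subsurf, then displace with a UV-mapped texture via the Displace modifier. A rough bpy sketch, assuming you have an unwrapped mesh and a height-map image (the texture name, image path and strength here are made up):

```python
import bpy

obj = bpy.context.active_object

# A texture to drive the displacement -- swap in your own height map.
tex = bpy.data.textures.new("HeightMap", type='IMAGE')
# tex.image = bpy.data.images.load("/path/to/height_map.png")

# Subdivide first so there is enough geometry to displace...
subsurf = obj.modifiers.new(name="Subsurf", type='SUBSURF')
subsurf.levels = 2
subsurf.render_levels = 5

# ...then displace along the normals using the mesh's UV layout.
disp = obj.modifiers.new(name="Displace", type='DISPLACE')
disp.texture = tex
disp.texture_coords = 'UV'
disp.strength = 0.1
```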

Where can we turn that on?

Preferences > System > MultiSample (roughly in the center of the panel; you have to restart Blender for it to take effect)

see here

It all makes much more sense now… Silly how sometimes these things are just so much simpler than you think… Thanks for the info!

Ah, yes, thank you. Really nice feature, that. Now Suzanne looks a bit prettier :slight_smile:

Keep in mind it slows down viewport performance, so disable it if necessary…

2.65 mostly looks like a cosmetic update, with the knife tool still not working properly :frowning:

What isn’t working properly with the knife tool?