Focal Blur????

How can I make a focal blur without using Yafray or zblur?

One way is to make a render with RGBA/Premul enabled in PNG format, open it in Photoshop, add a Gaussian blur, load the result as a texture onto a plane, and put the plane in front of the camera.
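For anyone curious what the Gaussian blur step actually does, here is a minimal Python sketch using plain lists (no image library; the function names are mine, not Photoshop's): each output pixel becomes a weighted average of its neighbours under a bell-shaped kernel.

```python
import math

def gaussian_kernel(radius, sigma=None):
    """1-D Gaussian kernel, normalised so the weights sum to 1."""
    if sigma is None:
        sigma = radius / 2.0 or 1.0
    weights = [math.exp(-(x * x) / (2 * sigma * sigma))
               for x in range(-radius, radius + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def blur_row(row, kernel):
    """Convolve one row of pixel values with the kernel (edges clamped)."""
    r = len(kernel) // 2
    out = []
    for i in range(len(row)):
        acc = 0.0
        for j, k in enumerate(kernel):
            idx = min(max(i + j - r, 0), len(row) - 1)
            acc += row[idx] * k
        out.append(acc)
    return out
```

A full 2-D Gaussian blur just applies `blur_row` to every row and then to every column, which is exactly why it smears detail evenly in all directions.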

Or use Blender Instinctive, which has a DOF feature built in.

Actually, I believe I read a tutorial that showed how you can do focal blur in Blender - or maybe I misunderstood.

The idea was that you place an Empty at the point you want in sharp focus. Then you set the camera to track the empty using Ctrl-T.

The idea then is to add a bezier circle to the scene and get the camera to follow the path of the circle. In the IPO window, make sure that a complete lap of the circle is done within a single frame.
Then you set motion blur with a factor of 1 and render.

The wider the circle your camera journeys around, the stronger the blur, I take it. Of course, since the camera is always locked to the empty, that point should always stay in sharp focus, while everything off the focal plane would get more and more blurry.

I’m not sure if this is just a really poor fake of focal blur or whether it’s very usable as I’ve never done it myself. Anyone else know?
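To see why the circle trick works, here is a rough Python sketch (the helper names are mine, not Blender's API): the camera takes N positions on a circle while always tracking the focus point. A point exactly at the focal distance projects to the same spot in every sample, while a point at any other depth shifts between samples by roughly radius × |depth − focus| / depth (a similar-triangles argument), which is what smears it.

```python
import math

def camera_samples(center, radius, n=8):
    """n camera positions on a circle of the given radius around
    `center`, in the plane perpendicular to the view axis (the
    camera is assumed to look down the z axis).  Each position
    would then be aimed at the focus empty, as in the setup above."""
    cx, cy, cz = center
    step = 2 * math.pi / n
    return [(cx + radius * math.cos(i * step),
             cy + radius * math.sin(i * step),
             cz) for i in range(n)]

def blur_offset(radius, focus_dist, depth):
    """Approximate smear (measured on the focal plane) of a point at
    `depth`, for a camera orbiting a circle of `radius` while
    tracking a focus point at `focus_dist`."""
    return radius * abs(depth - focus_dist) / depth
```

`blur_offset` is zero exactly at the focal distance and grows with the circle radius, matching the intuition above that a wider circle means a stronger blur.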

Caleb

Check out Specials >> Deep Matters right here on elysiun.
http://www.elysiun.com/specials.php?id=5

Hey that’s a cool link.
Thanks for that.

Caleb

I don’t need for there to be “yet another version of Blender,” which has features that aren’t available in the mainline release. That feature needs to be in the public release.

I’ve been able to emulate some degree of DOF using composite layers, but a true DOF feature, in the mainline Blender product, is sorely missed.

We are all in the same boat; we’d like it in BF. The fact is that someone (Alexander) developed his own Blender version, nearly unnoticed, including DOF, coloured groups, recursive envmaps, softbodies, undo and much more. Some of those features have made it into the Blender Foundation build; some haven’t. I guess it has to do with code stability or maturity, but the features exist. We just need a good-willing Foundation coder to port the code from Instinctive to BF, and perhaps others to consolidate or expand it.

Soft bodies were made available as a patch for 2.33a, but I wonder why they didn’t make it into 2.34, as they would have been a spectacular feature. I’d have expected the DOF code to be imported into BF immediately, as it’s a very popular request, but it still hasn’t been, and I haven’t read of any plans to import it on the blender.org forums.

As for the “yet another version of Blender” remark: as Instinctive is very innovative, I don’t think you should despise it, sundialsvc4… The one-law, one-vision development model belongs to closed, proprietary programs, not to free, GPL’ed ones. Blender is among the latter, for better or worse. And I don’t think Instinctive is among the worse just because it chose to keep the old UI most of us started with. That seems to be the main reason people don’t pay attention to Instinctive, and it’s a pity, because more attention would get Instinctive’s features into BF more quickly, I guess.

As I’m not a coder myself, I really appreciate every effort put in by any good-willing coder, and would welcome any “yet another version of Blender” that may arise, because they are the future of the actual BF Blender.

So I am in need of “yet another version of Blender”, because it helps me achieve things I couldn’t do before. By publicly despising Instinctive, you discourage its coder from publishing new releases with new features that could one day make it into BF.

Perhaps you use other 3D packages and are used to paying for high-end plugins to achieve some cool effects, but that’s not my case. I don’t have money to spend on a high-end 3D package, nor the temptation to, because 3D is a hobby for me, not a job. So when a coder works hard to introduce a new feature, even within a very obscure mutation of my favourite soft, I thank him.

I’m sorry if I sound upset, but I think the “yet another version of Blender” remark was not welcoming at all.

This said, I respect you, sundialsvc4, for the insight you put into each of your useful comments in the Finished and WIP forums, so please understand that there’s nothing personal or aggressive in what I said here about your comment :slight_smile:

Natanael

Just wondering why you don’t want to use the already available features like zblur?

Ken

Thanks people!

Nice tip, Modron!

olivS, I use M$ Window$. Instinctive only works on Linux. :expressionless:

Caleb72, I will try it.

Kencanvey, my PC is too slow to use Yafray, and I haven’t had any success configuring the focal point in zblur using the Empty.

Just to clarify, the instinctive DOF blur method is AFAIK exactly the same as the zblur plugin (the code was ported in from the plugin), but with some extra stuff like ‘camera autofocus’. You can get the exact same results with the zblur plugin.

Having said that, doing DOF blur as a post process, as this technique does, can cause nasty artifacts around the edges of blurred objects when objects close to the camera overlap objects far away from the camera. There is also no ‘bokeh’ effect, since the blurring is only done on a 24-bit, low-dynamic-range source. The advantage of that technique, however, is that it’s much faster than other methods like yafray’s ‘true’ raytraced DOF blur. It’s just a matter of trading off between quality/correctness and speed, really.
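To make that trade-off concrete, here is a rough sketch of the per-pixel step a zblur-style post process performs (a simplified circle-of-confusion formula; the parameter names are mine, not zblur's actual code): each pixel's blur radius is derived from the z-buffer alone, which is exactly why a sharp foreground edge over a distant background cannot be reconstructed correctly — the colour behind the edge was never rendered.

```python
def coc_radius(z, focus, strength, max_radius=8.0):
    """Blur radius for one pixel, driven only by its z-buffer depth:
    zero at the focal depth, growing with |z - focus|, clamped to a
    maximum.  A simplified model of the post-process approach."""
    r = strength * abs(z - focus) / max(z, 1e-6)
    return min(r, max_radius)

def coc_map(zbuffer, focus, strength):
    """Per-pixel blur radii for a whole z-buffer (a list of rows)."""
    return [[coc_radius(z, focus, strength) for z in row]
            for row in zbuffer]
```

Raytraced DOF, by contrast, actually samples the scene through a finite aperture, so it never needs to guess what is hidden behind a foreground edge — hence the quality-versus-speed trade-off described above.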

[quote]I don’t need for there to be “yet another version of Blender,” which has features that aren’t available in the mainline release. That feature needs to be in the public release.

I’ve been able to emulate some degree of DOF using composite layers, but a true DOF feature, in the mainline Blender product, is sorely missed.[/quote]

I’m with you on this one Sundial. I saw what OlivS was saying but I prefer just one Blender release. More than one means less cross-platform and version compatibility. There will also forever be one version that doesn’t have features another has, which is kinda pointless.

DOF is one of the most important parts of rendering for realism. How hard is it to code? It seems like such a simple principle. Even the fakey zblur could have been hardcoded into the final release. I don’t think people would mind experimental features in final releases - at least they would get thorough testing.

Anyway, I’ve had a little success using the empty idea, but I can’t get the blur isotropic; it always blurs along the direction of the camera movement. I hate using that method too, because Blender renders about 8 images to blur 1 frame.

I’m not implying that I know better, and as a matter of fact I am not a coder, but I think real DOF in OpenGL is actually achieved in a pretty similar way…

It renders a series of images, or samples (8, 16 or whatever, depending on the level of DOF you want), and then feeds them into an accumulation buffer… The final render is actually the “sum” of those images… (Please someone correct me if I’m wrong…)

The “tracking the Empty” technique is rather another way of doing the same thing, only you do it “manually”…

I don’t really know how easy or difficult it would be to hardcode real DOF in Blender, but I think it would be a similarly slow process (I don’t have a problem with that if the aim is photorealism, of course)…
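The accumulation-buffer idea described above can be sketched in a few lines of Python (plain nested lists standing in for rendered frames; this is not actual OpenGL code): the final image is simply the mean of the jittered samples.

```python
def accumulate(frames):
    """Average a list of equal-sized greyscale images (lists of rows
    of floats), the way an accumulation buffer sums jittered renders
    and divides by the sample count."""
    n = len(frames)
    height, width = len(frames[0]), len(frames[0][0])
    return [[sum(f[y][x] for f in frames) / n for x in range(width)]
            for y in range(height)]
```

Points on the focal plane land on the same pixels in every sample and stay sharp after averaging; everything else lands on slightly different pixels each time and averages out into blur — the same principle as the circling-camera trick.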

Alright, I wrestled with quite a few different solutions for simulating depth of field, and here is what I eventually came up with.

  1. Tracking an empty. You can, if you are particularly masochistic or using a renderfarm, attach your camera to an empty and make it go around in a small circle while pointing at an empty that defines your focal point. Everything that is not on the focal plane will be blurred to varying degrees. This is a tedious yet quite accurate and visually pleasing method, but remember that in order to achieve a good DoF effect this way, your render time will increase 16 times. Yes. You will have to render the scene sixteen times. If you’re doing an already processor-intensive still shot, this is simply not a feasible option. I sometimes make renders that take all night; I’m not willing to leave one running for three days, it is simply not worth it. This method is fairly pointless.

  2. Zblur. I don’t like zblur. I don’t really think it looks especially “camera-like”; I haven’t found the results satisfying, and the experience of wrestling with the plugin is even less fun.

  3. Photoshop. I love Photoshop. You can fake DoF in Photoshop (or the GIMP, I assume) in minutes and get a result that is pleasing, easily adjustable without having to re-render the whole damned scene, and quite accurate. There are multiple tutorials on how to do this, but the general gist is as follows:

A. Create a new alpha channel in the Channels tab. Using a combination of gradients, painting, the magnetic tool, and others, define the area you want in focus in white, gradually fading to black for the areas that should be most severely blurred.

B. Go to the “Lens Blur” filter, not Gaussian Blur. A Gaussian blur does not look like blur from a camera lens and will not look “photorealistic”.

C. Select “alpha 1” or whatever you called your new channel as the source.

D. Define the point you’d like to be entirely in focus by clicking on it (this point should also be entirely white in your alpha channel).

E. Adjust the “radius” slider until you are satisfied with the intensity of the blur.

Using Photoshop or another image editor lets you create stunning, realistic DoF effects that are infinitely and minutely adjustable, quickly, and it is much, much more effective than any other method I have found.
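The mask-driven blur described in steps A–E can be sketched like so (a crude 1-D box blur standing in for Photoshop’s Lens Blur, with a helper name of my own): each pixel’s blur radius grows as its mask value falls from white (1.0, in focus) toward black (0.0, most blurred).

```python
def variable_blur_row(row, mask_row, max_radius=4):
    """Blur one row of pixels with a box kernel whose radius is
    driven by the focus mask: mask 1.0 keeps the pixel sharp,
    mask 0.0 applies the full max_radius blur (edges clamped)."""
    n = len(row)
    out = []
    for i, m in enumerate(mask_row):
        radius = int(round((1.0 - m) * max_radius))
        lo, hi = max(0, i - radius), min(n - 1, i + radius)
        window = row[lo:hi + 1]
        out.append(sum(window) / len(window))
    return out
```

The real Lens Blur filter additionally shapes the kernel like a camera iris to give highlights a bokeh look, but the mask-controlled variable radius is the core of why the effect reads as depth of field.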