Wide lens/Fish-Eye

Hello. This is a guide on how to make a fish-eye/wide-lens effect in Blender. It's not that hard, but it has some strange quirks in it.


Camera placement

The first thing to do is to set up two cameras and a sphere, as shown in the picture below. Remove the default camera and add a new one from either the side or the front view.

Copy (Shift + D) the current camera and rotate it 180 degrees so it points in the opposite direction. Then add a sphere that sticks out just a bit in front of the first camera. To easily animate this setup, parent the sphere to the cameras: select the sphere first, then the first camera, and parent them (Ctrl + P); then do the same with the other camera.

Camera.001 will look like this now:

Sphere Material

To achieve the effect, we need a material on the sphere equivalent to a mirror. It may also be a smart move to turn off Traceable on this material so the sphere won't cast shadows into the scene.

Set up a scene

Set up a scene for the camera to render. I used a regular cube in this tutorial and rotated it on the plane.


Camera.002 is used as a preview camera, since the render camera (Camera.001) is pointed into the sphere. The preview camera is useful for animations, where it lets you see roughly what Camera.001 will capture.

It's better to have this camera than to have no way to preview the scene animation at all.


Results of Camera.001:

Animation with this lens effect:

Video: http://www.youtube.com/v/WJFpqePlLac&rel=1


The distance between Camera.001 and the sphere determines the strength of the effect: moving the camera closer to the sphere makes the effect less dominant, while moving it further away increases it. It can be very handy to do some test renders before you render high-res.
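To get a feel for why the mirrored sphere captures such a wide view, here is a small stand-alone sketch (plain Python with made-up numbers, not Blender's API): it reflects a few camera rays off a mirror sphere and prints how far off the forward axis each reflected ray points. The ray aimed at the centre reflects straight back at the camera (180 degrees), while rays near the rim pick up the environment far off to the sides, which is exactly the fish-eye compression.

```python
import math

def reflect(d, n):
    # mirror reflection: r = d - 2*(d.n)*n
    dot = sum(a * b for a, b in zip(d, n))
    return tuple(a - 2 * dot * b for a, b in zip(d, n))

def normalize(v):
    m = math.sqrt(sum(a * a for a in v))
    return tuple(a / m for a in v)

def ray_sphere_hit(origin, direction, center, radius):
    # nearest intersection of a ray with a sphere (None if it misses)
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2
    return tuple(o + t * d for o, d in zip(origin, direction))

# Camera at the origin, mirror sphere of radius 1 centred 3 units away on +Y.
center, radius = (0.0, 3.0, 0.0), 1.0
for off in (0.0, 0.15, 0.3):  # rays aimed at the centre, then toward the rim
    d = normalize((off, 1.0, 0.0))
    hit = ray_sphere_hit((0.0, 0.0, 0.0), d, center, radius)
    n = normalize(tuple(h - c for h, c in zip(hit, center)))
    r = reflect(d, n)
    angle = math.degrees(math.acos(max(-1.0, min(1.0, r[1]))))  # vs. forward (+Y)
    print(f"ray offset {off:.2f}: reflected ray is {angle:.0f} deg off the forward axis")
```

Moving the camera relative to the sphere changes which part of this angular range fills the frame, which is why the distance controls the strength of the effect.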

Sorry for the bad English in some places.

This is an interesting method.

A somewhat equivalent effect can be created using ray transparency and IORs on a modelled lens.

Hey, great idea. I'm into photography (obviously) and this will help scenes look more "photographed".

Many years ago before raytracing we used a MUCH simpler and FASTER method of achieving the same effect in Blender.

Lower the value of your camera's "lens" setting, then go into your render settings and use the X-parts/Y-parts buttons. Basically, if you have a resolution of 640x480, change the 480 to 48. Now go to your Y-parts and change it to 10. It will now render 10 strips of 48 pixels, the end effect being that the image (which still renders at 640x480) takes on fisheye lens properties…

Much faster than this other method - tho kudos for figuring something like that out…

You can also adjust the effect by changing the lens value - or differing “parts” values (with appropriate adjustment to the resolution).


I stumbled upon your tutorial and I think it's great!
I made a tutorial on the fisheye effect too, but I used RayTransp instead of RayMirror.
I like your tutorial :slight_smile:

My tutorial is in Spanish, but I think you can get an idea of how to do it from the images.

Here is the link to the tutorial:



Clever idea!
You could also use nodes to fix the horizontal mirroring: in your final render, add a Scale node with -1 on the X scale and you will preserve the left-right motion.
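For anyone who hasn't used the compositor: the Scale node with X = -1 simply mirrors every pixel row, undoing the left-right flip the mirror sphere introduces. A minimal plain-Python analogue (the "image" here is a made-up grid, just for illustration):

```python
# Analogue of a compositor Scale node with X = -1:
# mirror each pixel row of the image horizontally.
def flip_horizontal(image):
    return [row[::-1] for row in image]

frame = [
    ["L", ".", "."],
    ["L", "L", "."],
]
print(flip_horizontal(frame))  # the "L" shape now opens the other way
```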

Hello, I wanted to tell you that I have fully translated my fish-eye tutorial with Ray Transp into English. Here is the link: http://proyectossalvador.blogspot.com/2008/04/fisheye-effect-in-blender.html Salvador :slight_smile:

Hey guys, thanks for all the comments. I completely forgot I made this tutorial. As you can see it's nothing new, and fellow Blenderheads have already covered it. I first saw this way of doing the effect on a Maya forum. Again, thanks for all the great comments.

It's a shame that raytracing doesn't reflect halos :frowning: because I tried to simulate a night city. For the feeling, I just generated particles with a Halo material to get a quick result of messy street lights seen from a top view. Like this.


But neither Ray Transp nor Ray Mirror renders it. Or did I set something up wrong? Thanks.

Am I missing something? That is not what Y-parts does at all.
If you set your resolution to 640x48, it will render at 640x48, regardless of what you put in the Y-parts field.

Am I wrong? Were you joking?

Yes, but no, freen, yes but no.

You see, that does sort of work if you use the "Pano" (panorama render) button. But panorama isn't REALLY fish-eye. In terms of these tutorials, panorama is the equivalent of using a cylinder rather than a sphere, but it still looks quite like fish-eye to the casual observer, particularly in widescreen.

Yup, I just read it explained by "broken" on a mailing list. He mentioned the pano (not quite) solution. His overall conclusion was that real fish-eye effects (including particles, halos, etc.) aren't possible using Blender Internal.

Given his stature as a blender dev, I believe him.

Freen - try the technique though - I've found more than passable results using it instead of the extra render times of this other method. If you're looking to re-create the exact lens curvature so as to integrate CG and live action, may I recommend the following: there is plenty of software out there that will take an image and warp it so that the curvature is non-existent, then composite it with your CG. If you want the curvature back, you can add it in with a post-processing image program - plenty of software out there for that too. Doing everything in Blender would be nice, but name one single company that uses one piece of software for everything... even a carpenter uses more than his hammer to build a house.

Yeah, but…
Warping the image post-render is not ideal, as it will necessarily lose quality in the parts of the image that are blown up to larger than 100%.
I’ve actually used the raytrace method on a job (I was using 3D MAX at the time, but the principle’s the same) and, with much tweaking to get the right matte, got pretty decent results, but I still maintain that the best possible method would be a lens modeller within the render engine itself.

A community member of blender3d.no introduced me to a new method that became possible in 2.46. It's a node trick with something called lens distortion. Thanks to user Artisten.
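For the curious, a lens-distortion node works by radially remapping pixel coordinates around the image centre. Here is a rough stand-alone sketch of one common radial model; the coefficient k and the sign convention are illustrative assumptions, not Blender's exact formula:

```python
# One common radial distortion model: r' = r * (1 + k * r^2).
# Coordinates are normalised so (0, 0) is the image centre.
# With k > 0, points farther from the centre are pushed out more,
# which bends straight lines near the edges of the frame.
def radial_distort(x, y, k=0.2):
    r2 = x * x + y * y
    f = 1.0 + k * r2
    return x * f, y * f

for p in [(0.0, 0.0), (0.5, 0.0), (1.0, 0.0)]:
    print(p, "->", radial_distort(*p))
```

Because the remapping happens on already-rendered pixels, stretched regions lose sharpness, which is the problem discussed in the next comments.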

I tried this, but since the distortion is done after the render, the result isn't very sharp.
Is there anything you can do to counter this, like rendering the image at a higher resolution before it is composited? (I'm not that familiar with the node editor yet.)


The way I understand it, this will always be a problem with using the compositor to accomplish the distortion. By the time it gets to the compositor it’s all pixels.

…hence the raytrace approach…

In effect, using the node is just like putting a texture on a sphere and rendering that, am I right? Not an accurate effect - though I use it a lot; it looks great! :stuck_out_tongue:

Freen, I’m not able to get your technique to work…

Is it possible to have a render resolution different from the composite resolution, so you could render at say 1200x900 and then scale down to 600x450 after applying the lens distortion? I suppose you could make the movie at a higher resolution and then use the VSE to scale it down.

I thought about doing something similar and made a scene where you take an aliased picture and then render it again at a lower resolution with OSA enabled.

But the results aren't too good, so I guess we must increase the render resolution in order to get smooth results.
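The render-big-then-shrink idea above can be sketched in plain Python (the "image" here is a made-up greyscale grid, not an actual render): averaging 2x2 blocks is the simplest possible downscale filter, and it smooths away the jaggies that the distortion step introduces.

```python
# Downscale an image by a factor of 2 in each axis by averaging
# 2x2 pixel blocks (assumes even width and height).
def downscale_2x(image):
    out = []
    for y in range(0, len(image), 2):
        row = []
        for x in range(0, len(image[0]), 2):
            block = (image[y][x] + image[y][x + 1] +
                     image[y + 1][x] + image[y + 1][x + 1])
            row.append(block / 4.0)
        out.append(row)
    return out

hires = [
    [0, 0, 4, 4],
    [0, 0, 4, 4],
    [8, 8, 0, 0],
    [8, 8, 0, 0],
]
print(downscale_2x(hires))  # -> [[0.0, 4.0], [8.0, 0.0]]
```

Rendering at double resolution, distorting, and then downscaling like this trades render time for a sharper final frame.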



And here is the blend file, with two scenes: