I think it would be really useful if Blender had a spherical camera type in order to be able to render lat/long format light probes. Blender can already render to HDR and OpenEXR format, it just can’t do spherical renders.
Here is a way to work around it for now, but the image cannot be rendered with anti-aliasing and cannot be rendered to an HDR format.
In top view, add a 64 x 64 grid.
In edit mode, move the grid up on the Z axis 1 BU.
In the UV image editor, make a new image that is twice as wide as it is high. (Say, 2048 x 1024.)
In top view, UV map the grid by using ‘project from view (bounds)’.
In side view, use shift+W to warp the mesh 180 degrees.
In front view, warp the mesh 360 degrees and remove doubles.
Delete the 2 pole vertices on the sphere that was just created.
Rotate the sphere 90 degrees on the X axis to turn it upright.
Give the sphere a shadeless ray transparent material with an IOR of 1.
Move your camera to the middle of the sphere.
Select the sphere and do a full render bake.
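For reference, the lat/long (equirectangular) layout the baked image ends up in relates a world-space direction to image coordinates roughly like this. A minimal plain-Python sketch; the axis conventions (+Z up, seam along -X/-Y) are my assumption and depend on how the sphere was built and rotated:

```python
import math

def direction_to_latlong_uv(x, y, z):
    """Map a unit direction vector to equirectangular (lat/long) UV in [0, 1].

    Assumes +Z is up; the actual seam position depends on how the sphere
    was warped and rotated in the steps above.
    """
    # Longitude: angle around the up axis, wrapped into 0..1.
    u = (math.atan2(y, x) / (2.0 * math.pi)) + 0.5
    # Latitude: angle above/below the horizon, so v=0 is the bottom pole.
    v = (math.asin(max(-1.0, min(1.0, z))) / math.pi) + 0.5
    return u, v

# Example: a direction pointing straight up lands on the top row of the image.
print(direction_to_latlong_uv(0.0, 0.0, 1.0))  # (0.5, 1.0)
```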
What I propose is a spherical camera object so that this type of rendering can take advantage of everything a normal render would have, like AA, node compositing, and the ability to render to HDR formats. In the 3D view, the camera object could look like this:
The vertical line represents the left and right edge of the spherical image, so the user can control where the seam is in their scene by rotating the camera.
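Under the hood, all such a camera would need to do is turn each pixel into a ray direction covering 360 degrees horizontally and 180 vertically. A rough plain-Python sketch of that mapping (the inverse of the sketch above, same assumed axis conventions):

```python
import math

def latlong_pixel_to_ray(px, py, width, height):
    """Turn an equirectangular pixel coordinate into a unit ray direction.

    One ray per pixel is the whole job of the proposed spherical camera.
    Axis conventions (+Z up, seam at the left/right image edge) are assumptions.
    """
    u = (px + 0.5) / width            # 0..1 across the image
    v = (py + 0.5) / height           # 0..1 down the image
    lon = (u - 0.5) * 2.0 * math.pi   # -pi..pi around the vertical axis
    lat = (0.5 - v) * math.pi         # +pi/2 at the top row, -pi/2 at the bottom
    x = math.cos(lat) * math.cos(lon)
    y = math.cos(lat) * math.sin(lon)
    z = math.sin(lat)
    return x, y, z
```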
You can just use a mirror ball (a sphere with no spec and 100% mirror) like you would with a real camera, then use a program like HDRShop to change the projection. I have done this a few times to make HDRs.
A spherical camera would be nice and would save some steps and workarounds.
I will try to find an example.
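For the curious, the remapping HDRShop does from a mirror-ball image boils down to reflecting the view direction off the ball. A small plain-Python sketch, assuming an orthographic view straight down -Z (a simplification of the real setup):

```python
import math

def mirrorball_pixel_to_direction(u, v):
    """Convert a point on a mirror-ball render to the world direction it reflects.

    u, v are in [-1, 1] across the ball (0, 0 at its centre), assuming the
    camera views the ball orthographically along -Z.
    """
    r2 = u * u + v * v
    if r2 > 1.0:
        return None                       # outside the ball
    nz = math.sqrt(1.0 - r2)              # camera-facing component of the normal
    # Reflect the view direction (0, 0, -1) about the normal (u, v, nz).
    return (2.0 * u * nz, 2.0 * v * nz, 2.0 * nz * nz - 1.0)

# The centre of the ball reflects the camera itself:
print(mirrorball_pixel_to_direction(0.0, 0.0))   # (0.0, 0.0, 1.0)
# The rim reflects what is directly behind the ball (hence the sampling
# problems there that come up later in this thread):
print(mirrorball_pixel_to_direction(1.0, 0.0))   # (0.0, 0.0, -1.0)
```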
I am not sure if the following is correct, so someone correct me if I’m wrong.
All these setups are great for making an environment map, but it won’t be a true HDRI unless you can render the same image under several exposures and then combine them into an EXR or HDR file. You can export to EXR in Blender but it will only have the lighting information for the current exposure.
To get this to work properly and to get a true HDRI file you need to be able to render at various camera exposures. Then the lighting of the scene can be properly captured in the HDR or EXR file. At the moment, you cannot pick the exposure of the camera in Blender Internal; you can in other renderers (typically unbiased ones).
If you were using Lux or Indigo, you could render the image of the sphere from two angles and at several exposures to create a true HDR image.
You're right about exposure. It can be faked by adjusting the lights or the exposure setting in the World tab.
This is an old thread of mine that shows the results of this. CLIK
Hang on hang on…
Why do you have to use different exposures?
Blender renders images with 32 bit floating point colour anyway, doesn’t it? If you save in OpenEXR or HDR format, that info is preserved.
Something I’m missing?
Not really, that’s only if you use some kind of exposure control (like the basic tone mapping in world settings) to get your lights back into the visible range. The EXR captures what’s rendered. If you have >1.0 values in the render, then that’ll be stored in your file. This is the same with any renderer - usually if there are overbright areas, it’ll be tone mapped to get back within the visible range. If you’re making a light probe, you want to keep the overbright lighting, so you just don’t use any tone mapping.
[quote]At the moment, you cannot pick the exposure of the camera in Blender Internal; you can in other renderers (typically unbiased ones).[/quote]
Sure you can, you can use the exposure sliders in world buttons, or cook your own in the compositor. In other renderers that give you camera settings, those settings are basically just inputs into the tone mapper, though with variables that mimic how a real camera works.
Of course in order for this to work, you need to get those overbright values in your render in the first place. This means using stronger lights, generally with a more physical inverse square falloff, perhaps materials with high emit values to represent light sources, etc.
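To illustrate: the float data saved to EXR keeps the overbright values as-is, and "exposure" is just a scale or curve applied on top of them. A rough plain-Python sketch with made-up pixel values (the tone-map formula here is only an illustration, not necessarily Blender's exact one):

```python
import math

# A float render keeps overbright values; saving to EXR/HDR preserves them.
rendered_pixel = 7.5   # e.g. a bright lamp, well above 1.0

def expose(value, exposure_stops):
    """A 'camera exposure' is basically a scale applied before display."""
    return value * (2.0 ** exposure_stops)

def tone_map(value, exposure):
    """A simple exposure-style tone map that squashes values back into 0..1."""
    return 1.0 - math.exp(-exposure * value)

print(expose(rendered_pixel, -3))     # 0.9375: darker exposure brings detail back
print(tone_map(rendered_pixel, 1.0))  # ~0.999: compressed into displayable range
```

The point for light probes is simply to skip the tone map and keep the 7.5 in the file.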
So for render baking, you should create a 32-bit map to bake into to keep HDR (like for displacement), or doesn't it matter until you save the baked texture?
I suspect that it throws info away if you haven't specified 32-bit up front when creating the image to bake into.
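For what it's worth, in the current Python API (much newer than the one from this thread's era) you can make sure the bake target is a float buffer up front. A small sketch; the image name and size are arbitrary:

```python
import bpy

# Create a 32-bit float image to bake into, so overbright values survive.
# float_buffer=True is the important part.
probe = bpy.data.images.new("probe_bake", width=2048, height=1024,
                            float_buffer=True)
probe.file_format = 'OPEN_EXR'  # save as EXR later to keep the float data
```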
The trouble with the reflective method is that not all render features are supported… I was using this method on a current project and added lots and lots of foliage with the particle system… I had all sorts of problems in reflective surfaces: when the alpha depth builds up beyond your depth setting you just get black… and render times got prohibitive as I had to render "twice up" (at double size) to get around the lack of OSA…
In the end I settled for the old-fashioned way of rendering along all six axis directions with a 90-degree field of view to square textures, piecing them together into a cube map and converting externally… this was the only way to get ALL of the render features of a standard camera (working particles!) and it has the bonus of being able to use OSA…
Sadly the EnvMap function (which renders a cube map for you) has similar limitations to the reflective method… and I couldn't set the far clip plane as far out as I needed either…
At least a cube map can be converted to a lat-long map or angular map in HDRShop!
I agree that a spherical camera would be a great enhancement to the workflow… as would the ability to output to "lightprobe" format, especially now that this is fixed to take camera angles into account when used as a world texture…
This is a classic case where lots of cunning and workarounds can get the job done, but the workflow is a little… tougher than it could be!
It'd be great to "do it all" in Blender… but for now I'll have to stick with cube maps and HDRShop…
Edit: the PANO function almost does this… (splitting the render into horizontal segments and rotating the camera in between). Could it be extended to rotate on vertical segments too? I can see why that might be tough… It would (of course) be much friendlier if you could just set horizontal and vertical degrees independently for a camera (e.g. wrap 180 degrees vertically and 360 horizontally for a full spherical render).
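For anyone wanting to script the six-view approach, here is a rough sketch using today's bpy API (which post-dates this thread); the face names and Euler rotations assume Blender's default camera orientation and XYZ rotation order, so they may need adjusting for your setup:

```python
import math
import bpy

scene = bpy.context.scene
cam = scene.camera
cam.data.angle = math.radians(90.0)       # 90-degree field of view per face
scene.render.resolution_x = 1024          # square faces
scene.render.resolution_y = 1024
scene.render.image_settings.file_format = 'OPEN_EXR'  # keep HDR data

# Euler rotations (XYZ, radians) so the camera looks along each axis.
# The default camera (0, 0, 0) looks straight down (-Z).
faces = {
    "pos_x": (math.radians(90), 0, math.radians(-90)),
    "neg_x": (math.radians(90), 0, math.radians(90)),
    "pos_y": (math.radians(90), 0, 0),
    "neg_y": (math.radians(90), 0, math.radians(180)),
    "pos_z": (math.radians(180), 0, 0),
    "neg_z": (0, 0, 0),
}

for name, rot in faces.items():
    cam.rotation_euler = rot
    scene.render.filepath = "//cube_%s.exr" % name
    bpy.ops.render.render(write_still=True)
```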
You can create cubemaps by setting the camera lens to 16 (with Blender's standard 32 mm film width, a 16 mm lens gives a 90-degree field of view) and rendering the camera at six different 90-degree positions (the six sides of the cube: front, back, left, right, top and bottom).
I did find a problem, though, when trying to do cubemaps in Blender. I've described the problem in the above post, so I'll just quote it again here:
It's basically where the faces meet. Everything is fine if normals aren't used. However, when normals are used in materials applied to objects that cross more than one face, there are distinctive lines along the face edges.
Here’s a pic with the problem:
Might be a bit hard to see but look closely down the middle and at the bottom to see the lines where the faces meet. These faces should all blend together.
I'm assuming these defined lines where the faces join are due to the way the normals' light/shadows are formed from the particular camera angle (the camera stays at the same X, Y, Z coordinates but rotates 90 degrees to each face).
I also noticed that there was the same cross-face problem with the old 2.45 particle system when I created grass that crossed between cube faces (haven’t checked with the new particle system but I’d assume the issue might still be present in it).
I also did a quick and dirty test to create a cubemap with an EnvMap texture and got similar results.
Anybody know why this is happening and is there a workaround? Don’t know if it’s any use but I have tested doing cubemaps in Cinema 4D using the exact same camera rotation method and didn’t have the same issue.
Anybody got an opinion on the cubemap issue where any material containing a normal that crosses more than one face is distorted? Any opinion at all would be great!!
Could it be possible to just create a ray-mirrored sphere set to shadeless, set up a camera to record it like a real light probe, then adjust the exposure by changing the brightness of the lights in the scene, and finally take the images into HDRShop to produce the HDR angular map as you would with real light probe photographs?
You could, but why? 3D cameras don't have the physical constraints of a real-world camera, so it's entirely possible to make a full 360-degree virtual fisheye lens. Rendering a mirror ball would still give you sampling errors for the region directly behind the ball, so you'd have to render it twice and then merge the two in a paint package. That makes sense for actual real-world light probes, but it's silly for 3D virtual probes.
As someone else stated in this thread, certain render features don't reflect correctly, so it'd be better to get it all properly 'in-camera', if such a term can apply to 3D…