Problem with halos in cubic mapped dome render

I’ve been using Blender for months and months to create shots for our Planetarium show, which should be finished soon. In all that time, I’ve run up against the same problem over and over again, and haven’t been able to work around it in some shots. I really need to find out if it’s fixable.

Because we’re rendering for a dome-shaped screen, we use six cameras arranged in a cubic map formation, each with a 90-degree field of view and all perpendicular to each other. This gives us top, bottom, front, back, left and right views of the scene (we include the bottom because the software we use for stitching can re-align shots in post if we have all six views).
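In case it helps to picture the rig, here’s a rough Python sketch of the kind of setup (not our actual rig script - the exact API calls and the roll of each view depend on your Blender version and on what the stitching software expects):

```python
import math
import bpy

# Six axis-aligned views, each 90 degrees wide: name -> (X, Z) rotation in degrees.
# A Blender camera looks down its local -Z axis, so zero rotation looks straight
# down ("bottom") and 90 degrees on X brings it up to the horizon.
VIEWS = {
    "front":  (90, 0),
    "right":  (90, -90),
    "back":   (90, 180),
    "left":   (90, 90),
    "top":    (180, 0),
    "bottom": (0, 0),
}

for name, (rx, rz) in VIEWS.items():
    cam_data = bpy.data.cameras.new(name)
    cam_data.angle = math.radians(90.0)          # 90-degree field of view
    cam_obj = bpy.data.objects.new(name, cam_data)
    cam_obj.rotation_euler = (math.radians(rx), 0.0, math.radians(rz))
    bpy.context.scene.collection.objects.link(cam_obj)
```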

The problem is that when you use halo materials and stitch all these views together, you get highly visible seams in the final frames. I have attached an image that shows the top, left and right views of a close-up shot of my spiral galaxy, and a bigger image of one of the seams. Even in the first image you can see the seams if you look for them. When it’s rendered out at 1600x1600 for each camera view and put up on an 18m dome, you can see them VERY clearly.

This happens because when a particle with a halo sits close to the edge of one camera’s view, that camera renders the halo while the neighbouring camera does not, until the particle crosses the boundary between them; then the halo is rendered by the second camera and not the first.
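Just to put numbers on it (a toy check, nothing to do with Blender’s internals): a halo whose centre sits a degree or two outside one camera’s 90-degree view still overlaps that camera’s frame, but only the camera that contains its centre actually draws it.

```python
def seam_check(center_deg, halo_radius_deg):
    """Two adjacent views: A covers -45..45 degrees, B covers 45..135.

    A halo overlaps a view if any part of it falls inside that view's range,
    but a sprite-style halo only gets drawn by the view containing its centre.
    """
    overlaps_a = center_deg - halo_radius_deg < 45
    overlaps_b = center_deg + halo_radius_deg > 45
    drawn_by = "A" if center_deg < 45 else "B"
    return overlaps_a, overlaps_b, drawn_by

# Centre at 46 degrees, radius 3: it overlaps both views, but only B draws it,
# so view A shows a hard cut-off right on the seam.
print(seam_check(46, 3))  # (True, True, 'B')
```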

I have worked around this for the most part by using lots of particles and small halos. But in some shots we can’t get around it.

We could fix this if we could use a “fisheye lens” for our dome renders instead of a cubic map, but that can be tricky because it has to be an equiangular type of distortion to work (I’m not sure why that’s a problem, but it’s what I’ve been told by our technical guy). The other option would be a halo shader that is calculated a bit more like volumetric lights are - i.e. at render time rather than as a post-effect. That’s as far as my understanding goes.
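For what it’s worth, the “equiangular” part just means that distance from the centre of the fisheye image is proportional to the angle away from the dome’s zenith. A rough sketch of that mapping (plain Python, nothing Blender-specific):

```python
import math

def equiangular_fisheye_uv(direction, fov_deg=180.0):
    """Map a 3D view direction onto an equiangular ("f-theta") fisheye image.

    direction: (x, y, z) with +z pointing at the dome's zenith.
    Returns (u, v) in -1..1, with the distance from the image centre
    proportional to the angle from the zenith.
    """
    x, y, z = direction
    theta = math.acos(z / math.sqrt(x * x + y * y + z * z))  # angle from the zenith
    phi = math.atan2(y, x)                                   # azimuth around the dome
    r = theta / math.radians(fov_deg / 2.0)                  # equiangular: r proportional to theta
    return r * math.cos(phi), r * math.sin(phi)

# Straight up lands in the centre; the horizon lands on the unit circle.
print(equiangular_fisheye_uv((0, 0, 1)))  # (0.0, 0.0)
print(equiangular_fisheye_uv((1, 0, 0)))  # (1.0, 0.0)
```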

Can anyone help with this? We could probably pay for someone to create a patch for us if it’s a difficult job.

Attachments



Instead of the cameras, you could try using an object with an EnvMap texture. Once you’ve set the resolution params and rendered, you can save the result of that EnvMap (see the tab to the right of Free Data) and use it.


That sounds very promising! Thanks for the tip :)

OK, I’ve done a test with a cube inside a geosphere that has lots of vertices and a halo material with a magic texture on it. The cube has an EnvMap texture. I saved that out, figured out how to split it into top, bottom, left, right, front and back, and then had a look at what happens to the halos across the seams.
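In case it’s useful to anyone trying the same thing, splitting the saved image can be done with something like this (not exactly what I did, just a sketch using PIL; it assumes the saved EnvMap is a 3x2 grid of square faces, and which tile is which face you have to work out by eye):

```python
from PIL import Image

def split_envmap(path, out_prefix="face"):
    """Split a saved EnvMap image into six square tiles.

    Assumes the saved image is a 3x2 grid of cube faces; which tile is
    top/bottom/left/etc. you have to work out by eye, so the names here
    are just row/column placeholders.
    """
    img = Image.open(path)
    face = img.width // 3
    for row in range(2):
        for col in range(3):
            box = (col * face, row * face, (col + 1) * face, (row + 1) * face)
            img.crop(box).save(f"{out_prefix}_{row}{col}.png")

split_envmap("envmap.png")
```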

This image shows the front part of the cubic map in the centre, with the left and right on either side of it, like my previous example. Strangely, the halos match perfectly in the centre of each seam, but get increasingly mismatched as you go up or down the seam. Do you see what I mean?

How could we fix this?

Also, if we can use this approach, is it possible to use all the composite nodes I’ve set up? I have a feeling I would have to render out each pass as an image sequence then comp those together instead.

Attachments


What kind of camera settings and angle do you have? It might be a perspective thing.

Oh! I just tried rendering out the same test geosphere I posted above, but with my perpendicular cameras instead of using the environment map. It did the exact same thing! The halos matched in the centre of the seams and got more and more mismatched going up and down.

Why would this be happening?

BlackBoe - My cameras are set with 16mm lenses, which translates to a 90 degree field of view. In 3DS Max we use the exact same camera rig with 90 degree FOV cameras and it maps perfectly. What’s different about Blender’s cameras?
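For reference, the lens-to-FOV relationship (this assumes the 32mm film width Blender uses for its lens setting, which is exactly what makes 16mm come out as 90 degrees):

```python
import math

SENSOR_WIDTH_MM = 32.0  # the film/sensor width Blender assumes for its lens setting

def fov_degrees(lens_mm):
    return math.degrees(2.0 * math.atan(SENSOR_WIDTH_MM / (2.0 * lens_mm)))

def lens_for_fov(fov_deg):
    return SENSOR_WIDTH_MM / (2.0 * math.tan(math.radians(fov_deg) / 2.0))

print(fov_degrees(16.0))   # 90.0 degrees
print(lens_for_fov(90.0))  # 16.0 mm
```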

I can’t answer your questions, but the man to ask is eeshlo, and probably Cessen, both very difficult to get ahold of.


This example shows my six camera rig inside the same geosphere, but with just a shadeless material on it and a magic texture, highly tiled. No halos this time.

It shows a very, very slight version of the same problem, but the seams are hardly there at all. You can’t really see them when I scale the image down to 900x300 to post it here.

So from these tests it looks to me like:

a) My camera rig and the envmap both give the exact same results.

b) Halos are rendered across seams.

c) But Blender’s renderer distorts the image in an odd, radial kind of way that affects halos worse than it does normal textures.

Fligh - thanks for the contacts.

Attachments


I would consider it a bug! Post it in the bug tracker - being so close to release time, there’s a good chance it might get fixed quickly.

Worst case scenario, you might try replacing the halos with blend textures alpha-mapped on billboarded planes, but that could be rather nasty…
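Something along these lines, maybe (just a Python sketch of the idea - the names and constraint settings are placeholders, and for a multi-camera rig you’d probably want every billboard tracking a single empty at the rig’s centre so all six views agree):

```python
import bpy

def add_billboard(location, size, target):
    """Add a square plane that always turns to face the given target object.

    The soft halo look would come from an alpha-mapped blend texture on the
    plane's material; this only sets up the geometry and the facing constraint.
    """
    bpy.ops.mesh.primitive_plane_add(size=size, location=location)
    plane = bpy.context.active_object
    track = plane.constraints.new(type='TRACK_TO')
    track.target = target
    track.track_axis = 'TRACK_Z'  # the plane's normal points at the target
    track.up_axis = 'UP_Y'
    return plane

# e.g. one billboard per particle position, all facing the active camera
add_billboard((0.0, 0.0, 0.0), 0.5, bpy.context.scene.camera)
```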

Well, I’ve been away from my computer for the past few days, but if there’s still time I’ll report it today!

I’ve just done some tests of rendering the inside of a cube with a grid pattern textured on it. The first attached image is a zoom in on the corner of one of my camera node images. All renders come out like this, with the grid lines bent towards the nearest corner. The lines in the middle of each side are straight. (I’ve cropped this image from around the middle of one side to the corner).

The second image comes from an environment map I rendered in the same way. It’s not as big a problem, but it’s still there. This image shows the line that goes horizontally through the middle of the saved out environment map, from the middle of one cubic face out to the edge.

Attachments



Still trying everything I can before I post a bug report, I made sure to check if the same bug still appears in the release candidate (we’ve been using stable versions for our project).

The halo problem still seems to be there exactly the same, for both my cameras and the envmap.

Rendering with the grid pattern gives the same results for my cameras, but the envmap doesn’t show any distortion at all. It’s perfect for a plain grid texture!

Oops!

When I did the grid test that I attached two posts ago, I had my cameras set on a 15.9mm lens instead of 16mm. I had been trying to see how varying the lens setting changed the results, and forgot to change it back to 16mm. So the big distortion in the grid texture was a mistake. I did get a smaller distortion when I tried it at home, though, with 16mm lenses.

In the release candidate (once I fixed the cameras) there is no distortion at all for the cameras or the envmap.

The halo problem is definitely still there, though, and is the same for both the cameras and the envmap.

I’ll post that as a bug.

Last post for today…

This is a render that shows the problem explicitly. A sphere with a cubic environment mapped reflection inside another sphere with a halo texture on it. You can see how badly the faces mismatch at the corners.

Attachments



This image shows the exact same problem as your halos one, only it’s harder to notice because of the texture.

To me it looks like you have the textures repeating like tiles - try using world space!
(I might be very new to Blender, but it looks like I am right).

Hi Lisa,

Looking at the images in your original post, I’m reminded of a paper I read a while back.

You may have already considered this, but I’ll throw it out there anyway. Since things like halos and particles don’t get rendered properly if the source is off the edge of the original image, have you tried over-rendering? Render the pre-stitched images larger than needed, then trim the excess edges off them before stitching.
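To put rough numbers on it (assuming square 90-degree target views; the padding amount is just an example):

```python
import math

def overrender_size(final_px, final_fov_deg=90.0, padded_fov_deg=100.0):
    """Pixels to render a padded view at, so that cropping the central region
    back down to the final FOV still leaves the target resolution.

    A rectilinear camera's image-plane half-width is tan(half-FOV), so the
    90-degree view occupies tan(45)/tan(50) (about 84%) of a 100-degree frame.
    """
    fraction = (math.tan(math.radians(final_fov_deg / 2.0))
                / math.tan(math.radians(padded_fov_deg / 2.0)))
    render_px = math.ceil(final_px / fraction)
    crop_px = round(render_px * fraction)   # central region to keep
    return render_px, crop_px

# For 1600x1600 faces with the FOV padded from 90 to 100 degrees:
print(overrender_size(1600))  # about (1907, 1600)
```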

I’m not sure how to set it up in Blender, but the paper I read showed how to do it in Maya.

http://www.fulldome.org/images/stories/IPS2004/Overrendering_Techniques_Casey.pdf

-waystar

Thanks WayStar, I’ve thought about doing that but hadn’t worked out whether it was possible. I’ll definitely read that paper.

But I’ve changed my mind about why the problem is occurring. Sources off the edge of the image are fine in some places, but not in others. It’s some kind of distortion that happens with this kind of camera rig. I’ve reported the bug and Ton has asked for a .blend showing the problem, so he’s onto it!

Well, I got this comment from Ton:

I’ve checked your .blend, and could track down where the problem is.
Halos in Blender are like 2D “sprites”: based on a location in the image, it draws a square-sized (camera-aligned) halo circle (or texture). That works pretty nicely and fast, but it also means there’s no perspective distortion of halos, which should happen especially towards the corners of an image. That lack of distortion is the error you see in the rendering of an environment map.

You can see this more clearly if you make the halos smaller, like 0.1 in your file: then the “error” suddenly disappears.

There’s no real solution for this, apart from treating the halos as 3D spheres/volumes… which makes rendering very slow.

I also don’t fully understand what your content needs are, but you might consider not using halos in this case, or using another way to create “domes”…?
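If I’m reading that right: under a normal perspective camera a small round thing should get smeared out towards the corners of the frame, and Blender’s sprite halos stay the same circular shape everywhere. Putting rough numbers on how big that stretch is for a 90-degree view:

```python
import math

def radial_stretch(offaxis_deg):
    """Radial stretch a pinhole (rectilinear) projection applies to a small
    round object at a given angle off the view axis: a tiny angular width d
    spans roughly d / cos(theta)^2 on the image plane, versus d at the centre."""
    return 1.0 / math.cos(math.radians(offaxis_deg)) ** 2

print(radial_stretch(0.0))    # 1.0  - centre of the frame
print(radial_stretch(45.0))   # 2.0  - middle of an edge in a 90-degree view
print(radial_stretch(54.74))  # ~3.0 - corner of a 90-degree view
```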

My bug report is now “closed” so I can’t discuss it any further. I’m a bit deflated now. I don’t know what to try next :(. Is there a chance that over-rendering would work? I guess I can only try and see.

Well, I just tried over-rendering, and it doesn’t work. I was pretty sure it wouldn’t, given the problem.

Another couple of ideas we have are more complicated - one is to render a spherical panorama, if that’s possible, then texture that onto a sphere (we technically only need the top half), then render a cubic map from inside that sphere.

Another idea I had was to render from lots more cameras in a kind of spherical tiling, if that’s possible. I would have to do a lot of maths to sort out how that would all stitch together. But I think that would be a very robust method if I could achieve it.

I’m realising now, after working on the panoramic solution, that it basically gives me the ability to do my second solution with just one camera. If I do a panorama with the most tiles possible, then use that to texture a sphere and render out a cubic map of the inside of the sphere, it just might work! I’m playing with this now; using a camera with a FOV of 7.5 degrees (lens length 244.11) gives me no distortion at all on the halos in a panoramic render (for bigger FOVs the distortion does creep in).

All I have to do now is work out:

a) How do I create a panorama I can use as a texture on a sphere? My calculations for this don’t seem to be working yet.

b) How do I texture it onto the sphere?

It really needs to be kind of like a Mercator projection of an Earth map being textured onto a sphere to make a globe of the Earth. I’ve done this many times, but creating that type of projection with the pano setting seems tricky!
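For reference, the mapping I’m after is really the simple lat/long (equirectangular) one rather than true Mercator - longitude and latitude map straight onto the image’s U and V. A quick sketch of that, so I know what shape the panorama has to be:

```python
import math

def latlong_uv(direction):
    """UV for a point on a sphere under a lat/long (equirectangular) mapping:
    longitude maps linearly to U, latitude maps linearly to V."""
    x, y, z = direction
    lon = math.atan2(y, x)                                  # -pi..pi
    lat = math.asin(z / math.sqrt(x * x + y * y + z * z))   # -pi/2..pi/2
    u = (lon + math.pi) / (2.0 * math.pi)
    v = (lat + math.pi / 2.0) / math.pi
    return u, v

print(latlong_uv((1, 0, 0)))  # (0.5, 0.5) - a point on the equator
print(latlong_uv((0, 0, 1)))  # (0.5, 1.0) - the zenith maps to the top edge
```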

Well, after messing around with panoramas and cameras, I’m admitting defeat. Looks like I’m going to be billboarding my own damn halos! Oh well, I already wrote a script to create billboarded galaxies, and it won’t take much to modify it. Thankfully we’re getting a supercomputer to do all our rendering :)

Just in case anyone else stumbles on this thread looking for Planetarium or fulldome related stuff, don’t hesitate to contact me if you have some ideas you want to kick around. I’ve tried a lot more experiments than I’ve posted here. I think in the long run it would really pay to get a bunch of low budget fulldome creators together to work on Blender’s code and try to knock it into shape so we can use it a bit more easily.