New UV-Map-Node - useless???

Thanks RamboBaby. Funny stuff.

I also suspected antialiasing was the problem, so I simply tried it without OSA. The seams got better but were still there. I also thought about splitting the five faces and extending them ever so slightly so as to get some overlap. Problem is, I think it would get messy quickly and not really provide a solution.

To create a domemaster you need about five cameras set up in a fashion similar to creating skyboxes or QuickTime VR. That’s why the seams are where they are, I guess. There are 3-camera and 4-camera methods, but again, this method, it appears, gives you seams no matter what. :frowning: For re-texturing models, where you can better control seams, this is a great feature. For remapping images adjacent to each other, not so great. It’s not Blender’s fault; this just isn’t the intended use of the node.

Ideally we would have the ability to specify a “fisheye” camera in Blender somehow and get real-time feedback in a camera viewport. That would be sweet, but I do not know how to approach this, not being a developer and all, nor do I wish to rattle any developer’s cages. I suppose something could be done with GLSL …somehow, so when support for that matures…

Well, I think it’s a bug, and secundar, you should submit it. The reason I think it’s a bug with the Map UV node is that in secundar’s example, it is the single edge pixel that does not map to the corresponding image XY pixel; it picks the first (or last) pixel of that row instead. Traditionally, bugs like this happen because the code starts at offset 1 when it should start at 0, or ends one step too early. This results in the smeared edge (again, only one pixel wide) where the UV starts/ends. In this picture, I just used the key for a “side” with the image, and you can see that the image is not being mapped at the border, but only at the U border; the V border seems fine.
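To illustrate the kind of off-by-one I mean, here is a toy Python sketch. This is purely illustrative, not Blender’s actual remapping code: a loop whose offset starts at 1 instead of 0 overshoots, clamps, and ends up duplicating the last pixel while dropping the first, which is exactly a one-pixel-wide smear at the border.

```python
# Illustrative sketch of an off-by-one in a UV remap loop
# (NOT Blender's actual code; remap_row / remap_row_buggy are hypothetical).

def remap_row(src_row, width):
    """Correct version: map each output pixel x to a source pixel via u in 0..1."""
    out = []
    for x in range(width):
        u = x / (width - 1)              # u spans 0.0 .. 1.0 inclusive
        sx = int(u * (width - 1))
        out.append(src_row[sx])
    return out

def remap_row_buggy(src_row, width):
    """Buggy version: loop offset starts at 1 and runs one step too far."""
    out = []
    for x in range(1, width + 1):        # off-by-one: should be range(width)
        u = x / (width - 1)
        sx = min(int(u * (width - 1)), width - 1)  # clamp repeats the last pixel
        out.append(src_row[sx])
    return out

row = [10, 20, 30, 40]
print(remap_row(row, 4))        # [10, 20, 30, 40]
print(remap_row_buggy(row, 4))  # [20, 30, 40, 40] – first pixel lost, last smeared
```

The buggy output never samples the first source pixel and repeats the last one, which matches the one-pixel seam at the U border.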

The alpha is all 1, so I do not think alpha is the issue. Looking back at the original overlay mix, the single-pixel edge seam is there, just not visible, because that example used gray and alpha mixing to blend the new image with the old, and at creases, who cares. But in this example you do want a smooth edge-to-edge match-up.

secundar, in submitting the bug report, I would simplify it down to just remapping a single image over blackness, like my image, so that the dev can see the issue plainly. Also post a link to this thread.

The workaround I suggest is to scale the UV-mapped image portion up by 1.01 before doing the AlphaOver; this pushes that edge pixel out of the way.
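The arithmetic behind that 1.01 scale can be sketched in plain Python (the `scale_coord` helper is hypothetical, standing in for the compositor’s Scale node): scaling about the image centre moves the bad border pixel just outside the footprint the AlphaOver needs to cover.

```python
# Sketch of the 1.01 scale workaround (hypothetical helper, not bpy code):
# scaling the remapped layer about its centre pushes the smeared 1-pixel
# border outside the region that ends up in the composite.

def scale_coord(x, centre, factor):
    """Scale coordinate x about `centre` by `factor`."""
    return centre + (x - centre) * factor

width = 100
centre = width / 2

# The smeared seam sits at x = 0 in the remapped layer.
print(scale_coord(0, centre, 1.01))   # -0.5: pushed off the left edge
print(scale_coord(99, centre, 1.01))  # roughly 99.49: pushed past the right edge
```

After the scale, the seam columns land outside the original image bounds, so the AlphaOver only sees clean interior pixels.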

Attachments


Could you please post a screenshot of that? No matter what combination of your workaround I try, I can’t get an acceptable result. The only thing I’ve ever found acceptable is to create a separate sequence with only the UV pass enabled, scale it up to 4X, import the new UV pass, map this and the image with the Map UV node, then scale the result back down to normal size for output to the composite. The result isn’t quite as good as a real UV map, but it’s close enough for most applications. The sequence has to be at least 16BPP, which means OpenEXR; an 8BPP pass shatters the gradient, doing the same to the remapped image.
UV re-maps of this size are painfully slow on WinXP.
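The 8BPP-vs-16BPP point can be shown with a quick back-of-the-envelope sketch (the `addressable_columns` helper is hypothetical, not part of anyone’s actual setup): with only 8 bits per channel, a UV pass can encode at most 256 distinct U values, so on a 2048-pixel-wide source most columns are simply unreachable and the remap bands.

```python
# Why an 8-bit UV pass "shatters" the remap while 16BPP OpenEXR does not:
# a UV coordinate quantised to N levels can only address N distinct source
# columns, no matter how wide the source image is.

width = 2048  # e.g. a 2k source image

def addressable_columns(levels):
    """Count distinct source columns reachable from `levels` quantised U values."""
    return len({int(round(i / (levels - 1) * (width - 1))) for i in range(levels)})

print(addressable_columns(256))    # 8BPP: only 256 of 2048 columns reachable
print(addressable_columns(65536))  # 16BPP: all 2048 columns reachable
```

With 8 bits, roughly eight adjacent output pixels snap to the same source column, which is the "shattered gradient" look; 16 bits (or float EXR) gives enough levels to hit every column.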

Thanks Papa. Bug submitted: https://projects.blender.org/tracker/index.php?func=detail&aid=6957&group_id=9&atid=125

Ah, the chemist once again passes out bad dope to unsuspecting guinea pigs.

Sometimes, Rambo, I wish I knew what you were talking about. And other times, I am glad I do not. Did you want ME to post a screenshot, or Secundar? My image was just one of the intermediate-pass viewer nodes from Secundar’s blend file he posted, sans one of the images.

Hey Secundar…

I’m doing full-dome with Blender too. I can’t seem to download your file; I’d be interested in your technique.

If you want a quick preview with no stitching, try putting a sphere in front of an orthographic camera - turn on ray tracing and ray transparency, and turn the IOR and Fresnel up full:

http://blog.domejunky.com/~pete/fresnel-fisheye.blend

It’s OK, but it gives you a messy domemaster, and it’s useless when objects get between the sphere and the camera…

Other than that I’m using the pineappleware tri-stitch approach, or Paul Bourke’s stitcher…you?

Hi itinerant,

Sorry for missing your post. PM me your email and I can try to get you that file. I should still have it handy.

I’m aware of using raytracing for the fisheye distortion, but there are some limitations to that technique. One is longer render times, which can add up on 2K or even 4K masters. Getting fast feedback is what we’re after, so our animators can go to work and not have to wait hours for a playblast to render.

We did purchase Paul Bourke’s stitcher. I also tested all the free ones and we ended up using our modified dome shader in XSI so no stitching was required. I still have my eyes open for a Blender solution though.