Saw this posted on Slashdot today.
http://www.glassner.com/andrew/cg/research/cubism/cubism.htm
The actual CG implementation looks pretty sucky IMO - but I really like the sketches of the monorail and the elevated tube walkway. Interesting idea.
Agreed…the CG version is pretty ugly. However, the concepts he illustrates on his site seem to be pretty awesome. If you look at the rest of his work he’s come up with things that are far more interesting (and practical) than this cubism business IMO.
Yeah, the glow-in-the-dark stuff he has seems pretty neat to me. Can you sort of do that in Blender with Radiosity? I haven’t really messed with that yet.
i have a siggraph paper somewhere about “pre-rendered multi-perspective panoramas”, i.e., using a 3d scene with a moving camera to render a complex backplate for traditional 2D animation, like the first sketch shown on that page.
very interesting stuff, and i think entirely doable in blender, if somebody had the inclination.
later
BEAT
The sketches are better than the renders. Why not try to make a scene in Blender? Set up the camera the way you want to render the scene, then rotate it 180 degrees so it turns its back to the scene. When done, add a sphere in front of the camera and delete one half of it, so the camera looks into the remaining half. Then go to Yafray and make the sphere reflective. The sphere will reflect the scene you made, and the camera renders the reflection in the half sphere. Now you’ve got a great image, I think. I hope you understand what I mean; I can make a drawing of it if you don’t. So the sphere functions like a mirror does.
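For anyone who wants to try it from a script, here’s a rough sketch of that setup using Blender’s Python API (bpy). This is only a sketch under assumptions: it targets a current Blender with the Principled BSDF standing in for Yafray’s mirror shader, and all positions, sizes, and names are made up for illustration.

```python
# Rough sketch of the mirror-hemisphere trick in bpy (assumptions: current
# Blender, default scene with a camera; Principled BSDF stands in for a
# Yafray mirror shader; all coordinates are arbitrary examples).
import bpy

# Add a UV sphere in front of where the camera will sit, then bisect it and
# keep only the half facing the camera (swap clear_inner/clear_outer if the
# wrong half is removed).
bpy.ops.mesh.primitive_uv_sphere_add(radius=2.0, location=(0.0, 5.0, 0.0))
sphere = bpy.context.active_object
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.mesh.bisect(plane_co=(0.0, 5.0, 0.0), plane_no=(0.0, 1.0, 0.0),
                    clear_inner=True)
bpy.ops.object.mode_set(mode='OBJECT')

# Perfect-mirror material: fully metallic, zero roughness.
mat = bpy.data.materials.new(name="Mirror")
mat.use_nodes = True
bsdf = mat.node_tree.nodes["Principled BSDF"]
bsdf.inputs["Metallic"].default_value = 1.0
bsdf.inputs["Roughness"].default_value = 0.0
sphere.data.materials.append(mat)

# Put the camera with its back to the scene, looking into the hemisphere;
# the render then shows the scene behind the camera, warped by the mirror.
cam = bpy.context.scene.camera
cam.location = (0.0, 8.0, 0.0)
track = cam.constraints.new(type='TRACK_TO')
track.target = sphere
track.track_axis = 'TRACK_NEGATIVE_Z'
track.up_axis = 'UP_Y'
```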
personally I liked the lightning / thunder tool most, not sure why!
i liked the idea of doing the process backwards and generating lightning which would produce a given thunder… I’d love to try creating lightning from recorded thunder and seeing how close to the original it managed to get!
I don’t think that the CG image is meant to be the end-all. He is pioneering the technique, and first attempts always suck. I’m sure that the image quality (and flexibility of the tools) will improve over time… if it ever catches on.
It is a pretty nifty idea, but I’m wondering if it might not make more sense to simply cut multiple camera images together side by side. You don’t need the warping in between (in fact, I would argue that the warped transitions between them actually make the images more confusing, and less effective at communicating something to the audience).
But still, I’m sure it has useful artistic application.
i was thinking about this: it might make interesting transitions between traditional scenes. Instead of panning the camera from one scene to another, you could slowly zoom out from one normal scene to the warped view, then back in to the second scene…
Not sure about having whole scenes in the warped view though…
Did anyone pick up on the exact method he used? It seemed as though he switched out the simple plane of the camera with a curved surface. Is anyone out there familiar enough with the guts of Blender to know how hard that would be to achieve? Is it “simply” a matter of changing the two planes that make up the camera into two curved, malleable surfaces? Anyone wanna hack that into tuhopuu?
The basic idea is that the camera can be stretched and distorted. I suppose the most general way to look at it would be to change the viewpoint to an arbitrary curved surface (with UV surface coordinates corresponding, via some transformation, to XY image coordinates), and to change the projection plane to an arbitrary curved surface as well (again, with UV coordinates corresponding to XY coordinates).
For raytracing, that is simple to render: for a given point on the image, simply cast the primary ray through the corresponding points on the two curved surfaces.
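To make that concrete, here is a minimal sketch of that ray setup in plain Python. The two surface functions are invented examples (a curved viewpoint swept along an arc and a bulging projection surface), not anything from Glassner’s paper:

```python
# Minimal sketch of multi-perspective ray generation: each image pixel (u, v)
# maps to one point on a curved "eye" surface and one point on a curved
# projection surface; the primary ray is cast through the pair. Both surface
# functions below are made-up examples.
import math

def eye_surface(u, v):
    # Example: viewpoints bowed along a horizontal 90-degree arc.
    angle = (u - 0.5) * math.pi / 2
    return (math.sin(angle), v - 0.5, -math.cos(angle))

def projection_surface(u, v):
    # Example: a projection surface ahead of the eye that bulges with v.
    angle = (u - 0.5) * math.pi / 2
    r = 2.0 + 0.3 * math.sin(v * math.pi)
    return (r * math.sin(angle), v - 0.5, -r * math.cos(angle))

def primary_ray(u, v):
    """Ray for normalized image coordinates u, v in [0, 1]."""
    o = eye_surface(u, v)
    p = projection_surface(u, v)
    d = tuple(pi - oi for pi, oi in zip(p, o))
    n = math.sqrt(sum(c * c for c in d))
    return o, tuple(c / n for c in d)   # origin, unit direction

# For each pixel, trace primary_ray(x / width, y / height) as usual.
```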
Blender’s renderer is scan-line based, and thus to render things as a single image, the projection transformation has to be linear (i.e. straight lines stay straight).
Thus, it could potentially be done (in a limited way) by rendering the image in separate pieces (as with panorama rendering), but if the distortions are severe, you might end up having to render very, very small image pieces to get good results (which would be very slow to render).
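For illustration, here is a hedged sketch of that piecewise approach, in the spirit of Blender’s panorama mode: render many narrow vertical strips, each with its own ordinary linear camera aimed along the curved projection, then paste the strips side by side. The distortion function and slice count below are made-up assumptions:

```python
# Sketch of approximating a nonlinear projection with narrow linear pieces:
# each vertical strip gets a standard camera whose yaw and field of view
# match the curved projection at the strip's center. The 120-degree sweep
# and the slice count are illustrative assumptions.
import math

WIDTH, SLICES = 1024, 64                   # more slices = closer approximation

def view_angle(u):
    # Hypothetical curved projection: normalized image x in [0, 1] mapped
    # to a viewing direction; here a simple cylindrical 120-degree sweep.
    return (u - 0.5) * math.radians(120)

for s in range(SLICES):
    u0, u1 = s / SLICES, (s + 1) / SLICES
    yaw = view_angle((u0 + u1) / 2)        # aim the camera at the slice center
    fov = view_angle(u1) - view_angle(u0)  # narrow per-slice field of view
    # Render a (WIDTH // SLICES)-pixel-wide strip with a linear camera using
    # this yaw and fov. If the distortion is severe, fov varies quickly and
    # the strips must get very narrow before the seams stop showing.
    print(f"slice {s:2d}: yaw={math.degrees(yaw):7.2f} deg, "
          f"fov={math.degrees(fov):5.2f} deg")
```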