I know Blender wasn’t exactly built for this, but is there any way I can do an orthographic render of multiple surfaces moving around with image textures applied to them, such that a pixel in an image = a pixel in the final render? I could do this with 2D animation software, I’m sure, but Blender is more versatile and I don’t want to have to learn another (most likely less intuitive) animation system.
First time I’ve heard Blender referred to as “intuitive.” Though I agree it is, as long as your “intuition” stems from a lot of previous 3D experience.
In any case, you can do what you want within certain limits by setting up your images on planes and ensuring that the texture image sources have at least the same resolution as your rendered screen area. I say at least because in my experience (I just used something like this for a lot of my show reel) it’s better to have a greater resolution for the sources than for the screen. The correspondence will not then be “pixel-to-pixel,” but it ensures that the maximum quality presentation of the art is achieved. Also, you can then zoom or dolly in on the art if need be and not lose noticeable image quality.
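To make the math concrete: with an orthographic camera, `ortho_scale` is the width of the view in world units (along the larger render dimension, with the default Auto sensor fit). So a plane `W` units wide covers `W / ortho_scale * render_width` pixels of the render, and solving that for a 1:1 texture-to-render match gives a simple formula. A rough sketch (the function name is mine, and it assumes the plane faces the camera squarely):

```python
def ortho_scale_for_pixel_match(plane_width, image_width_px, render_width_px):
    """Ortho scale (world units) so that each texture pixel on a plane
    of width `plane_width` maps to exactly one render pixel.

    Derived from: plane_width / ortho_scale * render_width_px == image_width_px
    """
    return plane_width * render_width_px / image_width_px

# A 2-unit-wide plane holding a 512 px image, rendered at 1920 px wide:
print(ortho_scale_for_pixel_match(2.0, 512, 1920))   # → 7.5

# If the image resolution equals the render width, the plane simply
# fills the frame: ortho_scale equals the plane width.
print(ortho_scale_for_pixel_match(2.0, 1920, 1920))  # → 2.0
```

Note this only holds while the camera and plane are static; the moment anything moves off an exact pixel boundary, resampling kicks in as described below.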
Your image textures will be sampled and might be filtered for interpolation – if this causes problems with your art (it didn’t with mine) you can adjust certain parameters in the Texture context. Here’s an example: the original rendered image on the left, and how it reproduced in an animation as an image on a plane (perspective camera and in motion):
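In a current node-based Blender (Cycles/Eevee), the equivalent of those Texture-context parameters is the Interpolation setting on the Image Texture node; switching it to Closest gives nearest-neighbour sampling with no softening. A sketch, assuming a material named "ArtPlane" (the name is hypothetical, and this only runs inside Blender):

```python
import bpy  # available only inside Blender's embedded Python

# "ArtPlane" is a placeholder; use your actual material name.
mat = bpy.data.materials["ArtPlane"]
for node in mat.node_tree.nodes:
    if node.type == 'TEX_IMAGE':
        # 'Closest' = nearest-neighbour: crisp pixels, no filtering blur.
        # Other options are 'Linear', 'Cubic', and 'Smart'.
        node.interpolation = 'Closest'
```

Whether Closest actually looks better depends on the art: for pixel art it’s essential, but for photographic sources the default Linear filtering usually reads as smoother, not softer.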
The repro is a little softer overall as a result of sampling and the motion in the scene, but perfectly adequate for my purpose. Conversion to video formats tends to reduce quality also, so in the long run a 100% pixel-to-pixel match is not feasible in this context.
Blender’s Orthographic camera setting is completely perspective-free, so you can use it like a super-rostrum camera for multiplane, pan-and-scan, or animated-plane graphics work, though the latter is probably better done with a perspective camera.
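Setting that up from a script is a two-liner; a minimal sketch, assuming the default camera object named "Camera" (only runs inside Blender):

```python
import bpy  # available only inside Blender's embedded Python

cam = bpy.data.objects["Camera"].data  # "Camera" is the default name
cam.type = 'ORTHO'       # switch from perspective to orthographic
cam.ortho_scale = 7.5    # visible width of the view, in world units
```

With `type = 'ORTHO'` the camera’s distance to the planes no longer changes their apparent size, which is exactly what makes the multiplane/rostrum workflow behave like a 2D camera stand.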