There is a camera setting called shift. It can be animated, and it basically pans the camera view without changing the pivot point of the camera itself, so it doesn’t distort the perspective. Is it possible to have the shift track an object in the game engine? I want to use it to create a pre-rendered background effect, like you see in games like Final Fantasy. The shift can be animated, so if it can’t track an object directly, could an animation of the camera’s shift value react to where an object is in the scene? I hope you see what I mean, thanks.
I don’t fully get it, but try vertex parenting your objects: CTRL+P > Vertex.
The camera on the left is the normal camera, with nothing done to it. The camera on the right has been “shifted”; the origin point (the little dot) is the same, and it’s from that point that the perspective is projected. When you pan the camera using the shift feature, the perspective isn’t distorted, which makes it possible to create pre-rendered backgrounds, like in old role-playing and adventure games. What I want to do is animate the shift and have it track a player.
I’m not sure if shift works in the BGE, or if it can be animated there. If you try it and it doesn’t work, then you’ll know for sure. Check the Blender API: there might be bindings that let you drive the shift yourself.
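If your build does expose the shift values (some newer builds like UPBGE expose shift_x/shift_y on the camera; older 2.7x builds may not, so check your API docs), the value you need is just the target’s projected offset on the sensor. A minimal sketch of that math, kept as pure Python so it can be tested outside the engine; the default focal/sensor numbers are only placeholders:

```python
# Sketch: compute the lens shift that keeps a target centred, given the
# target's position in camera space (Blender cameras look down -Z).
# In-game you would transform the target into camera space and write the
# result to the camera's shift values, if your build exposes them.

def shift_to_center(cam_space_pos, focal_mm=35.0, sensor_mm=32.0):
    x, y, z = cam_space_pos
    if z >= 0.0:
        raise ValueError("target is behind the camera")
    depth = -z
    # Position of the target's projection on the sensor plane, measured
    # in sensor-width units; shift is expressed in the same units.
    shift_x = (focal_mm * x) / (sensor_mm * depth)
    shift_y = (focal_mm * y) / (sensor_mm * depth)
    # Depending on the build's sign convention you may need to negate these.
    return shift_x, shift_y
```

If the view pans the wrong way in your build, flip the signs of the returned values.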
For the effect you’re trying to achieve, you could render the backgrounds out to textures, put them on a plane, and then move the (either orthographic or perspective) camera as necessary.
Just using an orthographic camera with a 3D scene would work, as well, though the perspective would be lost.
You could also use the render to texture module to render out the game scene to a texture live, and then pan the camera over that.
This is what I’m trying to do; as you can see, the models in the video are rendered in perspective. It is possible to animate the shift, but what I want is for the shift to follow the model in the scene. So if the shift can’t track an object, is it possible to have an animation change frames depending on where the character is in the scene? Just look at the video and you’ll get what I mean.
The video you refer to shows a “2D-Render”.
This is nothing magical. They simply move the camera. I suggest going with SolarLune’s solution.
It seems simple, and it probably is, but I think you’re missing something; try it yourself in the game engine. This is what we’ve got:
Scene 1:
Camera
Walkmesh
Background plane
The background plane consists of several planes that overlay each other, like Photoshop layers. The walkmesh sits between the layers and the camera and is invisible. The camera is lined up with the background and with the character standing on the walkmesh. It looks like the character is in the scene, but the character has to be rendered with perspective on. When the character starts to walk towards the camera, the camera zooms out, and this breaks the illusion of depth right away. If the character walks to the right or to the left, then because the camera is in perspective, all the 3D elements (the walkmesh, the planes) will distort and not align with the background.
The problem boils down to the fact that when the camera moves, the perspective is distorted. This would be fixed with an orthographic camera, but the character needs to be in perspective: the character needs to be able to walk towards us and away from us. Shift is what solves this; it keeps the camera static but moves its view without distorting the perspective, as illustrated in the post above. To make the shift work in the game engine there are two potential options:
- Animate the shift with keyframes. This works, but the animation would have to track the character that we control. So the question becomes: how do you make a keyframed animation’s frames correspond to where the character is in the picture?
- Have the shift simply track the character with some kind of script. I’m not a programmer, so I wouldn’t know anything about this.
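For what it’s worth, the first option can be driven without tricks: the BGE’s Action actuator has a Property playback mode, where a float property picks the current frame of the action directly. A small remapping helper (all range values here are just placeholders for your scene):

```python
def position_to_frame(x, x_min, x_max, frame_start, frame_end):
    """Map the character's X position inside [x_min, x_max] linearly
    onto a frame of the keyframed camera-shift action."""
    t = (x - x_min) / (x_max - x_min)
    t = max(0.0, min(1.0, t))      # clamp so we never leave the action
    return frame_start + t * (frame_end - frame_start)

# Assumed in-game setup: an Always sensor in pulse mode runs a Python
# controller on the camera, which writes the result into a float property
# that an Action actuator set to "Property" playback reads, e.g.:
#   cam["frame"] = position_to_frame(player.worldPosition.x, -8.0, 8.0, 1, 100)
```

With that, the shift animation’s frame always corresponds to where the character is, which is exactly the mapping asked about above.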
I’m struggling to explain this to you; I hope this makes it clear what the issue is. Maybe this picture I found online helps you see what I mean.
You forget that your character would be rendered with the same distortion. It will get worse the farther from the centre it is.
If you really need a perspective rendering of the character and an orthographic one for the background and foreground, you can use several scenes at the same time.
Uh, no, I don’t think so. The camera in the first scene would be parented to the cube, but the camera in the second scene (which would be the overlay background) wouldn’t react to the position of the parented camera in the first scene. It would remain static at all times, so no, that’s not possible unless you’ve got some fancy scripts for it. You’d probably need scripts anyway, since the camera would have to be parented to the cube on one axis only: just pan, not follow the character into the picture. At first glance this might pass for a simple problem, but I think you’re wrong.
I tried creating two scenes with two identical cameras and two identical cubes. In the first scene the camera tracks the cube as it moves around; in the second scene the cube is invisible, but the camera in that scene also tracks it. When you push S in the first scene, the character moves towards the camera, and a message is sent to the cube in the second scene, which moves it just like the cube in scene 1. The second scene is set as a background scene of the first; the camera in the second scene is orthographic and the camera in the first scene is in perspective.
It doesn’t work; it seems that the orthographic camera turns the camera in scene 1 into an orthographic camera as well when its scene is used as a background scene. So it’s not as simple as that; it seems like the Blender game engine can’t do something this complicated.
Yeah, I have been going back and forth on this issue for some time, and I think your blend is missing the point. It’s not layers of geometry, it’s layers of textures. If you try to redo what you made in that blend with a single photo, you’ll see the problem. If you cut out parts of a photo, align them to the camera, and then pan the camera, the cut-out parts of the photo will be distorted because of the perspective. I’ll show you what I mean, one second.
Here are three pictures that have been cut out from a single picture and layered on top of each other in blender.
Here is what happens when the perspective camera pans: the layers won’t align. The camera has to be in perspective to render the character, which would be running on invisible geometry between these planes. If the camera were orthographic and panned, the background layers would align perfectly. Shift also keeps a perspective camera stationary while panning its view without distorting the perspective. I hope you understand what I’m trying to do now.
Obviously you need to add the missing parts of the texture.
I did that with the background texture.
I’m 100% sure this is how it was done in the game in the above video (post #5).
If you do not want to fill in the missing parts, you can use an orthographic camera, as mentioned before.
Mixing perspective and orthographic projection is a strange approach, as it would look strange to the audience. We are used to seeing a single perspective only.
If you still think you want that, you can (as mentioned in #8) use multiple scenes to get different cameras at the same time. Yes, you need some Python code to synchronize the cameras.
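The synchronization code is short. A sketch, meant to run every frame via an Always sensor (pulse mode) and a Python controller on the second scene’s camera; the scene name "MainScene" and object name "Camera" are assumptions for your setup:

```python
# Hypothetical cross-scene camera sync for the BGE.
try:
    import bge
except ImportError:
    bge = None  # lets this file be imported outside the engine for testing

def sync_cameras():
    own_cam = bge.logic.getCurrentController().owner
    for scene in bge.logic.getSceneList():
        if scene.name == "MainScene":
            src = scene.objects["Camera"]
            # Copy the full world transform so both cameras line up.
            # If the background camera should only pan, copy just the
            # axes you need from src.worldPosition instead.
            own_cam.worldTransform = src.worldTransform
            break

if bge is not None:
    sync_cameras()
```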
Another idea: have one scene. The main camera gets orthographic projection. Your “perspective” object gets rendered via VideoTexture onto the texture of a plane. The source camera can be in perspective mode. This might allow really strange illusions for the audience, including what you want.
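In code, the VideoTexture idea looks roughly like this (the bge.texture module in 2.7x). "PerspCam" is a placeholder name for the perspective source camera; the script is assumed to run on the plane that shows the live render, with a Python controller in Module mode calling init once and refresh every frame:

```python
# Sketch of a live render-to-texture setup with bge.texture.
try:
    from bge import logic, texture
except ImportError:
    logic = texture = None  # allows reading this file outside the engine

def init(cont):
    obj = cont.owner                       # the plane showing the render
    scene = logic.getCurrentScene()
    cam = scene.objects["PerspCam"]        # hypothetical source camera
    # Keep a reference on the object, or the texture is garbage collected.
    obj["rt"] = texture.Texture(obj, 0)
    obj["rt"].source = texture.ImageRender(scene, cam)

def refresh(cont):
    obj = cont.owner
    if "rt" in obj:
        obj["rt"].refresh(True)            # update the texture this frame
```

So yes, it needs a few lines of Python plus the logic bricks, but nothing beyond that.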
Just to make it even more clear what I’m going for:
As for your suggestions:
If you still think you want that, you can (as mentioned in #8) use multiple scenes to get different cameras at the same time. Yes, you need some Python code to synchronize the cameras.
I tried this, using the camera in the second scene as a background scene in the first. The camera in the second scene, which is orthographic, forces the camera in the first scene to become orthographic too; it’s very odd. So this doesn’t seem to work.
Another idea: Have one scene. The main camera gets orthogonal projection. Your “perspective” object will be applied via VideoTexture to a texture of a plane. The source camera can be in projection mode. This might allow really strange illusions to the audience, including what you want.
I don’t understand what you mean by this; I’m not familiar with video textures or with the different camera modes. Would it be easy to set up, or would programming be required?