Non-Euclidean Portals (New "FEZ-like" example)

This is a frequently asked question; however, as far as I can tell from searching, what I am looking for hasn’t been covered in any existing threads.

I am currently trying to create non-Euclidean or “4D” levels. I mostly understand how they work in games like Portal:
Visually, a surface has a shader on it which shows what the other portal “sees”, with the perspective adjusted according to the camera’s orientation and world position.
In terms of logic, one approach is to create a duplicate “fake” mesh with the same properties as the last state of the real mesh (i.e. its orientation, position and velocity when entering the portal). Another is to teleport the object to the position of the other portal with those same properties.
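For concreteness, the teleport variant might be sketched in BGE Python like this (a sketch only; the sensor name, the ‘exit_portal’ game property and the flip axis are placeholders, not anything from an actual Portal implementation). The idea is to express the object relative to the entry portal, flip it 180 degrees, and re-express it relative to the exit portal:

```python
# teleport.py -- attach as a module controller ("teleport.teleport") on the travelling object.
from math import radians

from bge import logic
from mathutils import Matrix


def teleport(cont):
    obj = cont.owner                        # the travelling object
    sensor = cont.sensors['portal_touch']   # assumed collision sensor name
    if not sensor.positive:
        return

    scene = logic.getCurrentScene()
    entry = sensor.hitObject                        # the portal plane we touched
    exit_ = scene.objects[entry['exit_portal']]     # assumed string property naming the exit portal

    # Map from the entry portal's frame to the exit portal's frame,
    # with a 180-degree flip so we come out facing away from the exit.
    flip = Matrix.Rotation(radians(180.0), 4, 'Z')  # axis depends on how the portals are oriented
    portal_map = exit_.worldTransform * flip * entry.worldTransform.inverted()

    # Carry position/orientation and both velocities through the same transform.
    obj.worldTransform = portal_map * obj.worldTransform
    obj.worldLinearVelocity = portal_map.to_3x3() * obj.worldLinearVelocity
    obj.worldAngularVelocity = portal_map.to_3x3() * obj.worldAngularVelocity
```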

What I don’t understand is how exactly, in Portal, you see yourself or other objects transitioning through the portals, and how they keep their constraints through the portal. How is this achieved, and can it be replicated in Blender?

An idea I had was to create ghost meshes at either portal; however, you can’t add an object to the same scene any more… unless there is a workaround.

Thanks

EDIT: Here’s an example of physics in non-Euclidean spaces where an object can collide with itself through portals and its gravity can even change.
http://www.reubenfriesen.com/?page_id=16

You mean something like this: Seamless transition between scenes?

I made a portal demo in the BGE, some time ago: http://www.youtube.com/watch?v=GWyM-HAyc4w

However, it was largely a video texture hack that didn’t include proper physics interaction.
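For anyone curious, the standard bge.texture render-to-texture setup that approach relies on looks roughly like this (a sketch; the object, material and camera names are made up): a camera sits at the other portal and its view is rendered onto the portal plane’s material every frame.

```python
# portal_surface.py -- run every frame on the portal plane (material assumed to be 'portal_mat').
from bge import logic, texture


def update_portal_surface(cont):
    obj = cont.owner
    scene = logic.getCurrentScene()

    if not hasattr(logic, 'portal_tex'):
        cam = scene.objects['PortalCam']                    # camera looking out of the other portal
        mat_id = texture.materialID(obj, 'MAportal_mat')    # 'MA' prefix + material name
        logic.portal_tex = texture.Texture(obj, mat_id)     # keep a reference so it isn't freed
        logic.portal_tex.source = texture.ImageRender(scene, cam)

    logic.portal_tex.refresh(True)   # re-render the other portal's view onto this surface
```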

Later, I actually had a personal project where I tried to “do it right”, and I wrote about that on my old google site: https://sites.google.com/site/goranstore/home#TOC-Now-you-re-thinking-with-portals

You basically need a system that lets you collect collision data from two shapes and generate contact points for a single physics body.

It’s not something that can be done in the BGE, because you don’t have access to contact data, or the Bullet filtering functions.

I attempted to do something like portals over a network before. For transferring objects across, I copied the physics properties on contact and made sure the portal (a plane) would not physically react with the object. I then LibLoaded the object in the other game engine instance and applied the physics properties (with some tweaking) so it looked seamless. Of course, this works well for simpler objects.
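In rough outline, that kind of transfer might look like the sketch below (not my actual code; the blend file name is made up, and the networking layer and the property tweaking are left out):

```python
from bge import logic


def capture_state(obj):
    """Snapshot the physics properties when the object touches the portal."""
    return {
        'blend': 'transfer_object.blend',                       # assumed blend containing the object
        'name': obj.name,
        'position': list(obj.worldPosition),
        'orientation': [list(row) for row in obj.worldOrientation],
        'lin_vel': list(obj.worldLinearVelocity),
        'ang_vel': list(obj.worldAngularVelocity),
    }


def rebuild(state):
    """On the receiving game instance: LibLoad the asset, then apply the state."""
    logic.LibLoad(state['blend'], 'Scene')                      # merge the object into the active scene
    obj = logic.getCurrentScene().objects[state['name']]
    obj.worldPosition = state['position']
    obj.worldOrientation = state['orientation']
    obj.worldLinearVelocity = state['lin_vel']
    obj.worldAngularVelocity = state['ang_vel']
```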

The second part was supposed to be something like Goran’s video, except there were limitations to the BGE; for example, creating a bgl.Buffer was too slow. It was kind of like that security camera demo, except I had to take the image data/buffer, transfer it over a network and reapply it.

One more thing I wanted to do was have the object able to appear in both worlds while transferring over. So if 50% had transferred, then 50% would be sticking through the portal, but then you have to coordinate both of them. The easiest way would be to use a cube to represent the object. Another option would be to use a streaming protocol, like Verse. This is further complicated by parented objects, etc.

However, I do plan on giving it another shot, since I now have a better understanding of the BGE codebase.

While playing Portal, I was always amazed at the ‘portal’ mechanics. If you really fancy getting your hands dirty, there is an OpenGL project which aims to recreate the Portal mechanics. I don’t think it would be easy to implement in the Blender game engine, but it might help you understand the concepts behind it more. https://github.com/lpuglia/Open-Portal

Well, I got stuck on 3D portals for now, since they are seriously complicated (not that I was expecting them to be simple at all).

So I decided to work with some 2D instead, and I recreated the same effect as FEZ. It is incredibly simple and works the same way FEZ does, using orthographic cameras to create the illusion that the player's depth is constant when it is not: the player's Z world position (which is the Y world position in Blender) is set to that of the nearest object to the camera, effectively eliminating the Z (Y) axis.
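For reference, the depth-snapping part can be sketched in BGE Python roughly like this (a sketch, not the exact code in the attached file; the ‘platform’ property, the ray length and the camera setup are assumptions):

```python
from bge import logic
from mathutils import Vector


def snap_depth(cont):
    player = cont.owner
    cam = logic.getCurrentScene().active_camera

    # World-space view axis of the orthographic camera (it looks down its local -Z).
    view_axis = cam.getAxisVect(Vector((0.0, 0.0, -1.0)))

    # Cast a ray through the player along the view axis, starting from the camera side,
    # and only register objects flagged with a 'platform' game property (xray skips the rest).
    start = player.worldPosition - view_axis * 50.0   # 50.0 is an arbitrary ray half-length
    end = player.worldPosition + view_axis * 50.0
    hit_obj, hit_pos, _ = player.rayCast(end, start, 0.0, 'platform', 1, 1)

    if hit_obj is not None:
        # Replace only the depth component of the player's position with the hit point's
        # depth, leaving the on-screen axes untouched.
        depth_old = player.worldPosition.project(view_axis)
        depth_new = hit_pos.project(view_axis)
        player.worldPosition = player.worldPosition - depth_old + depth_new
```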

It is nowhere near as good as FEZ. It has a few problems and needs more work; however, I just made this in a day and found it interesting.

The file is attached to the post. The scripts can probably be optimized a lot, as can many other things.

Controls:
Left arrow – move left
Right arrow – move right
Z – Jump
1 – Rotate left (clockwise)
2 – Rotate right (anticlockwise)

If you look at lines 41 and 42 of the script “player.py”, I try to use alignAxisToVect(); however, it seems to stop gravity from working (I am not too sure why :/)
Also, the reason I don’t use the world gravity is that I want to be able to change its direction in later projects.
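In general the idea is something like this (a sketch, not the exact code in the attached file; it assumes the world gravity is set to zero and that the script runs every logic tic from an always sensor with true pulse):

```python
from mathutils import Vector

GRAVITY = Vector((0.0, 0.0, -9.8))   # rotate or swap this vector to change which way is "down"


def apply_gravity(cont):
    obj = cont.owner
    # Scale by mass so heavy and light objects accelerate at the same rate.
    obj.applyForce(GRAVITY * obj.mass, False)   # False = apply in world space
```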

There are still some glitches, and I am trying to find a way to set the player's position to the nearest object in its X or Y grid position (according to the camera orientation) without having to check the position of every object in the scene each frame.

Have fun, and please feel free to critique anything.

Attachments

fezLike.zip (74.7 KB)

Nice little start there, MrPutuLips (with the most disturbing avatar on this forum). Fez was one of my favorite games of last year; the dimension-switching mechanic captivated me throughout. Are you aiming for a Fez-type game, or is this just a test? A brain exercise?

They are very short scripts, so there’s not much room for optimization. I didn’t spend any time looking into it, but I wonder why alignAxisToVect() stops gravity from affecting the player. Strange.

Haha, it’s not that disturbing :wink:
I’m working on some interesting non-Euclidean mechanics for my game, just to add a bit of a twist to it (maybe some mind-boggling puzzles, sudden dimension switches, never-ending levels, etc.). I thought I’d try to reproduce a mechanic that people liked, just to see how it works.
The scripts are short; however, creating a mesh for every ray isn’t very nice for building maps, rays are quite taxing on the system, and it increases the poly count a bit. There are also small things: the keyboard events, for example, could be stored in an external module and imported rather than written out in every script that needs them (a quick sketch of that is below).
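Something along these lines would do it (a hypothetical keys.py; the names are just placeholders), so each script imports one helper instead of repeating the event checks:

```python
# keys.py -- shared keyboard helper module.
from bge import events, logic

ACTIVE = (logic.KX_INPUT_ACTIVE, logic.KX_INPUT_JUST_ACTIVATED)


def held(keycode):
    """True while the given bge.events keycode is held down."""
    return logic.keyboard.events[keycode] in ACTIVE

# In player.py (or anywhere else):
#   from keys import held
#   if held(events.LEFTARROWKEY):
#       ...move left...
```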

Something I realised I need to change is disabling player movement while the camera is rotating, because the player tracks to the camera (which has a slow parent) and could therefore move in X and Y at the same time. An alternative is to track to the world empty, but that ruins the smooth rotation transition.