Changing camera WATCH_DOGS Style

I was playing Watch_Dogs and tried to hack a camera. When the view switched to another camera, I saw that awesome transition effect, so I tried to replicate it in Blender, and it looks awesome :).

Use the arrow keys to look around, point the crosshair at another camera, then press “E” :wink:
I made the sound effect in FL Studio, and “servo.wav” is from Half-Life 2. (490 KB)

Very cool,

This actually got me thinking about a proper way to render wireframes using drawLine.
What’s bugging me is that I can’t find a way to detect whether two verts share the same edge / are in the same polygon or not.

Edit: never mind, just found it in KX_PolyProxy. Posting a blend soon

And… here it is:
However, logic usage goes up badly even with not-so-complex meshes.


PythonWireframe.blend (484 KB)

Cool, thanks for sharing that, I was actually looking for something like that, but the logic usage goes up very badly

You can improve the performance by moving calculations out of the innermost loop. You’ve also calculated the inverse of the transform, but the only reason you need to do that is because you’re multiplying on the wrong side of the vector (to transform X by matrix Y, you would calculate Y * X).

import bge

WireframeColor = [1, 0, 0]

own = bge.logic.getCurrentController().owner
mesh = own.meshes[0]

# Hoist attribute/method lookups out of the loops
transform = own.worldTransform
draw_line = bge.render.drawLine
get_vertex = mesh.getVertex

for p in range(mesh.numPolygons):
    poly = mesh.getPolygon(p)
    mat_id = poly.getMaterialIndex()
    num_verts = poly.getNumVertex()
    for v in range(num_verts):
        # (v + 1) % num_verts wraps back to the first vertex,
        # so the polygon outline is closed
        vert1 = get_vertex(mat_id, poly.getVertexIndex(v))
        vert2 = get_vertex(mat_id, poly.getVertexIndex((v + 1) % num_verts))
        # Multiply matrix * vector (not vector * matrix) to get world space
        draw_line(transform * vert1.XYZ, transform * vert2.XYZ, WireframeColor)

own.visible = False
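To illustrate the multiplication-order point in plain Python, here is a small sketch with numpy standing in for mathutils (both follow the same matrix–vector convention): multiplying on the wrong side of the vector is equivalent to using the transposed (for rotations, inverse) matrix.

```python
import numpy as np

# A 90-degree rotation about Z, standing in for a worldTransform matrix
Y = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
x = np.array([1.0, 0.0, 0.0])

print(Y @ x)  # matrix * vector: rotates (1,0,0) to (0,1,0)
print(x @ Y)  # vector * matrix: same as Y.T @ x, gives (0,-1,0) instead
```

So by keeping the vector on the right, no inverse needs to be computed at all.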

You’re correct in saying this is slow - OpenGL draw calls through the Python wrapper seem to be slow.

drawLine etc. seem like they are implemented wrong. Is there any way to use Python to send info to a shader?

I recall watching a Box2D benchmark with around 10k rays visualised as 2D lines running at about 30 fps.
In the BGE, I once tried collision-position detection by raycasting along vertex normals and got about 6 fps with a monkey subdivided once.
How is it so different? Only optimisation issues?

Would storing the polydata you need make it faster?
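It should: the polygon/vertex topology never changes at runtime, so the edge list can be built once and only the world-space transform recomputed each frame. Caching also lets you deduplicate edges shared by adjacent polygons, halving the draw calls on a typical closed mesh. A pure-Python sketch of the idea (the vertex-index tuples are hypothetical stand-ins for what KX_PolyProxy.getVertexIndex would return):

```python
def unique_edges(polygons):
    """Collect each edge of a mesh exactly once.

    `polygons` is a list of vertex-index tuples, e.g. a quad is (0, 1, 2, 3).
    Storing each pair sorted makes (1, 2) and (2, 1) the same edge, so an
    edge shared by two adjacent polygons is only kept (and drawn) once.
    """
    edges = set()
    for poly in polygons:
        n = len(poly)
        for i in range(n):
            a, b = poly[i], poly[(i + 1) % n]
            edges.add((min(a, b), max(a, b)))
    return edges

# Two quads sharing the edge (1, 2): 7 unique edges instead of 8 drawn lines
face_pair = [(0, 1, 2, 3), (1, 4, 5, 2)]
print(len(unique_edges(face_pair)))  # → 7
```

With this cached once at startup, the per-frame script only loops over the edge set and calls drawLine, skipping all the KX_PolyProxy lookups.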

Box2D is 2D, while Bullet is 3D. There could be optimization work to be done under the hood for the BGE or Bullet, but you have to realize you’re dealing with more complex shapes and mathematics, which means, as far as I know, completely different physics implementations.