Hello,
I’m working in a French robotics lab (LAAS-CNRS), and we are currently surveying several technologies to develop a new simulation platform.
I have been using Blender for a while for other purposes, and I’m now investigating the BGE (Blender Game Engine) for this simulation project.
Two technical questions first. My first attempt was to simulate a “laser scanner” (a device which casts laser rays and gets back the distances of the objects around the robot). First, a simple Python script generates a mesh (a half disc made of, let’s say, 20 vertices). Then I use an “Always” sensor linked to another script that updates the mesh according to collisions with the surrounding objects (using the KX_GameObject.rayCast() method of Blender 2.47).
Here’s the code:
import Blender

def updateLaser():
    # owner (the robot, a KX_GameObject) is set by the calling script
    global owner
    # Get the laser beam mesh
    laser = Blender.Object.Get('RobotLaserScanner')
    mesh = laser.getData()
    pos = owner.getPosition()
    # Update the mesh's vertices: cast a ray through each vertex and,
    # on a hit, move the vertex to the hit point (in the robot's frame)
    for v in mesh.verts:
        # target point of the ray, in world coordinates
        rayDirection = [v.co[0] + pos[0], v.co[1] + pos[1], pos[2]]
        hit = owner.rayCast(rayDirection, owner, 20.0, "")
        if hit[1]:  # something collided
            v.co[0] = hit[1][0] - pos[0]
            v.co[1] = hit[1][1] - pos[1]
    mesh.update()
This works perfectly, except that while the Game Engine is running (“P”), the mesh is not updated on screen; I have to quit it (“Esc”) to see the updated mesh.
Do you know a way to update the mesh dynamically while the engine is running?
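In case it clarifies what I’m after, here is an untested sketch of an alternative I was considering, going through the game engine’s own mesh API instead of Blender.Object (assuming KX_MeshProxy and KX_VertexProxy behave the way I understand from the 2.47 API docs; the material index 0 is a guess for a single-material mesh):

import GameLogic

def updateLaserGE():
    cont = GameLogic.getCurrentController()
    owner = cont.getOwner()     # the robot (KX_GameObject)
    mesh = owner.getMesh(0)     # KX_MeshProxy of the object's first mesh
    pos = owner.getPosition()
    # Iterate over the vertices of material 0
    for i in range(mesh.getVertexArrayLength(0)):
        vert = mesh.getVertex(0, i)   # KX_VertexProxy
        xyz = vert.getXYZ()
        # NB: after the first update the vertices have moved, so the
        # original ray directions would have to be stored separately
        target = [xyz[0] + pos[0], xyz[1] + pos[1], pos[2]]
        hit = owner.rayCast(target, owner, 20.0, "")
        if hit[1]:
            vert.setXYZ([hit[1][0] - pos[0], hit[1][1] - pos[1], xyz[2]])

I don’t know whether vertex changes made this way are actually reflected on screen during the game, though.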
Another question: I’d like to use the modular GUI of Blender to visualize the various sensors’ data and the robot’s cameras.
But if I launch the simulation (“P”), it starts in only one viewport. Is there a way to start the simulation globally, i.e. in all viewports?
Then I have more general questions:
- is it possible, with Bullet, to closely follow real time (I mean, physical time)? We want to be able to do hybrid simulation (with both simulated and real robots), and this requires the simulator to be able to skip simulation steps in order to stay synchronised with the physical world (see the first sketch after this list for the kind of loop I have in mind).
- do you already have a nice set of IPC (over network) tools? Or should I start implementing something (for instance based on Google’s very efficient Protocol Buffers)? We’ll need, amongst other things, to send images from Blender to clients (robots or simulated robots); the second sketch below shows the kind of minimal framing I would otherwise start from.
- last (a more technical question): how can I store images from a Blender camera to disk via Python? (The third sketch below shows what I already do outside the game engine.)
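To make the real-time question more concrete, here is the kind of catch-up loop I have in mind (plain Python, independent of Blender; step is a hypothetical callback advancing the physics by a fixed timestep):

import time

DT = 1.0 / 60.0   # fixed physics timestep, in seconds

def run(step):
    # step(dt) is a hypothetical function advancing the simulation by dt
    sim_time = 0.0
    start = time.time()
    while True:
        real_time = time.time() - start
        # If we have fallen far behind, drop steps to resynchronise
        if real_time - sim_time > 10 * DT:
            sim_time = real_time - DT
        # Otherwise catch up one fixed step at a time
        while sim_time < real_time:
            step(DT)
            sim_time += DT
        # Sleep until the next step is due
        time.sleep(max(0.0, sim_time - (time.time() - start)))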
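For the IPC question, if nothing exists yet, I would probably start from simple length-prefixed frames over TCP (standard library only, header read simplified; the serialisation itself could later be replaced by Protocol Buffers):

import struct

def send_image(sock, image_bytes):
    # 4-byte big-endian length header followed by the raw payload
    sock.sendall(struct.pack('!I', len(image_bytes)) + image_bytes)

def recv_image(sock):
    (length,) = struct.unpack('!I', sock.recv(4))
    data = ''
    while len(data) < length:
        chunk = sock.recv(length - len(data))
        if not chunk:
            raise IOError('connection closed')
        data += chunk
    return data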
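And for the last question, outside the game engine I can already render and save from Python with the 2.4x API, if I read the RenderData docs correctly; what I am missing is an equivalent during a BGE run:

import Blender

scn = Blender.Scene.GetCurrent()
ctx = scn.getRenderingContext()          # RenderData
ctx.render()                             # render the current frame
ctx.saveRenderedImage('/tmp/frame.png')  # write the render buffer to disk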
Thanks a lot for your answers,
Severin Lemaignan