Prototyping a robotics simulator: some questions

Hello,

I’m working in a French robotics lab (LAAS-CNRS), and we are currently surveying several technologies to develop a new simulation platform.

I’ve been using Blender for a while for other purposes, and I’m now investigating the BGE for this simulation project.

Two technical questions first: my first attempt was to simulate a “laser scanner” (a device which casts laser rays and gets back the distances of the objects around the robot). First, a simple Python script generates a mesh (a half disc made of, let’s say, 20 vertices). Then I use an “Always” sensor linked to another script to update the mesh according to collisions with surrounding objects (I use the KX_GameObject.rayCast() method of Blender 2.47).

Here’s the code:


import Blender
import GameLogic

def updateLaser():
    # The game object this controller is attached to
    owner = GameLogic.getCurrentController().getOwner()

    # Get the laser beam mesh (through the Blender module, not the BGE)
    laser = Blender.Object.Get('RobotLaserScanner')
    mesh = laser.getData()

    # Update the mesh's vertices from the ray casts
    for v in mesh.verts:
        rayDirection = [v.co[0] + owner.getPosition()[0],
                        v.co[1] + owner.getPosition()[1],
                        owner.getPosition()[2]]
        hit = owner.rayCast(rayDirection, owner, 20.0, "")
        if hit[1]:  # something collided
            v.co[0] = hit[1][0] - owner.getPosition()[0]
            v.co[1] = hit[1][1] - owner.getPosition()[1]
    mesh.update()

This works perfectly, but once in Game Engine mode (“P”), the mesh is not updated: I have to quit the engine (“Esc”) to see the updated mesh.
Do you know a way to dynamically update the mesh?
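For reference, here is roughly what I imagine an in-engine version would look like, assuming the vertices are reachable through the BGE’s KX_MeshProxy instead of the Blender module (I may well be misusing the API):

import GameLogic

def updateLaserInGame():
    # Run from a Python controller attached to the laser object
    owner = GameLogic.getCurrentController().getOwner()
    pos = owner.getPosition()
    mesh = owner.getMesh(0)  # KX_MeshProxy: the engine-side copy of the mesh

    # Walk the vertices of material slot 0
    for i in range(mesh.getVertexArrayLength(0)):
        vert = mesh.getVertex(0, i)
        x, y, z = vert.getXYZ()
        # Cast a horizontal ray towards this vertex, in world coordinates
        hit = owner.rayCast([x + pos[0], y + pos[1], pos[2]], owner, 20.0, "")
        if hit[1]:  # something was hit: pull the vertex back to the hit point
            vert.setXYZ([hit[1][0] - pos[0], hit[1][1] - pos[1], z])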

Another question: I’d like to use the modular GUI of Blender to visualize the various sensors’ data and the robot’s cameras.
But if I launch the simulation (“P”), only one viewport starts the simulation. Is there a way to globally start the simulation (i.e., on all viewports)?

Then, I have some more general questions:

  • is it possible, with Bullet, to closely follow real time (I mean, physical time)? We want to be able to do hybrid simulation (with both simulated and real robots), and that requires the simulator to be able to skip simulation steps in order to stay synchronised with the physical world (a sketch of what I mean follows this list).
  • do you already have a nice set of IPC (over network) tools, or should I start implementing something (for instance based on Google’s very efficient Protocol Buffers)? We’ll need, amongst other things, to send images from Blender to clients (robots or simulated robots); a rough sketch of what I have in mind follows this list as well.
  • last (a more technical question): how can I store images from a Blender camera to disk via Python?
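To be concrete about the first point: what we need is the classic fixed-timestep loop that drops physics steps when the simulation falls behind the wall clock. A minimal, engine-agnostic sketch of the idea (stepPhysics() and render() are placeholders for whatever the engine exposes, not actual BGE calls):

import time

PHYSICS_DT = 1.0 / 60.0   # fixed physics timestep, in seconds
MAX_STEPS = 5             # beyond this backlog we skip time instead

def run(stepPhysics, render):
    last = time.time()
    accumulator = 0.0
    while True:
        now = time.time()
        accumulator += now - last
        last = now

        # Consume the accumulated real time in fixed physics steps
        steps = 0
        while accumulator >= PHYSICS_DT and steps < MAX_STEPS:
            stepPhysics(PHYSICS_DT)
            accumulator -= PHYSICS_DT
            steps += 1

        if steps == MAX_STEPS:
            # Too far behind real time: drop the backlog to stay synchronised
            accumulator = 0.0

        render()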
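And for the second point, in case we end up writing our own: I have in mind something like length-prefixed frames over TCP, where the payload could later be a serialised Protocol Buffers message. A rough sketch (error handling mostly omitted):

import socket
import struct

def recv_exact(sock, size):
    # Read exactly `size` bytes (recv() may return less than asked for)
    data = ''
    while len(data) < size:
        chunk = sock.recv(size - len(data))
        if not chunk:
            raise IOError('connection closed mid-frame')
        data += chunk
    return data

def send_frame(sock, payload):
    # 4-byte big-endian length prefix, then the payload bytes
    sock.sendall(struct.pack('>I', len(payload)) + payload)

def recv_frame(sock):
    (size,) = struct.unpack('>I', recv_exact(sock, 4))
    return recv_exact(sock, size)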

Thanks a lot for your answers,
Severin Lemaignan

Welcome to elysiun, I’ll have Turin Turambar throw a festival for you :slight_smile:

I’m working in a French robotics lab (LAAS-CNRS),

interesting :slight_smile:

2 technical questions first: my first attempt was to simulate a “laser scanner” (a device which casts laser rays and get back distances of objects around the robot).

have a look at my pre-made 3D Scanner Simulation …
http://blenderartists.org/forum/showthread.php?t=105127&highlight=3d+scanner
http://www.khayma.com/cgwoods/forumser/scanned.png

Is there a way to globally start the simulation? (ie, on all viewports)?

there’s a split view tutorial on www.tutorialsforblender3d.com, here:
http://www.tutorialsforblender3d.com/Game_Engine/Viewports/Viewport_1.html
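the gist of that tutorial, if I remember right, is KX_Camera’s viewport calls. Roughly like this (the camera names are just examples):

import GameLogic
import Rasterizer

w = Rasterizer.getWindowWidth()
h = Rasterizer.getWindowHeight()

scene = GameLogic.getCurrentScene()
camLeft = scene.getObjectList()['OBCameraLeft']    # example names
camRight = scene.getObjectList()['OBCameraRight']

# Left half of the window: setViewport(left, bottom, right, top) in pixels
camLeft.setViewport(0, 0, w / 2, h)
camLeft.enableViewport(True)

# Right half of the window
camRight.setViewport(w / 2, 0, w, h)
camRight.enableViewport(True)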

how can I store on a disk images from a Blender camera via Python?

have a look at my Python examples …
one of them has a script, based on another guy’s here, which saves a screenshot to a file
here …

EDIT :- this might help
http://blenderartists.org/forum/showthread.php?t=91324&highlight=3d+scanner
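the heart of that script, if memory serves, is Rasterizer.makeScreenshot(), roughly:

import Rasterizer

# Saves the current frame to disk; if I remember right, a '#' in the
# name is replaced by the frame number, and '//' means a path relative
# to the .blend file
Rasterizer.makeScreenshot('//shots/frame_#.png')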

I admire your desire to create a simulator in Blender. It would be awesome if you could create a template for laser scanners, cameras, and all sorts of things.

If you are willing to look at other options for a robotics simulator, check out Player, Stage, and Gazebo. They are very powerful, open-source robotics software, which I used over the summer while working at a robotics lab.

The idea is that Player (and the driver you write for it) translates a general command (setSpeed, GoTo) into a device-specific command. Stage lets you run the same programs which control your real robot in a virtual 2D environment. Gazebo is the 3D simulator. I don’t feel like I’ve done a good job explaining what these programs do, but if you are interested in hearing more, let me know.
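To give a flavour of the idea (a toy sketch only, nothing like Player’s actual API; the class and method names here are made up):

class PositionInterface:
    # The generic interface the client program codes against
    def __init__(self, driver):
        self.driver = driver

    def setSpeed(self, vx, va):
        # The driver decides what this means for its particular device
        self.driver.set_speed(vx, va)

class FakeRobotDriver:
    # Stands in for a real robot, or for the simulator backend
    def set_speed(self, vx, va):
        print 'wheel command: vx=%s va=%s' % (vx, va)

robot = PositionInterface(FakeRobotDriver())
robot.setSpeed(0.5, 0.0)   # the same client code, whatever the backend is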

Hello!

3DGURU: Thanks a lot! I’ll investigate your links.

J09: I know Gazebo, the 3D simulator of the Player/Stage project, quite well.
And actually we are currently hesitating between starting a whole new platform based on the BGE and contributing to Gazebo.

But Blender is definitely a more powerful platform.

I’ll let you know what we are doing (and if we eventually choose Blender, I’ll probably post a job opening for a senior Python programmer on this forum :slight_smile: )

Concerning the viewport management, the tutorial which explains how to set up a split screen is fine, and I could use that. But is it possible to activate several viewports in the base Blender interface when pressing the P key? I mean, I don’t really need to go full screen like in a game. On the contrary, it would be useful to keep my various Blender viewports during the simulation (especially to visualise some Python script outputs).