Virtual-Reality-Glove + GameEngine

Hi there,

I could really use some good advice from a Blender pro, please help :o

I’m still building a hand model in Blender that is supposed to be controlled by a virtual-reality glove in the end. The glove I use is a tuned-up P5 (http://www.vrealities.com/P5.html); it has sensors for the hand’s x-y-z coordinates, yaw, pitch and roll, and a bending sensor for each finger.
That makes a total of 11 degrees of freedom that I want to transfer to my blender-model.

I have modeled the hand so far and added an armature with all the necessary bones (blend file: http://lsr.ei.tum.de/team/wolff/files/hand.blend). Then I created vertex groups for every finger segment and attached them to the corresponding bones. (http://lsr.ei.tum.de/team/wolff/files/Hand.jpg)

The thing is that I have problems actuating the bones separately in the game engine. I only find the option of manipulating the parent bone (H_0), but not its children.
My plan was to use IK for each finger and just position the fingertip bones according to the P5 glove data (finger bending etc.). I hoped the game engine would deform my hand mesh in real time based on the current bone configuration. Is this possible with the game engine?

I know that usually one sets up animations first and just triggers them later on in the game engine. But that’s almost impossible for 11 DoF, all with analog input data.

Is there any way I can get my hand-model moving, without relying on fixed animations? Or am I missing something? :confused:

Please help, I couldn’t figure this out with any tutorial or manual I found :confused:

Thanks a lot,

-zapman

If you need more info, pls let me know.

I don’t think you can do this without modifying the Blender source code. But there could be workarounds:

  1. You need a way to get the sensor data recognised by the game engine. You could use the joystick/mouse sensor for this. I have no idea how you currently connect the glove to the PC.

  2. You can’t manipulate single bones directly. There is a Python function setChannel(), but I never got it working. What you could do is the following:
    Set up a bone for each degree of freedom. Define an action only for this bone with the appropriate animation. Have an action actuator for each of these bones, controlled by a property. Then you can manipulate the properties (rough sketch below the link).
    This will only work if the actions are completely separate (no keys for other bones).
    Chuzzy06 was showing how it works with mouse input:
    http://blenderartists.org/forum/showthread.php?t=74405
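
Here is a rough Python sketch of what I mean (everything in it is made up for illustration: the property name, the frame range and how you read the glove are placeholders, and the exact BGE API differs between Blender versions):

    # Script-style Python controller on the armature object (written against
    # the newer "bge" API; older Blender versions use GameLogic and
    # attribute-style properties instead).
    from bge import logic

    def read_glove_value():
        # Placeholder: return the finger bend normalized to 0..1, however
        # you get it into Blender (joystick sensor, socket, ...).
        return 0.5

    cont = logic.getCurrentController()
    own = cont.owner

    # The Action actuator for the index bone runs in Property mode and uses
    # the property "index_frame" as its current frame; 1..30 is an assumed
    # frame range for that per-bone action.
    own["index_frame"] = 1.0 + read_glove_value() * 29.0

With one property/actuator pair like this per bone you would get all 11 channels without pre-recorded whole-hand animations.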

I hope this helps

Thanks for your reply!

The setup is kind of complicated: the P5 glove is connected to computer A (Gentoo Linux) via USB, and MATLAB (www.mathworks.com) reads out the current position data. That PC is connected to a second PC B (WinXP) which runs Blender and the VR hand model. The position data is transferred through a C++/Python gateway from PC A to PC B.
(The reason for this is that I run some other necessary haptic simulations on PC A which need lots of CPU time and mustn’t interfere with Blender on PC B.)
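
Roughly sketched (this is not my actual gateway code, just to give an idea of the receiving side on PC B; the port number and packet layout are placeholders):

    # Polled from a Python controller on PC B every logic tick.
    import socket
    import struct
    from bge import logic

    def get_socket():
        # Create the listening socket once and keep it between logic ticks.
        if "glove_sock" not in logic.globalDict:
            s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            s.bind(("0.0.0.0", 5555))      # placeholder port
            s.setblocking(False)
            logic.globalDict["glove_sock"] = s
        return logic.globalDict["glove_sock"]

    def poll_glove():
        # Returns the newest (x, y, z, yaw, pitch, roll, f1..f5), or None.
        sock = get_socket()
        data = None
        try:
            while True:                    # drain the queue, keep the last packet
                data, _ = sock.recvfrom(1024)
        except socket.error:
            pass
        if data is None:
            return None
        return struct.unpack("<11f", data)  # assumed: 11 little-endian floats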

I already managed to control an earlier version of the VR hand with the P5 glove (http://lsr.ei.tum.de/team/wolff/files/hand_objects.blend). The problem with that is that I had to build the hand out of different rigid objects (each finger segment separately), each with its own armature (not bone!). The position data coming from the P5 glove was used to control these segments:

for example:

  • you read out the bending value (angle) of the real index finger
  • transfer the data via the C++/Python gateway into Blender
  • convert this angle into position changes for each phalanx of the virtual index finger
  • accordingly move each phalanx --> the finger bends as a whole

All of this happens in real time (Blender game engine), with Python scripts doing all the math (the thumb was tricky because of its different x-y-z matrix).
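
For illustration, a stripped-down version of the index-finger part (this is not the real script; the object names, the rotation axis and the way the bend is split over the phalanges are simplified):

    import math
    from bge import logic

    scene = logic.getCurrentScene()

    # How much of the total finger bend each rigid segment receives (simplified).
    SEGMENT_SHARE = {"IndexProximal": 0.5, "IndexMiddle": 0.3, "IndexDistal": 0.2}

    def apply_index_bend(delta_angle_deg):
        # delta_angle_deg is the change in the glove's bend value since the
        # last logic tick, so each phalanx is rotated incrementally.
        for name, share in SEGMENT_SHARE.items():
            obj = scene.objects[name]
            # Rotate around the segment's local x axis (True = local space).
            obj.applyRotation((math.radians(delta_angle_deg * share), 0.0, 0.0), True)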

Problem with that: the different segments aren’t joined (except for the parented armatures), so there will always be small holes and gaps in between. Also, texturing is difficult with that many rigid objects that are supposed to look like one single human hand.

The advantage I hoped for by modeling the hand as a whole is that these gaps disappear and the joints flex the mesh smoothly. But I have no idea if that is possible. I have all the position data in Blender, but I have no clue how to set up an armature that can be controlled directly (in real time) with this data. As I said: I don’t think (and hope not) that pre-defined animations are the solution.

I will take a closer look at the .blend file you posted, but it looks to me like that guy also just plays “pre-recorded” animations?

I should mention:
sadly, it’s impossible to parent more than one armature to the hand mesh; otherwise I could control the new (joined) hand mesh the same way I did in the first place :(

zapman, I am not exactly sure how you get those properties, but check out your .blend that I modified: Hand

thank you very much!!

I can move two fingers with the Space and F keys now :slight_smile: Very nice! I should be able to replace the input method with my P5 glove and set up the remaining fingers… :smiley:

Although I don’t quite understand what you did at the moment… could you pleeease write a few keywords explaining what you did?

thanks again!!

Yeah no problem.
What I did was make two actions, one for the index finger and one for the thumb. The actions, as you can see, are just (roughly) the two extremes of each finger’s movement. I then have each action playing all the time, with a property defining which frame it is on (that’s the “property”); each action uses its own property (finger1 for the index finger’s action). Now this is where you would have to adjust things: I used a Property actuator with Add on the properties (in your case it would have to be Assign), which then changes the frame of the respective action.

What you will need to do is figure out the value range of the P5 and make the action frames and those values coincide.
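
For example, the mapping could be something as simple as this in a Python controller (the raw range of the P5 bend sensors and the action frame range here are guesses, so check what your driver actually reports):

    def glove_to_frame(raw_bend, raw_min=0, raw_max=63, frame_start=1.0, frame_end=30.0):
        # Clamp the (possibly noisy) reading, then map it linearly onto
        # the frame range of the finger's action.
        raw_bend = max(raw_min, min(raw_max, raw_bend))
        t = (raw_bend - raw_min) / float(raw_max - raw_min)
        return frame_start + t * (frame_end - frame_start)

    # In the Python controller, write the result into the property that
    # drives the Action actuator (the property name "finger1" is assumed):
    # own["finger1"] = glove_to_frame(bend_value)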

Hey zapman, what exactly does MATLAB do? Do you need it to record the movements, or can you do it in other ways? I’m interested in using a VR glove on a few projects I’m developing…

Hello zapman,
I really don’t have any answers for you, but I am very interested in your project. If I could interject an application idea for you all…
I have been thinking of using the BGE to make a sort of VoIP for the deaf with sign language. My ability and knowledge are way too limited to do this myself at this time, but I have been watching for things that might work with it.
As you continue with this project of yours, maybe you and others could also run with this app idea? Or at least keep it in mind? It would be useful as a teaching aid as well. (I’m sure I haven’t thought of all its possible uses, or even most.) Anyway… just a thought.
(Typing and subtitles would seem to be an answer on the surface, but an interactive ‘speech’ system really is far nicer, I would think.)
In any case I’m learning from you folks (slowly), so thanks, I’ll keep watching.

You could make your own keyboard, not in Blender, then design it to fit with the controls so they work together. Like, make a cavern and add a tube, so when you put your hand in the cavern and turn the tube, the farther you move it the faster the motorcycle goes, but…
It is very complex.

@pH_alanx: thanks again! you really make me happy :smiley:

Well, I use MATLAB to read out the position data of the P5. But furthermore there are thermodynamic and vibrotactile simulations that run in MATLAB/Simulink. The goal is to build a data glove that gives feedback on the thermal and surface-roughness characteristics of virtual objects. I myself am a psychologist, and I’m especially interested in finding out the best combinations of different kinds of feedback channels (or feedback possibilities). So the technical aspect is necessary and important, but my focus lies in psychophysics.
I’m sure you could get the position data from the P5 in another way, without using MATLAB; try http://www.robotgroup.net/index.cgi/P5Glove for example :wink: The advantage is that the P5 is a really cheap device (~$60), whereas professional gloves are really, really expensive.

@3D Ghost: you’re right, that might be an interesting application. But the glove I use is cheap and doesn’t capture all the important gestures that are necessary for decent sign language. Immersion’s CyberGlove (http://www.immersion.com/3d/products/cyber_glove.php) would fit better but is extremely expensive.
As far as I know, sign language does not only rely on gestures but on facial expressions as well? Would you also design a virtual avatar? Maybe video conferencing does the job already, I don’t know. I agree that someone should think this thing through :]

@BlendMe: sorry, I have trouble making sense of your post. I’m not a native English speaker either :slight_smile:

Zapman
Thanks for the info… what you’re doing is very interesting. I’m trying to build a real-time 3D system using gestures, maybe including the feet, so you can make real-time graphics based on body movement for video projection accompanying electronic bands.

I’m going to be building a library of real-time effects that can be comped over each other and synced to different beats in real time… basically you get to choose which effects play on which beats, and the like.

Your English is very good… I wouldn’t have noticed you weren’t a first-language speaker unless you had mentioned it.

Thank you for the link, zapman.
I followed it to their ‘case-study gallery’ and found that they have been doing work with this as a translation medium from signing to spoken word. If you travel down that page a little way (you probably already know this), they have a presentation about tactile feedback (titled “Grounded Force Feedback”). There are also some interesting third-party related topics at the bottom of that page.

Not to make this too long in your thread, but:
I was thinking, loosely, of a two-way translation system between the deaf and the hearing: speech to 3D signing using an avatar, signing to speech using a glove (though it would have to be reasonably priced), translation between the various sign languages (each country uses a unique sign language), and an interaction-based sign-language teaching aid (with type-to-sign, and signing to the computer avatar for tests and practice).

Being a psychologist, you know better than most people the different areas of the mind that are engaged in speech, writing, and to some extent typing; all forms of communication. The assumption I am making is that signing, and reading signs, are more closely related in the mind to vocal speech and hearing than typing, writing or reading subtitles are. I am sure that in your studies you have also noticed the various barriers in societies that communication differences cause.
To make this shorter (lol, I do get going on and on):
Yes, a video conference does do the job between people who know the same sign language. But it doesn’t integrate us into the same society.

LOL (laughing at myself), those are my thoughts (and their current refinement) on that part of the subject.
I am really interested in hearing more about your work in psychophysics. Thanks,
AHDRL