Sorry if this is a bad question, but since I don't know as much as I'd like about programming myself, I thought I'd ask and see if anybody knew. The reason I'm asking is that I was reading over at blender.org and saw that for 2.5 they are going to recode how Blender handles basic operations like events and input, and I was wondering: is it being done, or could it be done, in a way that is friendly to multi-point touch screen interfaces?
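(By "friendly" I mean something like this: instead of one mouse cursor, events would have to carry several touch points at once, each with its own id. This is pure guesswork on my part, not actual Blender code, just to show what I'm picturing:

```python
# Pure guesswork at what "multi-touch friendly" events might look like:
# instead of one (x, y) mouse position, each event carries a set of
# touch points, each with a stable id so gestures can track fingers.
from dataclasses import dataclass

@dataclass
class Touch:
    id: int      # stays the same while that finger is down
    x: float
    y: float

@dataclass
class TouchEvent:
    kind: str        # "down", "move", or "up"
    touches: list    # every active Touch, not just one cursor

def handle(event: TouchEvent):
    # Two fingers moving at once could become a pinch/zoom, etc.
    if event.kind == "move" and len(event.touches) == 2:
        print("pinch/zoom candidate:", event.touches)
    elif event.touches:
        print("single-point input:", event.touches[0])

handle(TouchEvent("move", [Touch(0, 10, 10), Touch(1, 200, 180)]))
```

If the recode bakes in the assumption of exactly one pointer, I'd guess retrofitting this later gets a lot harder.)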
(A friend of mine was telling me how you could use a digital, web, or "special" camera to make a fairly high-resolution infrared sensor on a rear-projected display, which would make software the only hangup to having a cheap multi-point touch screen. Hope that made sense. What do you knowledgeable guys (and gals) think about the Blender side of using that?)
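From what I can gather, the software side of the camera trick boils down to finding the bright fingertip blobs in each frame and turning their centers into touch points. Here's a rough sketch of that idea using OpenCV (the library choice, the camera index, and the threshold value are all just assumptions for illustration, I haven't built this):

```python
# Rough sketch of camera-based multi-touch: an IR-filtered camera watches
# the rear-projection surface, fingertips show up as bright blobs, and we
# report each blob's center as a touch point.
# Assumes OpenCV (cv2) and camera index 0 -- both just for illustration.
import cv2

cap = cv2.VideoCapture(0)  # the camera behind the screen

while True:
    ok, frame = cap.read()
    if not ok:
        break

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Fingers touching the surface reflect IR strongly, so a simple
    # brightness threshold separates them from the background.
    _, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    touches = []
    for c in contours:
        if cv2.contourArea(c) < 30:  # skip noise specks
            continue
        x, y, w, h = cv2.boundingRect(c)
        touches.append((x + w // 2, y + h // 2))  # blob center = touch point

    # Each (x, y) here would become one touch event handed to the app.
    print(touches)

    if cv2.waitKey(1) == 27:  # Esc to quit
        break

cap.release()
```

So the hardware really does seem like the easy part; the question is whether Blender's input handling could accept a list of points like that instead of a single cursor.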
And on a side note, what about the head tracking that's starting to show up? I would think that is much harder to do programming-wise. Just think of it: a head-tracked, touch-driven interface… :eek: That would turn a few heads.
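From the little I understand, the head-tracking part is mostly re-aiming the virtual camera every frame based on where your head is, so the screen acts like a window into the scene. Something like this, maybe (again just a sketch: get_head_position() is a made-up stand-in for whatever tracker is used, and the screen dimensions are assumed):

```python
# Sketch of head-coupled perspective: rebuild an asymmetric (off-axis)
# view frustum each frame from the viewer's head position, so the
# screen behaves like a window into the 3D scene.

SCREEN_W, SCREEN_H = 0.5, 0.3  # physical screen size in meters (assumed)

def get_head_position():
    """Hypothetical tracker: head (x, y, z) in meters, screen center = origin."""
    return (0.1, 0.05, 0.6)

def off_axis_frustum(head, near=0.1, far=100.0):
    """Asymmetric frustum edges so the perspective follows the head."""
    hx, hy, hz = head
    # Project the physical screen edges, as seen from the head,
    # back onto the near plane.
    left   = (-SCREEN_W / 2 - hx) * near / hz
    right  = ( SCREEN_W / 2 - hx) * near / hz
    bottom = (-SCREEN_H / 2 - hy) * near / hz
    top    = ( SCREEN_H / 2 - hy) * near / hz
    return left, right, bottom, top, near, far

# Every frame: read the head, rebuild the frustum, redraw the scene.
print(off_axis_frustum(get_head_position()))
```

If that's really all it is, the math side looks manageable; the hard part is probably getting a reliable head position in the first place.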