That slightly backwards interface device we all use…

ok, so I have now lost all my dignity here, I'm posting a new thread in off topic… :no: but if this goes the way I want it to ;), it should relate to Blender's interface.

Anyways, we all use one, and your hand is resting on one right now. Yeah, I'm talking about the mouse. I've got a few thoughts on it, but I thought I'd see if I could get you all to think about it first. (I know, that's probably wishful thinking to get an intelligent thread in off-topic, but it never hurts to try, right?)

I have used two fingers + the thumb for buttons; I'm now trying to switch myself to three. What do you use, and why?

How many buttons does your mouse have? The one I'm using right now has only five, plus the scroll wheel, which has side-to-side movement and can be clicked as well. (Ten outputs in total, and yes, I use all of them. I even have more on my own mouse.)

But the real question is: why are we still using a mouse to interact with a computer? Why not just use all touch screen interfaces? We have the technology, and it isn't really any more expensive than a small camera.

Because it's more expensive than a mouse and keyboard?

edit: also, my mouse has four buttons and a scroll wheel that can be tilted sideways.

I keep asking: why do we create 3D content using 2D screens? Isn't that kinda backwards?

Because we are too lazy to imitate depth ourselves so we have software do it for us.

And because 3d screens aren’t available.

well, I'm happy with the mouse+keyboard+Wacom combination.
I've tried some touchscreens, but I don't find them that nice, actually. Well, a Cintiq tablet would be nice, but I have to wait for prices to come down.

And the idea of touching my monitors with my fingers just sends a shiver down my spine… it would be only a matter of minutes until it was impossible to see any picture underneath all those fingerprints.



basse has summed it up rather well. Good job!


Functional: seven buttons and a scroll wheel/button. LMB, RMB, two on the side of the mouse for scrolling (this only works in some apps), and three volume buttons: one for higher volume, one for lower, and one for mute. Not counting the scroll/MMB.

Behavioral: two fingers, one on the RMB, one for the LMB and scroll/MMB; and my thumb for the side buttons.

Basse is right, touching your monitor is like poking your computer in the eye. Ouch!! Poor baby!

If your only complaint against a touch screen is fingerprints, then you don't need to be that worried. I've been looking around and have found more than one way of eliminating them almost entirely. (The most common I found is to coat the screen so that it has the same index of refraction under a film of oil as in air, and as a side bonus it gives the screen a low index of refraction as a whole.)

basse, no wonder you're happy, you have a Wacom. So you're already halfway to where I think we should be as a whole.

Here are some reasons I have for and against mice. (This is more conceptual; some of it may not be very practical, as we lack the technology to do it… yet. So try not to knock down only the strawmen, but please do poke holes and point out weaknesses when you see them.)

The small amount of motion is nice because of the laziness factor, but unless you have some medical condition or are really old, the slightly greater movement of a touch screen won't hurt you in the long run.

Buttons: nowhere (that I know of) in nature can you find a single "button" that is either on or off with no in-between. Everywhere you look there is a whole scale of pressure sensitivity; I think we should have more of that control when interacting with a computer.

Movement: our brains and bodies are really good at picking up patterns and using them to our advantage. For example, in order to move the cursor on the screen we have to make a completely abstract movement with our hand on the mouse, but our brains have learned the pattern of movement well enough that we don't even have to think about where to move our hand to get the cursor to go where we want it to. I think we could use this "feature" of our brains more effectively than we are with the current mouse design.

I think we have conformed to the limitations of the machines around us so well that I wonder if we even know we are conforming to them anymore. With computers the limit has been raised, but we are still using the old methods that were made to conform to the old limit. (QWERTY keyboards are an obvious example here.)

Hope that makes what I'm getting at clearer, and I hope there are people on here willing to respond and build on it.

Well, there are these controllers for your off hand. It says that they work with Blender, although I wouldn't know. I think there is a plug-in or something. Check out the supported applications.
As for poking my screen(s), no way. That would end up messy one way or another.

There are… sorta. They create the illusion of 3D.
It is possible, even with today's technology, to make screens that are like a window: you can see different perspectives of something as you move your head around…
while still being a 2D screen. That could be done by putting pixels at different angles. It's been done already…

The mouse I use is the simple default mouse that came with my computer. Three buttons: left, right, and a vertical-only scroll wheel which also clicks. I write and use the mouse with my left hand (I'm a lefty!), and I click the left mouse button with my index and middle fingers together, and I scroll and right-click with my index finger.

I use a Logitech MX 518. It's technically a 'gaming' mouse, but I find it to be a really good, all-around useful mouse. There are the standard two buttons plus a wheel that acts as a middle mouse button. There are also two programmable buttons by the thumb and two buttons that change the resolution (sensitivity) of the mouse on the fly.

There’s another button below the wheel that looks like it’s supposed to launch a file browser or something, but I’ve never used it, and because I don’t have the drivers installed (and never did :P) I’ve no idea how useful that button actually is. Overall, tho, it’s a very nice little mouse, and feels nice in the hand. It’s also lasted me quite some time. I’ve only ever owned two of these things, and the only reason I had to buy a second one was because I left my previous one at a LAN (which I had to travel out of state to get to… that mouse is long gone…).

Mice, tho, are actually quite brilliant little devices for computers. The GUI on screen is generally 2D (the screen itself is 2D), so navigating a mouse on a flat surface instantly becomes comfortable to people. You can stick lots of buttons on it, and you've generally got a number of fingers free to hit those buttons at any given time.

I think touchscreens will be nice, but they won't be as instantly usable to people unless they get off this notion of multitouching and "press here and here to make something zoom in or out, press here and drag like this to make something move, press here and here to make something align like this…". That's all too confusing. The workspace needs to be instantly identifiable, and the commands and use of the interface should be immediately obvious. Mice make that possible with a minimum of experience. Touchscreens and a lot of other navigation devices are not as obvious. Not that a good touchscreen system wouldn't work or become popular; it's just that the current systems I've seen actually require that people learn them (how to scale stuff, how to open and close stuff… with a mouse it's obvious, but drawing a spiral or a scribble or something on a picture to open or close it is not going to be understood automatically).

=[ no more eating french fries and blending

BD300: I think that putting pixels at angles would be very difficult, and the results would be fair at best. The better option I see right now is just to track the head of the user; the only downer is that it works for only one person at a time. I'm still thinking it over, but how can we combine head tracking with a multi-point touch screen, such that when you move your head while holding an object in 3D, the object doesn't move with your head? But that is probably just a programming issue, not hardware.
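For what it's worth, the "programming issue" part really is straightforward in principle: keep the touched object's position in world coordinates, and let the tracked head drive only the camera. Here's a minimal sketch (the tracker and handler names are hypothetical, and the camera is simplified to a pure translation) showing that head motion changes what you see but never the object itself:

```python
def eye_space(world_pos, head_pos):
    """Where the object appears from the tracked head position
    (camera simplified to a pure translation at the head)."""
    return tuple(w - h for w, h in zip(world_pos, head_pos))

# The dragged object lives in *world* space, set by touch input only.
object_world = [0.0, 0.0, -2.0]

def on_touch_drag(dx, dy):
    # Touch edits world coordinates...
    object_world[0] += dx
    object_world[1] += dy

def on_head_move(head_pos):
    # ...while the head edits only the view, at render time.
    return eye_space(object_world, head_pos)

on_touch_drag(0.1, 0.0)
p1 = on_head_move((0.0, 0.0, 1.0))
p2 = on_head_move((0.3, 0.0, 1.0))  # head moved; object_world is unchanged
```

Because the two inputs are only ever composed at render time, the object appears to shift on screen as your head moves (that's the window illusion), while its world position stays exactly where your finger put it.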

Squiggly_P: The way I'm looking at it right now, our mouse is like a Guitar Hero controller: it simulates a fully interactive touching process. Whereas I'm trying to come up with an interface that is more like a real guitar. At first a real guitar is many, many times harder to use, but with a little practice and knowledge you will be able to do a lot more, a lot faster and more naturally. It's like the difference between picking out the melody one note at a time versus playing whole chords. Both will give you complete songs, and sometimes the single note is actually more powerful for the effect you want, but most of the time the chord is better.

I'm pretty sure fingerprints are a minor issue: between a surface that doesn't show them readily, making the screen watertight so you can clean it easily, or even wearing finger caps/gloves, I think it won't be a problem.
Actually, finger caps would be cool, because you could make the computer track your fingers, like a Wacom does, even when they're not actually touching the screen.

I think you're forgetting the fact that your finger is quite large in the number of pixels it takes up on screen. In order to use your idea (which is a good one, I think), screens would have to be enormous, or resolutions would have to be reduced, to be able to work accurately. If you take into account something like detailed modelling, in order to move verts around without selecting the wrong one you'd have to zoom in further too. The result: greatly reduced "screen real estate", especially when you consider that any interface elements would also have to be larger.
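A quick back-of-envelope calculation backs this up (the numbers are assumptions: a 24-inch 1920x1080 monitor and a fingertip contact patch of roughly 10 mm):

```python
MM_PER_INCH = 25.4

diag_in, w_px, h_px = 24, 1920, 1080          # assumed monitor
diag_px = (w_px**2 + h_px**2) ** 0.5          # screen diagonal in pixels
ppi = diag_px / diag_in                        # pixels per inch (~92)
finger_px = 10 / MM_PER_INCH * ppi             # ~10 mm fingertip in pixels (~36)
```

So on a typical desktop monitor a fingertip covers something like a 36-pixel-wide blob, versus the single-pixel hotspot of a mouse cursor, which is exactly why vert-level selection by finger would need heavy zooming or much bigger targets.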

talking of wacoms, have you seen this?:

$1699. Cool. Gimme a sec. I may have some change in my pocket…

If it doesn’t come with a floating hand, I’ll be VERY disappointed.

Personally, I try to use the mouse as little as possible. It gives me hand ache, and it is very inefficient, as you are constantly moving a pointer around the screen. The same is true for touch screens. CLI+keyboard is the only good option.

Just a two-button-and-scroll-wheel mouse with three additional buttons, which I use mostly for browsing back. It's optical with a red LED, used on ordinary cardboard, but still not really good for aiming with it at all.

I use mine as little as possible too. I know all kinds of shortcuts, but I prefer the keyboard, because my mouse is extra sensitive and it often clicks twice instead of once if you bump it.

Atemporalskill, I'm sorry, but I don't think that is very practical in the long run. I often try to control my mouse down to clicking on individual pixels, and I don't want one pixel on my screen to be the size of my finger.

For things like mobile devices, touch screens are fine. But for a full computer workstation, the only thing that would be fine is something that can click on individual pixels ACCURATELY.

No, I think the REAL future of computer interfaces is being able to control the computer with your brain. Just think, “Open home folder”, and Dolphin (or Nautilus, or Explorer, or whatever OS X uses) will pop up with /home/username/ automatically.