Hey Blender heads,
This is something I’ve wondered about for a long time now. Is it possible to change Blender’s interactivity model?
I’ll explain:
I’ve been working in 3D for many, many years now. I started back in the early ’90s with LightWave. While working as a freelance 3D artist/animator I’ve had the chance to work in most other 3D programs over the years since: Max, Maya, Softimage, Modo and C4D in particular. They are all surprisingly similar; you just have to learn what a particular feature is called in each program and they will work the same for the most part, especially if you are focused on a specific aspect of the 3D process like modeling or animating.
Ever since the UI changed in 2.5 I’ve become more interested in Blender. I’d had my eye on it since long before, but it seemed more like a toy with really strange user interaction and UI. As Blender’s features became more robust and mature, my interest increased too. But every time I tried to use it, its crazy view/object/element manipulation has repelled me. I’d love to know if there is anything that can be done about it.
What bothers me is that the user interaction model of every other 3D program I have used works in a specific way. Even though the names of tools, the key commands and the user interfaces are different, they all share the same interaction model: activate the tool, click and drag a mouse button while watching the change, and let go when you are done. This is the same whether it’s view navigation, dragging points, using tools like Extrude or tweaking keyframes in the Graph Editor. Every single 3D program I’ve used works this way.
The issue is that this obviously isn’t how Blender works. Instead, you activate the tool and it instantly puts you into that tool’s mode; then you move the mouse and watch the change, and finally click to confirm. It may not seem like such a big deal to some, but it’s a huge thing for me, and I have a feeling it has been a major stumbling block for a lot of potential professional users too. If you use Maya or Max or Modo all day and then come home to use Blender, the last thing you want to do is fight your built-in muscle memory of tool interaction just to learn a new program. I have no issue learning which new keys I have to push to get the same functionality out of a new program. But if the user interaction is totally different, that makes it even harder.
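I’m no Python expert, but from what I understand Blender’s own tools follow a “modal operator” pattern, which is exactly the flow I’m describing. Here’s a rough sketch just to illustrate it (the operator name and the X-location nudge are made up for illustration, not anything in Blender itself):

```python
import bpy

class OBJECT_OT_modal_drag_example(bpy.types.Operator):
    """Toy tool that nudges the active object's X location, Blender-style."""
    bl_idname = "object.modal_drag_example"
    bl_label = "Modal Drag Example"

    def invoke(self, context, event):
        # Activating the tool immediately enters its mode -- no mouse button needs to be held.
        self.start_x = event.mouse_x
        self.start_loc = context.object.location.x
        context.window_manager.modal_handler_add(self)
        return {'RUNNING_MODAL'}

    def modal(self, context, event):
        if event.type == 'MOUSEMOVE':
            # Bare mouse movement drives the change while the tool is active.
            offset = (event.mouse_x - self.start_x) * 0.01
            context.object.location.x = self.start_loc + offset
        elif event.type == 'LEFTMOUSE':
            # Clicking *confirms* the change instead of starting it.
            return {'FINISHED'}
        elif event.type in {'RIGHTMOUSE', 'ESC'}:
            # Right-click or Esc cancels and restores the original value.
            context.object.location.x = self.start_loc
            return {'CANCELLED'}
        return {'RUNNING_MODAL'}

def register():
    bpy.utils.register_class(OBJECT_OT_modal_drag_example)

if __name__ == "__main__":
    register()
```

So the “click and drag, release to commit” step I’m used to simply isn’t part of that flow, which is why it feels so foreign.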
I realize that for most of you (especially those who have been using Blender for a long time or have just started out with 3D programs), it seems fine and “right” to you. What I’m proposing is not to change the current working model to a new one, but rather to let users choose, and have that choice propagate down to all levels of the program. For instance, there is a Maya preset for viewport navigation in Blender. That’s great; I use it all the time. But guess what? It doesn’t work in all windows. The Maya keys for W, E and R are also very useful, but again, only in the 3D View. I’ve also noticed that this way of working already breaks the usual Blender interaction in those places, so there must be a way to do it for all tools. Right?
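From poking around the keymap editor, it looks like those assignments are just per-editor entries, so in theory an add-on could fill in the editors the preset misses. Something like this rough sketch is what I have in mind (the “Graph Editor” keymap name and the operators I bind W/E/R to are my guesses, so treat it as an assumption rather than a recipe):

```python
import bpy

addon_keymaps = []

def register():
    # Key bindings live per editor; the "addon" keyconfig lets scripts add their own entries.
    kc = bpy.context.window_manager.keyconfigs.addon
    if kc:
        # Assumed example: mirror Maya-style W/E/R transform keys in the Graph Editor.
        km = kc.keymaps.new(name='Graph Editor', space_type='GRAPH_EDITOR')
        for key, op in (('W', 'transform.translate'),
                        ('E', 'transform.rotate'),
                        ('R', 'transform.resize')):
            kmi = km.keymap_items.new(op, key, 'PRESS')
            addon_keymaps.append((km, kmi))

def unregister():
    for km, kmi in addon_keymaps:
        km.keymap_items.remove(kmi)
    addon_keymaps.clear()

if __name__ == "__main__":
    register()
```

If that kind of thing really is possible per editor, then a preset that applies it everywhere doesn’t seem far-fetched.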
Am I totally alone in this? I’d love to hear your input on the subject.