I have been wondering why Blender uses the Z axis as up… doesn’t it cause a problem when going back and forth between programs? And isn’t that going to be an added minus against Blender being actively used in pipelines?
Whenever you go back and forth you have to rotate. Imagine animating in another app and bringing it into Blender via, say, Collada.
I think Z-up is used more in CAD-based apps (right?).
It would be nice if it were possible to change the coordinate system to Y-up (as an option).
I don’t know about the problems in pipelines, but for me it makes more sense to have Z-up. It’s like looking at a 2D drawing from the top, which is most logical for me.
Why is that? I mean, why should Blender follow commercial apps? As K-land mentioned, it is kind of logical if you go through the dimensions: 2D land has only the back/forth-left/right plane (X:Y), not up/down as well… so the 3rd dimension added should then be Z, which is just up/down (3D space -> X:Y:Z).
“Different disciplines use different variations of the coordinate systems. For example, mathematicians typically use a right-handed coordinate system with the y-axis pointing up, while engineers typically use a left-handed coordinate system with the z-axis pointing up. This has the potential to lead to confusion when engineers and mathematicians work on the same project.”
It’s something you get used to, but it does lead to an internal inconsistency.
The terminology used in Blender defines ‘front’ as:
X (left, right)
Y (in, out)
Z (up, down) (as defined by the way the coordinate system lays out through the ‘front’ camera view)
However, when viewing through the camera, tools that work on depth are all labeled with ‘Z’ (like the composite nodes).
These things are all commonly referred to as Z-whatever, which makes sense in apps where Z is depth. It is just linguistically odd in Blender: to stay consistent with the way the coordinate system is used you would expect it to be Y, but then nobody would know what it is, because people are used to a Z-buffer meaning a depth map and would have no idea what a Y-buffer is.
Personally, I have always assumed that X describes width (i.e. horizontal), Y describes height (vertical) and Z describes depth (inny-outy ;)). But I have no problem adjusting my assumptions to work within Blender. It did cause problems for me when exporting to the DirectX .x format. But, hey, standardization of everything would just be plain boring.
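For anyone curious, the fix for that .x case is basically just an axis swap. A minimal Python sketch, assuming the usual DirectX convention of +X right, +Y up, +Z into the screen (the function name is mine, it is not from any exporter):

```python
# Blender front view: +X right, +Y away from the viewer, +Z up (right-handed).
# DirectX (assumed):  +X right, +Y up, +Z away from the viewer (left-handed).
# Swapping the Y and Z components lines the axes up, and the swap itself is
# what flips the handedness.
def blender_to_directx(x, y, z):
    return (x, z, y)

print(blender_to_directx(1.0, 2.0, 3.0))  # -> (1.0, 3.0, 2.0)
```

The same swap works for other Y-up, left-handed targets too.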
And that’s definitely not written by an engineer. Shows again how much you can trust Wikipedia…
Every self-respecting university will teach its future engineers always to use a right-handed coordinate system, to avoid confusion.
And for (engineering) simulation purposes: the inertial axis system is defined such that the x-axis points north, the z-axis points to the centre of the earth and the y-axis complements the right-handed axis system.
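Just to make that “complements the right-handed system” part concrete (the helper frame below is my own choice for the demo, not part of the definition): write the axes out in a local east-north-up frame and the y-axis falls out of the cross product as east, which is why this frame is usually called north-east-down.

```python
# A quick sanity check with numpy: in a local east-north-up frame,
# verify that y = z x x comes out pointing east.
import numpy as np

north = np.array([0.0, 1.0, 0.0])   # x-axis: points north
down  = np.array([0.0, 0.0, -1.0])  # z-axis: points to the centre of the earth
east  = np.cross(down, north)       # y-axis completes the right-handed triad

print(east)  # [1. 0. 0.] -> east
```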
I agree with Mike. The third dimension is an extension of the standard two-dimensional coordinate system. The universal convention for x and y in two dimensions is for x to be horizontal (left/right: width) and y to be vertical (up/down: height), so when you add a third dimension, Z, it would be toward/away from you: depth. The fact that you are looking down on a piece of paper with an x-y plot does not change the fact that z would be toward/away from the viewer, not up/down relative to the viewer. So where you start (front, top) doesn’t make any difference.
If Blender gave us a choice it would not just be copying commercial products; it would be making things easier for the user. As an example, if you use POV-Ray a lot you could set Blender to those coordinates and avoid confusion when you switch between applications.
POV-Ray calls its coordinate system “left-handed”. It has +z away from you, +y up, +x to the right.
sten and k-land: the argument that paper is XY, so Blender’s Z-up is logical, totally falls apart. You are assuming that the paper is being looked at on a table top; what if you take the same paper and paste it on a wall… or look at it upside down? The coordinates don’t change, do they?
Also, the “why should Blender be like commercial apps” argument that I read over and over just doesn’t fly.
One does not try to be different just to be different, so I am sure there is a good reason behind this coordinate system. And I am not advocating that Blender become like anyone else. I just would like to understand the reasoning behind some of the choices that are made.
Do you know how hard Softimage is working to improve workflow so studios can fit it into existing pipelines?
That doesn’t mean they are inferior. They are just making users more comfortable, and attracting new users. In 3D, even the best tool is going to be left in a vacuum if it doesn’t play well with others.
Rotating the object 90° is not so big a deal, but if you go back and forth and, say, work on multiple objects/scenes, it can become annoying. If all you do is start to finish in Blender, then it’s not a problem.
Since the coordinate system affects the whole of Blender, I would assume it could be major work; or then again, it might not be so hard… I don’t know. Can any coder comment on that? Does it have to be done in C, or can Python do it?
“You are assuming that the paper is being looked at on a table top; what if you take the same paper and paste it on a wall… or look at it upside down? The coordinates don’t change, do they?”
I can understand both arguments; if you don’t, that’s up to you.
It doesn’t make sense to have the coordinate system with Z up while the rendering system conceptually uses Z for depth (hence the name Z-buffer), which is in and out, but I’ve grown comfortable with Z for up. It really needs to be an option though, even if it’s only for export, so people don’t have to rotate stuff in other software. As someone said, how do you rotate an entire animated scene?
Yes, I know 90 degrees of rotation puts a lot of strain on your fingertips, but I guess we’ll just somehow have to learn to live with it.
This doesn’t really seem like much of a problem to me… Rotating doesn’t take that much effort, plus, as Alltaken said, there is a rotate option in the obj exporter.
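For reference, here is roughly what that manual fix looks like as a script, for when an importer or exporter doesn’t offer the rotate option. Just a sketch against a recent Blender Python API (older versions used * instead of @ for matrix multiplication), and it assumes the freshly imported objects are still selected:

```python
# Rotate the selected objects +90 degrees about X so a Y-up scene ends up
# Z-up inside Blender (use -90 going the other way, before export).
import math
import bpy
from mathutils import Matrix

fix_up = Matrix.Rotation(math.radians(90.0), 4, 'X')

for obj in bpy.context.selected_objects:
    obj.matrix_world = fix_up @ obj.matrix_world
```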
It’s not a hassle for the single user or hobbyist, but if you do enough production work with multiple apps, getting standards set across a pipeline is a big headache.
XSI has recognised this and now ships with a Maya ‘emulation’ mode. Modo reads and writes Maya scenes directly. Wings lets you choose a mouse interaction mode from Mirai/Maya/Blender/Max. ZBrush, on the other hand, refuses to play nice, and as such is getting dropped in favour of more pipeline-friendly tools like Mudbox.
It’s this sort of simple yet fundamental improvement that’ll help Blender’s acceptance in the wider 3D scene, which, from what I understand, is a key point Ton and co. are aiming for.
“while engineers typically use a left-handed coordinate system with the z-axis pointing up”
Excuse me? I’d like to see who wrote that…
It’s a problem only for certain apps; for other programs the left-handed coordinate system would be the problem.
For me the current system seems logical, but if there are others who prefer the other variation, then there should indeed be an option to switch it.
But anyways, everyone knows there are actually 4 dimensions: width, height, length and depth.