Currently, there are geometry and normal vector nodes.
The Geometry node, unless I'm mistaken, describes the mesh geometry at each shaded point relative to the defined coordinate system; for the normal output, every surface point (or face) gets its own vector (x, y, z components, i.e. i + j + k). The problem is that this gives a different vector at every point on the surface, not just one vector.
The Normal node, by default, outputs a single vector parallel (or antiparallel?) to the camera normal. Basically, it points along the depth axis of the camera view. This can be changed interactively by rotating the ball to the desired direction. So although this node does provide a single vector output, that vector stays fixed relative to the camera.
Here's what I want: an input node whose output is the local X, local Y, or local Z axis of an object. That way, a single vector could be used that moves and rotates with the object, because it follows the object's local coordinate system.
Ideally, the node would let you select a coordinate system (local, for example). Below that would be three sliders (x, y, z or i, j, k), like the RGB sliders, with values from 0 to 1. So if we wanted the local X axis as the output, we would set x to 1, y to 0, and z to 0.
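The math behind this is simple: an object's local axes, expressed in world space, are just the columns of its rotation matrix, so the node's output would be that matrix applied to the (x, y, z) slider vector. (In Blender's Python API, I believe this corresponds to the columns of an object's `matrix_world`.) Here is a minimal pure-Python sketch of the calculation; the helper names and the example 90-degree rotation are my own illustration, not Blender API:

```python
import math

def rotation_z(theta):
    """Rotation matrix for a rotation of theta radians about the Z axis."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0],
            [s,  c, 0.0],
            [0.0, 0.0, 1.0]]

def local_axis_vector(rot, sliders):
    """Apply the object's rotation matrix to the (x, y, z) slider vector.

    With sliders (1, 0, 0) this picks out the object's local X axis in
    world coordinates (the first column of the rotation matrix)."""
    return [sum(rot[r][c] * sliders[c] for c in range(3)) for r in range(3)]

# An object rotated 90 degrees about Z: its local X axis now points
# along the world +Y direction, so the node's output would follow it.
rot = rotation_z(math.pi / 2)
x_axis = local_axis_vector(rot, (1.0, 0.0, 0.0))  # ~ (0, 1, 0)
```

Setting the sliders to (0, 1, 0) or (0, 0, 1) would pick out the local Y or Z axis the same way.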
I hope this makes sense. In concept, it seems rather simple. However, I would first have to learn Python to tackle even the simplest of scripts, something I plan to do anyway.
I would greatly appreciate it if anyone can suggest ways to go about achieving this while I learn Python. Better yet, it would be incredible if someone familiar with Python could write up a node script for this function.
Either way, any advice is most welcome!