Why do we need a vector to specify which side of a polygon’s face we are on?
Because in 3D graphics, a program doesn’t know which side of a face is “inside” and which is “outside”. Without normals there’s no way to tell. And only one side of a face is visible: the side the normal points out of. Speaking in terms of 3D software, a plane has two sides, but only one of them gets rendered.
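To make that concrete, here is a minimal Python sketch of how a face normal falls out of the vertex order via a cross product. The helper names (subtract, cross, face_normal) are mine for illustration, not from any particular 3D package:

```python
def subtract(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def face_normal(v0, v1, v2):
    """Normal of a triangle; its direction depends on the vertex order."""
    return cross(subtract(v1, v0), subtract(v2, v0))

# Counter-clockwise winding (seen from +Z) gives a normal pointing at +Z:
print(face_normal((0, 0, 0), (1, 0, 0), (0, 1, 0)))  # (0, 0, 1)
# Reversing the winding flips the normal: the face now "looks" the other way.
print(face_normal((0, 0, 0), (0, 1, 0), (1, 0, 0)))  # (0, 0, -1)
```

The same three points describe the same plane; only their ordering tells the software which side counts as the front.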
This was devised a long time ago, when computers could barely render 4 colors in 2D, and the rule has survived until today because it works: many rendering algorithms are built on this basic property.
Aren’t both sides identical?
Talking from the viewpoint of 3D math and hardware: NO.
When you define a plane in any 3D software, you are really defining one face of that plane, not two. It is done this way because the time taken to render a scene depends on how many “faces” you have in the scene, not how many “planes”: if every plane in a scene had two faces, you would spend at least 2x the time rendering it (not counting all the hidden faces that have to be discarded). Any 3D artist should know at least that one plane has only one face, and when problems arise for whatever reason, the first thing to verify is whether the faces are “looking” the right way. Most 3D software nowadays manages to do the right thing, and the user normally doesn’t need to worry about these things except when dealing with complicated geometry or shapes. Of course, it also depends on the render engine you use…
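As a rough sketch of that 2x cost (plain Python with a made-up mini-mesh, not any real API): making a plane “two-sided” literally means storing a second copy of every face with its winding reversed, which doubles the face count the renderer has to process:

```python
# A hypothetical mini-mesh: a list of triangles, each a tuple of 3 vertices.
def make_double_sided(triangles):
    # For every triangle, add a copy with reversed winding, so its normal
    # points the opposite way. The face count doubles.
    return triangles + [tri[::-1] for tri in triangles]

quad = [
    ((0, 0, 0), (1, 0, 0), (1, 1, 0)),  # a flat quad as two triangles
    ((0, 0, 0), (1, 1, 0), (0, 1, 0)),
]
two_sided = make_double_sided(quad)
print(len(quad), len(two_sided))  # 2 4: twice the faces to render
```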
My follow-up question is: why are back faces culled to begin with?
You don’t need to render what you can’t see, and skipping it lets the scene you are working on render fast, without the software making useless calculations that end up discarded (read: working for nothing). As I wrote before, it is all about making things go faster. Try this exercise: take a scene, duplicate it, invert the normals of all the faces in the duplicate, then press “render”, and you’ll notice it renders slower than the original scene. Now do the same with a scene of, well, 500,000-700,000 polys and you’ll see what I’m talking about (and I’m being conservative).
The same is true for graphics accelerators: to get one plane with two faces, the hardware has to draw two different faces, one with its normal inverted with respect to the other, so the time taken to generate and display them doubles. That’s not a problem at low polygon counts, but with large numbers of polygons things get slower. It’s also why, in any game engine, a face whose normal points away from the camera becomes invisible: the GPU simply ignores it. (And this is the right behavior.)
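Here is a sketch of the test the hardware effectively performs; the names are illustrative, and real GPUs usually do the equivalent check in screen space from the winding of the projected vertices rather than from a stored normal:

```python
def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def is_back_facing(normal, view_dir):
    """view_dir points from the camera toward the face.

    If the normal has a positive component along the view direction,
    the face points away from the camera and can be skipped entirely."""
    return dot(normal, view_dir) > 0

camera_to_face = (0, 0, 1)  # camera looking down +Z
print(is_back_facing((0, 0, -1), camera_to_face))  # False: faces the camera
print(is_back_facing((0, 0, 1), camera_to_face))   # True: culled
```

Note that a “two-sided” plane fails this test on one copy of its faces no matter where the camera is, which is exactly the wasted work described above.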
In what situations would you need a back face rendered?
If you need to render a still: Never.
If you need to render an animation: Never.
If you need to program a graphics engine that deals with all this: probably many times (see the sketch below).
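For example, an engine that has to show thin geometry (leaves, cloth, open meshes) from both sides can flip the normal of a back face before lighting it, instead of culling it. A hedged sketch, not any specific engine’s API:

```python
def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def negate(v):
    return (-v[0], -v[1], -v[2])

def shading_normal(normal, view_dir, two_sided=True):
    """Return the normal to use for lighting.

    view_dir points from the camera toward the face. With two-sided
    shading enabled, a back-facing normal is flipped so the face gets
    lit correctly instead of being discarded."""
    if two_sided and dot(normal, view_dir) > 0:
        return negate(normal)
    return normal

print(shading_normal((0, 0, 1), (0, 0, 1)))   # (0, 0, -1): flipped toward the camera
print(shading_normal((0, 0, -1), (0, 0, 1)))  # (0, 0, -1): already front-facing
```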
At bottom, all of this depends on the software and hardware you use. Nowadays you don’t need to worry much about these kinds of things, except when problems arise. In general, the only thing you need to know about surface normals as an artist is that they have to point toward the camera, or at least be visible to it: nothing more, nothing less. If you are going to be a software engineer, then you need far more than Wikipedia has.